--- abstract: 'Unicellular organisms exhibit elaborate collective behaviors in response to environmental cues. These behaviors are controlled by complex biochemical networks within individual cells and coordinated through cell-to-cell communication. Describing these behaviors requires new mathematical models that can bridge scales – from biochemical networks within individual cells to spatially structured cellular populations. Here, we present a family of multiscale models for the emergence of spiral waves in the social amoeba [*Dictyostelium discoideum*]{}. Our models exploit new experimental advances that allow for the direct measurement and manipulation of the small signaling molecule cAMP used by [*Dictyostelium*]{} cells to coordinate behavior in cellular populations. Inspired by recent experiments, we model the [*Dictyostelium*]{} signaling network as an excitable system coupled to various pre-processing modules. We use this family of models to study spatially unstructured populations by constructing phase diagrams that relate the properties of population-level oscillations to parameters in the underlying biochemical network. We then extend our models to include spatial structure and show how they naturally give rise to spiral waves. Our models exhibit a wide range of novel phenomena including a density dependent frequency change, bistability, and dynamic death due to slow cAMP dynamics. Our modeling approach provides a powerful tool for bridging scales in modeling of [*Dictyostelium*]{} populations.' author: - Javad Noorbakhsh - David Schwab - Allyson Sgro - Thomas Gregor - Pankaj Mehta bibliography: - 'References.bib' title: 'Multiscale modeling of oscillations and spiral waves in *Dictyostelium* populations' --- Introduction ============ Collective behaviors are ubiquitous in nature. They can be observed in diverse systems such as animal flocking, microbial colony formation, traffic jamming, synchronization of genetically engineered bacteria and social segregation in human populations [@Ballerini2008; @Ben-Jacob2000; @Flynn2009; @Danino2010; @Schelling1971]. A striking aspect of many of these systems is that they span a hierarchy of scales and complexity. A common property of such complex systems is that the collective behavior at larger scales can often be understood without a knowledge of many details at smaller scales. This important feature allows one to study the system on multiple distinct spatiotemporal scales and use the information obtained in each scale to develop a more coarse-grained model at a larger scale. This approach has been termed multi-scale modeling and provides a framework for the study of complex systems [@Meier-Schellersheim; @Qutub2009; @Noble2008]. Compared to a fully detailed modeling approach, multi-scale models are more amenable to computer simulations, contain fewer ad hoc parameters and are easier to interpret. As a result these models can be very useful in developing theoretical understanding of complex systems. For example, great success has been achieved in study of pattern formation in microbial colonies by modeling them as a continuum of cells with simple rules such as growth and diffusion [@Ben-Jacob2000]. One interesting system that exhibits different behaviors on different spatiotemporal scales is the social amoeba [*Dictyostelium discoideum*]{} [@Mehta2010]. [*Dictyostelium*]{} has a fascinating lifecycle. It starts as a population of unicellular organisms that can separately grow and divide. 
However, when starved, these cells enter a developmental program where individual cells aggregate, form multicellular structures, and eventually, fruiting bodies and spores. Upon starvation, cells produce the small signaling molecule cAMP, and excrete it into the environment in periodic bursts. Each cell responds to an increase in the concentration of extracellular cAMP by secreting more cAMP, resulting in pulses that propagate through the environment as spiral waves. Cells eventually aggregate at the center of these spiral waves and search for food collectively [@McMains2008]. In addition to its fascinating life cycle, [*Dictyostelium*]{} is also an important model organism for eukaryotic chemotaxis. The [*Dictyostelium*]{} chemotaxis network is highly conserved among eukaryotes [@Swaney2010], and is thought to be a good model for many medically relevant phenomena ranging from neutrophil chemotaxis and cancer metastasis to cell migration during animal morphogenesis [@Parent2004; @Zernicka-Goetz2006]. There has been extensive work on modeling the [*Dictyostelium*]{} signaling network, starting with the pioneering work by Martiel [*et al.*]{} [@Martiel1987]. These authors suggested that oscillations and spiral waves emerge from negative feedback based on desensitization and adaptation of the cAMP receptor. More recent models extend on this work by incorporating additional proteins known to play a significant role in the [*Dictyostelium*]{} signaling network [@Laub1998]. Although very successful at producing oscillations and spiral patterns, these models are inconsistent with recent quantitative experiments that show cells oscillate even in the presence of saturating levels of extracellular cAMP [@Gregor2010]. Other models have focused on reproducing the eukaryotic chemotaxis network, which shares many molecular components with the signaling network responsible for collective behavior [@Takeda2012; @Wang2012]. These models explore how cells respond to an externally applied pulse of cAMP but do not attempt to model oscillations or spiral waves. Combinations of such models with oscillatory networks represent a possible route for multiscale modeling [@Xiong2010] but have not been extensively studied. Other models have focused on reproducing spiral waves in [*Dictyostelium*]{} populations using reaction diffusion equations and cellular automata [@Aranson1996; @Kessler1993]. While these models tend to be very successful at producing population level behaviors, it is hard to relate these models to the behavior of single cells. This highlights the need for new mathematical models that can bridge length and complexity scales. Recently, there have been tremendous experimental advances in the study of [*Dictyostelium*]{}. Using microfluidics and single-cell microscopy, it is now possible to produce high-resolution time-course data of how single [*Dictyostelium*]{} cells respond to complex temporal cAMP inputs [@Gregor2010; @Xiong2010; @Wang2012; @Cai2011; @Song2006; @Sawai2007; @Cai2010; @Masaki2013]. By combining such quantitative data with ideas from dynamical systems theory and the theory of stochastic processes, we recently [@Sgro2014] proposed a new universal model for the [*Dictyostelium*]{} signaling network, based on an excitable signaling network coupled to a logarithmic “pre-processing" signaling module (see Figure \[fig:Schematic\]). 
To make a phenomenological model for single and multicellular behavior we exploited the observation that the [*Dictyostelium*]{} signaling network is poised near a bifurcation to oscillation. Each [*Dictyostelium*]{} cell was treated as an excitable FitzHugh-Nagumo model that was coupled to other cells through the concentration of the extracellular cAMP. A central finding of this model was that intracellular noise is a driving force for multicellular synchronization. Inspired by these results, in this paper we analyze a family of models for cells communicating via an external signal such as cAMP. The external signal is detected by the cell, transduced through a preprocessing module which can be linear, logarithmic, or Michaelis-Menten, and then fed into an excitable signaling network. Using these models, we explore the rich population-level behaviors that emerge in coupled oscillator systems from the interplay of stochasticity, excitability, and the dynamics of the external medium. We also extend our models to include space and show that spiral waves naturally emerge in the limit of large population densities. In contrast to earlier models for spiral waves, we can explicitly include the dynamics of extracellular cAMP and treat it distinctly from the dynamics of signaling networks. Our model naturally overlaps with, and complements, the extensive literature of coupled oscillatory and excitable systems. Coupled oscillators have been observed in many different biological systems such as neuronal networks, circadian rhythm, Min system and synthetic biological oscillators [@Traub1989; @Enright1980; @Mirollo1990; @Meinhardt2001; @Danino2010]. Most theoretical models focus on directly coupled oscillators and relatively little work has been done on noisy oscillators coupled through a dynamical external medium such as cAMP [@Schwab2012a; @Schwab2012]. Furthermore, an important aspect of our model is the role played by stochasticity. It is well-known that noisy systems are not easily amenable to traditional methods in dynamical systems theory [@Tanabe2001; @Lindner2004] and concepts such as bifurcation point are ill-defined in this context. For this reason, the [*Dictyostelium*]{} signaling network provides a rich, experimentally tractable system for exploring the physics of noisy oscillators coupled through an external medium. The paper is organized as follows. We start by introducing our family of models. We then construct phase diagrams describing the behavior of spatially-homogenous populations, focusing on the regime where extracellular signaling molecules are degraded quickly compared to the period of oscillations. We then analyze the opposite regime where signaling dynamics is slow and show that this gives rise to novel new behaviors such as dynamic death. Finally, we extend the model to spatially inhomogeneous populations and study how spiral waves naturally arise in these models. We then discuss the biological implications of our results, as well as, the implications of our model for furthering our understanding of coupled oscillators. Modeling *Dictyostelium* Populations {#sec:Model} ===================================== New experimental advances allow for the direct measurement and manipulation of the small signaling molecule cAMP used by [*Dictyostelium*]{} cells to coordinate behavior in cellular populations. In such experimental systems, a few hundred [*Dictyostelium*]{} cells are confined in a microfluidic device. 
The levels of intracellular cAMP within cells can be measured quantitatively using a Förster Resonance Energy Transfer (FRET)-based sensor [@Gregor2010; @Sgro2014]. This allows for precise, quantitative measurements of the response of the [*Dictyostelium*]{} signaling network to complex temporal signals of extracellular cAMP. Cells are placed in a microfluidic device at a density $\rho$. The microfluidic device allows for rapid mixing and exchange of extracellular buffer, which ensures that cells experience a uniform and controlled environment. The flow rate of buffer can be experimentally manipulated. Large flows wash away the extracellular cAMP produced by cells, resulting in a larger effective degradation rate, $J$, for extracellular cAMP. It is also possible to add cAMP to the buffer at some rate $\alpha_f$. This experimental set-up is summarized in Figure \[fig:Schematic\]. We start by building models for spatially unstructured populations where the extracellular cAMP concentration is assumed to be uniform. In this case, all cells in the chamber sense the same extracellular cAMP concentration and we can ignore all spatial effects. To model individual cells, we build upon our recent work [@Sgro2014] where we showed that the dynamics of the [*Dictyostelium*]{} signaling network can be modeled using a simple, universal, excitable circuit: the noisy FitzHugh-Nagumo (FHN) model. To realistically model the [*Dictyostelium*]{} signaling circuit, it is necessary to augment the FHN with an additional “pre-processing” module that models the signal transduction of extracellular cAMP levels upstream of this core excitable circuit (Figure \[fig:Schematic\]B). In the full signaling circuit, extracellular cAMP is detected by receptors on the cell membrane. The resulting signal is funneled through several signal transduction modules, ultimately resulting in the production of cAMP. To model this complicated signal transduction process, we use a family of preprocessing modules, whose output serves as the input to the universal excitable circuit. Inspired by the [*Dictyostelium*]{} circuit, we assume that the dynamics of the preprocessing module are fast compared to the excitable dynamics of the cAMP signaling circuit. For example, the typical time scale associated with the early signaling protein Ras is of order 30 seconds whereas cAMP oscillations have periods of order 300 seconds [@Takeda2012; @Gregor2010]. This allows us to model the preprocessing modules using a monotonically increasing function, $I(S)$, that relates the output of the preprocessing module to the extracellular cAMP concentration, $S$. In this work, we will consider three different biologically inspired pre-processing modules: (1) a linear module $I(S)=S$, where the extracellular cAMP signal does not undergo any preprocessing; (2) a Michaelis-Menten module, $$I(S)=\frac{\beta S}{S+K_D},$$ where the output is a saturating function of the extracellular cAMP; and (3) a logarithmic module that senses fold changes, $$I(S)=a\log{\left(1+S/K\right)}.$$ The output of these modules is fed into a universal, excitable circuit modeled by the FHN. The FHN model consists of a set of interlocking positive and negative feedback loops: an activator, $A$, quickly activates itself through positive feedback and, on a slower time scale, activates a repressor, $R$, that degrades the activator $A$. The FHN model is the prototypical example of an excitable system and can spike or oscillate depending on the external input. 
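For concreteness, the three pre-processing modules above can be written as short functions of the extracellular cAMP concentration $S$. The sketch below (in Python) is purely illustrative and is not part of the model definition; the default parameter values are those quoted in the caption of figure \[fig:PhaseDiagrams\].

```python
import numpy as np

# Illustrative sketch of the three pre-processing modules I(S).
# Default parameters are taken from the caption of figure [fig:PhaseDiagrams].

def I_linear(S):
    """Linear module: the extracellular cAMP signal is passed through unchanged."""
    return S

def I_michaelis_menten(S, beta=1.5, K_D=2.0):
    """Michaelis-Menten module: the output saturates at beta for large S."""
    return beta * S / (S + K_D)

def I_logarithmic(S, a=0.058, K=1e-5):
    """Logarithmic module: responds to fold changes in S."""
    return a * np.log(1.0 + S / K)
```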
To incorporate the biology of cAMP secretion by [*Dictyostelium*]{} cells in response to external inputs, we assume that when a cell spikes, it releases cAMP into the environment. To determine when a cell spikes, we threshold the activator variable $A$ using a Heaviside function $\Theta(A)$, where $\Theta(x)=1$ if $x>0$ and $\Theta(x)=0$ if $x\le0$. Finally, we assume that cells produce and secrete cAMP at a spike-independent basal rate, $\alpha_0$. This can be summarized by the equations $$\begin{aligned} \label{eqn:Model} \frac {dA_i}{ dt} &= A_i-\frac{1}{ 3}{A_i}^3-R_i+I(S)+\eta_i(t) , \qquad i = \{1,2,...,N\}\\\nonumber \frac{dR_i}{dt}&= \epsilon (A_i - \gamma R_i+C) \\\nonumber \frac{dS}{dt}&= \alpha_f + \rho \alpha_0 + \rho D\frac{1}{N}\sum_{i=1}^N \Theta(A_i)-JS, \end{aligned}$$ where $i$ indexes the cells and runs from 1 to the total number of cells, $N$. The variables $A_i$ and $R_i$ are the internal states of the $i$'th cell and correspond to the activator and repressor, respectively. $S$ is the concentration of extracellular cAMP, $I(S)$ is the preprocessing module, $\rho$ is the density of cells, $D$ measures the amount of cAMP released into the environment when a cell spikes, and $J$ is the total degradation rate of the extracellular cAMP. Finally, we have incorporated stochasticity using a Langevin term, $\eta_i(t)$. In particular, $\eta_i(t)$ is an additive Gaussian white noise term with mean and correlation defined as: $$\begin{aligned} \left<\eta_i(t)\right>&=0\\\nonumber \left<\eta_i(t)\eta_j(t')\right> &=\sigma^2\delta_{ij}\delta(t-t')\end{aligned}$$ The model and corresponding parameters are summarized in figures \[fig:Schematic\]A and \[fig:Schematic\]B. Using this model, we can explore a series of questions about how the architecture of the [*Dictyostelium*]{} signaling circuit within cells affects population-level behaviors. Recent experimental data suggest that the behavior of the [*Dictyostelium*]{} circuit is well described by the logarithmic preprocessing module and responds to fold changes in extracellular cAMP [@Sgro2014]. This leads to natural questions about the consequences of pre-processing in the [*Dictyostelium*]{} signaling circuit. In particular, using our model we will examine how [*Dictyostelium*]{} exploits the interplay between stochasticity, excitability, and signal processing to control population-level behavior. Behavior for large degradation rates of extracellular cAMP {#sec:LargeDegradationRate} ========================================================== The quasi-steady-state limit ---------------------------- In general, the dynamics of the family of models described by Eq.  are quite complex. For this reason, it is worth considering various limits in which the dynamics simplify. One such limit that can be realized experimentally is the limit where the extracellular cAMP is degraded quickly compared to the dynamics of the [*Dictyostelium*]{} circuit. This limit can be realized experimentally by changing the flow rate of buffer into the microfluidic device (see Fig. \[fig:Schematic\]). In this limit, there exists a separation of time-scales between the external medium and the individual oscillators, and we can treat the extracellular cAMP as a fast variable that is always in a quasi-steady state and set $dS/dt=0$ in . In this limit, one has $$\begin{aligned} S\approx{\alpha_f+\rho\alpha_0\over J} + {\rho D\over J}\frac{1}{N}\sum_{i=1}^N \Theta(A_i). 
\label{eq:SLargeJ}\end{aligned}$$ For the remainder of this section, we will work within this quasi steady-state approximation. A formal definition of what constitutes large $J$ will be discussed in section \[sec:SmallDegradationRate\], where we will give numerical evidence showing that there exists a minimum value $J_m$ above which this approximation is valid. In this limit, it is helpful to divide the extracellular cAMP into two terms that reflect the two mechanisms by which extracellular cAMP is produced (see Fig. \[fig:Schematic\]). First, cells can secrete cAMP at a basal rate $\alpha_0$. We denote the extracellular cAMP produced by this basal leakage by $S_0$; in the quasi steady-state approximation this is given by $$\begin{aligned} S_0&\equiv{\alpha_f+\rho\alpha_0\over J}, \label{eqn:Axes1}\end{aligned}$$ where the experimental input flow, $\alpha_f$, is also incorporated into the definition. The second mechanism by which extracellular cAMP is produced is through the release of cAMP into the environment when cells spike. We can parameterize the extracellular cAMP produced by this latter mechanism by $$\begin{aligned} \Delta S &\equiv {\rho D\over J }, \label{eqn:Axes2}\end{aligned}$$ with the total extracellular cAMP produced by spiking given by the expression $$\begin{aligned} \Delta S \left<\Theta(A)\right> \equiv \Delta S\frac{1}{N}\sum_{i=1}^N \Theta(A_i). \label{squarewave}\end{aligned}$$ To better understand the quantities $S_0$ and $\Delta S$, it is useful to consider an ideal situation where all the cells in a population are perfectly synchronized. In this case, $\left<\Theta(A)\right>$ will periodically switch between $0$ and $1$. Hence $S$ will behave like a square wave with baseline $S_0$ and amplitude $\Delta S$ (see figure \[fig:Schematic\]C). Thus, $S_0$ corresponds to the cAMP levels in the troughs and $S_0+\Delta S$ to the levels at the peaks. These two quantities provide us with a succinct way to represent our model, and in the following section we will use them to produce phase diagrams in the large $J$ regime. Finally, we note that the square wave form of $S$ is merely a result of our choice of Heaviside function in the dynamics of the external medium. Nonetheless, the basic separation of time scales discussed above holds even when the Heaviside function is replaced by a more realistic smooth function. Phase diagrams for population level oscillations {#sec:PhaseDiagram} ------------------------------------------------ Populations of [*Dictyostelium*]{} cells can exhibit several qualitatively distinct behaviors depending on the parameters of our model. Cells in a population can oscillate in phase with each other, resulting in synchronized, population-level oscillations. We will call this behavior synchronized, or coherent, oscillations (CO). Alternatively, individual cells may oscillate, but the oscillations of the cells are out of phase. In this case, the phase differences between cells prevent the formation of coherent population level oscillations and we call these incoherent oscillations (IO). Finally, individual cells may not spike at all. We will label this behavior No Oscillations (NO). To distinguish between these behaviors, it is useful to define three order parameters: the coherence, the single-cell firing rate, and the population firing rate. Coherence measures how synchronized cells are within a population and is $1$ for a completely synchronized population and $0$ for a fully incoherent one (see Appendix \[app:Coherence\] for a formal definition). 
To determine the rate at which a cell $i$ spikes, we count how often the activator variable $A_i$ becomes positive over some averaging time. We then average the firing rate of individual cells over the population. Finally, we normalize the rate so that the single cell firing rate is $1$ for fast oscillations and is $0$ in the absence of spiking (see Appendix \[app:FiringRate\] for a formal definition). The population firing rate is defined as the firing rate of the average of activator across all cells in the population, $\left< A_i \right>$ and is also normalized to be between 0 and 1. Note that when we calculate the population firing rate we are measuring whether the average activator over all cells exhibits a spike. If cells are unsynchronized, this average activator $\left< A_i \right>$ will not exhibit any spikes. Thus, population firing rate is a measure of spike counts in a population that fires synchronously. Using these order parameters, we constructed phase diagrams characterizing the population level behavior for large degradation rates as a function of $S_0$ and $\Delta S$ (see equations ,). We calculated the coherence, single cell firing rate and population firing rate for equation for our three preprocessing modules as a function of $S_0$ and $\Delta S$ (see Figure \[fig:PhaseDiagrams\]). Each data point on these phase diagrams corresponds to one simulation of equation for a fixed simulation time (see Appendix \[app:ForwardIntegration\]) where $J$, $\alpha_0$, and $\rho$ are kept the same for the whole phase diagram and $\alpha_f$ and $D$ are chosen such that the desired $S_0$ and $\Delta S$ are achieved. Finally, we checked that phase diagram was insensitive to a ten fold increase in the degradation rate $J$ confirming our assumption that the dynamics depend on the parameters only through $\Delta S$ and $S_0$ (see figure \[fig:Alpha0DPhaseDiagJ100\]). This phase diagram contains three qualitatively different regions. We have labeled these different regions with NO for No Oscillation, CO for Coherent Oscillation and IO for Incoherent Oscillation. The crossover between these regions, which will be explained below, is shown by dashed lines and is labeled as CC for Coherent Crossover, IC for Incoherent Crossover and SC for Sensitivity Crossover. Note that the boundaries between different regions is approximate and has been achieved simply by a rough thresholding. In reality there is no sharp transition from one ‘region’ to another but instead due to noise this ‘transition’ happens in a continuous way. This is a general feature of noisy bifurcating systems and in practice, depending on the problem under study, a reasonable threshold has to be chosen to separate different qualitative behaviors. As a result the schematic phase diagrams drawn in figure \[fig:PhaseDiagrams\] are just rough sketches of the qualitative behavior of the system. In the NO region, single cells are silent and oscillations do not occur. As a result, both coherence and single cell firing rate are zero. In this region, the basal level of cAMP, $S_0$, is so small that the cells cannot be excited into oscillation. Note that at all points in the NO region, the parameters are such that individual cells are below the bifurcation to oscillations in the FHN model. However, even below the bifurcation cells occasionally spike due to stochasticity. In the CO region, cells oscillate coherently. This can be seen by noting that single cell firing rate is nonzero and coherence is close to one. 
By studying multiple time-courses we found that cell populations with coherence values approximately above 0.6 can be considered coherent (see figure \[fig:SamplesForCoherence\]). In the IO region, single cells oscillate but are unsynchronized. On the phase diagrams these are regions with large values of single cell firing rate (close to $1$) and small values of coherence (approximately less than 0.6). In this region individual cells oscillate and, in each firing, secrete cAMP into the environment. However, this change in extracellular cAMP is not enough to excite the cells to synchronize. To understand the reason behind this, we need to look at changes in the input, $I(S)$, that the excitable systems receive. For a population of cells that is oscillating coherently, $S(t)$ can be thought of as a square wave oscillating between $S_0$ and $S_0+\Delta S$ (see Figure \[fig:Schematic\]C). Then the input of each excitable module within cells can be visualized as a square wave oscillating between $I_0$ and $I_0+\Delta I$ with: $$\begin{aligned} \label{eqn:DeltaI} I_0&=I(S_0)\\\nonumber \Delta I &= I(S_0+\Delta S)-I_0\end{aligned}$$ If the change $\Delta I$ is smaller than the FHN's sensitivity for signal detection, single cells will instead experience an effectively constant input with no discernible fluctuations and cannot become coherent. For a preprocessor with a monotonically increasing concave functional form $I(S)$ (with $I(0)=0$), such loss of coherence may happen due to very small $\Delta S$ or very large $S_0$. Our phase diagrams exhibit a number of crossovers between the three qualitative behaviors outlined above. The Incoherent Crossover (IC) separates regions with no oscillations from those where cells oscillate incoherently. This transition occurs when $\Delta S$ is not large enough to produce any discernible changes in the external medium. As a result, each individual cell goes through this crossover as if it were alone and not communicating with other cells. For these uncoupled cells, as $S_0$ is increased the system gets closer to bifurcation and fires more often. Figure \[fig:SingleCellFreq\] shows this increase in firing rate for a single cell corresponding to $\Delta S=0$. There is also a crossover from the no oscillation region to coherent population level oscillations. We have labeled this the Coherent Crossover (CC). Here, as $S_0$ is increased, individual cells become more likely to spontaneously spike. These spontaneous spikes happen because, given a monotonically increasing function $I(S)$, for larger $S_0$ the excitable system's input will be closer to the bifurcation point, causing the system to become more excitable. As a result, noise can more often kick the system out of its stable fixed point, leading to a spike commonly referred to as an accommodation spike [@Izhikevich2007]. If $\Delta S$ is large enough, the spike of a single cell (or a few cells) will be enough to cause a change in the external medium that can be sensed by other cells. The other cells will then get excited with a small delay, creating the effect of a synchronized spike. Because the FHN has a refractory period, no cell will spike for some time, until the effect repeats. The overall behavior thus resembles coherent oscillations, but in reality consists of periodic, synchronized, noise-driven spikes occurring well below the system's bifurcation point. 
To show that this effect is noise-dependent, we decreased the noise by an order of magnitude for the system with the logarithmic preprocessor and plotted the results (inset of figure \[fig:PhaseDiagrams\]C). We observed that CC shifted to the same value of $S_0$ as IC, indicating that the ‘knee’-shaped region (intersection of CC and SC) emerges due to noise. Finally, the Sensitivity Crossover (SC) separates regions with coherent oscillation from those with incoherent or no oscillations. As one crosses the SC line, cells lose their ability to detect the changes in the external medium. Each excitable system has a response threshold and cannot respond to abrupt changes in its input if they are below this threshold. In our model this can occur either because $\Delta S$ is very small or due to the nonlinear form of the preprocessor. The former case is a manifestation of the fact that for very small changes in the external medium, cells do not have any means of communication. However, the latter case requires some further explanation. For two of the preprocessors used in our simulations (i.e., Michaelis-Menten and logarithmic) the function $I(S)$ was chosen to be concave and monotonically increasing. This means that, for a fixed $\Delta S$, as $S_0$ is increased $\Delta I$ in equation  decreases. Once $\Delta I$ goes below the detection sensitivity of the excitable modules, coherence will be lost. Note that since increasing $S_0$ and/or decreasing $\Delta S$ leads to a decrease in $\Delta I$, for larger values of $\Delta S$ a larger value of $S_0$ is required to take the system from coherence to incoherence (assuming that the sensitivity of the excitable system is roughly independent of the baseline $I_0$). This is why in figures \[fig:PhaseDiagrams\]B,C the slope of SC is positive. An interesting observation is that the preprocessing module feeding into the excitable system can dramatically change the phase diagram of the cellular population. This suggests that it is possible to have different population level behaviors from the same underlying excitable system by changing the upstream components of a network. We now briefly discuss the differences in phase diagrams arising from the choice of pre-processing modules. The first row in figure \[fig:PhaseDiagrams\]A shows the phase diagrams for a linear preprocessor. As can be seen in the schematic (last column), the curve for SC is almost flat (with a slight downward slope), a signature that a single cell’s sensitivity to changes in the external medium is almost independent of the baseline $S_0$. However, inclusion of a nonlinear preprocessing module completely changes this picture. Figure \[fig:PhaseDiagrams\]B and figure \[fig:PhaseDiagrams\]C show the results for a Michaelis-Menten and a logarithmic preprocessor, respectively. Note that in both cases SC has a dramatic positive slope. This is due to the concave, monotonically increasing nature of the preprocessors chosen. It is interesting to note that for the logarithmic preprocessor there is an extra ‘knee’ (where CC and SC intersect) that does not exist when the Michaelis-Menten module is used. A behavior reminiscent of this subregion has been observed experimentally [@Sgro2014], where increasing $S_0$ (by changing the input flow) for a synchronously oscillating population destroys the oscillations, whereas further increase leads to incoherent oscillations. This suggests that the system is tuned to this corner. Interestingly, in this region of the phase diagram $S_0$ and $\Delta S$ take the smallest possible values that can lead to coherent oscillations. 
Since $S_0$ and $\Delta S$ both correspond to production of cAMP by the cell, it seems reasonable from an evolutionary point of view that the system is fine-tuned to minimize its cAMP production while still achieving coherent oscillations. Experimentally, it is possible to move horizontally along this phase diagram by changing $\alpha_f$, and to move along the dashed lines shown in figure \[fig:PhaseDiagrams\] (changing both $S_0$ and $\Delta S$) by changing $\rho$ and $J$. One interesting prediction of this phase-diagrammatic approach is that, for a coherently oscillating population of cells at a given cell density, increasing the degradation rate $J$ should decrease the minimum cAMP flow ($\alpha_f$) required to destroy the oscillations, an observation that has been confirmed experimentally [@Sgro2014]. Experimentally, it is possible to change both $\rho$ and $J$. Gregor [*et al*]{} [@Gregor2010] measured the population firing rate for different values of $\rho$ and $J$. They showed that there is a data collapse of the population firing rate as a function of $\rho/J$. To understand this behavior, we made phase diagrams as a function of $\rho$ and $J$ (Figure \[fig:rhoJHeatMap\]). The insets show that for large degradation rates, this data collapse occurs for all choices of pre-processing modules. The underlying reason for this is that, as discussed above, in this limit the population dynamics depends on the external medium only through $S_0$ and $\Delta S$. Both these quantities depend on $\rho$ and $J$ through the combination ${\rho \over J}$ (see Eqs. \[eqn:Axes1\] and \[eqn:Axes2\]). Frequency increase as a function of density {#sec:FrequencyIncrease} ------------------------------------------- Our model also suggests a mechanism for cell populations to tune their frequency in response to steps of cAMP. An example of a time-course simulation of this behavior is shown in figure \[fig:FrequencyIncrease\]A. In this figure, a step of external cAMP is flowed into a population of coherently oscillating cells, leading to an increase in the frequency of oscillations. This frequency increase suggests that populations can tune their frequency by modulating the cAMP secretion and excretion rates. To explain the underlying reason for the frequency increase, it is useful to consider the extreme case of a perfectly synchronized oscillating population. In this case the extracellular cAMP concentration, $S(t)$, will be a square wave that oscillates between $S_0$ and $S_0+\Delta S$ (see figure \[fig:Schematic\]C). As a result, the input to the FHN module will be a square wave oscillating between $I_0$ and $I_0+\Delta I$ (see equation ). Thus, the dynamics of individual cells can be thought of as an FHN that periodically switches between two cubic nullclines, corresponding to the inputs $I_0$ and $I_0+ \Delta I$. A schematic of the phase portrait of this system is shown in figure \[fig:FrequencyIncrease\]B for two different values of $\Delta I$. As can be seen from this figure, a decrease in $\Delta I$ decreases the distance between the two cubic nullclines and leads to a shorter trajectory, and hence a higher frequency. Any mechanism that decreases $\Delta I$ can therefore lead to an increase in frequency. One such mechanism exploits the nonlinearity of the preprocessing module. Note that in our example $\alpha_f$ is being increased while the other parameters of the system are kept constant. This is equivalent to increasing $S_0$ while keeping $\Delta S$ constant. 
Since $I(S)$ is a monotonically increasing concave function, given a constant value of $\Delta S$ an increase in $S_0$ will lead to a decrease in $\Delta I$ (see figure \[fig:FrequencyIncrease\]). And this, in turn, leads to an increase in frequency. In practice, there are two other mechanisms that also contribute to frequency increase. However, they are not multicellular effects and happen independent of preprocessing module. The interested reader can refer to Appendix \[app:FrequencyIncrease\] for a detailed description. Note that to observe this behavior we have tuned the parameters of the system to a different point (black dots in figure \[fig:PhaseDiagrams\]C) than what has been used in the rest of the paper. This change of parameters was necessary to ensure that initially the cells were not oscillating at maximum frequency and yet would stay synchronized as $\alpha_f$ was increased. As a result it may not be possible to observe this behavior in wildtype [*Dictyostelium*]{} cells. However, it is of interest as a theoretical mechanism for frequency tuning and has the potential to be observed in [*Dictyostelium*]{} mutants or be implemented in synthetic biological networks of other organisms. Small Degradation Rate and Bistability {#sec:SmallDegradationRate} ====================================== Thus far, we have studied the behavior of our model in the large $J$ regime. In this section, we will instead focus on the regime where this parameter is small. For small values of $J$, the dynamics of the external medium becomes too slow to follow the oscillations of single cells. As a result, cells become unsynchronized. A behavior somewhat similar has been termed ‘dynamic death’ by Schwab [*et al.*]{} [@Schwab2012]. These authors studied a population of oscillators coupled to an external medium and observed incoherence due to inability of external signal to follow individual oscillators. In their system, the cause of incoherence was slow response of the external medium to inputs rather than slow degradation. However, in both cases, the underlying reason for the loss of coherence is the slow dynamics of external signal. We can numerically define a minimum degradation rate, $J_m$, below which the dynamics of the external medium are too slow to sustain population level oscillations. To do so, we identified the boundary separating the region of coherence from incoherence by thresholding the coherence at approximately $0.6$. This boundary is indicated by the black curves in the coherence plots in the first column of Fig. \[fig:rhoJHeatMap\]. We call the smallest value of $J$ on this curve the minimum degradation rate, $J_m$. Figure \[fig:SmallJ\]A shows a raster plot of the oscillations for $J=2 J_m$ and $J=0.5 J_m$, with all other parameters fixed. Notice that decreasing $J$ below $J_m$ completely destroys the ability to have synchronized population-level oscillations. Finally, it is worth emphasizing that due to the stochastic nature of our system, there is no sharp transition from coherence to incoherence at $J_m$. Rather, $J_m$ serves as a crude, but effective scale, for understanding external medium dynamics. To better understand $J_m$, we asked how it scaled with the cell period in the underlying FHN model. Since $J_m$ is a measure of when the external signaling dynamics are slow compared to the signaling dynamics of individual cells, we hypothesized that $J_m$ would scale with the frequency of oscillations in the underlying FHN model. 
To test this hypothesis, we changed single cell frequencies by changing $\epsilon$ in equation \[eqn:Model\]. We then determined $J_m$ in each case by finding the boundary between coherence and incoherence (see figure \[fig:Boundaries\]). The results are shown in figure \[fig:SmallJ\]B. As postulated, we find that increasing the single cell firing rate leads to a higher value of $J_m$. These results numerically confirm our basic intuition that $J_m$ is a measure of when the external signal response is much slower than the time scale on which individual cells respond to stimuli. In section \[sec:PhaseDiagram\] we studied the system in the $J\gg J_m$ regime. Here, we re-examine how the phase diagrams change in the opposite limit, when $J$ is decreased below $J_m$. To this end, we produced a set of phase diagrams with different values of $J$. Figure \[fig:BistabilityAll\] shows three representative phase diagrams illustrating this crossover. Notice that the phase diagram above $J_m$ at $J=3.2J_m$ is very similar to figure \[fig:PhaseDiagrams\]C; however, decreasing $J$ below $J_m$ to $J=0.32J_m$ creates a completely incoherent population in which single cells can oscillate in regions that previously contained no oscillations (NO). This is likely due to the fact that once a cell fires, the secreted cAMP takes a very long time to be degraded. During this time, other cells can spontaneously fire. These spiking events are incoherent but still give rise to elevated levels of external cAMP. More interestingly, the transition from the behavior at large degradation rates ($J > J_m$) to small degradation rates ($J<J_m$) happens through an intermediate state with many peculiar data points (the middle row in figure \[fig:BistabilityAll\]A). To ensure that these peculiarities are not simulation artifacts, we looked at some of them in more detail. Figure \[fig:BistabilityAll\]B is a time-course of the whole population for the point corresponding to the white circle in figure \[fig:BistabilityAll\]A. Note that the time-course exhibits burst-like behavior. The system remains in a low-frequency state for some period of time, then stochastically switches to a high-frequency state and, after a few cycles, switches back to the original state. Interestingly, it remains coherent during the whole process. At this point we do not have a conclusive theory as to why this bistability happens. However, a similar behavior has been reported by Schwab [*et al.*]{} [@Schwab2012a], where a population of phase oscillators coupled to an external medium exhibited bistability as the mean oscillator frequency was increased. We suspect that a similar mechanism is also in effect here. Spatial extension of the model produces spiral waves {#sec:SpatialModel} ================================================ As a final step in our modeling approach, we extended equation \[eqn:Model\] to model dense populations of [*Dictyostelium*]{} cells. Here, we restrict ourselves to discussing the biologically realistic case of a logarithmic preprocessing module $I(S)={a \ln\left( 1+{S\over K}\right)}$, though similar results were obtained for other pre-processing modules. To model dense populations, we treat the activator, $A(x,y)$, the repressor, $R(x,y)$, and the extracellular cAMP, $S(x,y)$, as functions of the spatial coordinates $x,y$. Furthermore, we explicitly model the diffusion of the extracellular cAMP. 
This gives rise to a set of reaction-diffusion equations of the form: $$\begin{aligned} \label{eqn:SpatialModel} \frac {dA}{ dt} &= A-\frac{1}{ 3}{A}^3-R+I(S) ,\\\nonumber \frac{dR}{dt}&= \epsilon \left(A - \gamma R+C\right) \\\nonumber \frac{dS}{dt}&=\rho \alpha_0 + \rho D\Theta(A)-JS+\nabla^2S\end{aligned}$$ For simplicity we have not included the noise term, $\eta$, and the input cAMP flow, $\alpha_f$. Furthermore, diffusion coefficient has been set to 1 by absorbing it into the spatial coordinate. We simulated these equations using no-flow boundary conditions and random initial conditions. Figure \[fig:Spatial\] shows a snapshot of activator, $A$, over the whole space. The left column shows the results with initial conditions chosen such that at most one spiral forms (see Appendix \[app:ReactionDiffusion\]). Note that a spiral wave is clearly formed at large values of degradation rate ($J=10$). However, decreasing this parameter while keeping $\rho/J$ constant leads to complete disappearance of the spiral pattern. The right column in figure \[fig:Spatial\] shows the same results with initial conditions that lead to multiple spirals (see Appendix \[app:ReactionDiffusion\]). In this case, a similar disappearance of spiral waves is observed as degradation rate of cAMP is decreased. Disappearance of spiral waves has been observed in [*RegA*]{} mutants[@Sawai2005]. Since [*RegA*]{} intracellularly degrades cAMP, it can be thought of as one contributing factor to degradation rate $J$ in our simplified model. As a result, knocking out this gene could decrease $J$ and have an adverse effect on spiral formation. In this regard, this simple extension of our model is compatible with experiments. Besides models of [*Dictyostelium discoideum*]{}, spiral patterns in excitable media have been observed in many other contexts such as cardiac arrhythmia, neural networks and BZ reactions. In this regard, emergence of spiral patterns in a diffusive excitable medium is not new. However, in the context of [*Dictyostelium discoideum*]{}, a key difference between our model and previous models such as the one proposed by Aranson [*et al*]{} [@Aranson1996] is that in our model only the external medium $S$ can diffuse. Previous models made the biologically unrealistic assumption that the intracellular variables could diffuse and the external medium did not. Discussion ========== During starvation, [*Dictyostelium discoideum*]{} cells secrete periodic spikes of cAMP in response to extracellular cAMP levels and communicate by propagating waves of cAMP across space. We modeled this behavior using a multi-scale modeling approach. We constructed a family of dynamical models that increased in complexity. We started by modeling isolated cells. We then extended to the model to understand spatially-homogenous multicellular populations. Finally, we included the effects of space and diffusion. In our approach, we treated individual cells as noisy excitable systems that receive their input from a preprocessing module which responds to external cAMP concentrations. We coupled these cells through an external medium and studied their oscillations and coherence through phase diagrams. These phase diagrams provided us with a succinct, interpretable representation of our model. Using these diagrams, we found that the complex interplay of multicellularity, stochasticity and signal processing gives rise to a diverse range of phenomena that have been observed experimentally. 
By including space into this model we were able to produce spiral patterns and study them in different regimes. Using phase diagrams, we showed that the crossover from silence to coherent oscillations is noise-driven. In this process, some cells randomly fire, leading to the sudden secretion of cAMP and an increase in the external cAMP levels. This change in extracellular cAMP levels induces other cells in the population to spike, resulting in synchronized oscillations across the population. This behavior emerges from the complex interplay of cellular communication and stochasticity. In this process, each population-level spike consists of early spikers and late spikers, where the former drives the latter. This behavior is reminiscent of ’pacemaker’ cells which are hypothesized as driving forces for synchronization and pattern formation. But unlike traditional models, in our model no cell is intrinsically a pacemaker. Instead, early spikers are picked at random. Thus, noise is crucial to the observed dynamical behavior of cellular populations. To explore the effect of preprocessor we studied a family of models with different preprocessing modules. We found that the choice of a nonlinear function as the preprocessor leads to a new crossover from coherent oscillations to incoherent oscillations that is non-existent if a linear preprocessor was used. Furthermore, we find that the choice of preprocessors can lead to different responses to noise, with distinct signatures that can be inferred from experimental multicellular data. This allows us to confirm that [*Dictyostelium*]{} cells use a logarithmic preprocessor, a claim that has been suggested based on independent single cell experiments [@Sgro2014]. We encountered several interesting behaviors in our model that have implications for other coupled oscillator systems. For example, we found that the nonlinearities in the preprocessor can lead to a mechanism for populations of oscillators to change their frequency. Furthermore we found that slowing the dynamics of the external medium leads to incoherent oscillations. This behavior has been termed ‘dynamic death’ for coupled oscillators [@Schwab2012; @Schwab2012a], and we find that it occurs through a bistable state. Furthermore, in the spatial extension of our model, we observe a similar loss of spiral patterns due to slow dynamics of the medium. This suggests that the concept of dynamic death can be extended to spatially heterogeneous populations. Synchronization and formation of spiral waves provides a spatial cue for [*Dictyostelium*]{} cells, which guides them toward a common aggregation center. As a result, dynamic death can be undesirable for a population’s survival. It is well known that in wildtype cells phosphodiesterases (PDE) are constantly secreted intra- and extracellularly to degrade cAMP. We suspect that this mechanism may have evolved to avoid incoherence due to dynamic death. Despite the descriptive and predictive success of our simple model [@Sgro2014] it misses several points that could be the subject of future works. For example, we have treated the preprocessor as a static module. However, a more complete model that describes adaptation needs to include the dynamics of this module. Models that contain a feedforward network [@Takeda2012; @Wang2012; @Xiong2010] seem to be good candidates for this purpose. Furthermore, we have ignored the effect of noise in our spatially extended model. 
It would be interesting to find how noise can affect the random formation of spiral patterns and their stability and explore to what extent a spatially extended model is amenable to the phase-diagrammatic approach proposed here. Finally, it would be interesting to study our model through analytical approaches such as Fokker-Planck equations [@Acebron2004; @Lindner2004] and explore the possibility of new phases of behavior that have been neglected in our study. Acknowledgement =============== We would like to thank Charles Fisher and Alex Lang for useful comments. PM and JN were supported by NIH Grants K25GM086909 (to P.M.). DJS were supported by NIH K25 GM098875-02 and NSF PHY-0957573. The authors were partially supported by NIH Grant K25 GM098875. This work was supported by NIH Grants P50 GM071508 and R01 GM098407, by NSF-DMR 0819860, by an NIH NRSA fellowship (AES), and by Searle Scholar Award 10-SSP-274 (TG). Forward Integration {#app:ForwardIntegration} =================== In all simulations stochastic differential equations (equation ) have been solved using Euler-Maruyama method. The time-step throughout the paper has been $dt=0.005$ unless explicitly stated otherwise. Through trial and error we found that larger time-steps lead to unstable solutions in large parameter regimes and smaller time-steps did not lead to different results. As a result we believe that our choice of time-step produces reliable results. The simulations were started from random initial conditions with $A_i(t=0)$ and $R_i(t=0)$ independently chosen from a Gaussian distribution with mean $0$ and standard deviation $2$ and $S(t=0)$ was set to zero. Although these initial conditions are random and independent for different cells, there still may be correlations between them, meaning that the cells may be partially in phase. To avoid such correlations affecting coherence among cells, we ran each simulation for some waiting time $t_{wt}$ and then continued the simulation for an extra run time $t_{rt}$. The results during the run time are what is shown throughout the paper, while the results during the waiting time were discarded. We found that each simulation required a different amount of waiting time. This was especially dramatic for the case with a very small noise (the inset in figure \[fig:PhaseDiagrams\]C) where an extremely long waiting time was required. To determine the proper waiting time, we ran each simulation for multiple waiting times and compared the results. Usually when waiting time was too short we could see patterns of ‘bleeding’ in the phase diagram that could be avoided at longer waiting times. By comparing these results in each figure we established a waiting time $t_{wt}$ during which the system could ’forget’ its initial conditions. Firing Rate {#app:FiringRate} =========== To find the firing rate, $\mathcal{R}(A)$, of a signal $A(t)$ we thresholded the signal compared to zero and counted the number of positive ‘islands’. By positive ’islands’ we mean continuous positive intervals (i.e. $A(t)>0$) that are flanked by negative intervals (i.e. $A(t)\le0$). Such a definition would produce a correct measure of firing rate, if the signal was smooth. However, due to the noisy nature of the simulations, spurious small islands may form, which could be mistakenly counted as extra peaks. To avoid these undesirable counts we will filter out any small ‘islands’. 
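Before turning to that filtering procedure, the forward integration described above can be summarized in a minimal sketch. The Python code below is an illustration only and is not the code used to generate the figures; it assumes the logarithmic preprocessor and the parameter values quoted in the captions of figures \[fig:PhaseDiagrams\]C and \[fig:rhoJHeatMap\]C.

```python
import numpy as np

def simulate_population(N=100, dt=0.005, t_total=1000.0, record_every=10, seed=0,
                        eps=0.2, gamma=0.5, C=1.0, sigma=0.15,
                        rho=1.0, alpha0=1.0, alphaf=0.0, D=1000.0, J=10.0,
                        a=0.058, K=1e-5):
    """Euler-Maruyama integration of equation [eqn:Model] with the
    logarithmic preprocessor I(S) = a*log(1 + S/K)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_total / dt)
    A = rng.normal(0.0, 2.0, N)   # random initial conditions with standard deviation 2
    R = rng.normal(0.0, 2.0, N)
    S = 0.0                       # extracellular cAMP starts at zero
    A_trace = np.empty((n_steps // record_every, N))
    for step in range(n_steps):
        I = a * np.log(1.0 + S / K)            # pre-processing module
        theta = (A > 0).mean()                 # fraction of currently spiking cells
        noise = sigma * np.sqrt(dt) * rng.normal(size=N)
        A_new = A + dt * (A - A**3 / 3.0 - R + I) + noise
        R_new = R + dt * eps * (A - gamma * R + C)
        S_new = S + dt * (alphaf + rho * alpha0 + rho * D * theta - J * S)
        A, R, S = A_new, R_new, S_new
        if step % record_every == 0:
            A_trace[step // record_every] = A
    # The initial portion of A_trace (the waiting time t_wt) should be discarded
    # before computing order parameters, as described above.
    return A_trace
```

The small-'island' filtering used to extract firing rates from such traces is described next.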
The procedure is as follows: We first threshold the signal by defining $B(t)$: $$\begin{aligned} B(t)= \left\{ \begin{array}{l l} 1 & \quad A(t)>0\\ 0 & \quad \text{otherwise} \end{array} \right.\end{aligned}$$ To get rid of any noise-induced small ’islands’ in $B(t)$ we pass it through a low-pass filter. This is done by convolving the signal with: $$\begin{aligned} H(t)= \left\{ \begin{array}{l l} 1 & \quad 0\le t \le \tau\\ 0 & \quad \text{otherwise} \end{array} \right.\end{aligned}$$ where in all simulations $\tau=1$. To ensure that real peaks are not filtered out, this time-scale is chosen much larger than a typical spurious island width, but much smaller than any real peak width that we ever observed in our simulations. The result of the convolution is then thresholded again to give $$\begin{aligned} L(t)= \left\{ \begin{array}{l l} 1 & \quad B(t)*H(t)>0\\ 0 & \quad \text{otherwise} \end{array} \right.\end{aligned}$$ where star stands for convolution. We then count the number of positive ‘islands’ in $L(t)$ which correspond to the number of peaks in $A(t)$. The result is then divided by total time to give the value for firing rate, $\mathcal{R}(A)$. We tested this method on several signals and it was in perfect agreement with counts done manually. Coherence {#app:Coherence} ========= We defined a measure of coherence among a population of oscillators, $\mathcal{F}$, by treating each cell as a phase oscillator. This was done by treating variables $(A,R)$ of each oscillator as Cartesian coordinates and transforming them into polar coordinates. We then adopt the same definition for coherence used by Kuramoto [@Acebron2005]. The definition is such that for a perfectly incoherent system $\mathcal{F}=0$ and for a perfectly coherent system $\mathcal{F}=1$. The mathematical definition of this quantity is as follows: $$\begin{aligned} A_0&=\int_{t_{wt}}^{t_{wt}+t_{rt} }dt{1\over N}\sum_{k=1}^N A_k(t)\\\nonumber R_0&=\int_{t_{wt}}^{t_{wt}+t_{rt} } dt{1\over N}\sum_{k=1}^N R_k(t)\\\nonumber Z_k(t)&=\left(A_k(t)-A_0\right)+\left(R_k(t)-R_0\right)i\equiv r_k(t) e^{i\phi_k(t)}\\\nonumber \mathcal{F}&={1\over t_{rt}}\int_{t_{wt}}^{t_{wt}+t_{rt} } dt{1\over N}\sum_{k=1}^N e^{i\phi_k(t)}\end{aligned}$$ where $t_{wt}$ and $t_{rt}$ are respectively the waiting time and run time of the simulation (see Appendix \[app:ForwardIntegration\]). Figure \[fig:SamplesForCoherence\] provides a pictorial view of how $\mathcal{F}$ corresponds to coherence among a population of cells. It is easy by eye to pick coherence for populations with $\mathcal{F}\gtrsim 0.6$, whereas smaller values seem incoherent. Finally note that for a deterministic silent population this measure is ill-defined and will be equal to $1$. But, since in all of our multicellular simulations noise is present, we instead have $\mathcal{F}\approx 0$ whenever cells are not oscillating. Reaction Diffusion Simulations {#app:ReactionDiffusion} ============================== The spatial simulations were done using Euler method with Neumann boundary conditions. The spatial grid spacing was $\Delta x=0.5$ and time steps were chosen according to Neumann stability criterion, $dt={\Delta x^2\over 8}$. The initial conditions were set by laying a coarser grid of different sizes on top of the simulation box and setting random values for $A$ and $R$ within each cell of the coarse grid. Initially $S$ was set to zero across space. Simulations were run for some period of time until patterns appeared. 
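For reference, a minimal sketch of this scheme is given below (in Python). It uses forward Euler with the grid spacing and time step quoted above, no-flux boundaries, a coarse random initial grid for $A$ and $R$, and the logarithmic preprocessor with the parameter values from figure \[fig:rhoJHeatMap\]C; the Gaussian used for the initial values is an assumption made here for illustration, and this is not the code used to produce the figures.

```python
import numpy as np

def laplacian_neumann(F, dx):
    """Five-point Laplacian with no-flux (Neumann) boundaries via edge padding."""
    Fp = np.pad(F, 1, mode="edge")
    return (Fp[:-2, 1:-1] + Fp[2:, 1:-1] + Fp[1:-1, :-2] + Fp[1:-1, 2:] - 4.0 * F) / dx**2

def simulate_spatial(L=100.0, dx=0.5, t_total=500.0, coarse=2, seed=0,
                     eps=0.2, gamma=0.5, C=1.0, alpha0=1.0, D=1000.0,
                     J=10.0, a=0.058, K=1e-5):
    """Forward-Euler integration of the spatial model, equation [eqn:SpatialModel]."""
    rho = 0.1 * J                  # density chosen as in figure [fig:Spatial]
    dt = dx**2 / 8.0               # time step from the stability criterion quoted above
    n = int(L / dx)
    block = n // coarse
    rng = np.random.default_rng(seed)
    # Coarse-grid random initial conditions for A and R; S starts at zero everywhere.
    A = np.kron(rng.normal(0.0, 2.0, (coarse, coarse)), np.ones((block, block)))
    R = np.kron(rng.normal(0.0, 2.0, (coarse, coarse)), np.ones((block, block)))
    S = np.zeros((n, n))
    for _ in range(int(t_total / dt)):
        I = a * np.log(1.0 + S / K)            # logarithmic preprocessor
        A_new = A + dt * (A - A**3 / 3.0 - R + I)
        R_new = R + dt * eps * (A - gamma * R + C)
        S_new = S + dt * (rho * alpha0 + rho * D * (A > 0) - J * S
                          + laplacian_neumann(S, dx))
        A, R, S = A_new, R_new, S_new
    return A                                    # snapshot of the activator field
```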
The intersection points on the coarse grid, where a single point has four neighbors with different values, serve as the possible seeds for spiral centers. Hence a $2\times 2$ coarse grid leads to at most a single spiral on the center of simulation box (figure \[fig:Spatial\]A) and a $20\times 20$ grid lead to many more spirals (figure \[fig:Spatial\]B). In the latter case at most $19\times 19$ spiral centers can form. However in practice, due to random choice of initial conditions, typical length scale of spirals and topological constraints, this number tends to be much smaller. Single Cell Mechanisms of Frequency Increase {#app:FrequencyIncrease} ============================================ The mechanism introduced in section \[sec:FrequencyIncrease\] is not the only reason for the frequency increase observed in figure \[fig:FrequencyIncrease\]A. There is in fact a single cell frequency change that should not be confused with what has been described here. This single cell effect can be further separated into a deterministic component and a stochastic component. Figure \[fig:SingleCellFreq\] shows the response of a single FHN to an input $I$ with and without noise. Due to the choice of nullclines for our model, an increase in $I(S)$ increases the frequency of the noiseless FHN, once the input crosses the Hopf bifurcation. Furthermore addition of noise smears the bifurcation point and creates a graded increase in frequency of single cells. As a result any flow of cAMP into a population of cells leads to a frequency increase on a single cell level that becomes amplified by the multicellular mechanism described above. ![**Model Schematic -** [**A)**]{} Schematic of the experimental setup. A population of [*Dictyostelium discoideum*]{} cells with density $\rho$ is placed in a microfluidic chamber. cAMP is flown into the chamber with rate $\alpha_f$ and the medium is washed out with rate $J$. The concentration of extracellular cAMP is labeled by $S$. **B)** Schematic of cell model. Extracellular cAMP concentration ($S$) is detected by the cell and preprocessed through the function $I(S)$. The result is fed into an excitable system with internal variables $A$ and $R$. The value of $A$ is then thresholded and amplified by $D$ to produce more cAMP for secretion. Simultaneously cAMP is also being produced with a constant rate $\alpha_0$ and leaks into the extracellular environment. [**C)**]{} An idealized time-course of extracellular cAMP concentration ($S$) is shown in the large $J$ regime where the concentration changes according to a square wave with baseline $S_0$ and amplitude $\Delta S$. We refer to $S_0$ and $\Delta S$ as the background cAMP and firing-induced cAMP, respectively.[]{data-label="fig:Schematic"}](Schematic.pdf){width="0.8\linewidth"} ![**System Phase Diagram -** [**A)**]{} The first three plots from left are phase diagrams of coherence, single cell firing rate and population firing rate as a function of $S_0$ and $\Delta S$, in the large $J$ regime for linear preprocessing. The dashed line corresponds to values of $\Delta S$ and $S_0$ for which $D=2, \alpha_0=1$ and $\alpha_f=0$ with variable $\rho$ and $J$. Parameters are $J=10, \epsilon = 0.2, \gamma= 0.5, C = 1, \sigma= 0.15, dt= 0.005, t_{wt}=1000, t_{rt}=4000, N= 100$ and $I(S)=S$. The rightmost plot is a schematic of the phase diagrams marked with different regions. The regions consist of NO: No Oscillation, CO: Coherent Oscillation, IO: Incoherent Oscillation. 
For easier reference to the different transitions, the following lines have been introduced: SC: Sensitivity Crossover, IC: Incoherent Crossover, CC: Coherent Crossover [**B)**]{} Same plots as in (A) with a Michaelis-Menten preprocessor. Parameters are the same as in (A) with $t_{wt}= 11000$ and $I(S)=\beta S/(S+K_D)$ where $K_D=2.0, \beta=1.5$. The dashed line is plotted for $D=1000, \alpha_0=1000$ and $\alpha_f=0$. [**C)**]{} Same plots as in (A) with a logarithmic preprocessor. The black dots correspond to parameter values chosen in figure \[fig:FrequencyIncrease\]A. The dashed line is plotted for $D=1000, \alpha_0=1$ and $\alpha_f=0$. Parameters are the same as in (A) with $I(S)=a \ln(S/K+1)$ where $a=0.058, K=10^{-5}$. The inset shows the same plots for a noise level 10 times smaller ($\sigma=0.015$), run for a longer waiting time ($t_{wt}=50000$). []{data-label="fig:PhaseDiagrams"}](PhaseDiagrams){width="\linewidth"} ![**Effect of $\rho$ and $J$ -** [**A)**]{} Plot of coherence, single cell firing rate and population firing rate for different values of $\rho$ and $J$ with the linear preprocessor. Parameters same as in figure \[fig:PhaseDiagrams\]A with $\alpha_0=1, D=2, \alpha_f=0$ corresponding to the dashed line in figure \[fig:PhaseDiagrams\]A. The black curve in the coherence graph is where coherence is equal to $0.6$, marking an approximate boundary for the crossover between coherence and incoherence. The dashed line is the leftmost line with constant $J$ that intersects with the black curve. We have called the value of $J$ on this line $J_m$. The inset is the population firing rate as a function of $\rho/J$, showing a data collapse; the data points are taken from the population firing rate heat map. To avoid effects of the small degradation rate, only values with $J>3J_m$ are plotted in the inset. [**B)**]{} Same plot as in (A) with Michaelis-Menten preprocessing. Parameters same as in \[fig:PhaseDiagrams\]B with $\alpha_0=1000, D=1000$ corresponding to the dashed line in figure \[fig:PhaseDiagrams\]B. The inset is plotted for $J>10J_m$. [**C)**]{} Same plot as in (A) with logarithmic preprocessing. Parameters same as in \[fig:PhaseDiagrams\]C with $\alpha_0=1, D=1000$ corresponding to the dashed line in figure \[fig:PhaseDiagrams\]C. The inset is plotted for $J>10J_m$. []{data-label="fig:rhoJHeatMap"}](rhoJHeatMap){width="\linewidth"} ![**Frequency Increase -** [**A)**]{} Time-course of a cell population in response to a step in input flow ($\alpha_f$). Parameters same as in figure \[fig:PhaseDiagrams\]C with $J=10, \rho=1, \alpha_0=0.01, D=9000$. Midway through the simulation, $\alpha_f$ is changed abruptly from $0$ to $100$. [**B)**]{} The blue curve shows the preprocessing function, $I(S)$, for different values of external cAMP, $S$. For two different input values, $S_0$ and $S_0'$, a constant change, $\Delta S$, leads to different changes in $I(S)$ (shown by $\Delta I$ and $\Delta I'$) such that for $S_0'>S_0$ we get $\Delta I'<\Delta I$. Phase portraits corresponding to $\Delta I$ and $\Delta I'$ are shown on the right side, showing a smaller distance between the two nullclines in the latter case and a consequently shorter trajectory over a period of oscillation. 
The trajectory of the system alternates between two cubic nullclines (red curves), leading to an effectively longer trajectory for larger $\Delta I$.[]{data-label="fig:FrequencyIncrease"}](FreqIncrease){width="\linewidth"} ![ **Small $J$ Regime -** [**A)**]{} A raster plot of $A$ as a function of time for degradation rate ($J$) greater and smaller than $J_m$, showing a crossover to incoherence as $J$ is decreased. Each row in the raster plot corresponds to the time-course of the activator of one cell within the population. Parameters same as in figure \[fig:PhaseDiagrams\]C with $\rho=1, \alpha_f=0$ [**B)**]{} Plot of $J_m$ as a function of single cell firing rate. The firing rate is changed by changing $\epsilon$ while keeping all other parameters the same as in part (A). The inset shows how the single cell firing rate changes as a function of $\epsilon$. Parameters same as in part A. []{data-label="fig:SmallJ"}](SmallJ){width="\linewidth"} ![ **Bistability at Crossover to Small $J$ -** [**A)**]{} Plot of coherence, single cell firing rate and population firing rate as a function of $\log_{10}(S_0)$ and $\log_{10}(\Delta S)$ for three different values of the degradation rate $J$. The white circle corresponds to one point on the phase diagram with $J=1.6J_m$ for which a time-course is shown in figure B. Parameters same as in figure \[fig:PhaseDiagrams\]C [**B)**]{} A section of the time course of the system is shown with parameters chosen corresponding to the white circle in figure A (middle row). Each thin curve with a different color corresponds to the time course of the activator of one cell. For presentation purposes only 10 cells are shown (picked at random). The black curve is the time-course of the average of the activators of all cells. Parameters same as in part C with $J=0.5, \rho=1, D= 10^{1.8}J, \alpha_0=10^{-5.1}J$ []{data-label="fig:BistabilityAll"}](BistabilityAll){width="\linewidth"} ![ **Spatial Simulations -** Simulation results of the spatially extended model at different values of $J$. The colors shown represent different levels of $A$. Each row corresponds to a different value of $J$ and $\rho$ such that $\rho/J$ remains the same. The left column corresponds to initial conditions chosen from a $2 \times2$ coarse grid of random values that is overlaid on the simulation box (see Appendix \[app:ReactionDiffusion\]). The right column shows the same simulation with initial conditions set on a $20 \times20$ coarse grid. Parameters were kept the same as in \[fig:rhoJHeatMap\]C with $\rho=0.1J$. Simulations were done on a $100\times100$ box with grid spacing $\Delta x=0.5$ and time steps according to $dt={\Delta x^2\over 8}$. []{data-label="fig:Spatial"}](Spirals){width="\linewidth"} ![ **SI - Phase Diagram with Large J -** Same simulation as in figure \[fig:PhaseDiagrams\]A with a larger degradation rate ($J=100$). All other parameters were kept the same as in figure \[fig:PhaseDiagrams\]A. []{data-label="fig:Alpha0DPhaseDiagJ100"}](Alpha0DPhaseDiagJ100){width="\linewidth"} ![**SI - Single Cell Firing Rate -** Firing rate as a function of constant input $I$ for noisy ($\sigma=0.15$) and noiseless ($\sigma=0$) FHN. Parameters same as in figure \[fig:PhaseDiagrams\]C.[]{data-label="fig:SingleCellFreq"}](SingleCellFreq){width="0.6\linewidth"} ![ **SI - Samples for Coherence -** Raster plots of a population of 100 oscillating cells are shown as a function of time for different levels of coherence $\mathcal{F}$. 
From this figure it can be seen that $\mathcal{F}\approx0.6$ is a reasonable threshold for separating coherence from incoherence. []{data-label="fig:SamplesForCoherence"}](SamplesForCoherence){width="\linewidth"} ![ **SI - Boundaries -** The boundary of the coherence heat map is plotted for different values of $\epsilon$. Each curve is plotted similarly to the black curve in figure \[fig:rhoJHeatMap\]C. Parameters same as in figure \[fig:rhoJHeatMap\]C. []{data-label="fig:Boundaries"}](Boundaries){width="0.6\linewidth"}
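For concreteness, the firing-rate (peak-counting) and coherence measures defined in Appendices \[app:Coherence\] and above can be summarised in the following sketch. This is Python-style pseudocode with our own array conventions (cells along the first axis, time along the second); it illustrates the procedure rather than reproducing the code used for the figures, and the modulus in the coherence is taken so that $0\le\mathcal{F}\le 1$.

```python
import numpy as np

def firing_rate(A, dt, tau=1.0):
    """Count peaks of a single-cell trace A(t) by thresholding, filtering and island counting."""
    B = (A > 0).astype(float)                       # threshold the signal
    H = np.ones(int(round(tau / dt)))               # box kernel of width tau
    L = np.concatenate(([False], np.convolve(B, H, mode="same") > 0))
    islands = np.count_nonzero(L[1:] & ~L[:-1])     # number of 0 -> 1 transitions
    return islands / (len(A) * dt)                  # peaks per unit time

def coherence(A, R, dt, t_wt, t_rt):
    """Kuramoto-style order parameter from (A, R) treated as Cartesian coordinates;
    A and R have shape (N_cells, N_timepoints)."""
    i0, i1 = int(t_wt / dt), int((t_wt + t_rt) / dt)
    A, R = A[:, i0:i1], R[:, i0:i1]
    A0, R0 = A.mean(), R.mean()                     # population- and time-averaged centre
    phi = np.angle((A - A0) + 1j * (R - R0))        # phase of each cell at each time
    return np.abs(np.mean(np.exp(1j * phi)))        # |average of e^{i phi}| over cells and time
```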
--- abstract: | In this work, we have studied the logarithmic entropy corrected holographic dark energy (LECHDE) model with Granda-Oliveros (G-O) IR cutoff. The evolution of dark energy (DE) density $\Omega'_D$, the deceleration parameter, $q$, and equation of state parameter (EoS), $\omega_{\Lambda}$, are calculated. We show that the phantom divide may be crossed by choosing proper model parameters, even in absence of any interaction between dark energy and dark matter. By studying the statefinder diagnostic and $\omega_{\Lambda}-\omega_{\Lambda}^{\prime}$ analysis, the pair parameters $\{r,s\}$ and $(\omega_{\Lambda}-\omega_{\Lambda}^{\prime})$ is calculated for flat GO-LECHDE universe. At present time, the pair $\{r,s\}$ can mimic the $\Lambda$CDM scenario for a value of $\alpha/\beta\simeq 0.87$, which is lower than the corresponding one for observational data ($\alpha/\beta=1.76$) and for Ricci scale ($\alpha/\beta=2$). We find that at present, by taking the various values of ($\alpha/\beta$), the different points in $r-s$ and $(\omega_{\Lambda}-\omega_{\Lambda}^{\prime})$ plans are given. Moreover, in the limiting case for a flat dark dominated universe at infinity ($t\rightarrow \infty$), we calculate $\{r,s\}$ at G-O scale. For Ricci scale ($\alpha = 2$, $\beta = 1$) we obtain $\{r=0,s=2/3\}$. author: - 'A. Khodam-Mohammadi $^{1}$,  Antonio Pasqua $^2$,  M. Malekjani $^1$,  Iuliia Khomenko $^3$,  M. Monshizadeh $^4$' title: 'Statefinder diagnostic of logarithmic entropy corrected holographic dark energy with Granda-Oliveros IR cut-off' --- Introduction ============ It is widely accepted among cosmologists and astrophysicists that our universe is experiencing an accelerated expansion. The evidences of this accelerated expansion are given by numerous and complementary cosmological observations, like the SNIa [@perlmutter; @astier], the CMB anisotropy, observed mainly by WMAP (Wilkinson Microwave Anisotropy Probe) [bennett-09-2003,spergel-09-2003]{}, the Large Scale Structure (LSS) [tegmark,abz1,abz2]{} and X-ray [@allen] experiments. In the framework of standard Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology, a missing energy component with negative pressure (known as Dark Energy (DE)) is the source of this expansion. Careful analysis of cosmological observations, in particular of WMAP data [@bennett-09-2003; @spergel-09-2003; @peiris] indicates that almost 70 percent of the total energy of the universe is occupied by DE, whereas DM occupies almost the rest (the barionic matter represents only a few percent of the total energy density). The contribution of the radiation is practically negligible.The nature of DE is still unknown and many candidates have been proposed in order to describe it (see [@copeland-2006; @Padmanabhan-07-2003; @peebles] and references therein for good reviews).The time-independent cosmological constant $\Lambda $ with equation of state (EoS) parameter $\omega =-1$ is the earliest and simplest DE candidate. However, cosmologists know that $\Lambda $ suffers from two main difficulties: the fine-tuning and the cosmic coincidence problems [@copeland-2006]. 
The former asks why the vacuum energy density is so small (about $10^{-123}$ times smaller than what we observe) [@weinberg] and the latter asks why vacuum energy and DM are nearly equal today (which represents an incredible coincidence if no internal connections between them are present). Alternative candidates for the DE problem are the dynamical DE scenarios with a no longer constant but time-varying $\omega $. It has been shown by observational data analysis of SNe-Ia that time-varying DE models give a better fit compared with a cosmological constant. A good review about the problem of DE, including a survey of some theoretical models, can be found in [@miao]. An important advance in the study of black hole theory and string theory is the suggestion of the so-called holographic principle: according to it, the number of degrees of freedom of a physical system should be finite, it should scale with its bounding area rather than with its volume [@thooft] and it should be constrained by an infrared cut-off [@cohen]. The Holographic DE (HDE), based on the holographic principle proposed by [@fischler], is one of the most interesting DE candidates and it has been widely studied in the literature [@enqvist-02-2005; @shen; @zhangX-08-2005; @zhangX-11-2006; @sheykhi-03-01-2010; @huang-08-2004; @hsu; @guberina-05-2005; @guberina-05-2006; @gong-09-2004; @elizalde-05-2005; @jamil-01-2010; @jamil1; @jamil2; @jamil3; @jamil4; @jamil5; @setare-11-2006; @setare-01-2007; @setare-05-2007; @setare-01-05-2007; @setare-09-2007; @setare-10-2007; @setare-11-2007; @setare-08-2008; @setare-02-2010; @2011setare; @2011khodam; @sheykhi-11-2009]. The HDE model has also been constrained and tested by various astronomical observations [@enqvist-02-2005; @shen; @zhangX-08-2005; @zhangX-07-2007; @feng-02-2005; @kao; @micheletti; @wangY; @zhangX-05-2009] as well as by the anthropic principle [@huang-03-2005]. Applying the holographic principle to cosmology, the upper bound of the entropy contained in the universe can be obtained [@fischler]. Following this line, [@li-12-2004] suggested the following constraint on the energy density: $$\rho _{\Lambda }\leq 3c^{2}M_{p}^{2}L^{-2},$$ where $c$ is a numerical constant, $L$ indicates the IR cut-off radius, $M_{p}=(8\pi G)^{-1/2}\simeq 10^{18}$GeV is the reduced Planck mass ($G$ is the gravitational constant) and the equality sign holds only when the holographic bound is saturated. Obviously, in the derivation of HDE, the black hole entropy (denoted with $S_{BH}$) plays an important role. As is well known, $S_{BH}=A/(4G)$, where $A\approx L^{2}$ is the area of the horizon. However, this entropy-area relation can be modified as [@banerjee-04-2008; @banerjee-06-2008; @banerjee-05-2009]: $$S_{BH}=\frac{A}{4G}+\tilde{\alpha}\log \left( \frac{A}{4G}\right) +\tilde{\beta}, \label{2}$$where $\tilde{\alpha}$ and $\tilde{\beta}$ are dimensionless constants. These corrections can appear in the black hole entropy in Loop Quantum Gravity (LQG). They can also be due to quantum fluctuations, thermal equilibrium fluctuations or mass and charge fluctuations. The quantum corrections provided to the entropy-area relationship lead to curvature corrections in the Einstein-Hilbert action and vice versa [@cai-08-2009; @nojiri-2001; @zhu]. Using the corrected entropy-area relation given in Eq. 
(\[2\]), the energy density $\rho _{\Lambda }$ of the logarithmic entropy-corrected HDE (LECHDE) can be written as [@wei-10-2009]: $$\rho _{\Lambda }=3\alpha M_{p}^{2}L^{-2}+\gamma _{1}L^{-4}\log \left( M_{p}^{2}L^{2}\right) +\gamma _{2}L^{-4}, \label{3}$$where $\gamma _{1}$ and $\gamma _{2}$ are two dimensionless constants. In the limiting case of $\gamma _{1}=\gamma _{2}=0$, Eq. (\[3\]) yields the well-known HDE density. The second and the third terms in Eq. (\[3\]) are due to entropy corrections: since they can be comparable to the first term only when $L$ is very small, the corrections they produce make sense only at the early evolutionary stage of the universe. When the universe becomes large, Eq. (\[3\]) reduces to the ordinary HDE. It is worthwhile to mention that the IR cut-off $L$ plays an important role in the HDE model. By assuming the particle horizon as IR cut-off, accelerated expansion cannot be achieved [@hsu2], while for the Hubble scale, event horizon, apparent horizon and Ricci scale it may be achieved [@sheykhi-03-01-2010; @pavon2; @odintsov; @pavon1; @zimdahl].\ Recently, Granda and Oliveros (G-O) proposed a new IR cut-off for the HDE model, namely ‘new holographic DE’, which includes a term proportional to $\dot{H}$ and one proportional to $H^2$ [@grandaoliveros; @granda2]. In contrast to the HDE based on the event horizon, this model depends on local quantities, avoiding in this way the causality problem.\ The investigation of cosmological quantities such as the EoS parameter $\omega _{\Lambda }$, the deceleration parameter $q$ and the statefinder diagnosis has attracted a great deal of attention in modern cosmology. Since the various DE models give $H>0$ and $q<0$ at the present time, the Hubble and deceleration parameters cannot discriminate between various DE models. A higher-order time derivative of the scale factor is then required. Sahni et al. [@sahni] and Alam et al. [@alam], using the third time derivative of the scale factor $a\left( t \right)$, introduced the statefinder pair {r,s} in order to remove the degeneracy of $H$ and $q$ at the present time. The statefinder pair is given by: $$\begin{aligned} r&=&\frac{\overset{...}{a}}{aH^{3}}, \label{3s1}\\ s&=&\frac{r-1}{3(q-1/2)}. \label{3s}\end{aligned}$$ Many authors have studied the properties of various DE models from the viewpoint of the statefinder diagnostic [@state1; @state2; @state3; @state4; @state5; @state6]. This paper is organized as follows. In Section 2, we describe the physical context we are working in and we derive the EoS parameter $\omega _{\Lambda } $, the deceleration parameter $q$ and $\Omega _{\Lambda }^{\prime }$ for the GO-LECHDE model. In Section 3, the statefinder diagnosis and the $\omega_{\Lambda}-\omega_{\Lambda}^{\prime}$ analysis of this model are investigated. We finish with some concluding remarks. Cosmological properties ======================= The energy density of GO-LECHDE in Planck mass units (i.e. $M_P=1$) is given by $$\begin{aligned} \rho _{\Lambda } =\frac{3}{L_{GO}^{2}}\left[ 1+\frac{1}{3}L_{GO}^{-2}\left( 2\gamma _{1}\log L_{GO}+\gamma _{2}\right) \right] =\frac{3}{L_{GO}^{2}}\Gamma\end{aligned}$$where we defined $\Gamma =1+\frac{1}{3}L_{GO}^{-2}\left( 2\gamma _{1}\log L_{GO}+\gamma _{2}\right) $ for simplicity. 
The Granda-Oliveros IR cutoff is given by [@grandaoliveros; @khodam]: $$L_{GO}=\left( \alpha H^{2}+\beta \dot{H}\right) ^{-1/2}, \label{4}$$where $\alpha $ and $\beta $ are two constants.\ The line element of the FLRW universe is given by: $$ds^{2}=-dt^{2}+a^{2}\left( t\right) \left( \frac{dr^{2}}{1-kr^{2}}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) \right) , \label{7}$$where $t$ is the cosmic time, $a\left( t\right) $ is the dimensionless scale factor (a function of the cosmic time $t$), $r$ is the radial coordinate, $k$ is the curvature parameter which can assume the values $-1,\,0$ and $+1$, yielding, respectively, an open, a flat or a closed FLRW universe, and $\left( \theta ,\varphi \right) $ are the angular coordinates.\ The Friedmann equation for a non-flat universe dominated by DE and DM has the form: $$H^{2}+\frac{k}{a^{2}}=\frac{1}{3}\left( \rho _{\Lambda }+\rho _{m}\right) , \label{8}$$where $\rho _{\Lambda }$ and $\rho _{m}$ are, respectively, the energy densities of DE and DM. We also define the fractional energy densities for DM, curvature and DE, respectively, as: $$\begin{aligned} \Omega _{m} &=&\frac{\rho _{m}}{\rho _{cr}}=\frac{\rho _{m}}{3H^{2}}, \label{9} \\ \Omega _{k} &=&\frac{\rho _{k}}{\rho _{cr}}=\frac{k}{H^{2}a^{2}}, \label{10} \\ \Omega _{\Lambda } &=&\frac{\rho _{\Lambda }}{\rho _{cr}}=\frac{\rho _{\Lambda }}{3H^{2}} \notag \\ &=&L_{GO}^{-2}H^{-2}\Gamma , \label{11}\end{aligned}$$where $\rho _{cr}=3H^{2}$ represents the critical energy density. Recent observations reveal that $\Omega _{k}\cong 0.02$ [@sperge], which supports a closed universe with a small positive curvature. Using the Friedmann equation given in Eq. (\[8\]), Eqs. (\[9\]), ([10]{}) and (\[11\]) yield: $$1+\Omega _{k}=\Omega _{m}+\Omega _{\Lambda }. \label{12}$$In order to preserve the Bianchi identity or the local energy-momentum conservation law, i.e. $\nabla _{\mu }T^{\mu \nu }=0$, the total energy density $\rho _{tot}=\rho _{\Lambda }+\rho _{m}$ must satisfy the following relation: $$\dot{\rho}_{tot}+3H\left( 1+\omega _{tot}\right) \rho _{tot}=0, \label{13}$$where $\omega _{tot}\equiv p_{tot}/\rho _{tot}$ represents the total EoS parameter. In a non-interacting scenario of DE and DM, the energy densities $\rho _{\Lambda }$ and $\rho _{m}$ are conserved separately and the conservation equations assume the following form: $$\begin{aligned} \dot{\rho}_{\Lambda } &+&3H\rho _{\Lambda }\left( 1+\omega _{\Lambda }\right) =0, \label{14} \\ \dot{\rho}_{m} &+&3H\rho _{m}=0. \label{15}\end{aligned}$$The derivative with respect to the cosmic time $t$ of $L_{GO}$ is given by: $$\dot{L}_{GO}=-H^{3}L_{GO}^{3}\left( \alpha \frac{\dot{H}}{H^{2}}+\beta \frac{\ddot{H}}{2H^{3}}\right) . \label{16}$$Using Eq. (\[16\]), the derivative with respect to the cosmic time $t$ of the energy density $\rho _{\Lambda }$ given in Eq. (\[3\]) can be written as: $$\begin{aligned} \dot{\rho}_{\Lambda }&=& 6H^{3}\left( \alpha \frac{\dot{H}}{H^{2}}+\beta \frac{\ddot{H}}{2H^{3}}\right)\times \nonumber\\ &&\left\{1+\frac{1}{3}L_{GO}^{-2}\left[ \gamma _{1}\left( 4\log L-1\right) +2\gamma _{2}\right] \right\}. \label{17}\end{aligned}$$Differentiating the Friedmann equation given in Eq. (\[8\]) with respect to the cosmic time $t$ and using Eqs. 
(\[11\]), (\[12\]), (\[15\]) and (\[17\]), we can write the term $\alpha \frac{\dot{H}}{H^{2}}+\beta \frac{\ddot{H}}{2H^{3}}$ as: $$\alpha \frac{\dot{H}}{H^{2}}+\beta \frac{\ddot{H}}{2H^{3}}=\frac{1+\frac{\dot{H}}{H^{2}}+\left( \frac{u}{2}-1\right) \Omega _{\Lambda }}{\{1+\frac{1}{3}L_{GO}^{-2}\left[ \gamma _{1}\left( 4\log L_{GO}-1\right) +2\gamma _{2}\right] \}}, \label{18}$$where $u=\rho _{m}/\rho _{\Lambda }=\Omega _{m}/\Omega _{\Lambda }=(1+\Omega _{k})/\Omega _{\Lambda }-1$ is the ratio of energy densities of DM and DE. Using the expression of $L_{GO}$ given in Eq. (\[4\]) and the energy density of DE given in Eq. (\[7\]), we obtain that the term $\frac{\dot{H}}{H^2}$ can be written as: $$\frac{\dot{H}}{H^{2}}=\frac{1}{\beta }\left( \frac{\Omega _{\Lambda }}{\Gamma }-\alpha \right) . \label{19}$$Therefore, Eq. (\[17\]) yields: $$\dot{\rho}_{\Lambda }=\frac{6H^{3}\Omega _{\Lambda }}{\beta }\left( \frac{1}{\Gamma }-\frac{\alpha -\beta }{\Omega _{\Lambda }}+\frac{\beta \left( u-2\right) }{2}\right) , \label{20}$$Differentiating the expression of $\Omega _{\Lambda }$ given in Eq. (\[11\]) with respect to the cosmic time $t$ and using the relation $\dot{\Omega}_{\Lambda }=H\Omega _{\Lambda }^{\prime }$, we obtain the evolution of the energy density parameter as follow: $$\Omega _{\Lambda }^{\prime }=\frac{2\Omega _{\Lambda }}{\beta }\left( \frac{1}{\Gamma }-\frac{\alpha -\beta }{\Omega _{\Lambda }}+\frac{\beta u}{2}\right) . \label{21}$$The dot and the prime denote, respectively, the derivative with respect to the cosmic time $t$ and the derivative with respect to $x=\ln a$.Finally, using Eqs. (\[11\]), (\[14\]) and (\[20\]), the EoS parameter $\omega _{\Lambda }$ and the deceleration parameter (defined as $q=-1-\frac{\dot{H}}{H^{2}}$) as functions of $\Omega _{\Lambda }$ and $\Gamma$ are given, respectively, by: $$\omega _{\Lambda }=-\frac{2}{3 \Omega _{\Lambda }}\left[ 1-\frac{\alpha}{\beta}+\frac{\Omega _{\Lambda }}{\beta \Gamma } \right]-\frac{1+u}{3} , \label{22}$$$$q=\left( \frac{\alpha}{\beta}-1 -\frac{\Omega _{\Lambda }}{\beta\Gamma }\right). \label{23}$$We can easily observe that the EoS parameter $\omega_{\Lambda}$ and the deceleration parameter $q$ given, respectively, in Eqs. (\[22\]) and (\[23\]) are related each other by the following relation: $$\omega _{\Lambda }=\frac{2}{3\Omega _{\Lambda }}q-\frac{1+u}{3}. \label{23a}$$Moreover, using Eqs. (\[11\]) and (\[23\]), we can derive that: $$L_{GO}^{-2}H_{GO}^{-2}=\frac{\Omega _{\Lambda }}{\Gamma }=\alpha -\beta -\beta q=\alpha -\beta \left( 1+q\right) . \label{24}$$From Eqs. (\[14\]) and (\[15\]), the evolution of $u$ is governed by: $$u^{\prime }=3u\omega _{\Lambda }. \label{28}$$At Ricci scale, i.e. when $\alpha =2$ and $\beta =1$, Eqs. (\[22\]) and (\[23\]) reduce, respectively, to: $$\omega _{\Lambda }=-\frac{2}{3\Omega _{\Lambda }}\left( \frac{\Omega _{\Lambda }}{\Gamma }-1\right)-\frac{1+u}{3} , \label{25}$$$$q=1-\frac{\Omega _{\Lambda }}{\Gamma }, \label{26}$$and the evolution of the energy density parameter given in Eq. (\[21\]) reduces to: $$\Omega _{\Lambda }^{\prime }=\left[ 2\left( \frac{\Omega _{\Lambda }}{\Gamma}-1\right) \right] +u\Omega _{\Lambda }=-\Omega _{\Lambda }(1+3\omega _{\Lambda }). \label{27}$$By choosing the proper model parameters, it can be easily shown that the equation of state parameter $\omega_{\Lambda }$ given in Eqs. (\[22\]) and (\[25\]), may cross the phantom divide. Moreover, from Eqs. 
(\[23\]) and (\[26\]), we can see that the transition from the deceleration to the acceleration phase can happen for various model parameters.\ In a flat dark dominated universe, i.e. when $\gamma _{1}=\gamma _{2}=0$ or at infinity ($t\rightarrow \infty$), $\Omega _{\Lambda }=1$, $\Omega _{k}=0$ and $u=0$, we find that the Hubble parameter $H$ reduces to: $$H=\frac{\beta }{\alpha -1}\left( \frac{1}{t}\right) . \label{29}$$Moreover, the EoS parameter $\omega _{\Lambda }$ and the deceleration parameter $q$ given in Eqs. (\[22\]) and (\[23\]) reduce, respectively, to: $$\begin{aligned} \omega _{\Lambda }^{\infty}&=&-\frac{2}{3}\left( \frac{1-\alpha }{\beta }\right) -1, \label{30}\\ q^{\infty}&=&\frac{\alpha -1}{\beta }-1. \label{31}\end{aligned}$$Also in this case the phantom wall can be reached for $\alpha \leq 1,~\beta>0$. At the Ricci scale in this limit, Eqs. (\[30\]), (\[31\]) reduce to $$\omega_{\Lambda}^{R,\infty}=\frac{-1}{3},~~~q^{R,\infty}=0,$$ which corresponds to an expanding universe without any acceleration. Statefinder diagnostic ====================== We now want to derive the statefinder parameters $\{r,s\}$ for the GO-LECHDE model in a flat universe.\ The Friedmann equation given in Eq. (\[8\]) yields, after some calculations: $$\frac{\dot{H}}{H^{2}}=-\frac{3}{2}\left( 1+\omega _{\Lambda }\Omega _{\Lambda }\right) . \label{s1}$$Taking the time derivative of Eq. (\[s1\]) and using Eq. (\[21\]), we obtain: $$\frac{\ddot{H}}{H^{3}}=\frac{9}{2}\left[ 1+\omega _{\Lambda }^{2}\Omega _{\Lambda }(1+\Omega )+\frac{7}{3}\omega _{\Lambda }\Omega _{\Lambda }-\frac{1}{3}\omega _{\Lambda }^{\prime }\Omega _{\Lambda }\right] . \label{s2}$$Using the definition of $H$ (i.e. $H=\dot{a}/a$), the statefinder parameter $r$ given in Eq. (\[3s1\]) can be written as: $$r=1+3\frac{\dot{H}}{H^{2}}+\frac{\ddot{H}}{H^{3}}. \label{s3}$$Substituting Eqs. (\[19\]), (\[23\]) and (\[s2\]) into Eqs. (\[s3\]) and (\[3s\]), the pair parameters $\{r,s\}$ can be written as: $$\begin{aligned} r &=&1+6\omega _{\Lambda }\Omega _{\Lambda }+\frac{9}{2}\omega _{\Lambda }^{2}\Omega _{\Lambda }(1+\Omega _{\Lambda })-\frac{3}{2}\omega _{\Lambda }^{\prime }\Omega _{\Lambda }, \label{s4} \\ s &=&\beta \Gamma \Omega _{\Lambda }\left[ \frac{4\omega _{\Lambda }+3\omega _{\Lambda }^{2}(1+\Omega _{\Lambda })-\omega _{\Lambda }^{\prime }}{\Gamma (2\alpha -3\beta )-2\Omega _{\Lambda }}\right] \label{s4-1}.\end{aligned}$$At early times, when $\omega_{\Lambda}\rightarrow 0$, the pair relations (\[s4\]) show that the statefinder parameters tend to $\{r=1,s=0\}$, which coincides with the location of the $\Lambda$CDM fixed point in the $r-s$ plane. Using Eq. (\[22\]), the evolution of the EoS parameter $\omega_{\Lambda}$ can be written as: $$\begin{aligned} \omega _{\Lambda }^{\prime }&=&\frac{2\Omega _{\Lambda }^{\prime }}{3\beta \Omega _{\Lambda }^{2}}\left( \frac{3}{2}\beta -\alpha \right)\notag \\&& +\frac{4}{3\beta \Gamma ^{2}}\left(\frac{L_{GO}^{\prime }}{L_{GO}}\right)\left( 1+\frac{2\gamma _{1}}{3L_{GO}^{2}}-\Gamma \right),\label{s5}\end{aligned}$$where from Eqs. (\[11\]) and (\[16\]), the term $\left(\frac{L_{GO}^{\prime }}{L_{GO}}\right)$ can be calculated as: $$\begin{aligned} \frac{L_{GO}^{\prime }}{L_{GO}} &=&-\frac{\Gamma }{\Omega _{\Lambda }}\left( \alpha \frac{\dot{H}}{H^{2}}+\beta \frac{\ddot{H}}{2H^{3}}\right) \label{s6} \\ &=&\frac{3\Gamma }{2}\left\{ \frac{1+\omega _{\Lambda }}{1+\frac{1}{3}L_{GO}^{-2}\left[ \gamma _{1}\left( 4\log L_{GO}-1\right) +2\gamma _{2}\right] }\right\} . 
\notag\end{aligned}$$At the present epoch of the Universe ($\Omega _{\Lambda }\approx 0.72$, $u\approx 0.4)$, the EoS parameter $\omega _{\Lambda }$ given in Eq. (\[23a\]) reduces to: $$\omega _{\Lambda }\approx 0.93q-0.47.$$Thus, the universe is in an accelerating phase (i.e. $q<0$) if $\omega _{\Lambda }<-0.47$, and the phantom divide $\omega _{\Lambda }=-1$ may be crossed provided that $q\lesssim -0.5$. This condition implies $\frac{\dot{H}}{H^{2}}\gtrsim -0.58$ and, from Eq. (\[24\]), we derive: $$L_{GO-0}^{-2}H_{0}^{-2}\lesssim \alpha -0.42\beta,$$ $$\Omega_{0 \Lambda }(\beta\Gamma _{0})^{-1}\gtrsim \frac{\alpha}{\beta} -0.42 .$$By inserting the above quantities in Eqs. (\[21\]) and (\[s5\]), we have $\omega _{\Lambda }^{\prime }\gtrsim -1.86\left( \alpha /\beta -3/2\right) $, which gives: $$\begin{aligned} r_{0} &\approx &2\left(\frac{\alpha}{\beta}\right)-0.75, \\ s_{0} &\approx &-0.62\left(\frac{\alpha}{\beta}\right)+0.54.\end{aligned}$$ Recently, Wang and Xu [@wangY] have constrained the new HDE model in a non-flat universe using observational data. The best-fit values of $( \alpha ,\beta ) $ with their confidence levels are $\alpha =0.8824_{-0.1163}^{+0.2180}( 1\sigma ) _{-0.1378}^{+0.2213}( 2\sigma ) $ and\ $\beta =0.5016_{-0.0871}^{+0.0973}( 1\sigma ) _{-0.1102}^{+0.1247}( 2\sigma ) $ . Using these values, the pair parameters $\{r,s\}$, at the present epoch, become $\{ r=2.77,s=-0.55\} $, which are far from the $\Lambda $CDM model values (i.e., $\left\{ r=1,s=0\right\} $). Moreover, this shows that $s<0,$ which corresponds to a phantom-like DE. However, in order for these parameters to mimic the $\Lambda $CDM scenario at the present epoch, the ratio $\alpha /\beta $ must be approximately $0.87$, which is lower than the value obtained with observational data.\ At the Ricci scale (i.e., when $\alpha /\beta=2$), at the present time, the pair parameters assume the values $\{r=3.25,s=-0.70\}$. It is worthwhile to mention that by increasing the value of $\alpha /\beta$ from 0.87, the distance from the $\Lambda $CDM fixed point in the $r-s$ diagram becomes larger.\ In the limiting case of $t\rightarrow \infty $ or for the ordinary new HDE ($\gamma _{1}=\gamma _{2}=0,\Gamma =1$), in a flat dark dominated universe ($u=0,\Omega _{\Lambda }=1$), we find that: $$\begin{aligned} r &=&\frac{1}{\beta ^{2}}\left( \alpha -\beta -1\right) \left( 2\alpha -\beta -4\right) \label{io3}, \\ s &=&\frac{2\left( 2\alpha ^{2}-3\beta \alpha +5\beta -6\alpha +4\right) }{3\beta \left( 2\alpha -3\beta -2\right) }\label{io4}.\end{aligned}$$At the Ricci scale ($\alpha =2,\beta =1$), Eqs. (\[io3\]) and (\[io4\]) reduce, respectively, to :$$\begin{aligned} r=0,~~s=\frac{2}{3}.\end{aligned}$$ Moreover, the $\omega-\omega^{\prime}$ analysis is another tool to distinguish between the different models of DE [@wei2007]. In this analysis the standard $\Lambda$CDM model corresponds to the fixed point $(\omega_{\Lambda}=-1,\omega_{\Lambda}^{\prime}=0)$. At the present time, for $\alpha/\beta=0.87$, which corresponds to the $\Lambda$CDM fixed point in the $r-s$ diagram, we find $(\omega_{\Lambda}=-1,\omega_{\Lambda}^{\prime}=1.17)$; for the observational value ($\alpha/\beta=1.76$), we find $(\omega_{\Lambda}=-1,\omega_{\Lambda}^{\prime}=-0.48)$, and for the Ricci scale $(\omega_{\Lambda}=-1,\omega_{\Lambda}^{\prime}=-0.93)$. Therefore we see that $\omega_{\Lambda}^{\prime}$ becomes smaller for higher values of $\alpha/\beta$ at present. 
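As a quick numerical cross-check of the values quoted above, the approximate present-epoch relations for $r_0$ and $s_0$ and the limiting expressions of Eqs. (\[io3\])-(\[io4\]) can be evaluated directly. The short Python sketch below is only a convenience for reproducing these numbers; the coefficients are those derived in the text.

```python
# present-epoch approximations derived above:
#   r0 ~= 2*(alpha/beta) - 0.75,   s0 ~= -0.62*(alpha/beta) + 0.54
def statefinder_present(ab):
    return 2.0 * ab - 0.75, -0.62 * ab + 0.54

# asymptotic (flat, dark-dominated, gamma_1 = gamma_2 = 0) expressions, Eqs. (io3)-(io4)
def statefinder_infinity(alpha, beta):
    r = (alpha - beta - 1.0) * (2.0 * alpha - beta - 4.0) / beta**2
    s = 2.0 * (2.0 * alpha**2 - 3.0 * beta * alpha + 5.0 * beta - 6.0 * alpha + 4.0) \
        / (3.0 * beta * (2.0 * alpha - 3.0 * beta - 2.0))
    return r, s

print(statefinder_present(1.76))       # observational best fit: ~(2.77, -0.55)
print(statefinder_present(2.0))        # Ricci scale:            ~(3.25, -0.70)
print(statefinder_present(0.87))       # ~LambdaCDM point:       ~(1, 0)
print(statefinder_infinity(2.0, 1.0))  # Ricci scale at t -> infinity: (0, 2/3)
```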
Conclusion ========== In this paper, we have extended the work of Granda and Oliveros [@grandaoliveros] to the logarithmic entropy corrected HDE (LECHDE) model. This model arises from the corrected black hole entropy, which may originate in the entanglement of quantum fields between the inside and the outside of the horizon. We obtained the evolution of the energy density parameter $\Omega _{\Lambda }^{\prime }$, the deceleration parameter $q$ and the EoS parameter $\omega_{\Lambda} $ of the new LECHDE model for a non-flat universe. We saw that, by choosing proper model parameters, the equation of state parameter $\omega _{\Lambda }$ may cross the phantom divide and the transition from the deceleration to the acceleration phase can occur. Finally, we studied the GO-LECHDE model from the viewpoint of the statefinder diagnostic and the $\omega_{\Lambda}-\omega_{\Lambda}^{\prime}$ analysis, which is a crucial tool for discriminating different DE models. Also, the present value of $\{r, s\}$ can be viewed as a discriminator for testing different DE models if it can be extracted from precise observational data in a model-independent way. At the present time, when $\omega_{\Lambda}$ remains around the phantom wall, $\omega_{\Lambda}\approx -1$, and our universe evolves in an accelerating phase, the pair values of $\{ r,s \}$ were calculated in terms of the model parameters $\alpha, \beta$. By using the observational data obtained by Wang and Xu [@wangY], where $\alpha/\beta=1.76$, we obtained $\{ r=2.77 ,s=-0.55\}$. For the Ricci scale, which has $\alpha/\beta=2$, the pair assumes the values $\{ r=3.25,s=-0.7 \}$. Also, choosing $\alpha/\beta=0.87$, we found $\{ r=1,s=0 \}$, which corresponds to the $\Lambda$CDM scenario. We showed that increasing the value of $\alpha/ \beta$ increases the distance from the $\Lambda$CDM fixed point. In the limiting case, at infinity, for a flat dark dominated universe at the Ricci scale, we found $\{ r=0,s=2/3 \}$, which corresponds to an expanding universe without any acceleration ($q=0$). In the $\omega_{\Lambda}-\omega_{\Lambda}^{\prime}$ analysis at the present time, we found that higher values of $\alpha/\beta$ yield smaller values of $\omega_{\Lambda}^{\prime}$.\ In this model the statefinder pair is determined by the parameters $\alpha,~\beta,~\gamma_1,~\gamma_2$. These parameters would be obtained by confronting the model with cosmic observational data. Given the wide range of cosmological data available, in the future we expect to further constrain our model parameters and test the viability of our model. [200]{} Abazajian, K., Adelman-McCarthy, J. K., Ag[ü]{}eros, M. A., et al.: Astron. J., **128**, 502 (2004) Abazajian, K., Adelman-McCarthy, J. K., Ag[ü]{}eros, M. A., et al.: Astron. J., **129**, 1755 (2005) Alam, U., Sahni, V., Saini, T.D., Starobinsky, A.A.: Mon. Not. R. Astron. Soc. **344**, 1057 (2003) Allen, S. W., Schmidt, R. W., Ebeling, H., Fabian, A. C., van Speybroeck, L. 2004: Mon. Not. R. Astron. Soc., **353**, 457 (2004) Astier, P., Guy, J., Regnault, N., et al.: Astron. astrophys., **447**, 31, (2006) Banerjee, R., Majhi, B. R.: Physics Letters B, **662**, 62 (2008a) Banerjee, R., Ranjan Majhi, B.: Journal of High Energy Physics, **6**, 95 (2008b) Banerjee, R., Modak, S. K.: Journal of High Energy Physics, **5**, 63 (2009) Bennett, C. L., Hill, R. S., Hinshaw, G., et al.: Astrophys. J. Ser., **148**, 97 (2003) Cai, R.-G., Cao, L.-M., Hu, Y.-P. 2009: Classical and Quantum Gravity, **26**, 155018 (2009) Cohen, A. G., Kaplan, D. B., Nelson, A. 
E.: Physical Review Letters, **82**, 4971 (1999) Copeland, E. J., Sami, M., Tsujikawa, S.: International Journal of Modern Physics D, **15**, 1753 (2006) Duran, I., Pavon, D.: Phys. Rev. D **83**, 023504 (2011). Elizalde, E., Nojiri, S., Odintsov, S. D., Wang, P.: Phys. Rev. D, **71**, 103504 (2005) Enqvist, K., Hannestad, S., Sloth, M. S.: J. Cosmol. Astropart. Phys., **2**, 4 (2005) Feng, B., Wang, X., Zhang, X.: Physics Letters B, **607**, 35 (2005) Fischler, W., Susskind, L.: arXiv:hep-th/9806039 Gong, Y.: Phys. Rev. D, **70**, 064029 (2004) Granda, L. N., Oliveros, A.: Phys. Lett. B **669**, 275 (2008) Granda, L. N., Oliveros, A.: Physics Letters B, 671, 199 (2009) Guberina, B., Horvat, R., Stefancic, H.: J. Cosmol. Astropart. Phys., **5**, 1 (2005) Guberina, B., Horvat, R., Nikoli[ć]{}, H.: Physics Letters B, **636**, 80 (2006) Hsu, S. D. H.: Physics Letters B, **594**, 13 (2004) Hsu, S. D. H.: Phys. Lett. B **669**, 275 (2008). Huang, Q.-G., Li, M.: J. Cosmol. Astropart. Phys., **8**, 13 (2004) Huang, Q.-G., Li, M.: J. Cosmol. Astropart. Phys. **3**, 1 (2005) Jamil, M., Farooq, M. U.: International Journal of Theoretical Physics, **49**, 42 (2010a) Jamil, M., Farooq, M. U.: J. Cosmol. Astropart. Phys. **03**, 001 (2010b) Kao, H.-C., Lee, W.-L., Lin, F.-L. (2005): Phys. Rev. D, **71**, 123518 (2005) Karami, K., Khaledian,M. S. and Jamil, M.: Phys. Scr.**83**, 025901 (2011) Khodam-Mohammadi, A., Malekjani, M.: Communications in Theoretical Physics, **55**, 942 (2011a) Khodam-Mohammadi, A.: Modern Physics Letters A, **26**, 2487 (2011) Khodam-Mohammadi, A., Malekjani, M.: Astrophys. Space Sci. **331**, 265 (2011b) Li, M.: Physics Letters B, **603**, 1 (2004) Li, M., Li, X.-D., Wang, S., Wang, Y.: Communications in Theoretical Physics, **56**, 525 (2011) Malekjani, M., Khodam-Mohammadi, A., Int. J. Mod. Phys. D**19**, 1857 (2010) Malekjani, M., Khodam-Mohammadi, A., Nazari-pooya, N.: Astrophys. Space Sci. **332**, 515 (2011a) Malekjani, M., Khodam-Mohammadi, A., Nazari-pooya, N.: Astrophys. Space Sci. **334**, 193 (2011b) Malekjani, M., Khodam-Mohammadi, A.: Int. J. Theor. Phys. **51**, 3141 (2012) Malekjani, M., Khodam-Mohammadi, A.: Astrophys. Space Sci.**343**, 451461 (2013) Micheletti, S. M. R.: J. Cosmol. Astropart. Phys. **4**, 9 (2010) Nojiri, S., Odintsov, S. D.: International Journal of Modern Physics A, **16**, 3273 (2001) Nojiri, S. and Odintsov, D. S.: Gen. Rel. Grav. **38**, 1285 (2006). Padmanabhan, T.: Phys. Rep., **380**, 235 (2003) Pasqua, A., et al.: Astrophys. space Sci. **340**, 199 (2012) Pavon, D. and Zimdahl, W.: Phys. Lett. B **628**, 206 (2005). Peebles, P. J., Ratra, B.: Reviews of Modern Physics, **75**, 559 (2003) Peiris, H. V., Komatsu, E., Verde, L., et al.: Astrophys. J. Suppl. Ser., **148**, 213 (2003) Perlmutter, S., Aldering, G., Goldhaber, G., et al.: Astrophys. J., 517, 565 (1999) Sahni, V., Shtanov, Y.: J. Cosmol. Astropart. Phys., **11**, 14 (2003) Setare, M. R., Jamil, M.: Europhys.Lett.**92**, 49003 (2010a) Setare, M. R.: Physics Letters B, **642**, 421 (2006) Setare, M. R.: Physics Letters B, **644**, 99 (2007a) Setare, M. R.: European Physical Journal C, **50**, 991 (2007b) Setare, M. R.: Physics Letters B, **648**, 329 (2007c) Setare, M. R.: Physics Letters B, **653**, 116 (2007d) Setare, M.: Physics Letters B, **654**, 1 (2007e) Setare, M. R.: European Physical Journal C, **52**, 689 (2007f) Setare, M. R., Vagenas, E. C.: Physics Letters B, **666**, 111 (2008) Setare, M. R., Jamil, M.: J. Cosmol. Astropart. Phys., **2**, 10 (2010b) Setare, M. 
R., Jamil, M.: General Relativity and Gravitation, **43**, 293 (2011) Shen, J., Wang, B., Abdalla, E., Su, R.-K.: Physics Letters B, **609**, 200 (2005) Sheykhi, A.: Physics Letters B, **681**, 205 (2009) Sheykhi, A.: Classical and Quantum Gravity, **27**, 025007 (2010) Sheykhi, A., et al., Gen. Relativ. Gravit.**44**, 623 (2012) Spergel, D. N., Verde, L., Peiris, H. V., et al.: Astrophys. J. Ser., **148**, 175 (2003) Spergel, D. N., Bean, R., Dor[é]{}, O., et al.: Astrophys. J. Ser., **170**, 377 (2007) Tegmark, M., Strauss, M. A., Blanton, M. R., et al.: Phys. Rev. D, **69**, 103501 (2004) ’t Hooft, G. (1993): arXiv:gr-qc/9310026 Wang, Y., Xu, L.: Phys. Rev. D, **81**, 083523 (2010) Wei, H.: Communications in Theoretical Physics, **52**, 743 (2009) Wei, H., Cai, R.G.: Phys. Lett. B **655**, 1 (2007) Weinberg, S.: Reviews of Modern Physics, **61**, 1 (1989) Zhang, X., Wu, F.-Q.: Phys. Rev. D, **72**, 043524 (2005) Zhang, X.: Phys. Rev. D, **74**, 103505 (2006) Zhang, X., Wu, F.-Q.: Phys. Rev. D, **76**, 023502 (2007) Zhang, X.: Phys. Rev. D, **79**, 103509 (2009) Zhu, T., Ren, J.-R.: European Physical Journal C, **62**, 413 (2009) Zimdahl, W., Pavon, D.: Class. Quant. Grav. **26**, 5461 (2007).
--- abstract: 'Optimal control methods for implementing quantum modules with the least amount of relaxative loss are devised to give the best approximations to unitary gates under relaxation. The potential gain of optimal control using relaxation parameters over time-optimal control is explored and exemplified in numerical and in algebraic terms: it is the method of choice to govern quantum systems within subspaces of weak relaxation whenever the drift Hamiltonian would otherwise drive the system through fast decaying modes. In a standard model system generalising decoherence-free subspaces to more realistic scenarios, open[[grape]{}]{}-derived controls realise a [cnot]{} with fidelities beyond $95$% instead of at most $15\%$ for a standard Trotter expansion. As an additional benefit, they require control fields orders of magnitude lower than the bang-bang decouplings of the latter.' author: - 'T. Schulte-Herbr[ü]{}ggen' - 'A. Sp[ö]{}rl' - 'N. Khaneja' - 'S.J. Glaser' bibliography: - 'control21.bib' title: Optimal Control for Generating Quantum Gates in Open Dissipative Systems --- Introduction ============ Using experimentally controllable quantum systems to perform computational tasks or to simulate other quantum systems [@Fey82; @Fey96] is promising: by exploiting quantum coherences, the complexity of a problem may reduce when changing the setting from classical to quantum. Protecting quantum systems against relaxation is therefore tantamount to using coherent superpositions as a resource. To this end, decoherence-free subspaces have been applied [@ZR97LCW98], bang-bang controls [@VKLabc] have been used for decoupling the system from dissipative interaction with the environment, while a quantum Zeno approach [@MisSud77] may be taken to projectively keep the system within the desired subspace [@FacPas02]. Controlling relaxation is both important and demanding [@VioLloyd01; @LW03; @FTP+05; @CHH+06], also in view of fault-tolerant quantum computing [@KBL+01] or dynamic error correction [@KV08]. Implementing quantum gates or quantum modules experimentally is in fact a challenge: one has to fight relaxation while simultaneously steering the quantum system with all its basis states into a linear image of maximal overlap with the target gate. — Recently, we showed how near time-optimal control by [[grape]{}]{}[@GRAPE] takes pioneering realisations from their fidelity-limit to the decoherence-limit [@PRA07]. In spectroscopy, optimal control helps to keep the state in slowly relaxing modes of the Liouville space [@KLG; @XYOFR04; @Poetz06]. In quantum computing, however, the entire basis has to be transformed. For generic relaxation scenarios, this precludes simple adaptation to the entire Liouville space: the gain of going along protected dimensions is outweighed by losses in the orthocomplement. Yet embedding logical qubits as a decoherence-protected subsystem into a larger Liouville space of the encoding physical system raises questions: is the target module reachable within the protected subspace by admissible controls? In this category of settings, the extended gradient algorithm open[[grape]{}]{}turns out to be particularly powerful in giving best approximations to unitary target gates in relaxative quantum systems, thus extending the toolbox of quantum control, see [e.g.]{} [@Lloyd00; @PK02; @GZC03; @OTR04; @GRAPE; @PRA05; @Tarn05; @ST04+06; @MSZ+06; @dAll08]. 
Moreover, building upon a precursor of this work [@PRL_decoh], it has been shown in [@PRL_decoh2] that non-Markovian relaxation models can be treated likewise, provided there is a finite-dimensional embedding such that the embedded system itself ultimately interacts with the environment in a Markovian way. Time-dependent $\Gamma(t)$ have recently also been treated in the Markovian [@Rabitz07; @Rabitz07b] and non-Markovian regime [@Lidar08a]. Here we study model systems that are [*fully controllable*]{} [@SJ72JS; @BW79; @TOSH-Diss; @Alt03], i.e. those in which—neglecting relaxation for the moment—for any initial density operator $\rho$, the entire unitary orbit $\mathcal{U}(\rho):= \{U\rho\, U^{-1}\, |\, U \; {\rm unitary}\}$ can be reached [@AA03] by evolutions under the system Hamiltonian (drift) and the experimentally admissible controls. Moreover, certain tasks can be performed within a subspace, e.g. a subspace protected totally or partially against relaxation explicitly given in the equation of motion. Theory ====== Unitary modules for quantum computation require synthesising a simultaneous linear image of all the basis states spanning the Hilbert space or subspace on which the gates shall act. This generalises the spectroscopic task of transferring the state of a system from a given initial one into maximal overlap with a desired target state. Preliminaries ------------- The control problem of maximising this overlap subject to the dynamics being governed by an equation of motion may be addressed by our algorithm [[grape]{}]{}[@GRAPE]. For state-to-state transfer in spectroscopy, one simply refers to the Hamiltonian equations of motion known as Schr[ö]{}dinger’s equation (for pure states of closed systems represented in Hilbert space) or to Liouville’s equation (for density operators in Liouville space) $$\begin{aligned} \dot{{\ensuremath{| \psi \rangle}{}}} &=& -i H\; {\ensuremath{| \psi \rangle}{}}\\ \dot{\rho} &=& -i\,[H,\rho]\;.\end{aligned}$$ In quantum computation, however, the above have to be lifted to the corresponding operator equations, which is facilitated using the notations ${\operatorname{Ad}}_U(\cdot):= U(\cdot)U^\dagger$ and ${\operatorname{ad}}_H\,(\cdot):= [H,(\cdot)]$ with $U:=e^{-itH}$ obeying $$\label{eqn:exp_ad} e^{-it{\operatorname{ad}}_H}(\cdot) = {\operatorname{Ad}}_U(\cdot) $$ and using ‘$\circ$’ for the composition of maps in $$\begin{aligned} \dot{U} &=& -i H\; U\\ \tfrac{\rm d}{{\rm d} t}\, {\rm Ad}_U &=& -i {\operatorname{ad}}_H\;\circ\;{\operatorname{Ad}}_U\;.\end{aligned}$$ These operator equations of motion occur in two scenarios for realising quantum gates or modules $U(T)$ with maximum trace fidelities: The normalised quality function (setting $N:=2^n$ for an $n$-qubit system henceforth) $$f' := \tfrac{1}{N}\, {\operatorname{Re}}{\operatorname{tr}}\{U_{\rm target}^\dagger U(T)\}$$ covers the case where overall global phases shall be respected, whereas if a global phase is immaterial [@PRA05] (while the fixed phase relation between the matrix columns is kept, as opposed to ref. [@Tesch04]), the quality function $$f := \tfrac{1}{N}\, {\operatorname{Re}}{\operatorname{tr}}\{{\operatorname{Ad}}_{U_{\rm target}}^\dagger {\operatorname{Ad}}_{U(T)}^{\phantom{\dagger}}\} = \big| f' \big|^2$$ applies. 
The latter identity is most easily seen [@PRA05] in the so-called $vec$-representation [@HJ2] of $\rho$ where one gets the conjugation superoperator ${\operatorname{Ad}}_{U} = \bar{U}\otimes U$ (with $\bar U$ denoting the complex conjugate) and the commutator superoperator ${\operatorname{ad}}_{H} = {\ensuremath{{\rm 1 \negthickspace l}{}}}\otimes H - H^t\otimes{\ensuremath{{\rm 1 \negthickspace l}{}}}$. Open [[grape]{}]{} ------------------ Likewise, under relaxation introduced by the operator $\Gamma$ (which may, e.g., take GKS-Lindblad form), the respective Master equations for state transfer [@Alt03] and its lift for gate synthesis read $$\begin{aligned} \label{eqn:master} \dot{\rho} &=& -(i {\operatorname{ad}}_H \,+\,\Gamma)\; \rho\\ \dot F &=& -(i {\operatorname{ad}}_H \,+\,\Gamma)\;\circ\; F \quad.\label{eqn:super_master}\end{aligned}$$ Again with $N:=2^n$ for an $n$-qubit system, $F$ denotes a [*quantum map*]{} in $GL(N^2)$ as linear image over all basis states of the Liouville space representing the open system. The Lie-semigroup properties of $F(t)$ have recently been elucidated in detail [@DHKS08]: it is important to note that only in the special (and highly unusual) case of $[{\operatorname{ad}}_H \,,\,\Gamma\,]=0$ does the map $F(t)$ boil down to a mere contraction of the unitary conjugation ${\operatorname{Ad}}_{U}$. In general, however, one is faced with an intricate interplay of the respective coherent ($i{\operatorname{ad}}_H$) and incoherent ($\Gamma$) parts of the time evolution: it explores a much richer set of quantum maps than contractions of ${\operatorname{Ad}}_{U}$, as expressed in [@DHKS08] in terms of a $\mathfrak k,\mathfrak p$-decomposition of the generators in $\mathfrak{gl}(N^2,\mathbb C)$ of quantum maps. As will be shown below, it is this interplay that ultimately entails the need for relaxation-optimised control based on the full knowledge of the Master Eqn. (\[eqn:super\_master\]), while in the special case of mere contractions of ${\operatorname{Ad}}_{U}$, tracking maximum qualities against fixed final times (‘top curves’, [*vide infra*]{}, e.g. Fig. \[fig:compare\_all\] (a) upper panel) obtained for $\Gamma = 0$ plus an estimate of the eigenvalues of $\Gamma$ suffices to come up with good guesses of controls. $$\begin{CD} {\rho_0 = \rho_{SE}(0)\otimes \rho_B(0)} @>{\quad{\operatorname{Ad}}_W(t)\quad}>> {\rho(t) = W(t)\rho_0 W^\dagger(t)}\\ @V{\Pi_{SE}}V{{\operatorname{tr}}_B}V @V{\Pi_{SE}}V{{\operatorname{tr}}_B}V \\ {\rho_{SE}(0)}@>{\quad\;F_{SE}(t)\quad\;}>> {\rho_{SE}(t)}\\ @V{\Pi_S}V{{\operatorname{tr}}_E}V @V{\Pi_S}V{{\operatorname{tr}}_E}V \\ {\rho_S(0)} @>{\quad\;\, F_S(t) \quad\;\,}>> {\rho_S(t)} \end{CD}$$ Now for a Markovian Master equation to make sense in terms of physics, it is important that the quantum subsystem of concern is itself coupled to its environment in a way that justifies neglecting any memory effects. This means the characteristic time scales on which the environment correlation functions decay have to be sufficiently shorter than the time scale of the quantum evolution of the subsystem (see, e.g., [@BreuPetr02]). — More precisely, as exemplified in Fig. 
\[fig:CD\], we will assume that either the quantum system itself ($S$) or a finite-dimensional embedding of the system ($SE$) can be separated from the environmental bath ($B$) such that (at least) one of the quantum maps of the reduced system $F_S(t)$ or $F_{SE}(t)$ is Markovian and allows for a description by a completely positive semigroup [@GKS76; @Lind76; @Davies76], if the time evolution for the universal composite of (embedded) system plus bath is unitary. Examples where $F_S(t)$ is Markovian have been given in a precursor [@PRL_decoh] to this study, while a concrete setting of a qubit ($S$) coupled on a non-Markovian scale to a two-level fluctuator ($E$), which in turn interacts in a Markovian way with a bosonic bath ($B$), has been described in detail in [@PRL_decoh2]. Henceforth, for describing the method we will drop the subscript to the quantum map $F(t)$ and tacitly assume we refer to the smallest embedding such that the map is Markovian and governed by Eqn. (\[eqn:super\_master\]). Moreover, if the Hamiltonian is composed of the [*drift term*]{} $H_d$ and [*control terms*]{} $H_j$ with piecewise constant [*control amplitudes*]{} $u_j(t_k)$ for $t_k\in[0,T]$ $$H(t_k) := H_d + \sum_j u_j(t_k) H_j \quad\text{with}\; u_j(t_k)\in \mathcal U \subseteq \mathbb R$$ then Eqn. (\[eqn:super\_master\]) defines a [*bilinear control system*]{}. With these stipulations, the [[grape]{}]{}algorithm can be lifted to the superoperator level in order to cope with open systems by numerically optimising the trace fidelity $$\label{eqn:f_tr} f_{\rm tr}:= {\operatorname{Re}}\,{\operatorname{tr}}\{{\operatorname{Ad}}_{U_{\rm target}}^\dagger F(T)\}$$ for fixed final time $T$. For simplicity, we henceforth assume equal time spacing $\Delta t:= t_k-t_{k-1}$ for all time slots $k=1,2,\dots,M$, so $T=M\cdot\Delta t$. Therefore $F(T)=F_M\cdot F_{M-1}\cdots F_k\cdots F_2\cdot F_1$ with every map taking the form $F_k = \exp\{-(i{\operatorname{ad}}_{H(t_k)} + \Gamma(t_k))\Delta t\}$ leads to the derivatives $$\begin{split} \tfrac{\partial f_{\rm tr}}{\partial u_j(t_k)}& = -{\operatorname{Re}}\ {\operatorname{tr}}\big\{ {\operatorname{Ad}}^{\dagger}_U \cdot F_M \cdot F_{M-1} \cdots F_{k+1}\;\times\\ &\times \big(i {\operatorname{ad}}_{H_j} + \; \tfrac{\partial\Gamma(u_j(t_k))} {\partial u_j(t_k)}\big) F_k \Delta t\times F_{k-1} \cdots F_2\cdot F_1 \big\} \end{split}$$ for the recursive gradient scheme $$\label{eqn:recursion} u_j^{(r+1)}(t_k) = u_j^{(r)}(t_k) + \alpha_r \tfrac{\partial f_{\rm tr}}{\partial u_j(t_k)}\quad,$$ where often the uniform $\Delta t$ is absorbed into the step size $\alpha_r >0$. This gives the update from iteration $r$ to $r+1$ of the control amplitude $u_j$ of control $H_j$ in time slot $t_k$. ### Numerical Setting {#numerical-setting .unnumbered} Numerical open[[grape]{}]{}typically started from some $50$ initial conditions for each fixed final time, then taking some $r=10-30 \times 10^3$ iterations (see Eqn. \[eqn:recursion\]) to arrive at one point of the top curve shown as the upper trace in Fig. \[fig:compare\_all\]. In contrast, for finding time-optimised controls in the closed reference system, we used [[grape]{}]{}for tracking top curves: this is done by performing optimisations with fixed final time, which is then successively decreased so as to give a [*top curve*]{} $g(T)$ of quality against duration of control, a standard procedure used in, e.g., Ref. [@PRA05]. Finding controls for each fixed final time typically started out from some $20$ random initial control sequences; a schematic sketch of the underlying gradient update is given below. 
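The following Python sketch summarises one iteration of the gradient scheme of Eqn. (\[eqn:recursion\]) for a control-independent $\Gamma$ (so that $\partial\Gamma/\partial u_j=0$). Operator dimensions, the fixed step size, and the naive use of full matrix exponentials are simplifications on our part; this illustrates the structure of the update rather than the production code behind the figures.

```python
import numpy as np
from scipy.linalg import expm

def ad(H):
    """Commutator superoperator ad_H = 1 (x) H - H^T (x) 1 in the vec convention."""
    n = H.shape[0]
    return np.kron(np.eye(n), H) - np.kron(H.T, np.eye(n))

def open_grape_step(u, Hd, Hc, Gamma, Ad_target, dt, alpha):
    """One gradient-ascent update of the piecewise-constant controls u[j, k]."""
    J, M = u.shape
    adHc = [ad(H) for H in Hc]
    # propagators F_k for the current controls
    Fs = [expm(-(1j * ad(Hd + sum(u[j, k] * Hc[j] for j in range(J))) + Gamma) * dt)
          for k in range(M)]
    d = Fs[0].shape[0]
    # forward partial products fwd[k] = F_k ... F_1 and backward products bwd[k] = F_M ... F_{k+1}
    fwd = [np.eye(d, dtype=complex)]
    for F in Fs:
        fwd.append(F @ fwd[-1])
    bwd = [np.eye(d, dtype=complex)]
    for F in reversed(Fs):
        bwd.append(bwd[-1] @ F)
    bwd = bwd[::-1]
    grad = np.zeros_like(u)
    for k in range(M):
        for j in range(J):
            # dF_k/du_j ~ -(i ad_{H_j}) F_k dt   (Gamma taken control-independent here)
            grad[j, k] = -np.real(np.trace(
                Ad_target.conj().T @ bwd[k + 1] @ (1j * adHc[j]) @ Fs[k] @ fwd[k])) * dt
    fidelity = np.real(np.trace(Ad_target.conj().T @ fwd[-1]))
    return u + alpha * grad, fidelity
```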
Convergence to one of the points in time (where Fig. \[fig:compare\_all\] shows mean and extremes for a family of $15$ different such optimised control sequences) required some $r=1000$ recursive iterations each. — Numerical experiments were carried out on single workstations with $512$ MHz to $1.2$ GHz clock rates and 512 MB RAM. Clearly, there is no guarantee of finding the global optimum this way, yet the improvements are substantial. Exploring Applications by Model Systems ======================================= By way of example, the purpose of this section is to demonstrate the power of optimal control of open quantum systems as a realistic means of protecting against relaxation. In order to compare the results with the idealised scenarios of ‘decoherence-free subspaces’ and ‘bang-bang decoupling’, we choose two model systems that can partially be treated by algebraic means. Comparing numerical results with analytical ones will thus elucidate the advantages of numerical optimal control over previous approaches. — In order to avoid misunderstandings, however, we should emphasize that our algorithmic approach to controlling open systems (open[[grape]{}]{}) is [*by no means limited*]{} to operating within such predesigned subspaces of weak decoherence: e.g., in Ref. [@PRL_decoh2] we have worked in the full Liouville space of a non-Markovian target system. Yet, not only are subspaces of weak decoherence practically important, they also lend themselves to demonstrating the advantages of relaxation-optimised control in the case of Markovian systems with time-[*in*]{}dependent relaxation operator $\Gamma$, which we focus on in this section. The starting point is the usual encoding of one logical qubit in Bell states of two physical ones $$\label{eqn:BellStates} \begin{split} {{\ensuremath{| 0 \rangle}{}}}_L &:=\tfrac{1}{\sqrt{2}} \{{\ensuremath{| 01 \rangle}{}}+{\ensuremath{| 10 \rangle}{}}\}={\ensuremath{| \psi^+ \rangle}{}}\\ {{\ensuremath{| 1 \rangle}{}}}_L &:=\tfrac{1}{\sqrt{2}} \{{\ensuremath{| 01 \rangle}{}}-{\ensuremath{| 10 \rangle}{}}\}={\ensuremath{| \psi^- \rangle}{}} \end{split}$$ Four elements then span a Hermitian operator subspace protected against $T_2$-type relaxation $$\mathcal{B}:={\rm span}_{\mathbb R}\,\{{\ensuremath{| \psi^\pm \rangle \langle \psi^\pm |}{}}\}\quad.$$ This can readily be seen, since for any $\rho\in\mathcal{B}$ $$\label{eqn:zz-protect} \Gamma_0(\rho): = [zz,[zz,\rho]]=0\quad,$$ where henceforth we use the short-hand $zz:=\sigma_z\otimes\sigma_z/2$ and likewise $xx$ as well as ${\ensuremath{{\rm 1 \negthickspace l}{}}}\mu\nu{\ensuremath{{\rm 1 \negthickspace l}{}}}:=\tfrac{1}{2}{\ensuremath{{\rm 1 \negthickspace l}{}}}_2\otimes\sigma_\mu\otimes\sigma_\nu\otimes{\ensuremath{{\rm 1 \negthickspace l}{}}}_2$ for $\mu,\nu\in\{x,y,z,{\ensuremath{{\rm 1 \negthickspace l}{}}}\}$. Interpreting Eqn. 
\[eqn:zz-protect\] as perfect protection against $T_2$-type decoherence is in line with the slow-tumbling limit of the Bloch-Redfield relaxation by the spin tensor $A_{2,(0,0)}:= \tfrac{1}{\sqrt{6}}(\tfrac{3}{2}\,zz - \mathbf{I}_1 \mathbf{I}_2)$ [@EBW87] $$\label{eqn:GammaT2} \Gamma_{T_2}(\rho): = [A_{2,(0,0)}^\dagger,[A_{2,(0,0)}^{\phantom{\dagger}},\rho]] = \tfrac{9}{24}[zz,[zz,\rho]]=0\;.$$ For the sake of being more realistic, the model relaxation superoperator mimicking dipole-dipole relaxation within the two spin pairs in the sense of Bloch-Redfield theory is extended from covering solely $T_2$-type decoherence to mildly including $T_1$ dissipation by taking (for each basis state $\rho$) the sum [@EBW87] $$\label{eqn:GammaT1T2} \Gamma(\rho)\;:= \sum\limits_{m_1,m_2=-1}^1 \big[A_{2,(m_1,m_2)}^\dagger\,,\,\big[A_{2,(m_1,m_2)}^{\phantom{\dagger}}\,,\,\rho\big]\big]\;,$$ in which the zeroth-order tensor $A_{2,(0,0)}\sim zz$ is then scaled 100 times stronger than the new terms. So the resulting model relaxation rate constants finally become $T_2^{-1} : T_1^{-1} = 4.027$ s$^{-1} : 0.024$ s$^{-1} \simeq 170\;:\;1$. Controllability Combined with Protectability against Relaxation --------------------------------------------------------------- In practical applications of a given system, a central problem boils down to [*simultaneously*]{} answering two questions: (i) is the (sub)system fully controllable and (ii) can the (sub)system be decoupled from fast relaxing modes while being steered to the target. It is for answering these questions in algebraic terms that we have chosen the following coupling interactions: if the two physical qubits are coupled by a Heisenberg-XX interaction and the controls take the form of $z$-pulses acting jointly on the two qubits with opposite sign, one obtains the usual fully controllable logical single qubit over $\mathcal{B}$, because $${\langle (z{\ensuremath{{\rm 1 \negthickspace l}{}}}-{\ensuremath{{\rm 1 \negthickspace l}{}}}z), (xx+yy)\rangle}_{\rm Lie}{\ensuremath{\overset{\rm rep}{=}}}\mathfrak{su}(2)\quad, $$ where ${\langle\cdot\rangle}_{\rm Lie}$ denotes the Lie closure under commutation (which here gives $(yx-xy)$ as third generator of $\mathfrak{su}(2)$). ### Model System I {#model-systemi .unnumbered} By coupling two of the above qubit pairs with an Ising-ZZ interaction as in Refs. 
[@LiWu02; @WuLi02; @ZanLloyd04] one gets the standard logical two-spin system serving as our reference [*System I*]{}: it is defined by the drift Hamiltonian $H_{D1}$ and the control Hamiltonians $H_{C1}, H_{C2}$ $$\label{eqn:sys-I} \begin{split} H_{D1} :=& J_{xx}\;(xx{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}+{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}xx + yy{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}+{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}yy) + J_{zz}\,{\ensuremath{{\rm 1 \negthickspace l}{}}}zz {\ensuremath{{\rm 1 \negthickspace l}{}}}\qquad\\ H_{C1} :=& z{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}-{\ensuremath{{\rm 1 \negthickspace l}{}}}z {\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}\\ H_{C2} :=& {\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}z{\ensuremath{{\rm 1 \negthickspace l}{}}}- {\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}z\;,\\[-5mm] \end{split}$$ where the coupling constants are set to $J_{xx} = 2$ Hz and $J_{zz} = 1$ Hz. Hence, over the $T_2$-decoherence-protected subspace spanned by the four-qubit Bell basis $\mathcal{B}\otimes\mathcal{B}$ one obtains a fully controllable logical two-qubit system $${\langle {H_{D1}}, {H_{C1}}, {H_{C2}} \rangle}_{\rm Lie}\,\big|_{\mathcal{B}\otimes\mathcal{B}} {\ensuremath{\overset{\rm rep}{=}}}\mathfrak{su}(4)\quad. $$ As illustrated in Fig. \[fig:automorphism\], in the eigenbasis of $\Gamma$ (of Eqn. \[eqn:GammaT1T2\]) the Hamiltonian superoperators ${\operatorname{ad}}_H$ take block diagonal form, where the first block acts on the Liouville subspace $\mathcal{B}\otimes\mathcal{B}$ spanning the states protected against $T_2$-type relaxation. Thus in more abstract terms (and recalling Eqn. \[eqn:exp\_ad\]), the Hamiltonians of System I restricted to the $T_2$-protected block, $\{{\operatorname{ad}}_{H_{D1}}, {\operatorname{ad}}_{H_{C1}}, {\operatorname{ad}}_{H_{C2}}\}\big|_{\mathcal{B}\otimes\mathcal{B}}$, generate ${\operatorname{Ad}}_{SU(4)}$ as the group of [*inner automorphisms*]{} over the protected states. ![\[fig:automorphism\] (Colour online) In a physical four-qubit system for encoding two logical qubits, the Hamiltonians (in their superoperator representations of ${\operatorname{ad}}_H$) take the form of $256\times 256$ matrices. In the eigenbasis of $\Gamma$ (Eqn. \[eqn:GammaT1T2\]), the drift Hamiltonian ${\operatorname{ad}}_{H_{D1}}$ of [*System I*]{} block diagonalises into slowly relaxing modes (blue) with relaxation rate constants in the interval $[0\, s^{-1}, 0.060\, s^{-1}]$, moderately relaxing modes (magenta) with $[4.01\, s^{-1}, 4.06\, s^{-1}]$, and fast relaxing modes (red) with $[8.02\, s^{-1}, 8.06\, s^{-1}]$. In [*System II*]{} the Hamiltonian ${\operatorname{ad}}_{H_{D1+D2}}$ comprises off-diagonal blocks (empty boxes) that make the protected modes exchange with the fast decaying ones. \[NB: for pure $T_2$ relaxation (Eqn.
\[eqn:GammaT2\]), the relaxation-rate eigenvalues would further degenerate to $0\, s^{-1}$ (‘decoherence-free’), $4\, s^{-1}$ (medium) and $8\, s^{-1}$ (fast) while maintaining the same block structure\].\ ](blockHr.eps){width=".99\columnwidth"}

### Model System II {#model-systemii .unnumbered}

Now, by extending the Ising-ZZ coupling between the two qubit pairs to an isotropic Heisenberg-XXX interaction, one gets what we define as [*System II*]{}. Its drift term with the coupling constants being set to $J_{xx} = 2$ Hz and $J_{xyz} = 1$ Hz reads $$\label{eqn:sys-II} \begin{split} H_{D1+D2}\;\; :=\;\; &J_{xx}\,\big(xx{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}+{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}xx + yy{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}+{\ensuremath{{\rm 1 \negthickspace l}{}}}{\ensuremath{{\rm 1 \negthickspace l}{}}}yy\big)\\ + &J_{xyz}\;\;\,\big({\ensuremath{{\rm 1 \negthickspace l}{}}}xx {\ensuremath{{\rm 1 \negthickspace l}{}}}+ {\ensuremath{{\rm 1 \negthickspace l}{}}}yy {\ensuremath{{\rm 1 \negthickspace l}{}}}+ {\ensuremath{{\rm 1 \negthickspace l}{}}}zz {\ensuremath{{\rm 1 \negthickspace l}{}}}\big) \end{split}$$ and it takes the system out of the decoherence-protected subspace due to the off-diagonal blocks in Fig. \[fig:automorphism\]; so the dynamics finds its Lie closure in a much larger algebra isomorphic to $\mathfrak{so}(12)$, $$\dim {\langle(H_{D1+D2}), H_{C1}, H_{C2} \rangle}_{\rm Lie} = 66\;,$$ of which $\mathfrak{su}(4)$ is but a subalgebra. Note that $e^{-i\pi H_{C\nu}} (H_{D1+D2}) e^{i\pi H_{C\nu}} = H_{D1-D2}$ for either $\nu=1,2$. So invoking Trotter’s formula $$\lim\limits_{n\to\infty} \big(e^{-i(H_{D1+D2})/(2n)} e^{-i(H_{D1-D2})/(2n)}\big)^n = e^{-i H_{D1}}$$ it is easy to see that the dynamics of System II may reduce to the subspace of System I in the limit of infinitely many switchings of the controls $H_{C1}$ or $H_{C2}$ and free evolution under $H_{D1+D2}$. It is in this [*decoupling limit*]{} that System II encodes a fully controllable logical two-qubit system over the then [*dynamically protectable basis states*]{} of $\mathcal{B}\otimes\mathcal{B}$. In the following paragraph we may thus compare the numerical results of decoherence-protection by optimal control with alternative pulse sequences derived by paper and pen exploiting the Trotter limit. As an example we choose the CNOT gate in a logical two-qubit system encoded in the protected four-qubit physical basis $\mathcal{B}\otimes\mathcal{B}$.

Results on Performing Target Operations under Simultaneous Decoupling
----------------------------------------------------------------------

The model systems are completely parameterised by their respective Master equations, i.e. by putting together the Hamiltonian parts of Eqn. (\[eqn:sys-I\]) for System I or Eqn. (\[eqn:sys-II\]) for System II and the relaxative part expressed in Eqn. (\[eqn:GammaT1T2\]). We will thus compare different scenarios of approximating the logical CNOT target gate (${\operatorname{Ad}}_{U_{\rm CNOT}}$) by the respective quantum map $F(T)$ while, at the same time, the logical two-qubit subsystem has to be decoupled from the fast decaying modes in order to remain within a weakly relaxing subspace. This is what makes it a demanding simultaneous optimisation task. — The numerical and analytical results are summarised in Fig. \[fig:compare\_all\]; they come about as follows.
### Comparison of Relaxation-Optimised and Near Time-Optimal Controls

With decoherence-avoiding numerically optimised controls one obtains a fidelity beyond $95\%$, while near time-optimal controls show a broad scattering as soon as relaxation is taken into account: among the family of 15 sequences generated, serendipity may help some of them to reach a quality of $85$ to $90\%$, while others perform as badly as $65\%$. With open[[grape]{}]{} performing about two standard deviations better than the mean obtained without taking relaxation into account, only $2.5 \%$ of near time-optimal control sequences would roughly be expected to reach a fidelity beyond $95\%$ just by chance. Fig. \[fig:compare\_t\_r\] then elucidates how the new decoherence-avoiding controls keep the system almost perfectly within the slowly-relaxing subspace, whereas conventional near time-optimal controls partly sweep through the fast-relaxing subspace, thus leading to inferior quality. \ ![\[fig:compare\_t\_r\] (Colour online) (a) Time evolution of all the protected basis states under a typical time-optimised control of Fig. \[fig:compare\_all\]. Projections into the slowly-relaxing and fast-relaxing parts of the Liouville space are shown. (b) Same for the new decoherence-avoiding controls. [*System II*]{} (see text) then stays almost entirely within the $T_2$-protected subspace. ](QF_a650ms.eps "fig:") ![\[fig:compare\_t\_r\] (Colour online) (a) Time evolution of all the protected basis states under a typical time-optimised control of Fig. \[fig:compare\_all\]. Projections into the slowly-relaxing and fast-relaxing parts of the Liouville space are shown. (b) Same for the new decoherence-avoiding controls. [*System II*]{} (see text) then stays almost entirely within the $T_2$-protected subspace. ](QF_t650ms_inset.eps "fig:") ![image](trotter2.eps){width="1.7\columnwidth"}

### Comparison to Paper-and-Pen Solutions

Algebraic alternatives to numerical methods of optimal control exploit Trotter’s formula for remaining within the slowly-relaxing subspace when realising the target, see, e.g., [@SVB+05]. Though straightforward, they soon become unwieldy as shown in Fig. \[fig:expansions\]. Assuming for the moment that for any evolution under a drift $H_d$ the inverse evolution under $-H_d$ is directly available, the corresponding “naive” expansions take almost 3 times the length of the numerical results, yet require much stronger control fields ($1-17$ kHz instead of $50$ Hz) as shown in Fig. \[fig:compare\_all\]. In practice, however, the inverse is often not immediately reachable, but will require waiting for periodicity. For instance, in the Trotter decomposition of Fig. \[fig:expansions\] (c), the Ising term $H_{ZZ}:={\ensuremath{{\rm 1 \negthickspace l}{}}}zz {\ensuremath{{\rm 1 \negthickspace l}{}}}$ as part of the drift Hamiltonian $H_{D1+D2}$ is also needed with negative sign so that all terms governed by $J_{xyz}$ in Eqn. \[eqn:sys-II\] cancel and only the Heisenberg-XX terms governed by $J_{xx}$ survive. But $H_{ZZ}$ cannot be sign-reversed directly by the $z$ controls in the sense ${\ensuremath{{\rm 1 \negthickspace l}{}}}zz {\ensuremath{{\rm 1 \negthickspace l}{}}}\mapsto - {\ensuremath{{\rm 1 \negthickspace l}{}}}zz {\ensuremath{{\rm 1 \negthickspace l}{}}}$ since it clearly commutes with the $z$ controls. Thus one will have to choose evolution times ($\tau_2$ in Fig. \[fig:expansions\]) long enough to exploit (quasi) periodicity.
However, $H_{D1+D2}$ shows eigenvalues lacking periodicity within practical ranges altogether. Moreover, the non-zero eigenvalues of $H_{D1+D2}$ do not even occur in pairs of opposite sign, hence there is no unitary transform $U: H \mapsto -H = UHU^\dagger$ to reverse them, and [*a fortiori*]{} there is no local control that could do so either [@PRA_inv]. Yet, when shifting the coupling to $J_{xx}=2.23$ Hz to introduce a favourable quasi-periodicity, one obtains almost perfect projection ($f_{{\operatorname{tr}}}\geq 1-10^{-10}$) onto the inverse drift evolution of System II, to wit $U^{-1}:=e^{+i\tfrac{\pi}{4} H_{D_1+D_2}}$ after $3.98$ sec and onto $-U^{-1}$ after $1.99$ sec. Thus the identity ${\operatorname{Ad}}_{(-U^{-1})} = {\operatorname{Ad}}_{(U^{-1})}$ may be exploited to cut the duration for implementing ${\operatorname{Ad}}_{U^{-1}}$ to $1.99$ sec. Yet, even with these facilitations, the total length required for a realistic Trotter decomposition (with an overall trace fidelity of $f_{\rm tr}\geq 94.1$ % in the absence of decoherence) amounts to some $28.5$ sec as shown in Fig. \[fig:compare\_all\]. Moreover, as soon as one includes very mild $T_1$-type processes, the relaxation rate constants in the decoherence-protected subspace are no longer strictly zero (as for pure $T_2$-type relaxation), but cover the interval $[0\, {\rm s}^{-1}, 0.060\, {\rm s}^{-1}]$. Under these realistic conditions, a Trotter expansion gives no more than $15$% fidelity, while the new numerical methods allow for realisations beyond $95$% fidelity in the same setting (even with the original parameter $J_{xx}=2.0$ Hz).

Discussion
==========

In order to extract strategies of how to fight relaxation by means of optimal control, we classify open quantum systems (i) by their dynamics being Markovian or non-Markovian and (ii) by the (Liouville) state space representing logical qubits either directly without encoding or indirectly with one logical qubit being encoded by several physical ones. So the subsequent discussion will lead to assigning different potential gains to different scenarios as summarised in Table \[tab:strategy\]. Before going into them in more detail, recall (from the section on numerical setting) that the [*top curve*]{} $g_0(T)$ shall denote the maximum fidelity against final times $T$ as obtained for the analogous closed quantum system (i.e. setting $\Gamma=0$) by way of numerical optimal control. Moreover, define $T_*$ as the smallest time such that $g_0(T_*)=1 - \varepsilon$, where $\varepsilon$ denotes some error-correction threshold. First (I), consider the simple case of a Markovian quantum system with no encoding between logical and physical qubits, and assume $g_0(T)$ has already been determined. If, as a trivial instance (I.a), one had a uniform decay rate constant $\gamma$ so that $\Gamma=\gamma{\ensuremath{{\rm 1 \negthickspace l}{}}}$, then the fidelity in the presence of relaxation would simply boil down to $f(T) = g_0(T)\cdot e^{-T\cdot\gamma}$. Define $T'_*:={\operatorname{argmax}}\{f(T)\}$ and pick the set of controls leading to $g_0(T'_*)$ calculated in the absence of relaxation for tracking $g_0(T)$. In the simplest setting, they would already be ‘optimal’ without ever having resorted to optimising an explicitly open system. More roughly, the time-optimal controls at $T=T_*$ already provide a good approximation to fighting relaxation if $T_* - T'_*\geq0$ is small, i.e. if $\gamma >0$ is small.
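To make case (I.a) concrete, the following minimal sketch (with a hypothetical top curve $g_0(T)$ and toy parameter values, none of which are taken from the model systems above) determines $T'_*$ on a time grid and compares it with $T_*$:

```python
import numpy as np

# Toy illustration of case (I.a): a hypothetical top curve g0(T) rising towards 1
# (the functional form and all numbers are illustrative assumptions), a uniform
# decay rate gamma, and the resulting fidelity f(T) = g0(T) * exp(-gamma * T).
T = np.linspace(0.01, 5.0, 2000)        # candidate final times in seconds
T_c, gamma, eps = 0.5, 0.3, 0.01        # rise time, decay rate (1/s), threshold

g0 = 1.0 - np.exp(-T / T_c)             # toy closed-system top curve
f = g0 * np.exp(-gamma * T)             # fidelity including uniform relaxation

i_star = np.argmax(g0 >= 1.0 - eps)     # first index with g0(T) >= 1 - eps
i_prime = np.argmax(f)                  # index maximising f(T)
print(f"T_*  = {T[i_star]:.2f} s  with f(T_*)  = {f[i_star]:.4f}")
print(f"T'_* = {T[i_prime]:.2f} s  with f(T'_*) = {f[i_prime]:.4f}")
```

With these toy numbers $T'_* < T_*$, and the controls computed for the closed system at $T'_*$ already realise the maximum of $f$; the smaller $\gamma$, the smaller the difference between the two choices.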
Next consider a Markovian system without coding, where $\Gamma\neq\gamma{\ensuremath{{\rm 1 \negthickspace l}{}}}$ is not fully degenerate (I.b). Let $\{\gamma_j\}$ denote the set of (the real parts of the) eigenvalues of $\Gamma$. Then, by convexity of $\{e^{-\gamma t}\;|\; t, \gamma > 0\}$, the following rough [^1] yet useful limits to the fidelity $f(T)$ obtainable in the open system apply $$g_0(T)\cdot \exp\{- \frac{T}{N^2}\sum_{j=1}^{N^2}\gamma_j\} \lesssim f(T) \lesssim g_0(T)\cdot \frac{1}{N^2} \sum_{j=1}^{N^2} e^{-\gamma_j T}\;.$$ Hence the optimisation task in the open system amounts to approximating the target unitary gate (${\operatorname{Ad}}_{U_{\rm target}}$) by the quantum map $F(T)$ resulting from evolution under the controls, subject to the condition that modes of different decay rate constants $\gamma_j\neq\gamma_k$ are interchanged to the least possible amount during the entire duration $0\leq t \leq T$ of the controls. An application of this strategy, known in NMR spectroscopy as TROSY [@Perv97], makes use of differential line broadening [@Redfield87] and partial cancellation of relaxative contributions. Clearly, unless the eigenvalues $\gamma_j$ significantly disperse, the advantage of optimal control under explicit relaxation will be modest, since the potential gain in this scenario relates to the variance $\sigma^2(\{\gamma_j\})$.

  Category                              Markovian        non-Markovian
  ------------------------------------- ---------------- ----------------------------
  encoding: protected subspace          big              (difficult[^2])
  no encoding: full Liouville space     small–medium     medium–big [@PRL_decoh2]

  : Potential gain of relaxation-optimised control in the different scenarios (Table \[tab:strategy\]).

The situation becomes significantly more rewarding when moving to the category (II) of optimisations restricted to a weakly relaxing (physical) subspace used to encode logical qubits. A focus of this work has been on showing that for Markovian systems encoding logical qubits, the knowledge of the relaxation parameters translates into significant advantages of relaxation-optimised controls over time-optimised ones. This is due to a dual effect: open[[grape]{}]{} readily decouples the encoding subsystem from fast relaxing modes while simultaneously generating a quantum map that (closely) matches the target unitary. Clearly, the more the decay of the subspace differs from that of its embedding, the larger the advantage of relaxation-optimised control becomes. Moreover, as soon as the relaxation-rate constants of the protected subsystem also disperse among themselves, modes of different decay should again only be interchanged to the least amount necessary—thus elucidating the very intricate interplay of simultaneous optimisation tasks that makes them well-suited to numerical strategies. In contrast, in the case of entirely unknown relaxation characteristics, where, e.g., model building and system identification of the relaxative part are precluded or too costly, we have demonstrated that guesses of time-optimal control sequences as obtained from the analogous closed system may—just by chance—cope with relaxation. This comes at the cost of making sure a sufficiently large family of time-optimal controls is ultimately tested in the actual experiment for selecting among many such candidates by trial and error—clearly no more than the second-best choice after optimal control under explicitly known relaxation.
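As a toy numerical illustration of the role of the dispersion $\sigma^2(\{\gamma_j\})$ in case (I.b) above, the following sketch (with an arbitrary illustrative spectrum $\{\gamma_j\}$, not that of the model systems) evaluates the two convexity bounds; the common factor $g_0(T)$ is omitted since it drops out of the comparison:

```python
import numpy as np

# Toy illustration of the bounds in case (I.b): for a given spectrum {gamma_j},
# the gap between exp(-T * mean(gamma_j)) and mean(exp(-gamma_j * T)) widens as
# the spectrum disperses.  All numbers are illustrative assumptions.
T = 1.0                                   # final time in seconds
rng = np.random.default_rng(0)

for spread in (0.0, 0.5, 2.0):            # increasing dispersion of the rates
    gammas = np.abs(1.0 + spread * rng.standard_normal(256))   # toy spectrum (1/s)
    lower = np.exp(-T * gammas.mean())    # exponential of the averaged rate
    upper = np.exp(-T * gammas).mean()    # average of the exponentials
    print(f"var = {gammas.var():5.2f}:  lower ~ {lower:.3f},  "
          f"upper ~ {upper:.3f},  ratio = {upper / lower:.2f}")
```

For a degenerate spectrum the two bounds coincide (case (I.a)), while a strongly dispersed spectrum leaves a wide corridor that relaxation-optimised controls can, at best, exploit.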
In the non-Markovian case, however, it becomes in general very difficult to find a common weakly relaxing subspace for encoding (II.b): there is no Master equation of GKS-Lindblad form, the $\Gamma(t)$ of which could serve as a guideline to finding protected subspaces. Rather, one would have to analyse the corresponding non-Markovian Kraus maps for weakly contracted subspaces allowing for encodings. — However, in non-Markovian scenarios, the pros of relaxation-optimised control already become significant without encoding, as has been demonstrated in [@PRL_decoh2].

Simultaneous Transfer in Spectroscopy {#simultaneous-transfer-in-spectroscopy .unnumbered}
-------------------------------------

Finally, note that the presented algorithm also solves (as a by-product) the problem of [*simultaneous*]{} state-to-state transfer that may be of interest in coherent spectroscopy [@Science98]. While Eqns. \[eqn:super\_master\] and \[eqn:f\_tr\] refer to the full-rank linear image $F$, one may readily project onto the states of concern by the appropriate projector $\Pi$ to obtain the respective dynamics and quality factor of the subsystem $$\begin{aligned} \Pi\;\dot{F} &=& - \Pi\;(i {\operatorname{ad}}_H \,+\,\Gamma)\;\circ\; F\\ f^{(\Pi)}_{\rm tr} &=& \tfrac{1}{{\operatorname{rk}}\Pi}\;{\operatorname{Re}}\,{\operatorname{tr}}\{\Pi^t\;F_{\rm target}^\dagger \Pi\;F(T)^{\phantom{\dagger}}\}\end{aligned}$$ reproducing Eqn. \[eqn:master\] in the limit of $\Pi$ being a rank-$1$ projector. While such rank-$1$ problems under relaxation were treated in [@KLG], the algorithmic setting of open[[grape]{}]{} put forward here allows for projectors of arbitrary rank, e.g., $1\leq {\operatorname{rk}}\Pi \leq N$ for $n$ spin-$\tfrac{1}{2}$ qubits with $N:=2^n$. Clearly, the rank equals the number of orthogonal state-to-state optimisation problems to be solved [*simultaneously*]{}.

Conclusions and Outlook
=======================

We have provided numerical optimal-control tools to systematically find near-optimal approximations to unitary target modules in open quantum systems. The pros of relaxation-optimised controls over time-optimised ones depend on the specific experimental scenario. We have extensively discussed strategies for fighting relaxation in Markovian and non-Markovian settings with and without encoding logical qubits in protected subspaces. Numerical results have been complemented by an algebraic analysis of controllability in protected subspaces under simultaneous decoupling from fast relaxing modes. To complement the account on non-Markovian systems in [@PRL_decoh2], the progress is quantitatively exemplified in a typical Markovian model system of four physical qubits encoding two logical ones: when the Master equation is known, the new method is systematic and significantly superior to near time-optimal realisations, which in turn are but a guess when the relaxation process cannot be quantitatively characterised. In this case, testing a set of $10-20$ such near time-optimal control sequences empirically is required for getting acceptable results with more confidence, yet on the basis of trial and error. As follows from controllability analysis, Trotter-type expansions allow for realisations within slowly-relaxing subspaces in the limit of infinitely many switchings. However, in realistic settings for obtaining inverse interactions, they become so lengthy that they only work in the idealised limit of both $T_2$- and $T_1$-decoherence-free subspaces, but fail as soon as very mild $T_1$-relaxation processes occur.
Optimal control tools like open[[grape]{}]{} are therefore the method of choice in systems with known relaxation parameters. They accomplish decoupling from fast relaxing modes with several orders of magnitude less decoupling power than typical bang-bang controls. Being applicable to spin and pseudo-spin systems, they are anticipated to find broad use for fighting relaxation in practical quantum control. In a wide range of settings the benefit is most prominent when encoding the logical system in a protected subspace of a larger physical system. However, the situation changes upon shifting to a time-varying $\Gamma(t)$ [@Rabitz07], or to more advanced non-Markovian models with $\Gamma\big(u(t)\big)$ depending on time via the control amplitudes $u(t)$ on timescales comparable to the quantum dynamical process. Then the pros of optimal control extend to the entire Liouville space, as shown in [@PRL_decoh2]. In order to fully exploit the power of optimal control of open systems, the challenge is shifted to (i) thoroughly understanding the relaxation mechanisms pertinent to a concrete quantum hardware architecture and (ii) being able to determine its relaxation parameters to sufficient accuracy. This work was presented in part at the conference [pracqsys]{}, Harvard, Aug. 2006. It was supported by the integrated [eu]{} project [qap]{} as well as by [*Deutsche Forschungsgemeinschaft*]{}, [dfg]{}, in [sfb]{} 631. Fruitful comments on the e-print version by the respective groups of F. Wilhelm, J. Emerson and R. Laflamme during a stay at [iqc]{}, Waterloo, as well as by B. Whaley on a visit to [uclb]{}, are gratefully acknowledged. [^1]: Note that for these limits to hold, one has to assume that the averaging in the unitary part $g_0(T) = \tfrac{1}{N^2} {\operatorname{tr}}\{{\operatorname{Ad}}_U^\dagger F_U(T)\}$ and in the dissipative part may be performed independently, for which there is no guarantee unless every scalar product contributing to ${\operatorname{tr}}\{{\operatorname{Ad}}_U^\dagger F_U(T)\}$ is (nearly) equal. [^2]: The problem actually lies in finding a viable protected subspace rather than in drawing profit from it.
--- author: - 'Ronen Eldan[^1]' title: Thin Shell Implies Spectral Gap up to Polylog via a Stochastic Localization Scheme ---

Introduction
============

The starting point of this paper is a conjecture by Kannan, Lovász, and Simonovits (in short, the KLS conjecture) about the isoperimetric inequality for convex bodies in $\RR^n$. Roughly speaking, the KLS conjecture asserts that, up to a universal constant, the most efficient way to cut a convex body into two parts is with a hyperplane. To be more precise, given a convex body $K \subset \RR^n$ whose barycenter is at the origin, and a subset $T \subset K$ with $Vol_n(T) = R Vol_n(K)$, the KLS conjecture suggests that $$\label{eqisop} Vol_{n-1}(\partial T \cap Int(K)) \geq R C \inf_{\theta \in \Sph} Vol_{n-1} (K \cap \theta^\perp)$$ for some universal constant $C>0$, whenever $R \leq \frac{1}{2}$. Here, $Vol_{n-1}$ stands for the $(n-1)$-dimensional volume, $\Sph$ is the unit sphere, $\theta^\perp$ is the hyperplane passing through the origin whose normal direction is $\theta$, and $Int(K)$ is the interior of $K$.\ \ The point of this paper is to reduce this conjecture to the case where $T$ is an ellipsoid, up to a logarithmic correction.\ \ In order to give a precise formulation of the KLS conjecture, we begin with some notation. A probability density $\rho: \RR^n \rightarrow [0, \infty)$ is called [*log-concave*]{} if it takes the form $\rho = \exp(-H)$ for a convex function $H: \RR^n \rightarrow \RR \cup \{ \infty \}$. A probability measure is log-concave if it has a log-concave density. The uniform probability measure on a convex body is an example of a log-concave probability measure, as is, say, the gaussian measure in $\RR^n$. A log-concave probability density decays exponentially at infinity, and thus has moments of all orders. For a probability measure $\mu$ on $\RR^n$ with finite second moments, we consider its barycenter $b(\mu) \in \RR^n$ and covariance matrix $Cov(\mu)$ defined by $$b(\mu) = \int_{\RR^n} x d \mu(x), \ \ \ \ \ \ Cov(\mu) = \int_{\RR^n} (x - b(\mu)) \otimes (x - b(\mu)) d \mu(x)$$ where for $x \in \RR^n$ we write $x \otimes x$ for the $n\times n$ matrix $(x_i x_j)_{i,j=1,\ldots,n}$. A log-concave probability measure $\mu$ on $\RR^n$ is [*isotropic*]{} if its barycenter lies at the origin and its covariance matrix is the identity matrix.\ Given a measure $\mu$, the Minkowski boundary measure of a Borel set $A \subset \RR^n$ is defined by $$\mu^+(A) = \liminf_{\eps \to 0^+} \frac{\mu(A_\eps) - \mu(A)}{\eps}$$ where $$A_\eps := \{x \in \RR^n; ~~ \exists y \in A, ~ |x-y| \leq \eps \}$$ is the $\eps$-extension of $A$.\ \ The main point of this paper is to find an upper bound for the constant $G_n$, defined via $$\label{defg} G_n^{-1} := \inf_{\mu} \inf_{A \subset \RR^n} \frac{\mu^+(A)}{\mu(A)}$$ where $\mu$ runs over all isotropic log-concave measures in $\RR^n$ and $A \subset \RR^n$ runs over all Borel sets with $\mu(A) \leq \frac{1}{2}$.\ The constant $G_n$ is known as the optimal inverse *Cheeger* constant. According to a result of Ledoux, [@ledoux], the quantity $G_n^{-2}$ is also equivalent, up to a universal constant, to the optimal *spectral gap* constant of isotropic log-concave measures in $\RR^n$ (see (\[gnpoincare\]) below). For an extensive review of this constant and equivalent formulations, see [@Mil].
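To make the Minkowski boundary measure and the quotient in (\[defg\]) concrete, one may compute them for one specific isotropic log-concave measure, the standard Gaussian on $\RR^n$, and the simplest family of sets, half-spaces $A = \{x_1 \leq t\}$, where everything reduces to a one-dimensional quantity. The following sketch (purely illustrative; the Gaussian is of course not claimed to be an extremal example for $G_n$) checks the finite-difference approximation of $\mu^+(A)$ against the exact value given by the one-dimensional Gaussian density at $t$:

```python
import math

# Illustration of mu^+(A) and the quotient mu^+(A)/mu(A) from (defg) for the
# standard Gaussian measure on R^n and half-spaces A = {x_1 <= t}.  Both
# quantities depend only on the one-dimensional marginal, so the dimension
# plays no role here.  (Purely illustrative; not an extremal example.)
Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))        # Gaussian CDF
density = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

for t in (-1.0, 0.0):
    mu_A = Phi(t)                                  # mu(A) <= 1/2 for t <= 0
    eps = 1e-6
    mu_plus = (Phi(t + eps) - mu_A) / eps          # finite-difference boundary measure
    print(f"t = {t:+.1f}:  mu(A) = {mu_A:.4f},  mu^+(A) ~ {mu_plus:.4f},"
          f"  quotient ~ {mu_plus / mu_A:.4f}  (exact {density(t) / mu_A:.4f})")
```

At $t=0$ the quotient equals $\sqrt{2/\pi}\approx 0.80$; the KLS conjecture asserts that the infimum of such quotients, over all isotropic log-concave measures and all Borel sets of measure at most $\frac{1}{2}$, stays bounded away from zero by a universal constant.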
One property of $G_n$ of particular importance in this note is $$\label{gnpoincare} \frac{1}{C} G_n^2 \leq \sup_{\mu} \sup_{\varphi} \frac{\int \varphi^2 d \mu}{ \int |\nabla \varphi|^2 d \mu} \leq C G_n^2$$ where $\mu$ runs over all isotropic log-concave measures and $\varphi$ runs over all smooth enough functions with $\int \varphi d \mu = 0$, and $C>0$ is some universal constant. In [@KLS], it is conjectured that, There exists a universal constant $C$ such that $G_n < C$ for all $n \in \mathbb{N}$. In this note we will show that, up to a small correction, the above is implied by a seemingly weaker hypothesis.\ Next, we would like to formulate the *thin-shell conjecture*. Let $\sigma_n \geq 0$ satisfy $$\sigma_n^2 = \sup_X \EE \left [(|X| - \sqrt{n})^2 \right ] \label{eq_1356}$$ where the supremum runs over all isotropic, log-concave random vectors $X$ in $\RR^n$. The **thin-shell conjecture** (see Anttila, Ball and Perissinaki [@ABP] and Bobkov and Koldobsky [@BK]) asserts the following: There exists a universal constant $C$ such that, $$\label{thinshell} \sigma_n < C$$ for all $n \in \mathbb{N}$. An application of (\[gnpoincare\]) with the function $\varphi(x) = |x|^2$ shows that the thin-shell conjecture is *weaker* than the KLS conjecture.\ The first nontrivial bound for $\sigma_n$ was given by Klartag in [@K1], who showed that $\sigma_n \leq C \frac{n^{1/2}}{\log (n+1)}$. Several improvements have been introduced around the same method, see e.g. [@K2] and [@fleury]. The best known bound for $\sigma_n$ at the time of this note is due to Guedon and E. Milman, in [@GM], extending previous works of Klartag, Fleury and Paouris, who show that $\sigma_n \leq C n^{\frac{1}{3}}$. The thin-shell conjecture was shown to be true for several specific classes of convex bodies, such as bodies with a symmetry for coordinate reflections (Klartag, [@K_uncond]) and certain *random* bodies (Fleury, [@fleury2]).\ It was found by Sudakov, [@sudakov], that the parameter $\sigma_n$ is highly related to the almost-gaussian behaviour of certain marginals of a convex body, a fact now known as the central limit theorem for convex sets [@K1]. This theorem asserts that most of the one-dimensional marginals of an isotropic, log-concave random vector are approximately gaussian in the sense that the Kolmogorov distance to the standard gaussian distribution of a typical marginal has roughly the order of magnitude of $\sigma_n / \sqrt{n}$. Therefore the conjectured bound (\[thinshell\]) actually concerns the quality of the gaussian approximation to the marginals of high-dimensional log-concave measures.\ \ The first theorem of this note reads,\ \[mainthm1\] There exists a constant $C>0$ such that for all $n \geq 2$, $$G_n \leq C \sqrt{(\log n) \sum_{k=1}^n \frac{\sigma_k^2}{k}}.$$ Note that, in particular, for any constant $\kappa > 0$ such that $\sigma_n \leq n^{\kappa}$ for all $n \in \mathbb{N}$, one gets $G_n \leq C (\sqrt{\log n}) n^{\kappa}$. Under the *thin-shell conjecture*, the theorem gives $G_n < C \log n$. Plugging the results of this paper into the currently best known bound for $\sigma_n$ (proven in [@GM]), $\sigma_n \leq C n^{1/3}$, it follows that $$G_n \leq C n^{1/3} \sqrt{\log n}.$$ This slightly improves the previous bound, $G_n \leq C n^{5/12}$, which is a corollary of [@GM] and [@Bobkov]. In [@EK1], B. Klartag and the author have found a connection between the thin-shell hypothesis and another well-known conjecture related to convex bodies, known as the *hyperplane conjecture*.
The methods of this paper share some common lines with the methods in [@EK1]. In a very recent paper of K.Ball and V.H. Nguyen, [@bn], a connection between the KLS conjecture and the hyperplane conjecture that applies for individual log-concave measures has also been established. They show that the isotropic constant of a log concave measure which attains a spectral gap is bounded by a constant which depends exponentially on the spectral gap. Compare this result with the result in [@Bobkov]. Bobkov’s theorem states that for any log-concave random vector $X$ and any smooth function $\varphi$, one has $$\frac{Var[ \varphi(X)]}{ \EE \left [ |\nabla \varphi(X)|^2 \right] } \leq C \EE[|X|] \sqrt{ Var[|X|]}.$$ Under the *thin-shell* hypothesis, Bobkov’s theorem gives $G_n \leq C n^{1/4}$. The bound in theorem \[mainthm1\] will rely on the following intermediate constant which corresponds to a slightly stronger *thin shell* bound. Define, $$\label{defkn} K_n^2 := \sup_X \sup_{\theta \in \Sph} \sum_{i,j=1}^n \EE[X_i X_j \langle X, \theta \rangle]^2,$$ where the supremum runs over all isotropic log-concave random vectors $X$ in $\RR^n$. Obviously, an equivalent definition of $K_n$ will be, $$K_n := \sup_{\mu} \left | \left |\int_{\RR^n}x_1 x \otimes x d \mu(x) \right | \right |_{HS}$$ where the supremum runs over all isotropic log-concave measures in $\RR^n$. Here, $|| \cdot ||_{HS}$ stands for the Hilbert-Schmidt norm of a matrix.\ \ There is a simple relation between $K_n$ and $\sigma_n$, namely, \[ksigma\] There exists a constant $C>0$ such that for all $n \geq 2$, $$K_n \leq C \sqrt{ \sum_{k=1}^n \frac{\sigma_k^2}{k}}.$$ Theorem \[mainthm1\] will be a consequence of the above lemma along with, \[gk\] There exists a constant $C>0$ such that for all $n \geq 2$, $$G_n \leq C K_n \sqrt{\log n}.$$ The constant $K_n$ satisfies the following bound: $$K_n^{-1} \geq c \inf_{\mu} \inf_{E \subset \RR^n} \frac{\mu^+(E)}{\mu(E)}$$ where $\mu$ runs over all isotropic log-concave measures in $\RR^n$, $E$ runs over all *ellipsoids* with $\mu(E) \leq \frac{1}{2}$ and $c>0$ is some universal constant. This shows that up to the extra factor $\sqrt{ \log n}$, in order to control the minimal possible surface area among *all possible subsets* of measure $\frac{1}{2}$ on the class of isotropic log-concave measures, it is enough to control the surface area of *ellipsoids*. See section 6 below for details. We move on to the second result of this paper, a stability result for the *Brunn-Minkowski Inequality*. The Brunn-Minkowski inequality states, in one of its normalizations, that $$Vol_n \left( \frac{K + T}{2} \right) \geq \sqrt{Vol_n(K) Vol_n(T)} \label{eq_928}$$ for any compact sets $K, T \subset \RR^n$, where $(K + T) / 2 = \{ (x + y) / 2 ; x \in K, y \in T \}$ is half of the Minkowski sum of $K$ and $T$. When $K$ and $T$ are closed convex sets, equality in (\[eq\_928\]) holds if and only if $K$ is a translate of $T$. When there is an almost-equality in (\[eq\_928\]), $K$ and $T$ are almost translates of each other in a certain sense (which varies between different estimates). Estimates of this form, often referred to as *stability estimates*, appear in Diskant [@diskant], in Groemer [@groemer], and in Figalli, Maggi and Pratelli [@FMP1; @FMP2], Segal [@segal]. The result [@FMP2], which is essentially the strongest result in its category, and other existing stability estimates share a common thing: the bounds become worse as the dimension increases. 
In a recent paper, [@EK2], Klartag and the author suggested that the correct bounds might actually become better as the dimension increases, as demonstrated by certain results. The estimates presented here may be viewed as a continuation of this line of research. In order to formulate our result, we define the two constants $$\kappa = \liminf_{n \to \infty} \frac{\log \sigma_n}{\log n}, ~~~ \tau_n = \max\left (1, \max_{1 \leq j \leq n} \frac{\sigma_j}{j^\kappa} \right ),$$ so that $\sigma_n \leq \tau_n n^\kappa$. Note that the thin-shell conjecture implies $\kappa = 0$ and $\tau_n < C$.\ \ Our main estimate reads, \[mainthm2\] For every $\epsilon > 0$ there exists a constant $C(\epsilon)$ such that the following holds: Let $K,T$ be convex bodies whose volume is $1$ and whose barycenters lie at the origin. Suppose that the covariance matrix of the uniform measure on $K$ is equal to $L_K Id$ for a constant $L_K > 0$. Denote, $$\label{defV1} V = Vol_n \left ( \frac{K+T}{2} \right ),$$ and define $$\delta = C(\epsilon) L_K V^5 \tau_n n^{2 (\kappa - \kappa^2) + \epsilon}.$$ Then, $$Vol_n( K_\delta \cap T ) \geq 1 - \epsilon.$$ Some remarks: It follows from theorem 1.4 in [@EK2] that the above estimate is true with $$\delta = C(\epsilon) \sqrt{\tau_n} n^{1/4 + \kappa / 2} V^{5/2}.$$ If $\kappa \geq 1/4$, then the result we prove here is weaker than the one in [@EK2]. However, under the thin-shell hypothesis, the result of this paper becomes stronger, and is in fact tight up to the term $C(\epsilon) n^{\epsilon}$. This tightness is demonstrated, for instance, by taking $K$ and $T$ to be the unit cube and a unit cube truncated by a ball of radius $\sqrt n$ and normalized to be isotropic. Using the bound in [@GM], the theorem gives $$\delta = C(\epsilon) n^{\frac{4}{9} + \epsilon} V^5 L_K.$$ Note that if the assumption (\[defV1\]) is dropped, even if the covariance matrices of $K$ and $T$ are assumed to be equal, the best corresponding bound would be $\delta = C \sqrt n L_K$ as demonstrated, for example, by a cube and a ball. The above bound complements, in some sense, the result proven in [@FMP1], which reads, $$Vol_n((K + x_0) \Delta T)^2 \leq n^7 ( Vol_n((K+T)/2) - 1)$$ for some choice of $x_0$, where $\Delta$ denotes the symmetric difference between the sets. Unlike the result presented in this paper, the result in [@FMP1] gives much more information as the expression $Vol_n((K+T)/2) - 1$ approaches zero. On the other hand, the result presented here already gives some information when $Vol_n((K+T)/2) = 10$. The structure of this paper is as follows: In section 2, we construct a stochastic localization scheme which will be the main ingredient of our proofs. In section 3, we establish a bound for the covariance matrix of the measure throughout the localization process, which will be essential for its applications. In section 4, we prove theorem \[mainthm1\] and in section 5 we prove theorem \[mainthm2\] and its corollaries. In section 6 we tie some loose ends.\ Throughout this note, we use the letters $c, \tilde{c}, c^{\prime}, C, \tilde{C}, C^{\prime}, C''$ to denote positive universal constants, whose value is not necessarily the same in different appearances. Further notation used throughout the text: for a Borel measure $\mu$ on $\RR^n$, $supp(\mu)$ is the minimal closed set of full measure. The Euclidean unit ball is denoted by $B_n = \{ x \in \RR^n ; |x| \leq 1 \}$. Its boundary is denoted by $\Sph$.
We write $\nabla \varphi$ for the gradient of the function $\varphi$, and $\nabla^2 \varphi$ for the Hessian matrix. For a positive semi-definite symmetric matrix $A$, we denote its largest eigenvalue by $||A||_{OP}$. For any matrix $A$, we denote the sum of its diagonal entries by $Tr(A)$, and by $||A||_{HS}^2$ we denote the sum of the eigenvalues of the matrix $A^T A$. For two densities $f$, $g$ on $\RR^n$, define the *Wasserstein distance*, $W_2(f, g)$, by $$W_2(f, g)^2 = \inf_\xi \int_{\RR^n \times \RR^n} |x - y|^2 d \xi(x,y)$$ where the infimum is taken over all measures $\xi$ on $\RR^{2n}$ whose marginals onto the first and last $n$ coordinates are the measures whose densities are $f$ and $g$ respectively (see, e.g. [@villani] for more information).\ Finally, for a continuous time stochastic process $X_t$, we denote by $d X_t$ the differential of $X_t$, and by $[X]_t$ the quadratic variation of $X_t$. For a pair of continuous time stochastic processes $X_t, Y_t$, the quadratic covariation will be denoted by $[X, Y]_t$.\ \ *Acknowledgements* I owe this work to countless useful discussions I have had with my supervisor, Bo’az Klartag, through which I learnt the vast part of what I know about the subject, as well as about related topics, and for which I am grateful. I would also like to thank Vitali and Emanuel Milman and Boris Tsirelson for inspiring discussions and for their useful remarks on a preliminary version of this note. Finally, I would like to thank the anonymous referee for doing a tremendous job reviewing a preliminary version of this paper, thanks to his/her ideas the proofs are significantly simpler, shorter and more comprehensible. A stochastic localization scheme ================================ In this section we construct the localization scheme which will be the principal component in our proofs. The construction will use elementary properties of semimartingales and stochastic integration. For definitions, see [@durett].\ \ For the construction, we assume that we are given some isotropic random vector $X \in \RR^n$ with density $f(x)$. Well-known concentration bounds for log-concave measures (see, e.g., section 2 of [@K2]) will allow us to assume throughout the paper that $$\label{compactsupport} supp(f) \subseteq n B_n,$$ where $B_n$ is the Euclidean ball of radius 1.\ \ We begin with some definitions. 
For a vector $c \in \RR^n$ and an $n \times n$ matrix $B$, we write $$V_f(c,B) = \int_{\RR^n} e^{\langle c, x \rangle - \frac{1}{2} \langle B x, x \rangle } f(x) dx.$$ Define a vector valued function, $$a_f(c,B) = V_f^{-1} (c,B) \int_{\RR^n} x e^{\langle c, x \rangle - \frac{1}{2} \langle B x, x \rangle } f(x) dx,$$ and a matrix valued function, $$A_f(c,B) = V_f^{-1} (c,B) \int_{\RR^n} (x - a_f(c,B)) \otimes (x - a_f(c,B)) e^{\langle c, x \rangle - \frac{1}{2} \langle B x, x \rangle } f(x) dx.$$ The assumption (\[compactsupport\]) ensures that $V_f$, $a_f$ and $A_f$ are smooth functions of $c, B$.\ \ Let $W_t$ be a standard Wiener process and consider the following system of stochastic differential equations: $$\label{stochastic1} c_0 = 0, ~~ d c_t = A_f^{-1/2}(c_t, B_t) dW_t + A_f^{-1}(c_t, B_t) a_f(c_t, B_t) dt,$$ $$B_0 = 0, ~~ d B_t = A_f^{-1}(c_t, B_t) dt.$$ Taking into account the fact that the functions $A_f, a_f$ are smooth and that $A_f(c, B)$ is positive definite for all $c,B$, we can use a standard existence and uniqueness theorem (see e.g., [@oksendal], section 5.2) to ensure the existence and uniqueness of a solution in some interval $0 \leq t \leq t_0$, where $t_0$ is an almost-surely positive random variable.\ \ Next, we construct a 1-parameter family of functions $\Gamma_t(f)$ by defining,\ $$\label{defineF} F_t(x) = V_f^{-1} (c_t, B_t) e^{\langle c_t, x \rangle - \frac{1}{2} \langle B_t x, x \rangle }$$ and $$\Gamma_t(f)(x) = f(x) F_t(x).$$ Also, abbreviate $$a_t = a_f(c_t, B_t), ~~ A_t = A_f(c_t, B_t), ~~ V_t = V_f(c_t, B_t), ~~ f_t = \Gamma_t(f),$$ so that $a_t$ and $A_t$ are the barycenter and the covariance matrix of the function $f_t$.\ The following lemma may shed some light on this construction. \[basicform\] The function $F_t$ satisfies the following set of equations: $$\label{contloc} F_0(x) = 1, ~~ d F_t(x) = \langle x - a_t, A_t^{-1/2} d W_t \rangle F_t(x),$$ $$a_t = \int_{\RR^n} x f(x) F_t(x) dx, ~~ A_t = \int_{\RR^n} (x - a_t) \otimes (x - a_t) f(x) F_t(x) dx,$$ for all $x \in \RR^n$ and all $0 \leq t \leq t_0$. *Proof:*\ Fix $x \in \RR^n$. We will show that $d F_t(x) = \langle x - a_t, A_t^{-1/2} d W_t \rangle F_t(x)$. The correctness of the other equations is obvious. Define, $$G_t(x) = V_t F_t(x) = e^{\langle c_t, x \rangle - \frac{1}{2} \langle B_t x, x \rangle }.$$ Equation (\[stochastic1\]) clearly implies that $[B]_t = 0$. Let $Q_t(x)$ denote the quadratic variation of the process $\langle x, c_t \rangle$. 
We have, $$d \langle x, c_t \rangle = \langle A_t^{-1/2} x, dW_t + A_t^{-1/2} a_t dt \rangle.$$ It follows that, $$d Q_t(x) = \langle A_t^{-1} x, x \rangle dt.$$ Using Itô’s formula, we calculate $$d G_t(x) = \left ( \langle x, d c_t \rangle - \frac{1}{2} \langle d B_t x, x \rangle + \frac{1}{2} d Q_t(x) \right ) G_t(x) =$$ $$\left ( \langle x, A_t^{-1/2} dW_t + A_t^{-1} a_t dt \rangle - \frac{1}{2} \langle A_t^{-1} x, x \rangle dt + \frac{1}{2} \langle A_t^{-1} x, x \rangle dt \right ) G_t(x) =$$ $$\langle x, A_t^{-1/2} dW_t + A_t^{-1} a_t dt \rangle G_t(x).$$ Next, we calculate, $$d V_t(x) = d \int_{\RR^n} e^{\langle c_t, x \rangle - \frac{1}{2} \langle B_t x, x \rangle } f(x) dx =$$ $$\int_{\RR^n} d G_t(x) f(x) dx = \int_{\RR^n} \langle x, A_t^{-1/2} dW_t + A_t^{-1} a_t dt \rangle G_t(x) f(x) dx =$$ $$V_t \left \langle a_t, A_t^{-1/2} dW_t + A_t^{-1} a_t dt \right \rangle.$$ So, using Itô’s formula again, $$d V_t^{-1} = - \frac{d V_t}{V_t^2} + \frac{d [V]_t}{V_t^3} =$$ $$- V_t^{-1} \left \langle a_t, A_t^{-1/2} dW_t + A_t^{-1} a_t dt \right \rangle + V_t^{-1} \langle A_t^{-1} a_t, a_t \rangle.$$ Applying Itô’s formula one last time yields, $$d F_t(x) = d (V_t^{-1} G_t(x)) =$$ $$G_t(x) d V_t^{-1} + V_t^{-1} d G_t(x) + d [V^{-1}, G(x)]_t =$$ $$- V_t^{-1} \left \langle a_t, A_t^{-1/2} dW_t + A_t^{-1} a_t dt \right \rangle G_t(x) + V_t^{-1} \langle A_t^{-1} a_t, a_t \rangle G_t(x) +$$ $$+ V_t^{-1} \langle x, A_t^{-1/2} dW_t + A_t^{-1} a_t dt \rangle G_t(x) - \langle A_t^{-1/2} a_t, A_t^{-1/2} x \rangle V_t^{-1} G_t(x) dt =$$ $$\langle A_t^{-1/2} dW_t, x - a_t \rangle F_t(x).$$ This finishes the proof.\ In view of the above lemma it can be seen that, in some sense, the above is just the continuous version of the following iterative process: at every time step, multiply the function by a linear function equal to $1$ at the barycenter, whose gradient has a random direction distributed uniformly on the ellipsoid of inertia. This construction may also be thought of as a variant of the Brownian motion on the Riemannian manifold constructed in [@EK1]. Rather than defining the process $F_t$ through equations (\[stochastic1\]) and (\[defineF\]), one may alternatively define it directly with the infinite system of stochastic differential equations in formula (\[contloc\]). In this case, the existence and uniqueness of the solution can be shown using [@KX Theorem 5.2.2, page 159] (however, some extra work is needed in order to show that the conditions of this theorem hold). In the remainder of this note, most of the calculations involving the process $f_t$ will use the formula (\[contloc\]) rather than the formulas (\[stochastic1\]) and (\[defineF\]).\ \ The remaining part of this section is dedicated to analyzing some basic properties of $\Gamma_t(f)$. We begin with: \[basic1\] The process $\Gamma_t(f)$ satisfies the following properties:\ \ (i) The function $\Gamma_t(f)$ is almost surely well defined, finite and log-concave for all $t > 0$.\ (ii) For all $t > 0$, $\int_{\RR^n}f_t(x) dx = 1$.\ (iii) The process has a semi-group property, namely, $$\Gamma_{s+t}(f) \sim \frac{1}{\sqrt{\det A_s}} \Gamma_{t}(\sqrt{\det A_s} \Gamma_s(f) \circ L^{-1}) \circ L,$$ where $$L(x) = A_s^{-1/2}(x - a_s).$$ (iv) For every $x \in \RR^n$, the process $f_t(x)$ is a martingale. 
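The content of the lemma can be illustrated numerically. The following one-dimensional sketch (the grid, step size, horizon and the choice of $f$ — the uniform density on $[-\sqrt{3},\sqrt{3}]$, which has mean zero and variance one — are illustrative assumptions, not taken from the text) evolves $(c_t, B_t)$ as in (\[stochastic1\]) with a simple Euler scheme, forms $F_t$ as in (\[defineF\]), and checks that the total mass of $f_t$ stays $1$ and that the empirical average of $F_t(x)$ over many runs stays close to $1$, in line with properties (ii) and (iv); the agreement is only approximate due to the discretisation and the finite number of runs.

```python
import numpy as np

# One-dimensional toy discretisation of the localization scheme.  All numerical
# choices (grid, step size, horizon, number of runs) are illustrative.
rng = np.random.default_rng(0)
x = np.linspace(-np.sqrt(3.0), np.sqrt(3.0), 401)   # grid on the support of f
p = np.full(x.size, 1.0 / x.size)                   # discrete weights approximating f
dt, n_steps, n_runs = 2e-3, 250, 1000               # horizon T = n_steps * dt = 0.5

F_T = np.zeros((n_runs, x.size))
for r in range(n_runs):
    c, B = 0.0, 0.0
    for _ in range(n_steps):
        q = p * np.exp(c * x - 0.5 * B * x**2)
        q /= q.sum()                                # f_t on the grid: total mass is 1
        a = np.dot(q, x)                            # barycenter a_t
        A = np.dot(q, (x - a) ** 2)                 # variance A_t
        dW = np.sqrt(dt) * rng.standard_normal()
        c += dW / np.sqrt(A) + (a / A) * dt         # Euler step for c_t, cf. (stochastic1)
        B += dt / A                                 # Euler step for B_t
    tilt = np.exp(c * x - 0.5 * B * x**2)
    F_T[r] = tilt / np.dot(p, tilt)                 # F_T(x) = exp(c x - B x^2 / 2) / V_T

print("max |E[F_T(x)] - 1| over the grid:", np.abs(F_T.mean(axis=0) - 1.0).max())
```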
In order to prove (i), we will first need the following technical lemma: \[basic1.5\] For every dimension $n$, there exists a constant $c(n) > 0$ such that, $$\PP (A_{t} \geq c(n) Id, ~~ \forall 0 \leq t \leq c(n)) \geq c(n).$$ The proof of this lemma is postponed to section 6.\ \ *Proof of lemma \[basic1\]:*\ To prove (i), we have to make sure that $A_t^{-1/2}$ does not blow up. To this end, define $t_0 = \inf \{t |~~ \det A_t = 0 \}$. By continuity, $t_0 > 0$. Equation (\[defineF\]) suggests that $f_t$ is log-concave for all $t < t_0$. The fact that $t_0 = \infty$ will be proven below.\ We start by showing that both (ii) and (iii) hold for any $t < t_0$.\ We first calculate, using (\[contloc\]), $$d \int_{\RR^n} f(x) F_t(x) dx = \int_{\RR^n} f(x) d F_t(x) dx =$$ $$\int_{\RR^n} f(x) F_t(x) \langle A_t^{-1/2} d W_t, x - a_t \rangle dx = 0,$$ with probability 1. The last equality follows from the definition of $a_t$ as the barycenter of the measure $f(x) F_t(x) dx$. We conclude (ii).\ \ We continue by proving (iii). To do this, fix some $0 < s < t_0 - t$ and write, $$\label{normilization} L(x) = A_s^{-1/2}(x - a_s).$$ We normalize $f_s$ by defining, $$g(x) = \sqrt{\det A_s} f_s(L^{-1}(x)),$$ which is clearly an isotropic probability density. Let us inspect $\Gamma_t(g(x))$. We have, using (\[contloc\]), $$d \Gamma_t(g)(x) |_{t=0} = g(x) \langle x, d W_t \rangle = \sqrt{\det A_s} f_s(L^{-1}(x)) \langle L (L^{-1}(x)), d W_t \rangle =$$ $$\sqrt{\det A_s} f_s(L^{-1}(x)) \langle L^{-1}(x) - a_s , A_s^{-1/2} d W_t \rangle,$$ On the other hand, $$d f_s(L^{-1}(x)) = f_s(L^{-1}(x)) \langle L^{-1}(x) - a_s , A_s^{-1/2} d W_s \rangle$$ in other words, $$d \Gamma_t(\sqrt{\det A_s} \Gamma_s(f) \circ L^{-1}) \left |_{t=0} \right . \sim \sqrt{\det A_s} d \Gamma_t(f) \circ L^{-1} \left |_{t=s} \right .$$ which proves (iii).\ \ We are left with showing that $t_0 = \infty$. To see this, write, $$s_1 = \min \{t ~~; ~~ ||A_t^{-1}||_{OP} = c^{-1}(n) \},$$ where $c(n)$ is the constant from lemma \[basic1.5\]. Note that, by continuity, $s_1$ is well-defined and almost-surely positive. When time $s_1$ comes, we may define $L_1$ as in (\[normilization\]), and continue running the process on the function $f \circ L_1^{-1}$ as above. We repeat this every time $||A_t^{-1}||_{OP}$ hits the value $c^{-1}(n)$, thus generating the hitting times $s_1,s_2,...$. Lemma \[basic1.5\] suggests that, $$\PP \left . \left (s_{i+1} - s_i > c(n)~ \right | ~ s_1,s_2,...,s_i \right ) > c(n),$$ which implies that, almost surely, $s_{i+1} - s_{i} > c(n)$ for infinitely many values of $i$. Thus, $\lim_{i \to \infty} s_i = \infty$ almost surely, and so $t_0 = + \infty$.\ Part (iv) follows immediately from formula (\[contloc\]). The lemma is proven.\ \ Our next task is to analyze the path of the barycenter $a_t = \int_{\RR^n}x f_t(x) dx$. We have, using (\[contloc\]), $$\label{pathbc} d a_t = d \int_{\RR^n}x f(x) F_t (x) dx = \int_{\RR^n}x f(x) F_t(x) \langle x - a_t, A_t^{-1/2} d W_t \rangle dx =$$ $$\left ( \int_{\RR^n}(x - a_t) \otimes (x - a_t) f_t(x) dx \right ) (A_t^{-1/2} d W_t) = A_t^{1/2} d W_t.$$ where the third equality follows from the definition of $a_t$, which implies, $$\int_{\RR^n}a_t f(x) F_t(x) \langle x - a_t, A_t^{-1/2} d W_t \rangle = 0.$$ One of the crucial points, when using this localization scheme, will be to show that the barycenter of the measure does not move too much throughout the process. For this, we would like to attain upper bounds on the eigenvalues of the matrix $A_t$.
We start with a simple observation:\ \ Equation (\[defineF\]) shows that the measure $f_t$ is log-concave with respect to the measure $e^{- \frac{1}{2} |B_t^{1/2} x|^2}$. The following result, which is well-known to experts, shows that measures which possess this property attain certain concentration inequalities. \[bakryemery\] There exists a universal constant $\Theta >0$ such that the following holds: Let $\phi: \RR^n \to \RR$ be a convex function and let $K>0$. Suppose that, $$d \mu(x) = Z e^{-\phi(x) - \frac{1}{2 K^2} |x|^2} dx$$ is a probability measure whose barycenter lies at the origin. Then,\ (i) For all Borel sets $A \subset \RR^n$, with $0.1 \leq \mu(A) \leq 0.9$, one has, $$\mu(A_{K \Theta}) \geq 0.95$$ where $A_{K \Theta}$ is the $K \Theta$-extension of $A$, defined in the previous section.\ (ii) For all $\theta \in \Sph$, $$\int \langle x, \theta \rangle^2 d \mu(x) \leq \Theta K^2.$$ *Proof:*\ Denote the density of $\mu$ by $\rho(x)$. Let $B$ be the complement of $A_{K \Theta}$, where the constant $\Theta$ will be chosen later on. Define, $$f(x) = \rho(x) \mathbf{1}_{A}, ~~ g(x) = \rho(x) \mathbf{1}_{B}.$$ Note that for $x \in A$ and $y \in B$, we have $|x-y|>K \Theta$. Thus, by the parallelogram law, $$\left | \frac{x+y}{2} \right |^2 \leq \frac{|x|^2 + |y|^2}{2} - \frac{1}{4} K^2 \Theta^2,$$ which implies, $$e^{-\frac{1}{2 K^2} \left |\frac{x+y}{2} \right |^2} \geq \sqrt{e^{-\frac{1}{2K^2} |x|^2} e^{-\frac{1}{2K^2} |x|^2} } e^{\frac{1}{8} \Theta^2}.$$ Since the function $\phi$ is assumed to be convex, we obtain $$\rho \left( \frac{x+y}{2} \right ) \geq \sqrt{f(x) g(y)}e^{\frac{1}{8} \Theta^2}.$$ Now, using the Prekopa-Leindler theorem, we attain $$\mu(A) \mu(B) = \int_{\RR^n} f(x) dx \int_{\RR^n} g(x) dx \leq e^{-\frac{1}{4} \Theta^2},$$ so, $$\mu(A_{K \Theta}) \geq 1 - \frac{e^{-\frac{1}{4} \Theta^2}}{\mu(A)} \geq 1 - 10 e^{-\frac{1}{4} \Theta^2}.$$ Clearly, a large enough choice of the constant $\Theta$ gives (i). To prove (ii), we define, $$g(t) = \mu(\{x; \langle x, \theta \rangle \geq t \}),$$ and take $A = \{x; \langle x, \theta \rangle < g^{-1}(0.5) \}$. An application on (i) on the set $A$ gives, $$g(g^{-1}(0.5) + K \Theta) \leq 0.05$$ since $g$ is log-concave, we attain $$g(g^{-1}(0.5) + t K \Theta) \leq 0.05^t, ~~ \forall t > 1$$ and in the same way, one can attain, $$1 - g(g^{-1}(0.5) - t K \Theta) \leq 0.05^t, ~~ \forall t > 1.$$ Part (ii) of the proposition is a direct consequence of the last two equations.\ \ Plugging (\[defineF\]) into part (ii) of this theorem gives, $$\label{goodbound} A_t \leq \Theta ||B_t^{-1}||_{OP} Id \leq \Theta \left( \int_0^t \frac{ds}{||A_s||_{OP}} \right )^{-1} Id, ~~~ \forall t>0.$$ By our assumption (\[compactsupport\]) we deduce that $A_t$ is bounded by $n^2 Id$, which immediately gives $$\label{poorbound} A_t < \frac{\Theta n^2}{t} Id.$$ $$~$$ The bound (\[poorbound\]) will be far from sufficient for our needs, and the next section is dedicated to attaining a better upper bound. However, it is good enough to show that the barycenter, $a_t$, converges in distribution to the density $f(x)$.\ \ Indeed, (\[poorbound\]) implies that $$\label{atconverges} \lim_{t \to \infty} W_2(f_t, \delta_{a_t}) = 0$$ where $\delta_{a_t}$ is the probability measure supported on $\{a_t\}$. In other words the probability density $f_t(x)$ converges to a delta measure. 
By the martingale property, part (iv) of lemma \[basic1\], we know that $\EE[f_t(x)] = f(x)$, thus, $X_t := a_t$ converges, in Wasserstein metric, to the original random vector $X$ as $t \to \infty$. It is interesting to compare this construction with the construction by Lehec in [@Lehec]. In both cases, a certain Itô process converges to a given log-concave measure. In the result of Lehec, the convergence is ensured by applying a certain adapted *drift*, while here, it is ensured by adjusting the *covariance matrix* of the process. We end this section with a simple calculation in which we analyze the process $\Gamma_t(f)$ in the simple case that $f$ is the standard Gaussian measure. While the calculation will not be necessary for our proofs, it may provide the reader a better understanding of the process. Define, $$f(x) = (2 \pi)^{-n/2} e^{-|x|^2 / 2}.$$ According to formula (\[defineF\]), the function $f_t$ takes the form, $$f_t(x) = V_t^{-1} \exp \left ( \langle x, c_t \rangle - \frac{1}{2} \left \langle (B_t + Id) x, x \right \rangle \right )$$ where $V_t \in R, c_t \in \RR^n$ are certain Itô processes. It follows that the covariance matrix $A_t$ satisfies, $$A_t^{-1} = B_t + Id.$$ Recall that $B_t = \int_0^t A_s^{-1} ds$. It follows that, $$\frac{d}{dt} B_t = B_t + Id, ~~ B_0 = 0.$$ So, $$B_t = (e^t - 1) Id,$$ which gives, $$A_t = e^{-t} Id.$$ Next, we use (\[pathbc\]) to derive that, $$d a_t = e^{-t/2} d W_t,$$ which implies, $$a_t \sim W_{1 - \exp(-t)}.$$ We finally get, $$f_t = e^{nt/2} (2 \pi)^{-n/2} \exp \left (-\frac{1}{2} e^t \left |(x - W_{1 - \exp(-t)}) \right |^2 \right ).$$ Analysis of the matrix $A_t$ ============================ In the previous section we saw that the covariance matrix of the $f_t$, $A_t$, satisfies (\[poorbound\]). The goal of this section is to give a better bound, which holds also for small $t$. Namely, we want to prove: \[mainsec3\] There exist universal constants $C,c > 0$ such that for all $n \geq 2$ the following holds: Let $f:\RR^n \to \RR_+$ be an isotropic, log concave probability density. Let $A_t$ be the covariance matrix of $\Gamma_t(f)$. Then,\ (i) Define the event $F$ by, $$\label{deff} F := \left \{ ||A_t||_{OP} < C K_n^2 (\log n) e^{-c t}, ~~ \forall t > 0 \right \}.$$ One has, $$\label{onb1} \PP(F) ~~\geq~~ 1 - (n^{-10}).$$ (ii) For all $t > 0$, $\EE[Tr(A_t)] \leq n$.\ (iii) Whenever the event $F$ holds, the following also holds:\ For all $t > \frac{1}{K_n^2 \log n}$ there exists a convex function $\phi_t(x)$ such that the function $f_t$ is of the form, $$\label{ftForm2} f_t(x) = \exp \left ( - \left | \frac{x}{C K_n \sqrt{\log n}} \right |^2 - \phi_t(x) \right).$$ Before we move on to the proof, we will establish some simple properties of the matrix $A_t$. Our first task is to find the differential of process $A_t$. We have, using Itô’s formula with equation (\[contloc\]), $$\label{dAt1} d A_t = d \int_{\RR^n}(x - a_t) \otimes (x - a_t) f_t(x) dx =$$ $$\int_{\RR^n}(x - a_t) \otimes (x - a_t) d f_t(x) dx - 2 \int_{\RR^n} d a_t \otimes (x - a_t) f_t(x) dx -$$ $$- 2 \int_{\RR^n}(x - a_t) \otimes d [a_t, f_t(x)]_t dx + d [a_t, a_t] \int_{\RR^n} f_t(x) dx.$$ Let us try to understand each of this terms. 
The second term is, $$\int_{\RR^n} d a_t \otimes (x - a_t) f_t(x) dx = d a_t \otimes \int_{\RR^n} (x - a_t) f_t(x) dx = 0.$$ Recall that by (\[pathbc\]), $d a_t = A_t^{1/2} dW_t$, which gives, $$\label{dAt2} d [a_t, a_t]_t = A_t dt,$$ and $$d[a_t, f_t(x)]_t = f_t(x) A_t^{1/2} A_t^{-1/2} (x - a_t) dt = f_t(x) (x - a_t) dt,$$ which implies, $$\label{dAt3} \int_{\RR^n} (x - a_t) \otimes d [a_t, f_t(x)]_t dx = \int_{\RR^n} (x - a_t) \otimes (x - a_t) f_t(x) dx dt = A_t dt$$ Plugging equations (\[dAt1\]), (\[dAt2\]) and (\[dAt3\]) together gives, $$d A_t = \int_{\RR^n}(x - a_t) \otimes (x - a_t) d f_t(x) dx - A_t dt,$$ so we finally get, $$d A_t = \int_{\RR^n}(x - a_t) \otimes (x - a_t) \langle x - a_t, A_t^{-1/2} dW_t \rangle f_t(x) dx - A_t dt.$$ Note that the term $A_t dt$ is positive definite, hence the fact that it enters the differential with a negative sign can only make all of the eigenvalues of $A_t$ smaller (as a matter of fact, this term induces a rather strong drift of all the eigenvalues towards $0$, which we will not even use). Consequently, we can define $\tilde A_t = A_t + \int_0^t A_s ds$, so that $$\label{datilde} d \tilde A_t = \int_{\RR^n}(x - a_t) \otimes (x - a_t) \langle x - a_t, A_t^{-1/2} d W_t \rangle f_t(x) dx$$ and $\tilde A_0 = A_0 = Id$. Clearly, $A_t \leq \tilde A_t$ for all $t>0$. In order to control $||A_t||_{OP}$, it is thus enough to bound $||\tilde A_t||_{OP}$.\ \ For a fixed value of $t$, let $v_1,...,v_n$ be an orthonormal basis, with respect to which $\tilde A_t$ is diagonal, and write $\alpha_{i,j} = \langle v_i, \tilde A_t v_j \rangle $ for the entries of $\tilde A_t$ with respect to this basis. Equation (\[datilde\]) can be written, $$d \alpha_{i,j} = \int_{\RR^n} \langle x, v_i \rangle \langle x, v_j \rangle \langle A_t^{-1/2} x, d W_t \rangle f_t (x + a_t) dx.$$ Next, denote $$\label{defxi} \xi_{i,j} = \frac{1}{\sqrt{\alpha_{i,i} \alpha_{j,j}}} \int_{\RR^n} \langle x, v_i \rangle \langle x, v_j \rangle A_t^{-1/2} x f_t (x + a_t) dx.$$ So, $$\label{dalpha} d \alpha_{i,j} = \sqrt{\alpha_{i,i} \alpha_{j,j}} \langle \xi_{i,j}, d W_t \rangle,$$ and $$\frac{d}{dt} [\alpha_{i,j}]_t = \alpha_{i,i} \alpha_{j,j} |\xi_{i,j}|^2.$$ As we will witness later, the behaviour of the norm of the matrix $\tilde A_t$ depends strongly on the norms of the vectors $\xi_{i,j}$, which induce a certain repulsion between the eigenvalues. The next lemma will come in handy when we need to bound these norms: \[xibounds\] The vectors $\xi_{i,j}$ satisfy the following bounds:\ (i) For all $1 \leq i \leq n$, $|\xi_{i,i}| < C$ for some universal constant $C>0$.\ (ii) For all $1 \leq i \leq n$, $\sum_{j=1}^n | \xi_{i,j} |^2 \leq K_n^2$. *Proof:*\ Since $A_t^{1/2} v_i = \sqrt{\alpha_{i,i}} v_i$ for all $1 \leq i \leq n$, we have $$\xi_{i,j} = \int_{\RR^n} \langle A_t^{-1/2} x, v_i \rangle \langle A_t^{-1/2} x, v_j \rangle A_t^{-1/2} x f_t (x + a_t) dx$$ Define again, as above, $\tilde f_t(x) = \sqrt {\det A_t} f_t(A_t^{1/2} x + a_t)$. By substituting $y = A_t^{-1/2} x$, the equation becomes $$\label{defxigood} \xi_{i,j} = \int_{\RR^n} \langle y, v_i \rangle \langle y, v_j \rangle y \tilde f_t (y) dy.$$ Recall that $\tilde f_t$ is isotropic. The last equation shows that the vectors $\xi_{i,j}$, in some sense, do not depend on the position of $f$.
Using the Cauchy-Schwartz inequality, one has $$\label{estxi1} |\xi_{i,i}| = \left | \int_{\RR^n} \langle y, v_i \rangle^2 y \tilde f_t(y) dy \right | = \left | \int_{\RR^n} \langle y, v_i \rangle^2 \left \langle y, \frac{\xi_{i,i}} {|\xi_{i,i}|} \right \rangle \tilde f_t(y) dy \right | \leq$$ $$\sqrt{\int_{\RR^n} \langle y, v_i \rangle^4 \tilde f_t(y) dy \int_{\RR^n} \left \langle y, \frac{\xi_{i,i}} {|\xi_{i,i}|} \right \rangle^2 \tilde f_t(y) dy}.$$ A well-known fact about isotropic log-concave measures (see for example [@LV Lemma 5.7]) is that for every $p>0$ there exists a constant $c(p)$ such that for every isotropic log-concave density $\rho(x)$ on $\RR^n$ and every $\theta \in \Sph$, $$\int_{\RR^n} |\langle x, \theta \rangle|^p \rho(x) dx \leq c(p).$$ Using this with (\[estxi1\]) establishes (i). Next, by the definition of $K_n$, we have for all $1 \leq i \leq n$, $$\sum_{j=1}^n |\xi_{i,j}|^2 = \sum_{j=1}^n \sum_{k=1}^n \left | \int_{\RR^n} \langle y, v_i \rangle \langle y, v_j \rangle \langle y, v_k \rangle \tilde f_t (y) dy \right |^2 =$$ $$\left | \left | \int_{\RR^n} y \otimes y \langle y, v_i \rangle \tilde f_t (y) dy \right| \right |_{HS}^2 \leq K_n^2.$$ The lemma is proven.\ \ We are now ready to prove the main proposition of the section.\ \ *Proof of proposition \[mainsec3\]*:\ We fix a positive integer $p$ whose value will be chosen later, and define, $$\label{defst} S_t = Tr \left (\tilde A_t^p \right).$$ Since $S_t$ is a smooth function of the coefficients $\{ \alpha_{i,j} \}$, which are Itô processes (assuming that the basis $v_1,...,v_n$ is fixed), $S_t$ itself is also an Itô process. Fix some $t>0$. Our next goal will be to find $d S_t$. To that end, define $\Gamma$ to be the set of $(p+1)$-tuples, $(j_1,,..,j_{p+1})$, such that $j_i \in \{1,...,n\}$ for all $1 \leq i \leq p+1$ and such that $j_1=j_{p+1}$. It is easy to verify that, $$\label{stpaths} S_t = \sum_{(j_1,...,j_{p+1}) \in \Gamma} \alpha_{j_1,j_2} \alpha_{j_2,j_3} \cdots \alpha_{j_p,j_{p+1}}.$$ Since $Tr(\tilde A_t^p)$ does not depend on the choice of orthogonal coordinates, after fixing the value of $t$, we are free to choose our coordinates such that the matrix $\tilde A_t$ is diagonal, thus assuming that $\alpha_{i,j}=0$ whenever $i \neq j$ and that (\[dalpha\]) holds (in other words, we calculate the differential $d S_t$ using a basis $v_1,...,v_n$ which depends on $t$. However, after fixing the value of $t$, the calculation itself is with respect to a fixed basis). A moment of reflection reveals that, in this case, the term $d (\alpha_{j_1,j_2} \cdots \alpha_{j_p,j_{p+1}})$ can be non-zero only if there are at most two distinct indices $i_1, i_2$ such that $j_{i_1} \neq j_{i_1 + 1}$ and $j_{i_2} \neq j_{i_2 + 1}$. We are left with two types of terms whose differential is non-zero. The first type of term contains no off-diagonal entries, and has the form $(\alpha_{i,i})^p$. 
Using equation (\[dalpha\]), we calculate its differential, $$\label{term1est} d \alpha_{i,i}^p = p \alpha_{i,i}^{p-1} d \alpha_{i,i} + \frac{1}{2} p(p-1) \alpha_{i,i}^{p-2} d [\alpha_{i,j}]_t =$$ $$p \alpha_{i,i}^{p} \langle \xi_{i,i}, d W_t \rangle + p(p-1) \alpha_{i,i}^p |\xi_{i,i}|^2 dt.$$ The second type of term will contain exactly two off-diagonal entries, and due to the symmetry of the matrix and the constraint $j_{1} = j_{p+1}$, it has the form: $$(\alpha_{i,i})^{k_1} \alpha_{i,j} (\alpha_{j,j})^{k_2} \alpha_{j,i} (\alpha_{i,i})^{k_3} = (\alpha_{i,i})^k (\alpha_{j,j})^{p-k-2} (\alpha_{i,j})^{2}$$ where $i \neq j$ and $0 \leq k \leq p-2$. Keeping in mind that $\alpha_{i,j} = 0$, we calculate, $$d \left ( (\alpha_{i,i})^k (\alpha_{j,j})^{p-k-2} (\alpha_{i,j})^{2} \right ) = (\alpha_{i,i})^k (\alpha_{j,j})^{p-k-2} \left (2 \alpha_{i,j} d \alpha_{i,j} + d [\alpha_{i,j}]_t \right ) =$$ $$(\alpha_{i,i})^{k+1} (\alpha_{j,j})^{p-k-1} |\xi_{i,j}|^2 dt.$$ We may clearly assume $\alpha_{1,1} \geq \alpha_{2,2} \geq ... \geq \alpha_{n,n}$, which implies that for $i < j$ and for all values of $k$, one has $$\label{term2est} d \left ( (\alpha_{i,i})^k (\alpha_{j,j})^{p-k-2} (\alpha_{i,j})^{2} \right ) \leq (\alpha_{i,i})^p |\xi_{i,j}|^2 dt$$ Inspect the equation (\[stpaths\]). For every $1 \leq i \leq n$, the expansion on the right hand side contains exactly one term of the first type, and for every distinct $i,j$ with $i \neq j$, it contains $\frac{p(p-1)}{2}$ terms of the second type (or otherwise, for all choices such that $i < j$, it contains $p(p-1)$ terms of this type). Using (\[term1est\]) and (\[term2est\]), we conclude $$d S_t \leq \sum_{i=1}^n p \alpha_{i,i}^{p} \langle \xi_{i,i}, d W_t \rangle + p(p-1) \alpha_{i,i}^p |\xi_{i,i}|^2 dt + \sum_{1 \leq i, j \leq n \atop i < j} p(p-1) (\alpha_{i,i})^p |\xi_{i,j}|^2 dt \leq$$ $$\sum_{i=1}^n p \alpha_{i,i}^{p} \langle \xi_{i,i}, d W_t \rangle + p^2 \sum_{i=1}^n \alpha_{i,i}^p \sum_{j=1}^n |\xi_{i,j}|^2 dt \leq$$ $$\sum_{i=1}^n p \alpha_{i,i}^{p} \langle \xi_{i,i}, d W_t \rangle + p^2 S_t K_n^2 dt$$ where in the last inequality we used the part (ii) of lemma \[xibounds\].\ \ A well-known property of Itô processes is existence and uniqueness of the decomposition $S_t = M_t + E_t$, where $M_t$ is a local martingale and $E_t$ is an adapted process of locally bounded variation. In the last equation, we attained, $$\label{dst} d E_t \leq p^2 K_n^2 S_t dt,$$ and also, $$\frac{d [S]_t}{dt} = \left | \sum_{i=1}^n p \alpha_{i,i}^{p} \xi_{i,i} \right |^2.$$ Using part (i) of lemma \[xibounds\] yields, $$\label{dst2} \frac{d [S]_t}{dt} \leq C p^2 S_t^2.$$ Next, we use the unique decomposition $\log S_t = Y_t + Z_t$ where $Y_t$ is a local martingale, $Z_t$ is an adapted process of locally bounded variation and $Y_0 = 0$. According to Itô’s formula and formula (\[dst2\]), $$\label{dyt} \frac{d [Y]_t}{dt} = \frac{1}{S_t^2} \frac{d [S]_t}{dt} \leq C p^2.$$ By Dambis / Dubins-Schwartz theorem, we know that there exists a standard Wiener process $\tilde W_t$ such that $Y_t$ has the same distribution as $\tilde W_{[Y]_t}$. An application of the so-called reflection principle gives, $$\PP \left (\max_{t \in [0, p] } \tilde W_t \geq t p \right ) =$$ $$2 \PP (\tilde W_p \geq t p ) < C e^{-\frac{1}{2} t^2 p }.$$ Choosing $t$ to be a large enough universal constant, $C_1$, yields $$\PP \left (\max_{t \in [0, p] } \tilde W_t \geq C_1 p \right ) < e^{-10 p},$$ (where we used the fact that $p \geq 1$). 
Using (\[dyt\]), we attain $$\PP \left (\max_{t \in \left [0, \frac{1}{p} \right ] } Y_t > C_2 p \right ) < e^{- 10 p}$$ for some universal constant $C_2 > 0$. We now use Itô’s formula again, this time with formula (\[dst\]), to get $$\frac{d}{dt} Z_t = \frac{1}{S_t} \frac{d}{dt} E_t - \frac{1}{2 S_t^2} \frac{d [S]_t}{dt} \leq K_n^2 p^2.$$ The last two equations and the legitimate assumption that $K_n \geq 1$ give, $$\PP \left (\max_{t \in \left [0, \frac{1}{K_n^2 p} \right ] } \log S_t - \log n > C p \right ) < e^{- 10 p}.$$ We choose $p = \lceil \log n \rceil$ to get, $$\PP \left (\max_{t \in \left [0, \frac{1}{K_n^2 \log n} \right ]} S_t^{1 / \lceil \log n \rceil} > C' \right ) < \frac{1}{n^{10}},$$ for some universal constant $C'>0$. Define the event $F$ as the complement of the event in the equation above, $$F := \left \{\max_{t \in \left [0, \frac{1}{K_n^2 \log n} \right ]} S_t^{1/\lceil \log n \rceil} \leq C' \right \}.$$ Clearly, whenever the event $F$ holds, we have, $$\label{wheneholds} ||A_t||_{OP} \leq ||\tilde A_t||_{OP} \leq C', ~~~ \forall t \in \left [0, \frac{1}{K_n^2 \log n} \right ].$$ Our next task is to bound the norm for larger values of $t$. To this end, recall the bound (\[goodbound\]). Recalling that $B_t = \int_0^t A_s^{-1} ds$, and applying (\[goodbound\]) gives, $$\frac{d}{dt} B_t = A_t^{-1} \geq \frac{Id}{\Theta ||B_t^{-1}||_{OP}}.$$ So, $$\label{btexp1} \frac{d}{dt} \frac{1}{||B_t^{-1}||_{OP}} \geq \frac{1}{\Theta ||B_t^{-1}||_{OP}}.$$ By the definition of $B_t$ and by (\[wheneholds\]), it follows that whenever $F$ holds one has, $$\label{btexp2} \frac{1}{||B_{\delta^2}^{-1}||_{OP}} \geq C \delta^2$$ where $\delta^2 = \frac{1}{K_n^2 \log n}$. Equations (\[btexp1\]) and (\[btexp2\]) imply, $$B_{t} \geq c \delta^2 e^{(t - \delta^2) / \Theta } Id, ~~~ \forall t > \delta^2$$ which gives, using (\[goodbound\]), $$A_{t} \leq C \delta^{-2} e^{(\delta^2 - t) / \Theta } Id.$$ Part(i) of the proposition is established. In order to prove the bound for $\EE[Tr(A_t)]$, write $S_t = \sum_{i=1}^n Tr(\tilde A_t)$. Setting $p = 1$ in (\[defst\]) gives, $\frac{d}{dt} \EE[S_t] = 0$, which implies (ii). Part (iii) of the proposition follows directly from equations (\[btexp2\]) and (\[defineF\]). The proposition is complete.\ Proposition \[bakryemery\] gives an immediate corollary to part (iii) of proposition \[mainsec3\]: \[goodcheeger\] There exist universal constants $c, \Theta >0$ such that whenever the event $F$ defined in (\[deff\]) holds, the following also holds:\ Define $\delta = \frac{1}{K_n \sqrt{\log n}}$. Let $t > \delta^2$ and let $E \subset \RR^n$ be a measurable set which satisfies, $$\label{defR} 0.1 \leq \int_E f_t(x) dx \leq 0.9.$$ One has, $$\int_{E_{\Theta / \delta } \setminus E} f_t(x) dx \geq c$$ where $E_{\Theta / \delta}$ is the $\frac \Theta \delta$-extension of $E$, defined in the introduction. Thin shell implies spectral gap =============================== In this section we use the localization scheme constructed in the previous sections in order to prove theorem \[mainthm1\].\ \ Let $f(x)$ be an isotropic log-concave probability density in $\RR^n$ and let $E \subset \RR^n$ be a measurable set. Suppose that, $$\int_E f(x) dx = \frac{1}{2}.$$ Our goal in this section is to show that, $$\label{needtoshow11} \int_{E_{\Theta / \delta} \setminus E} f(x) dx \geq c$$ for some universal constants $c, \Theta >0$, where $\delta = \frac{1}{K_n \sqrt{\log n}}$ and $E_{\Theta / \delta}$ is the $\frac \Theta \delta$-extension of $E$.\ \ The idea is quite simple. 
Define $f_t := \Gamma_t(f)$, the localization of $f$ constructed in section 2, and fix $t > 0$. By the martingale property of the localization, we have, $$\label{martlocE} \int_{E_{\Theta / \delta} \setminus E} f(x) dx = \EE \left [ \int_{E_{\Theta / \delta} \setminus E} f_t(x) dx \right ].$$ Corollary \[goodcheeger\] suggests that if $t$ is large enough, the right term can be bounded from below if we only manage to bound the integral $\int_E f_t(x) dx$ away from 0 and from 1.\ \ Define, $$g(t) = \int_E f_t(x) dx.$$ In view of the above, we would like to prove: \[gtbound\] There exists a universal constant $T>0$ such that, $$\PP \left (0.1 \leq g \left ( t \right ) \leq 0.9 \right ) > 0.5, ~~~ \forall t \in [0,T].$$ *Proof:*\ We calculate, using (\[contloc\]), $$\label{dgt} d g(t) = \int_E f_t(x) \langle x - a_t, A_t^{-1/2} dW_t \rangle dx =$$ (substitute $y = A_t^{-1/2}(x - a_t)$) $$\sqrt{\det A_t} \int_{A_t^{-1/2}(E - a_t)} f_t(A_t^{1/2} y + a_t) \langle y, d W_t \rangle dy =$$ $$\left \langle \sqrt{\det A_t} \int_{A_t^{-1/2}(E - a_t)} f_t(A_t^{1/2} y + a_t) y dy , d W_t \right \rangle.$$ Define, $$\tilde f_t = \sqrt{\det A_t} f_t(A_t^{1/2} y + a_t), ~~~E_t = A_t^{-1/2} (E - a_t)$$ The above equation becomes, $$\label{dgt2} d g(t) = \left \langle \int_{E_t} y \tilde f_t(y) dy, d W_t \right \rangle.$$ Assume, for now, that $\int_{E_t} y \tilde f_t(y) dy \neq 0$ and define $\theta = \frac{\int_{E_t} y \tilde f_t(y) dy}{|\int_{E_t} y \tilde f_t(y) dy|}$. Observe that, by definition, $\tilde f_t$ is isotropic. Consequently, $$\left |\int_{E_t} y \tilde f_t(y) dy \right| = \left | \int_{E_t} \langle y, \theta \rangle \tilde f_t(y) dy \right | \leq$$ $$\int_{E_t} |\langle y, \theta \rangle| \tilde f_t(y) dy \leq \sqrt{ \int_{E_t} \langle y, \theta \rangle^2 \tilde f_t(y) dy} \leq 1.$$ We therefore learn that, $$\frac{d}{dt} [g]_t \leq 1, ~~ \forall t>0.$$ Define $h(t) = (g(t) - 0.5)^2$. By Itô’s formula, $$d h(t) = 2 (g(t) - 0.5) d g(t) + d [g]_t.$$ Plugging the last two equations together gives, $$E[(g(t) - 0.5)^2] \leq t.$$ The lemma follows from an application of Chebyshev’s inequality.\ \ The last ingredient needed for our proof is a theorem of E. Milman, [@Mil2 Theorem 2.1]. The following is a weaker formulation of this theorem which will be suitable for us:\ \[thmmilman\] Suppose that a log-concave probability measure $\mu$ satisfies the following: there exist two constants, $0 < \lambda < \frac{1}{2}$ and $\Theta > 0$, such that for all measurable $E \subset \RR^n$ with $\mu(E) \geq \frac{1}{2}$, one has $\mu(E_{\Theta}) \geq 1 - \lambda$. In this case, the measure $\mu$ satisfies the following isoperimetric inequality:\ For all measurable $E \subset \RR^n$ with $\mu(E) \leq \frac{1}{2}$, $$\label{milman} \frac{\mu^+(E)}{\mu(E)} \geq \frac{1 - 2 \lambda}{\Theta}.$$ Note that equation (\[milman\]) is the exact type of inequality defining the constant $G_n$ in equation (\[defg\]). We are now ready to prove the main proposition of this section.\ \ *Proof of proposition \[gk\]*:\ Let $T$ be the constant from lemma \[gtbound\]. Denote, $$G = \left \{ 0.1 \leq g(T) \leq 0.9 \right \} \cap F.$$ where $F$ is the event defined in (\[deff\]). 
According to lemma \[gtbound\] and to (\[onb1\]), one has $\PP(G) > 0.4$ for all $n \geq 2$.\ \ By (\[martlocE\]) and by corollary \[goodcheeger\], there exist universal constants $\tilde c, \Theta > 0$ such that $$\int_{E_{\Theta / \delta} \setminus E} f(x) dx = \EE \left [ \int_{E_{\Theta / \delta} \setminus E} f_T(x) dx \right ] \geq$$ $$\PP(G) \EE \left [ \left . \int_{E_{\Theta / \delta} \setminus E} f_T(x) dx ~~ \right | G \right ] \geq \tilde c.$$ The result now follows directly from an application of theorem \[thmmilman\].\ In the above proof, we used E. Milman’s result in order to reduce the theorem to the case where $\int_E f(x) dx$ is exactly $\frac{1}{2}$, as well as to attain an isoperimetric inequality from a certain concentration inequality for distance functions. Alternatively, we could have replaced proposition \[bakryemery\] with an essentially stronger result due to Bakry-Emery, proven in [@BE] (see also Gross, [@Gross]). Their result, which relies on the *hypercontractivity principle*, asserts that a density of the form (\[defineF\]) actually possesses a corresponding Cheeger constant. Using this fact, we could have directly bounded from below the surface area of any set with respect to the measure whose density is $f_t$. The proof of lemma \[ksigma\] is in section 6. Along with this lemma, we have established theorem \[mainthm1\]. Stability of the Brunn-Minkowski inequality ============================================= The main goal of this section is to prove theorem \[mainthm2\].\ \ The idea of the proof is as follows: Given two log-concave densities, $f$ and $g$, we run the localization process we constructed in section 2 on both functions, so that their corresponding localization processes are coupled together in the sense that we take the same Wiener process $W_t$ for both functions. Recall formula (\[atconverges\]), whose point is that the barycenters of the localized functions $f_t$ and $g_t$ converge, in the Wasserstein metric, to the measures whose densities are $f$ and $g$, respectively. In view of this, it is enough to consider the paths of the barycenters and show that they remain close to each other along the process. Recall that if $a_t$ is the barycenter of $f_t$, we have $d a_t = A_t^{1/2} d W_t$. This formula tells us that as long as we manage to keep the covariance matrices of $f_t$ and $g_t$ approximately similar to each other, the barycenters will not move too far apart. In order to do this, we use an idea from [@EK2]: when the integral of the supremum convolution of two given densities is rather small, these densities can essentially be regarded as parallel sections of an isotropic convex body, which means, by thin-shell concentration, that the corresponding covariance matrices cannot be very different from each other.\ \ We begin with some notation. For two functions $f,g: \RR^n \to \RR_+$, denote by $H(f,g)$ the supremum convolution of the two functions, that is, $$H (f,g)(x) := \sup_{y \in \RR^n} \sqrt{f (x + y) g (x - y)}.$$ Next, define, $$K(f,g) = \int_{\RR^n} H (f,g)(x) dx.$$ $$~$$ The following lemma is a variant of lemma 6.5 from [@EK2]. \[transportation\] There exists a universal constant $C>0$ such that the following holds: Let $f,g$ be log-concave probability densities in $\RR^n$. Define, $$A = Cov(f)^{-1/2} Cov(g) Cov(f)^{-1/2} - Id,$$ and let $\{\delta_i \}_{i=1}^n$ be the eigenvalues of $A$, ordered so that $|\delta_i - 1|$ is decreasing.
Then, $$\label{smallis} |\delta_i - 1| \leq C K(f,g)^4, ~~ \forall 1 \leq i \leq n$$ and, $$\label{largeis} |\delta_i - 1| \leq C K(f,g) \tau_n i^{\kappa - \frac{1}{2}}, ~~\forall (\log K(f,g))^{C_1} \leq i \leq n$$ where $C,C_1 > 0$ are universal constants. Our main ideas in this section are contained in the following lemma:\ \[mainlemmasec5\] Let $\epsilon > 0$ and let $f$, $g$ be log-concave probability densities in $\RR^n$ such that $f$ is isotropic and the barycenter of $g$ lies at the origin. In that case, there exist two densities, $\tilde f, \tilde g$, which satisfy, $$\tilde f(x) \leq f(x), ~~\tilde g(x) \leq g(x), ~~ \forall x \in \RR^n,$$ $$\int_{\RR^n} \tilde f(x) dx = \int_{\RR^n} \tilde g(x) dx \geq 1 - \epsilon$$ and, $$\label{smallw2} W_2(\tilde f, \tilde g) \leq \frac{C}{\epsilon^{6}} \tau_n K(f,g)^{5} n^{2 (\kappa - \kappa^2) + \epsilon}$$ *Proof:* As explained in the beginning of the section, we will couple between the measures $f$ and $g$ in means of coupling between the processes $\Gamma_t(f)$ and $\Gamma_t(g)$. To that end, we define, as in (\[contloc\]), $$F_0(x) = 1, ~~~ d F_t(x) = \langle A_t^{-1/2} d W_t, x - a_t \rangle F_t(x)$$ where, $$a_t = \frac{\int_{\RR^n} x f(x) F_t(x) dx }{ \int_{\RR^n} f(x) F_t(x) dx}$$ is the barycenter of $f F_t$, and, $$A_t = \int_{\RR^n} (x - a_t) \otimes (x - a_t) f(x) F_t(x) dx$$ is the covariance matrix of $f F_t$. As usual denote $f_t = F_t f$.\ Next, we define, $$G_0(x) = 1, ~~~ d G_t(x) = \langle A_t^{-1/2} d W_t, x - b_t \rangle G_t(x)$$ where, $$b_t = \frac{\int_{\RR^n} x g(x) G_t(x) dx }{ \int_{\RR^n} g(x) G_t(x) dx},$$ and denote $g_t(x) = g(x) G_t(x)$.\ \ Finally, we “interpolate” between the two processes by defining, $$H_0(x) = 1, ~~~ d H_t(x) = \langle A_t^{-1/2} d W_t, x - (a_t + b_t) / 2 \rangle,$$ and, $$h_t(x) = H_t(x) H(f,g)(x).$$ By a similar calculation to the one carried out in lemma \[basic1\], we learn that for all $t \geq 0$, $\int f_t(x) dx = \int g_t(x) dx = 1$. Fix $x,y \in \RR^n$. An application of Itô’s formula yields $$d \log f_t(x+y) = \langle x + y - a_t, A_t^{-1/2} d W_t \rangle - \frac{1}{2} |A_t^{-1/2}(x + y - a_t)|^2 dt,$$ $$d \log g_t(x-y) = \langle x - y - b_t, A_t^{-1/2} d W_t \rangle - \frac{1}{2} |A_t^{-1/2}(x - y - b_t)|^2 dt,$$ and $$d \log h_t(x) = \left \langle x - \frac{a_t + b_t}{2}, A_t^{-1/2} d W_t \right \rangle - \frac{1}{2} |A_t^{-1/2} (x - (a_t + b_t) / 2) |^2 dt.$$ Consequently, $$2 d \log h_t(x) \geq d \log f_t(x + y) + d \log g_t(x - y).$$ It follows that, $$h_t(x) \geq H(f_t, g_t)(x).$$ Define $S_t = \int_{\RR^n} h_t(x) dx$. The definition of $H_t$ suggests that $S_t$ is a martingale. By the Dambis / Dubins-Schwarz theorem, there exists a non-decreasing function $A(t)$ such that, $$S_t = K(f,g) + \tilde W_{A(t)}$$ where $\tilde W_t$ is distributed as a standard Wiener process. Since $S_t \geq 1$ almost surely, it follows from the Doob’s maximal inequality theorem that, $$\label{maxksmall} \PP (G_t) \geq 1 - \epsilon / 2, ~~~ \forall s > 0.$$ where, $$\label{defeventf} G_t = \left \{ \max_{s \in [0, t] } S_s \leq \frac{2 K(f,g)}{\epsilon} \right \}.$$ Next, define, $$F_t := \left \{ ||A_s||_{OP} < C K_n^2 (\log n) e^{-t}, ~~ \forall 0 \leq s \leq t \right \}.$$ where $C$ is the same constant as in (\[deff\]). Finally, denote $E_t = G_t \cap F_t$. By proposition 3.1 and equation (\[maxksmall\]), $P(E_t) > 1 - \epsilon$ for all $t > 0$. 
Define a stopping time by the equation, $$\rho = \sup \{t | ~ E_t \mbox{ holds} \}.$$ Our next objective is to define the densities $\tilde f, \tilde g$ by, in some sense, neglecting the cases where $E_t$ does not hold. We begin by defining the density $\tilde f_t$ by the following equation, $$\int_B \tilde f_t(x) dx = \EE \left [\mathbf{1}_{E_t} \int_B f_t(x) dx \right ],$$ for all measurable $B \subset \RR^n$. Likewise, we define $$\int_B \tilde g_t(x) dx = \EE \left [\mathbf{1}_{E_t} \int_B g_t(x) dx \right ].$$ Recall that $f(x) = \EE[f_t(x)]$ for all $x \in \RR^n$ and $t > 0$. It follows that, $$\int_{\RR^n} \tilde f_t(x) dx = \int_{\RR^n} \tilde g_t(x) dx = P(E_t) \geq 1 - \epsilon,$$ and that $$\tilde f_t(x) \leq f(x), ~~ \tilde g_t(x) \leq g(x), ~~ \forall x \in \RR^n.$$ $$~$$ We construct a coupling between $\tilde f_t$ and $\tilde g_t$ by defining a measure $\mu_t$ on $\RR^n \times \RR^n$ using the formula $$\mu_t(A \times B) = \EE \left [ \mathbf{1}_{E_t} \int_{A \times B} f_t(x) g_t(y) dx dy \right ],$$ for any measurable sets $A,B \subset \RR^n$. It is easy to check that $\tilde f_t$ and $\tilde g_t$ are the densities of the marginals of $\mu_t$ onto its first and last $n$ coordinates respectively. Thus, by definition of the Wasserstein distance, $$W_2(\tilde f_t, \tilde g_t) \leq \left ( \int_{\RR^n \times \RR^n} |x-y|^2 d \mu_t(x,y) \right )^{1/2} =$$ $$\left ( \EE \left [\mathbf{1}_{E_t} \int_{\RR^n \times \RR^n} |x-y|^2 f_t(x) g_t(y) dx dy \right ] \right )^{1/2} \leq$$ $$\left ( \EE \left [\mathbf{1}_{E_t} \left ( W_2(f_t, a_t) + W_2(g_t, b_t) + |a_t - b_t| \right )^2 \right ] \right )^{1/2}.$$ Now, thanks to formula (\[atconverges\]), we can take $T$ large enough (and deterministic) such that, $$\label{w2bytau} W_2(\tilde f_T, \tilde g_T) \leq 2 \left ( \EE \left [\mathbf{1}_{E_T} |a_T - b_T|^2 \right ] \right )^{1/2} + 1 \leq$$ $$2 \left ( \EE \left [|a_{T \wedge \rho} - b_{T \wedge \rho} |^2 \right ] \right )^{1/2} + 1.$$ We will define $\tilde f := \tilde f_T$ and $\tilde g := \tilde g_T$. In view of the last equation, our main goal will be to attain a bound for the process $|a_t - b_t|$. A similar calculation to the one carried out in (\[pathbc\]) gives, $$\label{datdbt} d a_t = A_t^{1/2} d W_t, ~~ d b_t = C_t A_t^{-1/2} d W_t.$$ where, $$C_t = \int_{\RR^n} (x - b_t) \otimes (x - b_t) g_t(x) dx$$ is the covariance matrix of $g_t$. Therefore, $$d |a_t - b_t|^2 = 2 \langle a_t - b_t, d a_t \rangle - 2 \langle a_t - b_t, d b_t \rangle +$$ $$\langle d a_t, d a_t \rangle + \langle d b_t, d b_t \rangle - 2 \langle d a_t, d b_t \rangle.$$ The first two terms are martingale. We use the unique decomposition $$|a_t - b_t|^2 = M_t + N_t$$ where $M_t$ is a local martingale and $N_t$ is an adapted process of locally bounded variation. We get, $$\frac{d}{dt} N_t = \langle d a_t - d b_t, d a_t - d b_t \rangle =$$ $$\langle (A_t - C_t) A_t^{-1/2} d W_t, ( A_t - C_t) A_t^{-1/2} d W_t \rangle =$$ $$||A_t^{1/2} (I - A_t^{-1/2} C_t A_t^{-1/2})||_{HS}^2.$$ By the Optional Stopping Theorem, $$\label{defdt} \EE \left [ |a_{t \wedge \rho} - b_{t \wedge \rho}|^2 \right ] = \EE[N_{t \wedge \rho}] = \EE \left [\int_0^{t \wedge \rho} ||D_s||_{HS}^2 ds \right ]$$ where $D_t = A_t^{1/2} (I - A_t^{-1/2} C_t A_t^{-1/2})$. 
Our next task is to use lemma \[transportation\] to bound $||D_t||_{HS}$ under the assumption that $t < \tau$.\ \ We start by denoting the eigenvalues of the matrix $I - A_t^{-1/2} C_t A_t^{-1/2}$ by $\delta_i$, in decreasing order, and the eigenvalues of the matrix $A_t$ by $\lambda_i$, also in decreasing order. By theorem 1 in [@tam], $$\label{traceineq} ||D_t||_{HS}^2 \leq \sum_{j=1}^n \lambda_j \delta_j^2.$$ By lemma \[transportation\], we learn that $$\label{deltasmall} \delta_j \leq \frac{C K(f_t,g_t)^{5} \tau_n j^{\kappa}}{\sqrt{j}}.$$ Plugging this into (\[traceineq\]) yields, $$||D_t||_{HS}^2 \leq C K(f_t,g_t)^{10} \tau_n^2 \sum_{j=1}^n \lambda_j j^{2 \kappa - 1}.$$ Fix some constant $(1 - 2 \kappa) < \alpha < 1$, whose value will be chosen later. For now, we assume that $\kappa > 0$. Using Hölder’s inequality, we calculate, $$\label{calcholder} ||D_t||_{HS}^2 \leq C K(f_t,g_t)^{10} \tau_n^2 \left ( \sum_{j=1}^n \lambda_j^{1 / (1-\alpha)} \right )^{1 - \alpha} \left (\sum_{j=1}^n j^{(2 \kappa - 1) / \alpha} \right)^{\alpha} \leq$$ $$C K(f_t,g_t)^{10} \tau_n^2 \left ( \lambda_1^{1 / (1-\alpha) - 1} \sum_{j=1}^n \lambda_j \right )^{1 - \alpha} \left (1 + \int_1^n t^{(2 \kappa - 1) / \alpha} dt \right )^{\alpha} \leq$$ $$C K(f_t,g_t)^{10} \tau_n^2 \lambda_1^{\alpha} (\beta n)^{1 - \alpha} \left (n^{(2 \kappa - 1) / \alpha + 1} + 2 \right )^\alpha \left ( \frac{1}{(2 \kappa - 1) / \alpha + 1} \right )^\alpha$$ where $\beta = \frac{1}{n} \sum_{j=1}^n \lambda_j$. Recall that $\alpha > (1 - 2 \kappa)$, which gives, $$\label{calcalpha} \left ( n^{(2 \kappa - 1) / \alpha + 1} + 2 \right )^\alpha \leq 3 n^{\alpha} n^{2 \kappa - 1}.$$ Take $\alpha$ such that $\epsilon = \alpha - (1 - 2 \kappa)$. Equations (\[calcholder\]) and (\[calcalpha\]) give, $$||D_t||_{HS}^2 \leq \frac{C'}{\epsilon} K(f_t,g_t)^{10} \tau_n^2 \beta^{1 - \alpha} \lambda_1^{\alpha} n^{2 \kappa} \leq$$ $$\frac{C''}{\epsilon} K(f_t,g_t)^{10} \tau_n^2 \max(\beta, 1) \lambda_1^{1 - 2 \kappa + \epsilon} n^{2 \kappa}.$$ Recall that we assume that $t < \tau$. By the definition of $\tau$, we get $\lambda_1 \leq C \tau_n^2 n^{2 \kappa} \log n$ and $K(f_t, g_t) \leq 2 K(f,g) / \epsilon$. Part (ii) of proposition \[mainsec3\] implies $\EE[\beta] \leq 1$. Plugging these facts into the last equation gives, $$\EE \left [ ||D_t||_{HS}^2 ~ \right ] \leq \frac{C}{\epsilon^{11}} K(f,g)^{10} \tau_n^2 \left(\tau_n^2 n^{2 \kappa} \log n\right )^{1 - 2 \kappa + \epsilon} n^{2 \kappa} e^{-t} \leq$$ $$\leq \frac{C'}{\epsilon^{11}} K(f,g)^{10} \tau_n^2 n^{4 \kappa - 4 \kappa^2 + \epsilon} e^{-t}.$$ Finally, using equations (\[w2bytau\]) and (\[defdt\]), we conclude, $$\label{distsmall} W_2(\tilde f_T, \tilde g_T)^2 \leq \EE \left [ \int_0^{T \wedge \rho} ||D_s||_{HS}^2 ds \right ] \leq$$ $$\frac{C}{\epsilon^{11}} K(f,g)^{10} \tau_n^2 n^{4 \kappa - 4 \kappa^2 + \epsilon}.$$ The proof is complete.\ In the above lemma, if we replace the assumption that $f$ is isotropic by the assumption that $f,g$ are log-concave with respect to the Gaussian measure, then following the same lines of proof while using proposition \[bakryemery\], one may improve the bound (\[smallw2\]) and get, $$W_2(\tilde f, \tilde g) \leq C(\epsilon) K(f,g) \sqrt{\log n}.$$ We move on to the proof of theorem \[mainthm2\].\ \ *Proof of theorem \[mainthm2\]:* Let $K,T$ be convex bodies of volume $1$ such that the covariance matrix of $K$ is $L_K^2 Id$. Fix $\epsilon > 0$.
Define, $$f(x) = 1_{K / L_K}(x) L_K^n, ~~~ g(x) = 1_{T / L_K}(x) L_K^n,$$ so both $f$ and $g$ are probability measures and $f$ is isotropic. We have, $$K(f,g) = Vol_n \left (\frac{K+T}{2} \right ) = V.$$ We use lemma \[mainlemmasec5\], which asserts that there exist two measures $\tilde f$, $\tilde g$, such that, $$\label{fgprop1} \tilde f(x) \leq f(x), ~~ \tilde g(x) \leq g(x), ~~ \forall x \in \RR^n,$$ $$\label{fgprop2} \int \tilde f(x) dx = \int \tilde g(x) dx \geq 1 - \epsilon$$ and such that, $$W_2 (\tilde f, \tilde g) \leq \Theta$$ where $\Theta = C(\epsilon) V^{5} \tau_n n^{2 (\kappa - \kappa^2) + \epsilon}$. Since $\tilde g$ is supported on $T$, it follows that, $$\int_{K} d^2 (x, T / L_T) \tilde f(x) dx \leq \Theta^2$$ where $d(x,T / L_T) = \inf_{y \in (T / L_T)} |x - y|$. Denote, $$K_\alpha = \{x \in K / L_K; ~ d(x,T) \geq \alpha \Theta \}.$$ It follows from Markov’s inequality and from (\[fgprop1\]) and (\[fgprop2\]) that, $$Vol_n(K_\alpha) \leq L_K^{-n} \left (\epsilon + \frac{1}{\alpha^2} \right ).$$ Finally, taking $\delta = L_K \Theta / \sqrt{\epsilon}$ gives $$\label{markov} Vol_n(K \setminus T_{\delta}) \leq 2 \epsilon.$$ This completes the proof.\ \ Tying up loose ends =================== We begin the section with the proof of lemma \[ksigma\] which gives an upper bound for the constant $K_n$ in terms of $\tau_n$ and $\kappa$.\ \ *Proof of lemma \[ksigma\]:* Let $X$ be an isotropic, log concave random vector in $\RR^n$, and fix $\theta \in \Sph$. Denote $A = \EE[X \otimes X \langle X, \theta \rangle]$. Our goal is to show, $$||A||_{HS}^2 \leq C \sum_{k=1}^n \frac{\sigma_k^2}{k}.$$ Let $k \leq n$ and let $E_k$ be a subspace of dimension $k$. Denote $P(X) = Proj_{E_k}(X)$ and $Y = |P(X)| - \sqrt k$. By definition of $\sigma_k$, $$Var[Y] \leq \sigma_k^2$$ Note that, by the isotropicity of $X$, $\EE[|P(X)|^2] = k$. It easily follows that, $$Var[|P(X)|^2] \leq C k Var[Y] \leq C k \sigma_k^2.$$ Using the last inequality and applying Cauchy-Schwartz gives, $$\left | \EE [\langle X, \theta \rangle |P(X)|^2] \right | \leq \sqrt{Var[\langle X, \theta \rangle] Var[|P(X)|^2]} \leq C \sqrt{k} \sigma_k$$ or, in other words, $$\left | Tr[Proj_{E_k} A Proj_{E_k}] \right | \leq C \sqrt k \sigma_k.$$ Let $\lambda_1,...,\lambda_\ell$ be the non-negative eigenvalues of $A$ in decreasing order. The last inequality implies that the matrix $Proj_{E_k} A Proj_{E_k}$ has at least one eigenvalue smaller than $C \sqrt{\frac{1}{k}} \sigma_k$. Consequently, by taking $E_k$ to be the subspace spanned by the $k$ first corresponding eigenvectors we learn that $$\lambda_k^2 < C \frac{\sigma_k^2}{k}, ~~ \forall k \leq \ell.$$ In the same manner, if $\zeta_1,...,\zeta_{n-\ell}$ are the negative eigenvalues of $A$, one has $\zeta_k^2 < C \frac{\sigma_k^2}{k}$.\ We can thus calculate, $$||A||_{HS}^2 = \sum_{k=1}^\ell \lambda_k^2 + \sum_{k=1}^{n - \ell} \zeta_k^2 \leq 2 C \sum_{k=1}^n \frac{\sigma_k^2}{k}.$$ The proof is complete.\ \ Next, in order to provide the reader with a better understanding of the constant $K_n$, we introduce two new constants. First, define $$Q_n^2 = \sup_{X,Q} \frac{Var[Q(X)]}{\EE \left [|\nabla Q(X)|^2 \right ]}$$ where the supremum runs over all isotropic log-concave random vectors, $X$, and all quadratic forms $Q(x)$. 
Next, define $$R_n^{-1} = \inf_{\mu, E} \frac{\mu^+(E)}{\mu(E)}$$ where $\mu$ runs over all isotropic log-concave measures and $E$ runs over all ellipsoids with $\mu(E) \leq 1/2$.\ There exist universal constants $C_1, C_2$ such that $$K_n \leq C_1 Q_n \leq C_2 R_n.$$ The proof of the right inequality is standard and uses the coarea formula and the Cauchy-Schwartz inequality. We will prove the left inequality. To that end, fix an isotropic log-concave random vector $X$, denote $A = \EE[X \otimes X X_1]$. We have, $$||A||_{HS} = \sup_{B} \frac{Tr(BA)}{||B||_{HS}}$$ where $B$ runs over all symmetric matrices. Let $B$ be a symmetric matrix. Fix coordinates under which $B$ is diagonal, and write $X = (X_1,...,X_n)$ and $B = diag \{a_1,..,a_n\}$. Define $Q(x) = \langle B x, x \rangle$. We have, $$Tr(BA) = \EE \left [X_1 \sum_{i=1}^n a_i X_i^2 \right ] \leq \sqrt{\EE \left [X_1^2 \right ]} \sqrt{Var \left [\sum_{i=1}^n a_i X_i^2 \right ]} =$$ $$\sqrt{Var[Q(X)]} \leq \sqrt{2 Q_n^2 \sum_{i=1}^n a_i^2 \EE[X_i^2 ]} = \sqrt{2} Q_n ||B||_{HS}.$$ So, $$||A||_{HS} \leq \sqrt{2} Q_n.$$ This shows that $K_n \leq C Q_n$. We suspect that there exists a universal constant $C>0$ such that $K_n \leq C \sigma_n$, but we are unable to prove that assertion. We move on to the proof of lemma \[basic1.5\].\ *Proof of lemma \[basic1.5\]:*\ Throughout the proof, all the constants $c, c_1, c_2,...$ may depend only on the dimension $n$. Recall that $f(x)$ is assumed to be isotropic and log-concave. It is well-known that there exist two constants $c_1, c_2>0$, such that $$f(|x|) \geq c_1, ~~ \forall |x| \leq c_2.$$ (see for example [@LV Theorem 5.14]). Define $g(x) = c_1 \mathbf{1}_{\{ |x| \leq c_2 \} }$. It is also well-known (see for example [@LV Lemma 5.7]) that there exist two constants $c_3, c_4 > 0$ such that $$\int_{\RR^n} f(x) e^{\langle x, y \rangle } dx \leq c_3, ~~ \forall |y| < c_4,$$ which implies that whenever $|c| < c_4$ and $B$ is positive semi-definite, $$V_f(c, B) = \int_{\RR^n} e^{\langle c, x \rangle - \frac{1}{2} \langle B x, x \rangle } f(x) dx \leq c_3.$$ It follows that for all $|c| < c_4$ and $B \leq Id$ (in the sense of positive matrices), one has $$\label{covmatbig} A_f(c,B) \geq$$ $$c_3^{-1} \int_{\RR^n} (x - a_f(c,B)) \otimes (x - a_f(c,B)) e^{- c_4 |x| - \frac{1}{2} |x|^2 } g(x) dx \geq$$ $$c_3^{-1} c_1 \int_{\{|x| \leq c_2\}} x \otimes x e^{- c_4 |x| - \frac{1}{2} |x|^2 } dx = c_5 Id$$ for some constant $c_5>0$. Define the stopping times, $$T_1 = \sup \{t > 0; ~|c_t| < c_4 \}, ~~ T_2 = \sup \{t > 0; ~B_t \geq Id \}, ~~ T = \min(T_1,T_2).$$ Note that according to (\[covmatbig\]), $$\label{covmatbig2} A_t \geq c_5 Id, ~~ \forall t \leq T,$$ so the lemma would be concluded if we manage to show that $$\label{Tlarge} \PP(T > c) > c$$ for some constant $c>0$.\ \ Define the event $E = \{T_2 \leq T_1\}$. Whenever $E$ holds, we have the following: First, using (\[covmatbig\]), $$A_t \geq c_5 Id, ~~ \forall t \leq T_2.$$ Recall that $\frac{d}{dt} B_t = A_t^{-1}$. It follows that $$B_t \leq c_5^{-1} t, ~~ \forall t \leq T_2.$$ By taking $t = T_2$ in the last equation, we learn that $T = T_2 \geq c_5$ whenever $E$ holds, so $$T_2 \leq T_1 \Rightarrow T \geq c_5.$$ Therefore, it is enough to prove that $P(T_1 > c) > c$ for some $c>0$. 
Furthermore, in the following we are able to assume that $P(E) \leq 0.1$.\ \ To that end, consider the defining equation (\[stochastic1\]) and use Itô’s formula to attain $$\label{dct2} d |c_t|^2 = 2 \langle c_t, A_t^{-1/2} dW_t \rangle + 2 \langle A_t^{-1} a_t, c_t \rangle dt + ||A_t^{-1/2}||_{HS}^2 dt.$$ Define the process $e_t$ by the equations, $$e_t = 0, ~~ d e_t = 2 \langle c_t, A_t^{-1/2} dW_t \rangle.$$ Using (\[covmatbig2\]), we deduce that whenever $t < T$, one has $$\label{qvbound} [e]_t = 4 \int_0^t \langle A_t^{-1/2} c_t, A_t^{-1/2} c_t \rangle \leq 4 c_4^2 c_5^{-1} t.$$ Using the Dambis / Dubins-Schwartz theorem, we know that there exists a standard Wiener process $\tilde W_t$ such that $e_t$ has the same distribution as $\tilde W_{[e]_t}$. An elementary property of the standard Wiener process is that there exists a constant $c_6 > 0$, such that $$\PP(F) \geq 0.9$$ where $$F = \left \{ \max_{0 \leq s \leq c_6} \tilde W_s \leq c_4^2 / 2 \right \}.$$ Define $\delta = \min \left (T, \frac{c_6}{4 c_4^2 c_5^{-1}} \right)$. Note that, by (\[qvbound\]), $$\label{fsubset} F \subseteq \left \{ \max_{0 \leq t \leq \delta} e_t \leq c_4^2 / 2 \right \}.$$ Another application of (\[covmatbig2\]), this time with the assumption (\[compactsupport\]) gives, $$\label{predbound} \int_0^{t} ||A_s^{-1/2}||_{HS}^2 ds + 2 \left | \langle A_s^{-1} a_s, c_s \rangle \right | ds \leq n c_5^{-1} (1 + c_4) t \leq c_7 t$$ for all $t < T$, and for a constant $c_7$. By plugging (\[fsubset\]) and (\[predbound\]) into (\[dct2\]), we learn that whenever $F$ holds, one has $$|c_t|^2 \leq c_4^2 / 2 + c_7 t, ~~ \forall t \leq \delta.$$ If we assume that $\delta = T_1$, the above gives $c_4^2 = |c_{T_1}|^2 \leq c_4^2 / 2 + c_7 {T_1}$ which implies $T \geq \frac{c_4^2}{2 c_7}$ (here, we used the assumption that $T_1 \leq T_2$). Thus, whenever $F \cap E^C$ holds, we have $T = T_1 \geq \min \left (\frac{c_4^2}{2 c_7}, \frac{c_6}{4 c_4^2 c_5^{-1}} \right )$. A union bound gives $\P(F \cap E^C) \geq 0.8$. The lemma is complete.\ \ [GGM]{} M. Anttila, K. Ball, I. Perissinaki, [*The central limit problem for convex bodies*]{}. Trans. Amer. Math. Soc., 355, no. 12, (2003), 4723–4735. D. Bakry and M. Emery, [*Diffusions hypercontractives*]{}, in Séminaire de probabilités, XIX, 1983/84, vol. 1123 of Lecture Notes in Math., Springer, Berlin, 1985, pp. 177–206. K. Ball V.H. Ngyuen, [*Entropy jumps for random vectors with log-concave density and spectral gap.*]{} Preprint. S. Bobkov, [*On isoperimetric constants for log-concave probability distributions, in Geometric Aspects of Functional Analysis* ]{} Israel Seminar 2004-2005, Springer Lecture Notes in Math. 1910 (2007), 8188. S. Bobkov, A. Koldobsky, [*On the central limit property of convex bodies.* ]{} Geometric aspects of functional analysis, Lecture Notes in Math., 1807, Springer, Berlin, (2003), 44–52. Bourgain, J., [*On the distribution of polynomials on high-dimensional convex sets.* ]{} Geometric aspects of functional analysis, Israel seminar (1989–90), Lecture Notes in Math., 1469, Springer, Berlin, (1991), 127–137. Diskant, V. I., [*Stability of the Solution of the Minkowski Equation*]{} (in Russian). Sibirsk. Mat. 14 (1973), 669–673, 696. English translation in Siberian Math. J. 14 (1973), 466-469. R Durrett, [*Stochastic Calculus: A Practical Introduction*]{} Cambdidge university press, 2003. Eldan, R., Klartag, B., [*Approximately gaussian marginals and the hyperplane conjecture*]{}. Proc. 
of a workshop on “Concentration, Functional Inequalities and Isoperimetry”, Contemporary Math., vol. 545, Amer. Math. Soc., (2011), 55–68. Eldan, R., Klartag, B., [*Dimensionality and the stability of the Brunn-Minkowski inequality*]{}. Annali SNS, 2011. B. Fleury, [*Concentration in a thin euclidean shell for log-concave measures* ]{}, J. Func. Anal. 259 (2010), 832841. B. Fleury, [*Poincaré inequality in mean value for Gaussian polytopes*]{}, Probability theory and related fields, Volume 152, Numbers 1-2, 141-178. Figalli, A., Maggi, F., Pratelli, A., [*A refined Brunn-Minkowski inequality for convex sets.*]{} Ann. Inst. H. Poincaré Anal. Non Linéaire, vol. 26, no. 6, (2009), 2511–-2519. Figalli, A., Maggi, F., Pratelli, A., [*A mass transportation approach to quantitative isoperimetric inequalities.*]{} Invent. Math., vol. 182, no. 1, (2010), 167–- 211. O. Guedon, E. Milman, [*Interpolating thin-shell and sharp large-deviation estimates for isotropic log-concave measures*]{}, 2010 M. Gromov and V. D. Milman. [*A topological application of the isoperimetric inequality.*]{} Amer. J. Math., 105(4):843–854, 1983 Groemer, H., [*On the Brunn–-Minkowski theorem*]{}. Geom. Dedicata, vol. 27, no. 3, (1988), 357–-371. L. Gross [ *Logarithmic Sobolev inequalities*]{} Amer. J. Math. 97 (1975), no. 4, 1061-1083 L. Gross, [ *Logarithmic Sobolev inequalities and contractivity properties of semigroups, Dirichlet forms*]{} Varenna, 1992, 54-88, Lecture Notes in Math., 1563, Springer, Berlin, 1993. J. Lehec, [*Representation formula for the entropy and functional inequalities.*]{} arXiv: 1006.3028, 2010. G. Kallianpur, J. Xiong, [*Stochastic Differential Equations in Infinite Dimensional Spaces*]{} Institute of mathematical statistics, lecture notes - monograph series. California, USA, 1995. Klartag, B., [*A central limit theorem for convex sets.*]{} Invent. Math., 168, (2007), 91–131. Klartag, B., [*Power-law estimates for the central limit theorem for convex sets.*]{} J. Funct. Anal., Vol. 245, (2007), 284–310. Klartag, B., [*A Berry-Esseen type inequality for convex bodies with an unconditional basis.* ]{} Probab. Theory Related Fields, vol. 145, no. 1-2, (2009), 1–-33. Klartag, B., [*Power-law estimates for the central limit theorem for convex sets.* ]{} wherever, 2007. R. Kannan, L. Lovász, and M. Simonovits. Isoperimetric problems for convex bodies and a localization lemma. Discrete Comput. Geom., 13(3-4):541–559, 1995 M. Ledoux, [*Spectral gap, logarithmic Sobolev constant, and geometric bounds.*]{} Surveys in differential geometry. Vol. IX, 219-240, Surv. Differ. Geom., IX, Int. Press, Somerville, MA, 2004. L. Lovász and S. Vempala, [*The geometry of logconcave functions and sampling algorithms.*]{} Random Structures & Algorithms, Vol. 30, no. 3, (2007), 307–358. E. Milman, [*On the role of Convexity in Isoperimetry, Spectral-Gap and Concentration*]{}, Invent. Math. 177 (1), 1-43, 2009. E. Milman, [*Isoperimetric Bounds on Convex Manifolds*]{}, Contemporary Math., proceedings of the Workshop on “Concentration,Functional Inequalities and Isoperimetry” in Florida, November 2009. B. Oksendal [ *Stochastic Differential Equations: An Introduction with Applications.*]{} Berlin: Springer. ISBN 3-540-04758-1, (2003). R. Osserman, [*Bonnesen-style isoperimetric inequalities.* ]{} Amer. Math. Monthly, 86, no. 1, (1979), 1–-29. G. Pisier, [*The volume of convex bodies and Banach space geometry.* ]{} Cambridge Tracts in Mathematics, 94. Cambridge University Press, Cambridge, 1989. A. 
Segal, [*Remark on Stability of Brunn-Minkowski and Isoperimetric Inequalities for Convex Bodies*]{}. To appear in Gafa Seminar notes. V.N. Sudakov, [*Typical distributions of linear functionals in finite-dimensional spaces of high dimension.*]{} (Russian) Dokl. Akad. Nauk SSSR 243 (1978), no. 6, 1402-1405. T. Tam, [*On Lei-Miranda-Thompson’s result on singular values and diagonal elements*]{}. Linear Algebra and Its Applications, 272 (1998), 91-101. Villani, C., [*Topics in optimal transportation.* ]{} Graduate Studies in Mathematics, 58. American Mathematical Society, Providence, RI, 2003. [*e-mail address: roneneldan@gmail.com*]{} [^1]: Supported in part by the Israel Science Foundation and by a Marie Curie Grant from the Commission of the European Communities.
--- abstract: 'Let $P$ be a set of $n$ points on the plane in general position. We say that a set $\Gamma$ of convex polygons with vertices in $P$ is a convex decomposition of $P$ if: the union of all elements in $\Gamma$ is the convex hull of $P,$ every element in $\Gamma$ is empty, and for any two different elements of $\Gamma$ their interiors are disjoint. A minimal convex decomposition of $P$ is a convex decomposition $\Gamma''$ such that for any two adjacent elements in $\Gamma''$ their union is a nonconvex polygon. It is known that $P$ always has a minimal convex decomposition with at most $\frac{3n}{2}$ elements. Here we prove that $P$ always has a minimal convex decomposition with at most $\frac{10n}{7}$ elements.' author: - | M. Lomelí-Haro\ Instituto de Física-Universidad Autónoma de San Luis Potosí\ lomeli@ifisica.uaslp.mx title: Minimal Convex Decompositions --- Introduction ============ Let $P_n$ denote a set of $n$ points on the plane in general position. We denote as $Conv(P_n)$ the convex hull of $P_n$ and as $c$ the number of its vertices, and given a polygon $\alpha$ we denote as $\alpha^o$ its interior. We say that a set $\Gamma=\{\gamma_1,\gamma_2,...,\gamma_k\}$ of $k$ convex polygons with vertices in $P_n$ is a [*convex decomposition*]{} of $P_n$ if: (C1) Every $\gamma_i \in \Gamma$ is empty, that is, $P_n \cap \gamma_i^o = \emptyset$ for $i=1,2,...,k.$ (C2) For every two different $\gamma_i,$ $\gamma_j \in \Gamma,$ $\gamma_i^o \cap \gamma_j^o = \emptyset.$ (C3) $\gamma_1 \cup \gamma_2 \cup ... \cup \gamma_k = Conv(P_n)$. In [@openproblems] they conjectured that for every $P_n$ there is a convex decomposition with at most $n+1$ elements. This was disproved in [@aichholzer] by giving an $n$–point set such that every convex decomposition has at least $n+2$ elements. Later in this direction, in [@garcia] they give a point set $P_n$ on which every convex decomposition has at least $\frac{11n}{10}$ elements. We are interested in convex decompositions of $P_n$ with as few elements as possible. A [*triangulation*]{} of $P_n$ is a convex decomposition $T = \{t_1,t_2,...,t_k\}$ in which every $t_i$ is a triangle. In [@simflippingedges] they prove that any triangulation $T$ of $P_n$ has a set $F$ of at least $\frac{n}{6}$ edges such that, by removing them, we obtain $|F|$ convex quadrilaterals with disjoint interiors. So $\Gamma = T \setminus F$ is a convex decomposition yielding the bound $|\Gamma| \leq \frac{11n}{6}-c-2.$ We have the following definition. Let $\Gamma$ be a convex decomposition of $P_n.$ If the union of any two different elements in $\Gamma$ is a nonconvex polygon, then $\Gamma$ will be called a [*minimal convex decomposition.*]{} In [@descconv] they show that any given set $P_n$ always has a minimal convex decomposition with at most $\frac{3n}{2}-c$ elements. Here we improve this bound by giving a minimal convex decomposition of $P_n$ with at most $\frac{10n}{7}-c$ elements. Minimal Convex Decompositions ============================= Let $p_1=(x_1,y_1)$ be the element in $P_n$ with the lowest $y$–coordinate. If there are two points with the same $y$–coordinate we take $p_1$ as the element with the smallest $x$–coordinate. We label every $p \in P_n\setminus \{p_1\}$ according to the angle $\theta$ between the line $y = y_1$ and the line $\overline{p_1 p}.$ The point $p$ will be labeled $p_{i+1}$ if it has the $i$–th smallest angle $\theta,$ see Figure \[my\_proof\_03\](a).
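The labeling just described is straightforward to compute. As a purely illustrative aid (it is not part of the construction; the function name and the use of Python's atan2 are our own choices, and points are assumed to be given as coordinate pairs), the following sketch selects $p_1$ and orders the remaining points by the angle $\theta$:

```python
import math

def angular_labeling(points):
    """Order a point set as p_1, p_2, ..., p_n following the labeling above.

    p_1 is the point with the lowest y-coordinate (ties broken by the
    smallest x-coordinate); every other point p is ranked by the angle
    theta between the horizontal line y = y_1 and the segment p_1 p.
    """
    p1 = min(points, key=lambda p: (p[1], p[0]))
    rest = [p for p in points if p != p1]
    # All remaining points lie on or above the line y = y_1, so theta lies
    # in [0, pi) and atan2 already returns the angles in the required order.
    rest.sort(key=lambda p: math.atan2(p[1] - p1[1], p[0] - p1[0]))
    return [p1] + rest
```

The sign labels introduced next can then be assigned by scanning consecutive triples $p_{i-1}, p_i, p_{i+1}$ and testing whether $p_i$ lies in $Conv(\{p_1,p_{i-1},p_{i+1}\})^o.$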
For $i = 3,4,...,n-1,$ we say $p_i$ is negative, labeled $-,$ if $p_i \in Conv( \{p_1,p_{i-1},p_{i+1}\})^o.$ Otherwise we say $p_i$ is positive, labeled $+.$ See Figure \[my\_proof\_03\](b). Let ${A}$ and ${B}$ be the subsets of $P_n$ containing all positive and negative elements respectively. We divide ${A}$ into subsets of consecutive points as follows: If $p_3 \in {A},$ we define $A_1 = \{p_3,...,p_{3+r-1}\},$ as the subset with $r$ consecutive positive points, where $p_{r+3}$ is negative or $p_{r+3} = p_n.$ If $p_3 \not \in {A}$ then $A_1 = \emptyset.$ Suppose that $p_{n-1} \in {A}.$ For $i \geq 2$ let $A_i' = {A} \setminus \left(A_1 \cup ... \cup A_{i-1} \right),$ and let $A_i = \{p_j,p_{j+1},...,p_{j+r-1}\},$ where $r\geq 1,$ $p_j$ has the smallest index in $A_i',$ and $p_{j+r}$ is negative or $p_{j+r} = p_{n}.$ Let $k$ be the number of such $A_i$ sets obtained. If $p_{n-1} \not \in {A},$ we make $A_{k-1}$ the block containing the element in ${A}$ with the highest label, and then $A_k = \emptyset.$ In an analogous way we partition ${B}$ into $B_1$, $B_2$, ..., $B_{k-1}.$ Let $V$ be the polygon with vertex set ${A}\cup \{p_1,p_2,p_n\},$ and let $U'$ be the set of at most $c-2$ regions $Conv(P_n) \setminus V.$ We call $U$ the vertex set of $U'.$ We obtain a minimal convex decomposition $\Gamma$ of $P_n$ induced by polygons in $V$ and $U$ in the following way: \(1) If $A_j=\{p_i,p_{i+1},...,p_{i+r-1}\}$, we make ${\cal A}_j = A_j \cup \{p_1,p_{i-1},p_{i+r}\}$. ${\cal A}_j$ is the vertex set of an empty convex $(|A_j|+3)$–gon. In case that $A_1 = \emptyset$ (or $A_k = \emptyset$) then ${\cal A}_1 = \{p_1,p_{2},p_{3}\}$ (${\cal A}_k = \{p_1,p_{n-1},p_{n}\}$). There are $k$ of such polygons. \(2) If $B_j=\{p_i,p_{i+1},...,p_{i+r-1}\},$ we make ${\cal B}_j = B_j\cup \{p_{i-1},p_{i+r}\}$. ${\cal B}_j$ is the vertex set of an empty convex $(|B_j|+2)$–gon. There are $k-1$ of them. \(3) Every $B_j = \{p_i,p_{i+1},...,p_{i+r-1}\}$ induces $|B_j|-1$ triangles ${{\triangle}}p_1 p_m p_{m+1},$ for $m = i,i+1,...,i+r-2.$ There are $|B_1| - 1 + |B_2| - 1 + ... + |B_{k-1}|-1$ of these triangles. Let $T_B$ be the set of them. \(4) $U'$ can be subdivided in $|A_1|+|A_2|+...+|A_k| - (c-3)$ triangles with vertices in $U$ satisfying (C1) and (C2). Make $T_U$ the set of such triangles. Hence, $\Gamma = \cup_{i} ({\cal A}_i \cup {\cal B}_i) \cup T_U \cup T_B$ is a convex decomposition of $P_n.$ See Figure \[convdescAB\]. We have that $|\Gamma| = k + k-1 + |T_B|+|T_U|$ [*i.e.*]{} $$\label{cardinalidad} |\Gamma| = n+k-c.$$ Convex decomposition with at most $\frac{10n}{7}-c$ elements ------------------------------------------------------------ We proceed now to show that every collection $P_n$ with $c$ vertices in $Conv(P_n)$ has a convex decomposition $\Gamma$ such that $|\Gamma| \leq \frac{10}{7}n-c.$ We use the following notation: If in a given collection $P_n$ we find that $p_3,$ $p_5,$ $p_7,$ ... are negative and $p_4,$ $p_6,$ $p_8,$ ... are positive, we say that $P_n$ is a $\pm$ set. Next result is for $\pm$ sets. \[lemamine\] Let $P_n$ be a $\pm$ set. Then $P_n$ has a convex decomposition $\Gamma$ with $\frac{4n}{3}-c$ elements, where $c$ is the number of vertices in $Conv(P_n)$. 
[[**Proof:** ]{}]{}For $i = 2,8,...,n-6,$ we make $Q_i = \{p_1, p_{i}, p_{i+1}, ..., p_{i+6}\},$ and let $T_i=\{t_1,t_2,...,t_9\}$ be a set of triangles with vertices in $Q_i$ such that $t_1 = {{\triangle}}p_1 p_{i} p_{i+1}$, $t_2 = {{\triangle}}p_1 p_{i+1} p_{i+3}$, $t_3 = {{\triangle}}p_1 p_{i+3} p_{i+5}$, $t_4 = {{\triangle}}p_1 p_{i+5} p_{i+6}$, $t_5 = {{\triangle}}p_i p_{i+1} p_{i+2}$, $t_6 = {{\triangle}}p_{i+1} p_{i+2} p_{i+3}$, $t_7 = {{\triangle}}p_{i+2} p_{i+3} p_{i+4}$, $t_8 = {{\triangle}}p_{i+3} p_{i+4} p_{i+5}$ and $t_9 = {{\triangle}}p_{i+4} p_{i+5} p_{i+6},$ as shown in Figure \[qidelta\]. We obtain a set $\Gamma_i$ of convex polygons, joining elements in $T_i,$ to get a minimal convex decomposition of $P_n.$ We make a final modification on positive and negative points as follows: Given 3 consecutive points labeled $+,$ $p_i,$ $p_j$ and $p_k$ ($i<j<k$), if $p_j \in Conv(\{p_1,p_i, p_k\})^o$ we label $p_j$ as $+ -$, otherwise we label $p_j$ as $+ +.$ Analogously we modify labels $-$ to $- -$ and $- +.$ We proceed now to make case analysis over the labels in $p_{i+2},$ $p_{i+3}$ and $p_{i+4}.$ Let $\ell$ be the line containing $p_{i+1}$ and $p_{i+3},$ make ${\cal D}$ the open half plane bounded by $\ell$ containing $p_1,$ and ${\cal U} = \mathbb{R}^2 \setminus ({\cal D} \cup \{\ell\}).$ Given two polygons $\alpha$ and $\beta$ sharing an edge $e,$ we denote as $\alpha \uplus \beta$ the polygon $\alpha \cup (\beta -\{e\}).$ [**Case (a).**]{} $p_{i+2}$ and $p_{i+4}$ have label $+ +$ and $+ -$ respectively. We have: [*Subcase 1.*]{} Suppose that $p_{i+3}$ is $- -.$ If $p_{i}\in {\cal D}$ and the pentagon $P = t_6 \uplus t_7 \uplus t_8$ is convex, then $\Gamma_i =\{t_1 \uplus t_2,t_3,t_4,t_5,P, t_9\}.$ If $P$ is not convex, $\Gamma_i =\{t_1 \uplus t_2,t_3 \uplus t_8, t_4,t_5,t_6 \uplus t_7, t_9\}.$ See Figure \[casoA1\](a). If $p_i \in {\cal U},$ and the hexagon $H = t_5 \uplus t_6 \uplus t_7 \uplus t_8$ is convex, $\Gamma_i =\{t_1, t_2, t_3, t_4, H, t_9\}.$ If $H$ is not convex, $\Gamma_i =\{t_1,t_2,t_3 \uplus t_8,t_4,t_5 \uplus t_6 \uplus t_7, t_9\}.$ See Figure \[casoA1\](b). [*Subcase 2.*]{} Suppose that $p_{i+3}$ is $- +.$ If $p_{i}$ and $p_{i+4}$ are in ${\cal D},$ then make $H=t_1 \uplus t_2 \uplus t_3 \uplus t_8$ and $\Gamma_i =\{H,t_4,t_5,t_6,t_7, t_9\}$ (see Figure \[casoA2\](a)). Now, if $p_i \in {\cal U}$ (and $p_{i+4} \in {\cal D}$) $H$ is missing $t_1,$ so $\Gamma_i =\{t_1,t_2 \uplus t_3 \uplus t_8,t_5 \uplus t_6,t_4,t_7, t_9\}$ (see Figure \[casoA2\](b)). On the other hand if $p_{i+4} \in {\cal U}$ (and $p_{i} \in {\cal D}$), $H$ is missing $t_8,$ so $\Gamma_i =\{t_1 \uplus t_2 \uplus t_3,t_4,t_5,t_6 \uplus t_7,t_8, t_9\}$ (see Figure \[casoA2\](c)). Finally if $p_i, p_{i+2}, p_{i+4} \in {\cal U},$ $\Gamma_i =\{t_1,t_2 \uplus t_3,t_4,t_5 \uplus t_6 \uplus t_7,t_8, t_9\}$ (see Figure \[casoA2\](d)). 
[**Case (b).**]{} Both $p_{i+2}$ and $p_{i+4}$ have label $+ -.$ Observe that $\{p_{i},p_{i+2},p_{i+4},p_{i+6}\}$ is the set of vertices of a convex quadrilateral $q,$ so we make $\Gamma_i = \{ t_1, t_2 \uplus t_6, t_3 \uplus t_8, t_4, t_5, t_7, t_9, q \},$ $U = U\setminus \{p_{i+2},p_{i+4}\}$ and $U' = U'\setminus q.$ [**Case (c).**]{} $p_{i+2}$ and $p_{i+4}$ both have label $+ +.$ We make an analysis similar to that of Case (a): Suppose that $p_{i+3}$ is $- -.$ If $p_i \in {\cal D}$ and the hexagon $H = t_5 \uplus t_6 \uplus t_7 \uplus t_8$ is convex, $\Gamma_i =\{t_1,t_2,t_3,t_4,H, t_9\}.$ If $H$ is not convex, we make $\Gamma_i =\{t_1 \uplus t_2,t_3,t_4,t_5,t_6 \uplus t_7 \uplus t_8, t_9\}.$ See Figure \[casoC1\](a). If $p_{i}\in {\cal U},$ $H$ is always convex, so $\Gamma_i =\{t_1,t_2,t_3,t_4,H, t_9\}.$ See Figure \[casoC1\](b). When $p_{i+3}$ is $- +,$ $\Gamma_i$ has the same polygons as in subcase 2 of Case (a). And if $p_{i+2}$ and $p_{i+4}$ have label $+ -$ and $+ +$ respectively, we obtain $\Gamma_i$ analogously to Case (a). Let us make $R_i = \gamma_i \uplus \gamma_{i+1}$ where $\gamma_i$ is the polygon containing $t_4$ in $Q_i,$ and $\gamma_{i+1}$ is the polygon containing $t_1$ in $Q_{i+1},$ and let $b$ be the number of $Q_i$ sets as in case (b). We obtain a minimal convex decomposition of $P_n$ by finding $\Gamma_2,$ $\Gamma_8,$ ... ,$\Gamma_{n-6},$ obtaining the $\frac{n}{2}-c-2b$ triangles in $T_U,$ and getting $R_i$ by removing edges $p_1p_{i},$ for $i = 8, 14, 20, ... ,n-6.$ So $\Gamma$ is such that $|\Gamma| = \left(6\frac{n}{6} + 2b \right) + \left( \frac{n}{2}-c-2b \right) - \left(\frac{n}{6} \right) = \frac{4n}{3} - c.$ We have the following observation. [**Observation 1.**]{} Let $\gamma$ be the vertex set of a convex polygon, and let $p$ be a point in $Conv(\gamma)^o.$ Then $\gamma \cup \{p\}$ has a minimal convex decomposition with 3 elements. We proceed now to prove our main theorem. Let $P_n$ be an $n$–point set on the plane in general position. Then $P_n$ has a minimal convex decomposition with at most $\frac{10}{7}n-c$ elements. [[**Proof:** ]{}]{}Let $k$ be the number of polygons ${\cal A}_i$ described above. If $k \leq \frac{3n}{7},$ we apply Equation (\[cardinalidad\]) to find a convex decomposition with $n + k - c \leq n + \frac{3n}{7}-c$ elements. If $k = \frac{n}{2}$, $P_n$ is a $\pm$ set, and it has a convex decomposition with $\frac{4n}{3}-c$ elements, by Lemma \[lemamine\]. In case that $\frac{3n}{7}< k < \frac{n}{2},$ we consider every ${\cal A}_i.$ Let $I = {\cal A}_i \cap Conv(P_n)^o.$ If $I={\cal A}_i$ let $q_i$ be the element in ${\cal A}_i$ with the highest $y$ coordinate (if two points attain this coordinate, we take $q_i$ to be the one with the greater $x$ coordinate). If $I \not = {\cal A}_i,$ let $q_i$ be the element with the highest label in ${\cal A}_i - I,$ and make $r_i$ the element of $B_i$ with minimum $y$ coordinate; if two points attain it, we take $r_i$ to be the one with maximum $x$ coordinate. See Figure \[coleccionPM\]. We make $P' = \{q_1, r_1, q_2, r_2, ... , q_{k-1}, r_{k-1}, q_k\} \cup \{p_1,p_2,p_n\}.$ $P'$ is a $\pm$ set with $2k+2$ elements. By Lemma \[lemamine\], $P'$ has a convex decomposition $\Gamma'$ with $\frac{4}{3} (2k+2) -c$ elements.
Let $S$ be the set $P_n - P',$ where $|S|= n - 2k - 2.$ By Observation 1, we find that every element of $S$, when added, increases the number of polygons by 2, so $P'$ and $S$ induce a minimal convex decomposition $\Gamma$ of $P'\cup S=P_n$ with $\frac{4}{3} (2k+2) -c + 2|S|$ elements. Substituting $|S|$, we have that $|\Gamma| = 2n-\frac{4}{3}k-c-\frac{4}{3}.$ Using the fact that $k \geq \frac{3n}{7}$ we obtain that $\Gamma$ is such that $|\Gamma| \leq \frac{10n}{7}-c.$ Concluding remarks ================== Analogously to a triangulation of $P_n,$ we can define a [*convex quadrangulation*]{}. It would be interesting to characterize the $n$–point sets that admit a convex quadrangulation. [99]{} . [*The point set order type data base: A collection of applications and results*]{}. In Proc. 13th Annual Canadian Conference on Computational Geometry CCCG 2001, pages 17-20, Waterloo, Ontario, Canada, 2001 . Int. J. Comp. Geometry, 13, no. 2, 113-133, (2004). , [*Planar Point Sets with Large Minimum Convex Partitions*]{}. 22nd European Workshop on Computational Geometry, Delphi, Greece, pp. 51-54, 2006. , [*A Note on Convex decompositions*]{}. Graphs and Combinatorics Vol. 20, no. 2, pp. 223-231, (2004). . 10th Canadian Conference on Computational Geometry, McGill University, Montreal, 1998.
--- abstract: 'An iterative method is derived for image reconstruction. Among other attributes, this method allows constraints unrelated to the radiation measurements to be incorporated into the reconstructed image. A comparison is made with the widely used Maximum-Likelihood Expectation-Maximization (MLEM) algorithm.' author: - 'Clinton DeW. Van Siclen' date: 4 January 2011 title: 'Iterative method for solution of radiation emission/transmission matrix equations' --- Imaging by radiation emission or transmission effectively produces a set of linear equations to be solved. For example, in the case of coded aperture imaging, the solution is a reconstructed set of radiation sources, while in the case of x-ray interrogation, the solution is a set of attenuation coefficients for the voxels comprising the volume through which the x-ray beam passes. The linear equations have the form$$d_{i}=\sum\limits_{j=1}^{J}M_{ij}\mu_{j} \label{e1}$$ where the set $\left\{ d_{i}\right\} $ corresponds to the radiation intensity distribution recorded at a detector (a detector pixel is labeled by the index $i$), the set $\left\{ \mu_{j}\right\} $ is the solution, and the matrix element $M_{ij}$ connects the *known* $d_{i}$ to the *unknown* $\mu_{j}$. Typically the matrix $M$ is non-square so that $\left\{ \mu_{j}\right\} $ cannot be obtained by standard matrix methods. (And note that, when the set of equations is large, it can be difficult to ascertain *a priori* whether the equation set is over- or under-determined.) In any case the matrix equation $d=M\mu$ may be solved by the iterative method that is derived as follows. Clearly this method will feature a relation between $\mu_{j}^{(n)}$ and $\mu_{j}^{(n-1)}$, where $n$ is the iteration number. Consider the two equations for $\mu_{j}^{(n)}$ and $\mu_{j}^{(n-1)}$,$$d_{i}^{(n)}={\textstyle\sum\nolimits_{j}} M_{ij}\mu_{j}^{(n)} \label{e2}$$$$d_{i}^{(n-1)}={\textstyle\sum\nolimits_{j}} M_{ij}\mu_{j}^{(n-1)} \label{e3}$$ and rewrite the latter as$$d_{i}=\frac{d_{i}}{d_{i}^{(n-1)}}{\textstyle\sum\nolimits_{j}} M_{ij}\mu_{j}^{(n-1)}\text{.} \label{e4}$$ Then the relationship between $\mu_{j}^{(n)}$ and $\mu_{j}^{(n-1)}$ is obtained by setting ${\textstyle\sum\nolimits_{i}} d_{i}^{(n)}={\textstyle\sum\nolimits_{i}} d_{i}$:$${\textstyle\sum\nolimits_{i}} \left( {\textstyle\sum\nolimits_{j}} M_{ij}\mu_{j}^{(n)}\right) ={\textstyle\sum\nolimits_{i}} \left( \frac{d_{i}}{d_{i}^{(n-1)}}{\textstyle\sum\nolimits_{j}} M_{ij}\mu_{j}^{(n-1)}\right)$$$${\textstyle\sum\nolimits_{j}} \left\{ \mu_{j}^{(n)}{\textstyle\sum\nolimits_{i}} M_{ij}\right\} ={\textstyle\sum\nolimits_{j}} \left\{ \mu_{j}^{(n-1)}{\textstyle\sum\nolimits_{i}} \left( \frac{d_{i}}{d_{i}^{(n-1)}}M_{ij}\right) \right\}$$$$\mu_{j}^{(n)}=\mu_{j}^{(n-1)}\frac{1}{{\textstyle\sum\nolimits_{i}} M_{ij}}{\textstyle\sum\nolimits_{i}} \left( \frac{d_{i}}{d_{i}^{(n-1)}}M_{ij}\right) \text{.} \label{e5}$$ Note that this last equation can be written$$\mu_{j}^{(n)}=\mu_{j}^{(n-1)}\frac{\left\langle \frac{d_{i}}{d_{i}^{(n-1)}}M_{ij}\right\rangle _{i}}{\left\langle M_{ij}\right\rangle _{i}} \label{e6}$$ where the last factor is essentially a weighted average of all $d_{i}/d_{i}^{(n-1)}$. Thus the set $\left\{ \mu_{j}^{(n)}\right\} $ approaches a solution $\left\{ \mu_{j}\right\} $ by requiring ${\textstyle\sum\nolimits_{i}} d_{i}^{(n)}={\textstyle\sum\nolimits_{i}} d_{i}$ at each iteration; in effect, by requiring all $d_{i}^{(n)}\rightarrow d_{i}$. The iteration procedure alternates between use of Eq. (\[e3\]) and Eq. 
(\[e5\]) until all $d_{i}^{(n)}$ are as close to $d_{i}$ as desired. For the first ($n=1$) iteration, an initial set $\left\{ \mu_{j}^{(0)}\right\} $ is chosen, which produces the set $\left\{ d_{i}^{(0)}\right\} $ according to Eq. (\[e3\]). These values are used in Eq. (\[e5\]), so producing the set $\left\{ \mu_{j}^{(1)}\right\} $. And so on… That a final set $\left\{ \mu_{j}^{(n)}\right\} $ is a solution $\left\{ \mu_{j}\right\} $ to the matrix equation $d=M\mu$ is verified by checking that all $d_{i}^{(n)}=d_{i}$ to within a desired tolerance. (A short numerical illustration of this procedure is given at the end of this note.) Some cautions and opportunities follow from this simple derivation of Eq. (\[e5\]). A caution is that, in the event the equation set is *under*-determined, different initial sets $\left\{ \mu_{j}^{(0)}\right\} $ will lead to different final sets $\left\{ \mu_{j}\right\} $ that satisfy the matrix equation. The corresponding opportunity is that this problem may be mitigated to some extent by the addition, to the original set of equations, of linear equations that further constrain the $\mu_{j}$ (perhaps derived from, for example, independent knowledge of some of the contents of a container under interrogation). In general the $d_{i}$ appearing in a constraint equation will have nothing to do with radiation intensity. The form of any added constraints, and the initial choice $\left\{ \mu _{j}^{(0)}\right\} $, must allow all $\mu_{j}^{(n)}\rightarrow\mu_{j}$ and $d_{i}^{(n)}\rightarrow d_{i}$ monotonically. In particular, care should be taken when a constraint has one or more coefficients $M_{ij}<0$, as that affects the denominator ${\textstyle\sum\nolimits_{i}} M_{ij}$ in Eq. (\[e5\]) (a straightforward fix may be to reduce the magnitudes of all $M_{ij}$ coefficients and $d_{i}$ in that constraint equation by a multiplicative factor). In any event, the acceptability of a constraint equation is easily ascertained by monitoring the behavior $d_{i}^{(n)}\rightarrow d_{i}$ for that constraint. Note that *all* solutions $\left\{ \mu_{j}\right\} $ to a set of equations that includes additional constraints with $d_{i}>0$ and all $M_{ij}\geq0$ are accessible from sets $\left\{ \mu_{j}^{(0)}\right\} $ of initial values, and further that *any* set $\left\{ \mu_{j}^{(0)}\right\} $ will produce a solution $\left\{ \mu_{j}\right\} $. This suggests that, for this implementation of constraints, a superposition of many solutions may give a good probabilistic reconstruction. To achieve this, consider that the innumerable solutions to the set of equations may be regarded as points in a $J$-dimensional space ($J$ is the number of elements in a solution $\left\{ \mu_{j}\right\} $). These points must more-or-less cluster, producing a cluster *centroid* that is itself a solution. While the centroid solution $\left\{ \mu_{j}^{(c)}\right\} $ has no intrinsic special status (as all cluster points represent equally likely reconstructions), it may be taken to *represent* the particular set of equations. The cluster size, which indicates the degree to which solutions are similar to one another, should decrease as constraints are added. A logical measure of the cluster size is$$\sigma_{\text{cluster}}=\left\langle \left( \mathbf{x}_{c}-\mathbf{x}_{k}\right) \cdot\left( \mathbf{x}_{c}-\mathbf{x}_{k}\right) \right\rangle _{k}^{1/2} \label{e7}$$ where $\mathbf{x}_{c}$ is the centroid vector and $\mathbf{x}_{k}$ is the vector corresponding to the *k*th solution.
Thus the quantity $\sigma_{\text{cluster}}/\sqrt{J}$, which represents the standard deviation of the innumerable values of an arbitrary element $\mu_{j}$, is a useful measure of the variation among solutions $\left\{ \mu_{j}\right\} $. In general it is desirable that the variation *among* solutions be much less than the variation *within* the centroid solution, which is$$\sigma_{\mu}^{(c)}=\left\langle \left( \mu_{j}^{(c)}-\overline{\mu^{(c)}}\right) ^{2}\right\rangle _{j}^{1/2} \label{e8}$$ where $\overline{\mu^{(c)}}=\left\langle \mu_{j}^{(c)}\right\rangle _{j}$. In that case ($\sigma_{\text{cluster}}J^{-1/2}\ll\sigma_{\mu}^{(c)}$) the centroid solution is little changed by additional constraints, so suggesting that the centroid solution $\left\{ \mu_{j}^{(c)}\right\} $ may be regarded as the sought-after reconstruction. Another caution follows from the fact that the denominator ${\textstyle\sum\nolimits_{i}} M_{ij}$ in Eq. (\[e5\]) is the sum of all elements in column $j$ of matrix $M$. This iterative method can of course be used without explicitly converting a set of linear equations into a matrix equation (or *several* sets of equations into a *single* matrix equation), but in that event very careful attention must be paid to get the factors ${\textstyle\sum\nolimits_{i}} M_{ij}$ right. It may be noticed that Eq. (\[e5\]) is similar to the so-called Maximum-Likelihood Expectation-Maximization (MLEM) algorithm (see refs. [@r1] and [@r2] for derivations of the latter, and see numerous papers in the recent imaging literature for applications of it). The MLEM, which is derived from physical considerations having to do with radiation emission and detection, purports to find the set $\left\{ \mu_{j}\right\} $ that *maximizes* the probability $P\left( \left\{ d_{i}\right\} |\left\{ \mu_{j}\right\} \right) $, which is the probability of realizing the observed set $\left\{ d_{i}\right\} $ given a set $\left\{ \mu_{j}\right\} $. This is in contrast to the derivation above, which leads to Eq. (\[e5\]) as simply a method to find a solution to a matrix equation (a set of linear equations). The derivation presented here makes clear how the iterative procedure should be implemented for an application, and allows constraints to be added to the original set of equations (those produced by the imaging exercise) thereby enabling a more-accurate reconstruction when the original equation set is under-determined. This work was supported in part by the INL Laboratory Directed Research and Development Program under DOE Idaho Operations Office Contract DE-AC07-05ID14517. [9]{} L. A. Shepp and Y. Vardi, Maximum likelihood reconstruction for emission tomography, *IEEE Trans. Med. Imag.* **MI-1**, 113 (1982). K. Lange and R. Carson, EM reconstruction algorithms for emission and transmission tomography, *J. Comput. Assist. Tomography* **8**, 306 (1984).
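By way of illustration, the update rule of Eq. (\[e5\]) and the cluster measure of Eq. (\[e7\]) may be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not a reference implementation: the system matrix, data vector, number of random restarts, and tolerance below are illustrative placeholders rather than values from any particular imaging problem.

```python
import numpy as np

def solve(M, d, mu0, tol=1e-8, max_iter=100000):
    """Iterate the multiplicative update [e5]: mu_j <- mu_j * [sum_i (d_i/d_i_model) M_ij] / sum_i M_ij."""
    mu = np.array(mu0, dtype=float)
    col_sums = M.sum(axis=0)                          # sum_i M_ij for each column j
    for _ in range(max_iter):
        d_model = M @ mu                              # current d_i^(n-1), as in [e3]
        mu = mu * (M.T @ (d / d_model)) / col_sums    # multiplicative update [e5]
        if np.max(np.abs(M @ mu - d)) < tol * np.max(np.abs(d)):
            break                                     # all d_i^(n) close to d_i
    return mu

# Illustrative under-determined system: 2 equations, 3 unknowns (placeholder values).
M = np.array([[1.0, 2.0, 0.5],
              [0.3, 1.0, 2.0]])
d = np.array([4.0, 5.0])

# Different positive starting guesses lead to different admissible solutions;
# their centroid and the cluster size of [e7] characterize the solution set.
rng = np.random.default_rng(0)
solutions = np.array([solve(M, d, rng.uniform(0.1, 2.0, size=3)) for _ in range(200)])
centroid = solutions.mean(axis=0)
sigma_cluster = np.sqrt(np.mean(np.sum((solutions - centroid) ** 2, axis=1)))
print(centroid, M @ centroid, sigma_cluster)
```

Because this toy system is consistent with non-negative coefficients, the centroid of the recovered solutions again satisfies $d=M\mu$, in line with the discussion above.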
--- abstract: 'We show that a closed almost Kähler 4-manifold of globally constant holomorphic sectional curvature $k\geq 0$ with respect to the canonical Hermitian connection is automatically Kähler. The same result holds for $k<0$ if we require in addition that the Ricci curvature is $J$-invariant. The proofs are based on the observation that such manifolds are self-dual, so that Chern–Weil theory implies useful integral formulas, which are then combined with results from Seiberg–Witten theory.' address: - 'Department of Mathematics, Bronx Community College of CUNY, Bronx, NY 10453, USA.' - 'Universitätsstrasse 14, 86159 Augsburg, Germany.' author: - Mehdi Lejmi - Markus Upmeier bibliography: - 'biblio-OWR.bib' title: ' Closed almost Kähler 4-manifolds of constant non-negative Hermitian holomorphic sectional curvature are Kähler. ' --- Introduction ============ Dating back to the Goldberg conjecture [@MR0238238], the question how the geometry of a closed almost Kähler $4$-manifold can force the integrability of an almost complex structure has been considered by many authors. This conjecture has been verified for non-negative scalar curvature by Sekigawa [@MR787184]. Much else of what is known has now been subsumed by Apostolov–Armstrong–Drăghici in [@MR1921552], where it is shown that the third Gray curvature condition is in fact sufficient. There are also strong results for the $*$-Ricci curvature [@MR1604803], [@MR1317012]. Assuming non-negative scalar curvature, the third Gray condition can be relaxed to the Ricci tensor being $J$-invariant [@MR1675380]. Strengthening the Einstein condition, Blair [@MR1043404] has shown that almost Kähler manifolds of constant sectional curvature are flat Kähler. For non-flat examples it is natural to consider instead the holomorphic sectional curvature, where one restricts the sectional curvature to $J$-invariant planes. Restricting to a subclass of the Gray–Hervella classification, say almost Kähler, the problem of classifying manifolds of constant holomorphic sectional curvature was posed by Gray–Vanhecke [@MR531928]. A related problem is understanding when the notions of pointwise constant and globally constant holomorphic sectional curvature agree. This holds for nearly Kähler manifolds [@MR0358648], but there are non-compact counterexamples in the almost Kähler case [@MR1806472]. Moreover, for almost Kähler manifolds the classification problem so far has remained inconclusive (see also [@MR1638199; @MR745268]). The purpose of this paper is to prove such a classification result for the Hermitian holomorphic sectional curvature instead of the Riemannian one. We obtain optimal results for non-negative curvature, while in the negative case we need to impose the ‘natural’ condition of the Ricci tensor being $J$-invariant. Note that constant Hermitian holomorphic sectional curvature does not obviously imply the Einstein condition, nor that any of the scalar curvatures are constant. Assuming this, we also obtain a partial result in Corollary \[Mehdi-obs\]. Overview of results ------------------- Let $(M,g,J,F)$ be an almost Hermitian $4$-manifold. Define the (first canonical) Hermitian connection by $$\label{canonical-connection} \nabla_X Y \coloneqq D^g_XY - \frac12 J(D^g_XJ)Y.$$ From its curvature we derive the Hermitian holomorphic sectional curvature $$\label{sectional-curvature} H(X) \coloneqq -R^\nabla_{X,JX,X,JX},$$ a function on the unit tangent bundle. We now state the main results of this paper. 
\[main-theorem\] Let $M$ be a closed almost Kähler $4$-manifold of globally constant Hermitian holomorphic sectional curvature $k\geq 0$. Then $M$ is Kähler–Einstein, holomorphically isometric to: ($k>0$) : $\mathbb{C}P^2$ with the Fubini–Study metric. ($k=0$) : a complex torus or a hyperelliptic curve with the Ricci-flat Kähler metric. Recall here that a hyperelliptic curve is a quotient of a complex torus by a finite free group action. This classification is well-known for Kähler manifolds (see [@MR1393941 Theorems 7.8, 7.9] and [@MR0052869; @MR0063740] in the simply-connected case and [@MR1348147 Theorem 2] in general). The main goal of this paper is to show that $J$ is automatically integrable. In case $k<0$ we shall prove the following weaker result: \[main-theorem-negative\] Let $M$ be a closed almost Kähler $4$-manifold of pointwise constant Hermitian holomorphic sectional curvature $k < 0$. Assume also that the Ricci tensor is $J$-invariant. Then $M$ is Kähler–Einstein, holomorphically isometric to a compact quotient of the complex hyperbolic ball $\mathbb{B}^4$ with the Bergman metric. The proofs rely on the following pointwise result of independent interest, in which $M$ may be non-compact: \[thm:char\] Let $M$ be an almost Hermitian $4$-manifold. The holomorphic sectional curvature with respect to the Hermitian connection is constant $k$ at the point $p\in M$ if and only if at that point 1. $W^-=0$, 2. $\ast \rho = r$. Condition ii) may also be expressed using the (Riemannian) Ricci tensor, see Proposition \[Einstein-Kaehler\] (we refer to [@MR793346] in the Hermitian case). Hence in proving Theorems \[main-theorem\], \[main-theorem-negative\] we may restrict attention to *self-dual manifolds*, meaning $W^-=0$. Their classification is an old and in general still open problem, but under additional assumptions many results have been obtained. See [@MR1956815; @MR1348147; @MR867684; @MR861766] for results and further overview. Our main theorems can also be regarded in this way. Strategy of proof ----------------- The first step is to reformulate constant Hermitian holomorphic sectional curvature in terms of the Riemannian curvature tensor (Theorem \[thm:char\]). This is an algebraic argument at a point, based on the decomposition of the Riemannian curvature tensor in dimension $4$ and the explicit nature of the gauge potential in . In some sense, the assumption of constant curvature is played off against the symmetries of the Riemannian curvature tensor. This is carried out in Section \[sec:self-dual\], after having recalled some preliminaries in the next section. The next step, taken in Section \[sec:21\], is to improve our understanding of the Hermitian curvature tensor in the almost Kähler case. It is remarkable that in we obtain information on the full curvature tensor, even though our assumptions depend only upon its $(1,1)$-part. Up to this point our arguments are mostly algebraic. To proceed, we must exploit consequences of the differential Bianchi identity. Thus in Section \[sec:integral-formulas\] we formulate the index theorems for the signature and the Euler characteristic using Chern–Weil theory. Applied to the Levi-Civita and the Hermitian connection, we obtain further information , , . The formulas are then used in Section \[sec:proofs\] to show Kählerness under further topological restrictions. Finally, combined with deep results from Seiberg–Witten theory, these results imply Theorem \[main-theorem\] in the case $k\geq 0$.
Theorem \[main-theorem-negative\] ($k<0$) follows by combining our results with formulas for the Bach tensor obtained in [@MR1782093]. These formulas require the Ricci tensor to be $J$-invariant. It is well possible that this additional assumption in Theorem \[main-theorem-negative\] may be removed. Preliminaries ============= Conventions ----------- Throughout let $(M,J,g,F)$ be an almost Hermitian $4$-manifold. Thus ${J\colon TM \to TM}$ is an almost complex structure, $g$ is a Riemannian metric for which $J$ is orthogonal, and $F=g(J\cdot,\cdot)$. Later we will also assume $dF=0$ so that we have an almost Kähler structure. Recall that an almost Hermitian manifold is Kähler precisely when $J$ is parallel for the Levi-Civita connection $D^g J=0$. Let $(z_1,z_2)$ be a local orthonormal frame of $T^{1,0}M$ for the induced Hermitian metric $h(Z,W)=g_\C(Z,\bar{W})$ on $TM\otimes \C= T^{1,0}M \oplus T^{0,1}M$ split in the usual fashion. Using the dual frame, the fundamental form is $F=i(z^1\bar{z}^{\bar{1}} + z^2\bar{z}^{\bar{2}})$. All tensors are extended complex linearly and we adopt the summation convention. Two-forms on $4$-manifolds -------------------------- The Hodge operator decomposes the bundle of two-forms into the self-dual and anti-self-dual parts $$\label{2formsSDASD} \Lambda^2=\Lambda^+ \oplus \Lambda^-.$$ For the structure group $U(2) \subset SO(4)$ we may split further $$\begin{aligned} \label{dual-and-selfdual} \Lambda^+\otimes \C &= \Lambda^{2,0} \oplus \Lambda^{0,2}\oplus \C\cdot F, &\Lambda^-\otimes \C &= \Lambda^{1,1}_0.\end{aligned}$$ Here $\Lambda^{1,1}_0$ stands for complex $(1,1)$-forms pointwise orthogonal to $F$. Gauge potential --------------- The gauge potential ${A=\nabla - D^g}$ of the canonical connection with respect to the Levi-Civita connection $D^g$ is complex anti-linear $$\label{A-cplx-antilinear} A_X\circ J = -J\circ A_X.$$ In dimension four, the almost Kähler condition is equivalent to $$\label{def:21symplectic} A_{JX}=-J\circ A_X.$$ Note that $M$ is Kähler $\iff A=0$. Curvature decomposition ----------------------- Regard the *Hermitian curvature* (and similarly the Riemannian curvature) as a bilinear form on $\Lambda^2$, grouping $XY$ and $ZW$, by $$\label{Rconvention} R^\nabla_{XYZW} \coloneqq g([\nabla_X,\nabla_Y] Z - \nabla_{[X,Y]}Z,W).$$ When we decompose $\Lambda^2$ into direct summands, we get a corresponding decomposition of $R^\nabla$ into a matrix of bilinear forms, where the first entry corresponds to the rows. The representing matrix of a bilinear form with respect to a basis (one-dimensional summands) will be indicated by ‘$\equiv$’. All of the algebraic properties of the *Riemannian* curvature tensor $R^g$ are summarized in the following representation with respect to and , see [@MR1604803; @MR867684]: $$\label{LC-decomposition} \begin{aligned} -R^g&= \kbordermatrix{&\Lambda^+& &\Lambda^-\\ & W^+ + \frac{s_g}{12}g & \vrule & R_0\\\cline{2-4} &R_0^T&\vrule&W^- + \frac{s_g}{12}g }\\ &= \kbordermatrix{&\C F&\Lambda^{2,0}\oplus\Lambda^{0,2}&&\Lambda_0^{1,1}\\ &d\cdot g&W_F^+ &\vrule& R_F\\ & (W_F^+)^T&W_{00}^+ + \frac{c}{2}g &\vrule& R_{00}\\ \cline{2-5} & R_F^T&R_{00}^T &\vrule&W^- + \frac{s_g}{12}g } \end{aligned}$$ Here $W^\pm$ are the *Weyl curvatures*, trace-free symmetric bilinear forms on $\Lambda^\pm$ and $s_g$ denotes the Riemannian scalar curvature. 
The *$*$-scalar curvature* is $$\begin{aligned} \label{def:starScalar} s_*&\coloneqq 4d, &\frac{s_g}{4}&=c+d.\end{aligned}$$ Moreover $R_0 \colon \Lambda^+ \otimes \Lambda^- \to \R$ corresponds to the *trace-free Riemannian Ricci tensor* $r_0$, namely $R_0(\cdot)=\frac12 \{r_0,\cdot\}$ is the anti-commutator, see [@gauduchon2010calabi (A.1.8)]. Finally $R_0^T(x,y)\coloneqq R_0(y,x)$, and $R_{00}, R_F$ denote further restrictions in the first argument. We take the tensorial norm for bilinear forms, even when they are symmetric. Ricci forms ----------- Having torsion, the curvature tensor of the canonical connection has fewer symmetries than the Riemannian one; for example the algebraic Bianchi identity no longer holds. By contracting indices we now obtain two *Ricci forms* $$\label{def:three-ricci-forms} \begin{aligned} \rho &\coloneqq i R\indices{^\nabla_{\alpha\bar\beta\gamma}^\gamma} z^\alpha\wedge \bar{z}^{\bar\beta},\\ r &\coloneqq iR\indices{^\nabla_\gamma^\gamma_{\lambda\bar\mu}} z^\lambda\wedge\bar{z}^{\bar\mu}. \end{aligned}$$ The *Chern* and *Hermitian scalar curvatures* are obtained by a further trace $$\begin{aligned} s_C &\coloneqq \Lambda(\rho) = \Lambda(r) = R\indices{^\nabla_\alpha^\alpha_\gamma^\gamma}\label{sC}, &s_H &\coloneqq 2s_C.\\ \end{aligned}$$ In the Kähler case, the canonical and the Levi-Civita connection agree so that both forms in are equal to the usual Ricci form. Holomorphic sectional curvature ------------------------------- For $Z\in T^{1,0}M$ the holomorphic sectional curvature is defined from the $(1,1)$-part of the curvature as $$\label{sectional-curvature} H(Z) = \frac{R^\nabla_{Z,\bar{Z},Z,\bar{Z}}}{h(Z,Z)h(Z,Z)}.$$ The holomorphic sectional curvature is constant at the point $p\in M$ if is a constant $k(p)$ for all $Z\in T^{1,0}_pM$. We say it is *pointwise constant* if $H$ is constant at each point of $M$. If the constant $k$ is the same at every point $p\in M$ we speak of *globally constant* holomorphic sectional curvature. Note that the Hermitian connection is always understood. Relation to Self-dual Manifolds {#sec:self-dual} =============================== In this section we will prove Theorem \[thm:char\]. Preparatory lemmas ------------------ Being the complexification of a real tensor, the $(1,1)$-part of the curvature has the following form in the basis $(z_{1\bar{1}}, z_{1\bar{2}}, z_{2\bar{1}}, z_{2\bar{2}})$: $$\label{Rbasis} R^\nabla|_{\Lambda^{1,1}\otimes \Lambda^{1,1}}\equiv\begin{pmatrix} k & \bar{a} & a & w\\ \bar{a'} & \bar{x} & \bar{v} & \bar{b}\\ a' & v & x & b\\ u & \bar{b'} & b' & l \end{pmatrix},\quad k,l, u, w \in \R.$$ We first show that the $(1,1)$-part of $R^\nabla$ is automatically restricted further. \[lem:curv-restrict\] Let $M$ be an almost Hermitian $4$-manifold. With respect to the decomposition $\Lambda^{1,1}=\C F\oplus \Lambda_0^{1,1}$ we have $$\label{curv-restricted} R^\nabla|_{\Lambda^{1,1}\otimes \Lambda^{1,1}} = -\kbordermatrix{ & \C F& \Lambda_0^{1,1}\\ &\frac{s_C}{2}g & R_F\\ &R_F^T+\beta_0 & W^- + \frac{s_g}{12}g }$$ for some $\beta_0 \colon \Lambda_0^{1,1} \otimes \C F \to \C$. Moreover $a+b=a'+b'$ and $v=\frac{s_g}{12}$ in . By direct calculation (or see [@MR867684 p. 25]) $$R^\nabla_{XYZW} = R^g_{XYZW} + \underset{\alpha\coloneqq}{\underbrace{g((\nabla_X A_Y - \nabla_Y A_X - A_{[X,Y]}) Z, W)}} - \underset{\beta\coloneqq}{\underbrace{ g([A_X, A_Y]Z,W)}}. $$ By , $\alpha \in \Lambda^2 \otimes \Lambda^{2,0+0,2}$, so only $\beta$ will contribute to the restriction of $R^\nabla$. 
Since $[A_X,A_Y]$ is complex-linear, $\beta \in \Lambda^2\otimes \Lambda^{1,1}$. On $\Lambda^{1,1}$ consider the orthonormal basis $$\begin{aligned} \label{ONB} \left(\frac{i}{\sqrt{2}}(z_{1\bar{1}}+z_{2\bar{2}}), z_{1\bar{2}}, z_{2\bar{1}}, \frac{i}{\sqrt{2}}(z_{1\bar{1}}-z_{2\bar{2}})\right).\end{aligned}$$ In dimension $4$, $\beta$ is in fact restricted to $\Lambda^2 \otimes \C F$. Indeed, the explicit formula $$\label{explicit-beta} \beta(X,Y,z_\alpha,\bar{z}_{\bar\beta}) = A\indices{_{[X\alpha\gamma}}A\indices{_{Y]\bar\beta}^\gamma}$$ shows $\beta_{XY1\bar{2}}=\beta_{XY2\bar{1}}=\beta_{XY1\bar{1}}-\beta_{XY2\bar{2}}=0$ (here ‘$[\,]$’ denotes anti-symmetrization): $$\beta|_{\Lambda^{1,1}\otimes \Lambda^{1,1}} = \kbordermatrix{ & \C F & \Lambda_0^{1,1}\\ &\tilde{\beta}\cdot g & 0\\ &\beta_0 & 0 }.$$ Complexifying and restricting to $\Lambda^{1,1}\otimes\Lambda^{1,1}$ we hence have $$\label{Rcorrection} R^\nabla=R^g + \alpha - \beta = -\kbordermatrix{ & \C F & \Lambda_0^{1,1}\\ & d\cdot g_\C & R_F\\ &R_F^T & W^- + \frac{s_g}{12}g_\C }+0 - \kbordermatrix{ & \C F & \Lambda_0^{1,1}\\ &\tilde{\beta}\cdot g_\C & 0\\ &\beta_0 & 0 }.$$ Note that the upper left corner of is $-s_C/2$ by definition . A change of basis shows that the bilinear form is represented in the basis by $$\label{Rnewbasis} R^\nabla|_{\Lambda^{1,1}\otimes \Lambda^{1,1}} \equiv \begin{pmatrix} \frac{-k-l-u-w}{2} & \frac{i}{\sqrt{2}}(\overline{a+b'}) & \frac{i}{\sqrt{2}}(a+b') & \frac{-k+l-u+w}{2}\\ \frac{i}{\sqrt{2}}(\overline{b+a'}) & \bar{x} & \bar{v} & \frac{i}{\sqrt{2}}(\overline{a'-b})\\ \frac{i}{\sqrt{2}}(b+a') & v & x & \frac{i}{\sqrt{2}}(a'-b)\\ \frac{-k+l+u-w}{2} & \frac{i}{\sqrt{2}}(\overline{a-b'}) & \frac{i}{\sqrt{2}}(a-b') & \frac{-k-l+u+w}{2} \end{pmatrix}.$$ In this basis the inner product on $2$-forms has the matrix $$g_\C \equiv \begin{pmatrix} 1\\ &&-1\\ &-1\\ &&&1 \end{pmatrix},$$ so by comparing and and using that $W^-+\frac{s_g}{12}g$ is symmetric we get $$a+b = a'+ b',\quad v=\frac{s_g}{12} \in \R.$$ For later use we note that in this basis the condition $W^-=0$ means that the lower right $3\times3$-submatrix of reduces to $$\begin{pmatrix} &\frac{s_g}{12}\\ \frac{s_g}{12}\\ &&-\frac{s_g}{12} \end{pmatrix}.$$ In other words, it means $x=0, a=b', u+2v+w=k+l$. \[holLemma-curv\] The holomorphic sectional curvature is constant $k$ at a point if and only if reduces at that point to $$\label{ChernR-balas} R^\nabla|_{\Lambda^{1,1}\otimes \Lambda^{1,1}}\equiv \begin{pmatrix} k & \bar{a} & a & w\\ -\bar{a} & 0 & \bar{v} & \bar{b}\\ -a & v & 0 & b\\ u & -\bar{b} & -b & k \end{pmatrix}\quad\text{with}\quad u+v+\bar{v}+w=2k.$$ Let $Z=xz_1+yz_2$ for arbitrary $x,y\in \C$, and expand both sides of the equation $k\cdot h(Z,Z)^2 = R^\nabla(Z,\bar{Z},Z,\bar{Z})$. 
We have $$k\cdot h(Z,Z)^2 = k|x|^4+2k|x|^2|y|^2 + k|y|^4$$ while for the right hand side $$\begin{aligned} &|x|^4 R_{z_1\bar{z}_1z_1\bar{z}_1}+ |y|^4 R_{z_2\bar{z}_2z_2\bar{z}_2}\\ &+|x|^2|y|^2 \left(R_{z_1\bar{z}_1z_2\bar{z}_2}+R_{z_2\bar{z}_2z_1\bar{z}_1}+R_{z_2\bar{z}_1z_1\bar{z}_2}+R_{z_1\bar{z}_2z_2\bar{z}_1}\right)\\ &+x^2\bar{y}^2 R_{z_1\bar{z}_2z_1\bar{z}_2} +\bar{x}^2y^2 R_{z_2\bar{z}_1z_2\bar{z}_1}\\ &+x^2\bar{x}\bar{y}\left( R_{z_1\bar{z}_1z_1\bar{z}_2}+R_{z_1\bar{z}_2z_1\bar{z}_1} \right)\\ &+\bar{x}^2xy\left( R_{z_2\bar{z}_1z_1\bar{z}_1}+R_{z_1\bar{z}_1z_2\bar{z}_1} \right)\\ &+ \bar{x}y^2\bar{y} \left( R_{z_2\bar{z}_2z_2\bar{z}_1}+R_{z_2\bar{z}_1z_2\bar{z}_2} \right)\\ &+ x\bar{y}^2y \left( R_{z_2\bar{z}_2z_1\bar{z}_2}+ R_{z_1\bar{z}_2z_2\bar{z}_2} \right)\end{aligned}$$ Since $x,y \in \C$ are arbitrary, this is an equality between polynomials in the variables $x,\bar{x},y,\bar{y}$. For them to agree, all coefficients must be equal. Comparing the coefficients of $|y|^{4}$, $x^{2}\bar{y}^{2}$, $|x|^{2}|y|^{2}$ and of the mixed cubic terms yields $l=k$, $x=0$, $u+v+\bar{v}+w=2k$, $a'=-a$ and $b'=-b$, which is precisely the assertion. The above is a simplified proof of a theorem of Balas [@MR779217] for Hermitian manifolds. Proof of Theorem \[thm:char\] ----------------- First note that by Lemma \[lem:curv-restrict\] we automatically have $a+b=a'+b'$ and $v=\frac{s_g}{12}\in \R$ in . Lemma \[holLemma-curv\] shows that $H$ is constant $k$ at a point if and only if $$\label{eq:holk1} x=0,\quad a'=-a,\quad a=-b,\quad k=l,\quad u+2v+w=2k.$$ On the other hand, by and the following discussion, self-duality means $$\label{eq:holk2} x=0,\quad a=b',\quad u+2v+w=k+l.$$ From we read off $$\begin{aligned} \rho &= i(k+w)z^{1\bar{1}} + i\overline{(a'+b)}z^{1\bar{2}} + i(a'+b)z^{2\bar{1}} + i(u+l)z^{2\bar{2}}\\ r &= i(k+u)z^{1\bar{1}} + i\overline{(a+b')}z^{1\bar{2}} + i(a+b')z^{2\bar{1}}+i(w+l)z^{2\bar{2}}\end{aligned}$$ Therefore $\ast \rho = r$ means $$\label{forms-dual} k=l,\quad a'+b=-(a+b').$$ Under the general assumption $a+b=a'+b'$ and $v\in \R$ it is easy to verify that is equivalent to with . $\square$ Sharper results for almost Kähler manifolds {#sec:21} =========================================== We now obtain more information on the terms in $$R^\nabla=R^g + \alpha - \beta$$ Full description of $\beta$ --------------------------- Let $M$ be an almost Kähler $4$-manifold. Then $$\begin{aligned} \label{betanorms} \tilde{\beta}&=-\frac12|A|^2, &\|\beta_0\|^2&=\frac14|A|^4.\end{aligned}$$ Here we use the convention $|A|^2 = \frac12 \sum_{i=1}^4 \operatorname{tr}\left( A_{e_i}^T A_{e_i} \right)$. More precisely, from we have $\beta \in \Lambda^{1,1}\otimes \C\cdot F$. Using one sees $$\label{eqn:betamatrix} \beta= \kbordermatrix{&\C F&\Lambda^{2,0}\oplus\Lambda^{0,2}&&\Lambda_0^{1,1}\\ &\tilde\beta &0 &\vrule& 0\\ & 0&0 &\vrule& 0\\ \cline{2-5} &\beta_0&0&\vrule&0 } $$ where $$\begin{aligned} \tilde\beta &= -|A_{112}|^2-|A_{212}|^2, &\beta_0 &= \begin{pmatrix} i\sqrt{2}\cdot A_{112}\overline{A_{212}}\\ i\sqrt{2}\cdot A_{212}\overline{A_{112}}\\ |A_{212}|^2-|A_{112}|^2 \end{pmatrix}.\end{aligned}$$ In the upper left corner of we recover the well-known formula ([@MR1782093], [@gauduchon2010calabi (9.4.5)]) $$\label{SstarSHerm} \frac{s_* - s_H}{4} = \frac12|A|^2.\qedhere$$ \[Einstein-Kaehler\] Let $M$ be a self-dual almost Hermitian $4$-manifold. Then $M$ has constant holomorphic sectional curvature at $p\in M$ precisely when $$\label{RFbeta0} R_F=-\frac12 \beta_0^T$$ at that point. In particular, when $M$ is almost Kähler and $M$ has pointwise constant holomorphic sectional curvature, then: $M$ is Kähler $\iff R_F=0$.
Putting the self-duality condition from the proof of Theorem \[thm:char\] into we get representative matrices $$-R_F\equiv (\sqrt{2}i\bar{a}, \sqrt{2} ia, v+w-k),\quad R_F^T + \beta_0 \equiv -\begin{pmatrix} \sqrt{2}i\bar{b}\\ \sqrt{2}ib\\ \frac{u+v-k}{2} \end{pmatrix}$$ with respect to the orthonormal basis . Hence a self-dual manifold has constant holomorphic sectional curvature precisely when holds. In the almost Kähler case we may use to get $$\label{RF2} \|R_F\|^2 = \frac14 \|\beta_0\|^2 = \frac{|A|^4}{16}.\qedhere$$ Constant holomorphic sectional curvature ---------------------------------------- Let $M$ be almost Kähler of constant holomorphic sectional curvature $k$. Then we have $$\label{fullChernDecomposition} -R^\nabla= \kbordermatrix{&\C F&\Lambda^{2,0}\oplus\Lambda^{0,2}&&\Lambda_0^{1,1}\\ &\frac{s_C}{2} g&0&\vrule& R_F\\ &W_F^+&0&\vrule& R_{00}\\ \cline{2-5} &-R_F^T&0&\vrule& \frac{s_g}{12}g }$$ Put all the facts $\alpha \in \Lambda^2\otimes \Lambda^{2,0+0,2}$, $R^\nabla \in \Lambda^2\otimes \Lambda^{1,1}$, and , into the formula $R^\nabla=R^g + \alpha - \beta$. We now collect some formulas that will be useful. For these, assume that $M$ is almost Kähler and has pointwise constant holomorphic sectional curvature $k$. Comparing the upper left corners of and and then using we see $$\label{sCv} s_C = 4k-2v.$$ Recall also $$\label{sgv} s_g = 12v.$$ Since $M$ is almost Kähler we have for the $*$-scalar curvature [@gauduchon2010calabi (9.4.5)] $$\label{sStarV} s_* = 4s_C - s_g = 16k-20v.$$ Moreover can now be written $$\label{RFv} |R_F|^2 = \frac14\left( \frac{s_* - s_H}{4} \right)^2 = (k-2v)^2$$ Putting these formulas into shows: \[integrability-criterion\] Let $M$ be almost Kähler of pointwise constant holomorphic sectional curvature $k$. We then have $v\leq \frac{k}{2}$ with equality if and only if $M$ is Kähler.$\square$ The formulas also show that when $M$ has globally constant holomorphic sectional curvature, the constancy of any of $s_g, s_*, s_C$ is equivalent to that of $v$. Integral formulas {#sec:integral-formulas} ================= Having understood the pointwise (algebraic) implications of constant holomorphic sectional curvature, we now turn to properties that do not hold for general algebraic curvature tensors. Thus we formulate the consequences of Chern–Weil theory, which stem ultimately from the differential Bianchi identity. We will assume in this section that $M$ is an almost Kähler $4$-manifold of pointwise constant holomorphic sectional curvature, but generalizations are possible. Chern–Weil theory ----------------- Given an arbitrary metric connection $\nabla$ on $TM$ and a polynomial $P$ on $\mathfrak{so}(4)$, invariant under the adjoint action of $SO(4)$, one obtains a differential form $P(R^\nabla)$ by substituting the indeterminants by the curvature $R^\nabla\colon \Lambda^2 \to \mathfrak{so}(TM)$. The upshot of Chern–Weil theory (see for example [@MR0440554]) is that $P(R^\nabla)$ defines a closed form whose cohomology class is independent of $\nabla$. In particular, the integral over $M$ remains the same for all connections. In the $4$-dimensional case, it suffices to consider the Pontrjagin and Pfaffian polynomials. 
To express these conveniently, use the metric to identify $\mathfrak{so}(TM)\cong \Lambda^2$ and decompose as above $$-R^\nabla = \kbordermatrix{&\Lambda^+& &\Lambda^-\\ & R_{sd}^+& \vrule & R_{asd}^+\\\cline{2-4} & R_{sd}^- & \vrule & R_{asd}^- }.$$ Then $$\label{char-forms} \begin{aligned} p_1(R^\nabla) &= \frac{1}{4\pi^2} \left(\|R_{sd}^+\|^2+\|R_{sd}^-\|^2 - \|R_{asd}^+\|^2 - \|R_{asd}^-\|^2 \right)\operatorname{vol},\\ \operatorname{Pf}(R^\nabla) &= \frac{1}{8\pi^2}\left(\|R_{sd}^+\|^2-\|R_{sd}^-\|^2 - \|R_{asd}^+\|^2 + \|R_{asd}^-\|^2 \right)\operatorname{vol}, \end{aligned}$$ where the norm is induced by the usual inner product $\operatorname{tr}(f^*g)$ on $\operatorname{End}(\Lambda^2)$. For the Levi-Civita connection we evaluate using , $W^-=0$, and $\|I\|^2=3$: $$\label{p1Dg} \begin{aligned} p_1(D^{g}) &= \frac{1}{4\pi^2}\left( \|W^+ +\frac{s_g}{12}I\|^2 + \|R_0^*\|^2 - \|R_0\|^2 - \|W^- + \frac{s_g}{12}I\|^2\right)\operatorname{vol}_g\\ &= \frac{1}{4\pi^2}(\|W^+\|^2 - \|W^-\|^2)\operatorname{vol}_g\\ &= \frac{1}{4\pi^2}\|W^+\|^2\operatorname{vol}_g\\ \end{aligned}$$ and[^1] $$\label{Pfaffian-LC} \begin{aligned} \operatorname{Pf}(D^g) &= \frac{1}{8\pi^2}(\|W^+ +\frac{s_g}{12}I\|^2 + \|W^- + \frac{s_g}{12}I\|^2 - \|R_0^*\|^2 - \|R_0\|^2)\operatorname{vol}_g\\ &=\frac{1}{8\pi^2} (\|W^+\|^2 + \|W^-\|^2 + \frac{1}{24}s_g^2 - 2\|R_0\|^2)\operatorname{vol}_g\\ &= \frac{1}{8\pi^2} (\|W^+\|^2 + \frac{1}{24}s_g^2 - 2\|R_0\|^2)\operatorname{vol}_g \end{aligned}$$ Similarly, for the Hermitian connection we evaluate using : $$\begin{aligned} \label{p1Nabla}p_1(\nabla) &= \frac{1}{4\pi^2} \left( \frac{s_C^2}{4} + \|W_F^+\|^2 + \|R_{00}\|^2 - \frac{s_g^2}{48} \right)\operatorname{vol}_g\\ \operatorname{Pf}(\nabla) &= \frac{1}{8\pi^2} \left(\frac{s_C^2}{4}+\|W_F^+\|^2 + \frac{s_g^2}{48} - 2\|R_F\|^2 - \|R_{00}\|^2 \right)\operatorname{vol}_g\end{aligned}$$ Index theorems -------------- The pre-factors in are chosen so that for the signature and Euler characteristic the classical index theorems hold (see [@MR0440554]): $$\label{indextheorem} \begin{aligned} \sigma &= \frac13 \int_M p_1(R^\nabla)\\ \chi &= \int_M \operatorname{Pf}(R^\nabla) \end{aligned}$$ This gives us two expressions for the signature and for the Euler characteristic . Equating these leads to the same conclusion in both cases: Let $M$ be a closed almost Kähler $4$-manifold of pointwise constant holomorphic sectional curvature $k$. Then (we omit the volume form) $$\label{Chern-Weil-Conclusion} \int_M |W_F^+|^2 + |W_{00}^+|^2 + 4(5k-7v)(k-2v) = \int_M |R_{00}|^2$$ By the above remarks $$\int_M p_1(D^g)= \int_M p_1(\nabla)$$ and we insert , . Then using we get $$\|W^+\|^2 = 2\|W_F^+\|^2 + \|W_{00}^+\|^2 + \frac{1}{6}(3s_C-s_g)^2.$$ since in the almost Kähler case $s_*=4d$, $\frac{s_g}{4}=c+d$, and $\frac{s_g+s_*}{2}=2s_C$. Finally insert – to obtain the conclusion . Doing the same computation for the Pfaffian, use $\|R_0\|^2 = \|R_{00}\|^2 + \|R_F\|^2$. This leads to the same formula . Putting and – into gives the following: Let $M$ be a closed almost Kähler $4$-manifold of pointwise constant holomorphic sectional curvature $k$. Then $$\begin{aligned} \label{chi} \chi &= \frac{-1}{8\pi^2} \int_M |W_{00}^+|^2+(60v^2-72 kv + 18k^2)\\ \frac{3}{2}\sigma &= \frac{1}{8\pi^2} \int_M 2|W_F^+|^2 + |W_{00}^+|^2 + 6(2k-3v)^2 \enskip\geq\enskip 0\label{sigma}\end{aligned}$$ Combined with [@MR1604803 Lemma 3] we conclude: \[Mehdi-obs\] Let $M$ be closed almost Kähler of globally constant holomorphic sectional curvature $k$. 
Suppose $M$ is simply connected (or, more generally, that $5\chi + 6\sigma \neq 0$) and that any of $s_g, s_*, s_C$ is constant. Then $M$ is Kähler. Suppose by contradiction that $J$ is not integrable, so that $v<\frac{k}{2}$ at some point, by Lemma \[integrability-criterion\]. Then and show that $|N|^2$ is a non-zero constant. Hence by [@MR1604803 Lemma 3] we must have $$5\chi + 6\sigma = 0.$$ This contradicts $\chi=2+b_2 > 0$ and $\sigma\geq 0$ from . Proof of Main Theorems {#sec:proofs} ====================== Intermediate results -------------------- Before proving Theorem \[main-theorem\] we need to establish two preliminary results that give Kählerness under topological restrictions. \[flatcase\] Let $M$ be closed almost Kähler $4$-manifold of pointwise constant holomorphic sectional curvature $k$. Suppose $\sigma=0$ for the signature. Then $k=0$ and $M$ is Kähler, with a Ricci-flat metric. By , $\sigma=0$ implies $W_F^+=0$, $W_{00}^+ = 0$, and $v=\frac23 k$. Putting this into shows $R_{00}=0$ and $k=0$, since the left hand side reduces to $-\frac49 k^2\cdot\operatorname{Vol}(M)$ and the right hand side is non-negative. Hence integrability follows from Lemma \[integrability-criterion\]. Then from we also get $R_F=0$, so $R_0=R_{00}+R_F=0$. Hence the metric is Ricci-flat. We next show a ‘reverse’ Bogomolov–Miyaoka–Yau inequality. \[thm-sigma-chi\] If $M$ is closed almost Kähler of globally constant holomorphic sectional curvature $k \geq 0$ then $$\label{sigma-chi} 3\sigma \geq \chi.$$ Equality holds if and only if $M$ is Kähler (even Kähler–Einstein). Estimate follows by combining , to get $$\label{nice1} 3 \sigma - \chi = \frac{1}{8\pi^2}\int_M |W_F^+|^2 + 3|R_{00}|^2 + 6k(k-2v)$$ and the fact $v\leq \frac{k}{2}$. Putting $3\sigma=\chi$ into shows $$\label{temp12049u12} 0 = \int_M |W_F^+|^2 + 3|R_{00}|^2 + 6k(k-2v)$$ which is the sum of three non-negative terms. Hence all summands vanish. When $k\neq 0$ is a global non-zero constant we get $$\int_M (k-2v)\operatorname{vol}= 0.$$ Since $k-2v\geq 0$ this means $k=2v$ everywhere, hence integrability by Lemma \[integrability-criterion\]. If on the other hand we suppose $k=0$ then gives $W_F^+ = 0$, $R_{00}=0$. Putting this into shows $$0 = \int_M |W_{00}^+|^2 + 56v^2$$ and so $v=0$. We again therefore have $v=\frac{k}{2}=0$ and may apply Lemma \[integrability-criterion\]. Proof of Theorem \[main-theorem\] --------------------------------- By Lemma \[integrability-criterion\] we know that if $v=\frac{k}{2}$ everywhere, then $M$ is Kähler. Let us argue by contradiction and therefore suppose that $v<\frac{k}{2}$ somewhere (the case $k>0$ can also be deduced directly). Then $$\int_M c_1(TM) \cup \omega = \int_M \frac{s_C}{2\pi} = \int_M \frac{4k-2v}{2\pi} = \underbrace{\int_M \frac{3k}{2\pi}}_{\geq 0} + \int_M \frac{k-2v}{2\pi} > 0.$$ According to results in Seiberg–Witten theory of Taubes [@MR1324704], Lalonde–McDuff [@MR1432456], and Liu [@MR1418572], this implies that $M$ is symplectomorphic to a ruled surface or to $\C P^2$ (see LeBrun [@MR3358072 Section 2] for an overview, or also [@MR1417784 Theorem 1.2]). In the case $\C P^2$ we get a contradiction to Proposition \[thm-sigma-chi\]. Suppose therefore that $M$ is a ruled surface. Since $\sigma \leq 0$ for ruled surfaces, by we must have $\sigma=0$, contradicting Proposition \[flatcase\]. Hence we have shown that $M$ is Kähler. The classification stated in Theorem \[main-theorem\] now follows from [@MR1348147 Theorem 2]. 
$\square$ Proof of Theorem \[main-theorem-negative\] ------------------------------------------ By Theorem \[thm:char\] we have $W^-=0$. Hence the Bach tensor vanishes and so [@MR1782093 Remark 2, p. 13] applies $$\label{conclusion-AD} 0=\int_M \left( \frac{|ds_g|^2}{2}-s_g\cdot |r_0|^2 \right)\operatorname{vol}.$$ The proof in [@MR1782093] is a consequence of a Weitzenböck formula and the $J$-invariance of the Ricci tensor is used in a crucial way. Since $s_g=12v \leq 6k < 0$, both terms in the integrand above are non-negative. Hence $s_g$ is constant and $R_0=0$. Therefore $M$ is Kähler–Einstein by Proposition \[Einstein-Kaehler\].$\square$ Further Discussion ------------------ Besides removing in Theorem \[main-theorem-negative\] the condition that the Ricci curvature is $J$-invariant, we mention the following open problems: 1. Analogous results in the non-compact case. The case of Lie groups will be the topic of an upcoming paper of L. Vezzoni and the first named author [@Lej-Vez]. 2. Counterexamples to Schur’s Theorem. Are there almost Kähler manifolds of pointwise constant *Hermitian* holomorphic sectional curvature that are not globally constant? In particular, are there compact examples? 3. There may be an alternative approach to our results using the twistor space of $M$. Note that in our situation, the tautological almost complex structure on the twistor space is integrable. What are the further geometric implications of constant holomorphic sectional curvature in terms of the twistor space? [^1]: Traditionally one writes $2\|R_0\|^2=\frac12 \|r_0\|^2$ in terms of the Ricci tensor $r_0$.
--- abstract: 'We propose a model to tackle classification tasks in the presence of very little training data. To this aim, we approximate the notion of exact match with a theoretically sound mechanism that computes a probability of matching in the input space. Importantly, the model learns to focus on elements of the input that are relevant for the task at hand; by leveraging highlighted portions of the training data, an error boosting technique guides the learning process. In practice, it increases the error associated with relevant parts of the input by a given factor. Remarkable results on text classification tasks confirm the benefits of the proposed approach in both balanced and unbalanced cases, thus being of practical use when labeling new examples is expensive. In addition, by inspecting its weights, it is often possible to gather insights on what the model has learned.' bibliography: - 'main.bib' --- \[section\]
--- abstract: 'Massive stars shape their surroundings with mass loss from winds during their lifetimes. Fast ejecta from the supernovae of these massive stars shocks this circumstellar medium. Emission generated by this interaction provides a window into the final stages of stellar evolution, by probing the history of mass loss from the progenitor. Here we use Chandra and Swift x-ray observations of the type II-P/L SN 2013ej to probe the history of mass loss from its progenitor. We model the observed x-rays as emission from both heated circumstellar matter and supernova ejecta. The circumstellar density profile probed by the supernova shock reveals a history of steady mass loss during the final 400 years. The inferred mass loss rate of $3 \times 10^{-6} {\rm \; M_\odot \; yr^{-1}}$ points back to a 14 $M_\odot$ progenitor. Soon after the explosion we find significant absorption of reverse shock emission by a cooling shell. The column depth of this shell observed in absorption provides an independent and consistent measurement of the circumstellar density seen in emission. We also determine the efficiency of cosmic ray acceleration from x-rays produced by Inverse Compton scattering of optical photons by relativistic electrons. Only about 1 percent of the thermal energy is used to accelerate electrons. Our x-ray observations and modeling provide stringent tests for models of massive stellar evolution and the micro-physics of shocks.' author: - Sayan Chakraborti - Alak Ray - Randall Smith - Raffaella Margutti - David Pooley - Subhash Bose - Firoza Sutaria - Poonam Chandra - 'Vikram V. Dwarkadas' - Stuart Ryder - Keiichi Maeda bibliography: - 'master.bib' title: 'Probing Final Stages of Stellar Evolution with X-Ray Observations of SN 2013ej' --- Introduction ============ One of the central problems in astrophysics is the mapping of stellar properties onto the properties of supernovae that they may or may not produce. Mass, spin, metallicity, and binarity are some of the parameters which are thought to determine the final outcome of stellar evolution [@2003ApJ...591..288H]. Type II-P supernovae are produced by red supergiants, between 8 and 17 $M_\odot$ in mass [@2009MNRAS.395.1409S]. X-ray lightcurves of type II-P supernovae point to an upper limit of 19 $M_\odot$ for their progenitors [@2014MNRAS.440.1917D]. Yet not all stars with such masses necessarily give rise to type II-P supernovae. In the final stages of stellar evolution the cores of massive stars rapidly burn through elements of progressively higher atomic numbers [@1978ApJ...225.1021W]. This may cause rapid variation in the energy output of the core. However, the outer layers of these stars need approximately a Kelvin-Helmholtz time scale $\rm (\sim 10^6 \; yr)$ to adjust to these changes. Therefore, surface properties like luminosity and mass loss rate should not change on short timescales in direct response. However, recent observations of luminous outbursts and massive outflows from Luminous Blue Variable progenitors [@2010AJ....139.1451S] months to years before certain supernovae, like SN 2009ip [@2013MNRAS.430.1801M; @2013Natur.494...65O; @2013ApJ...767....1P; @2014ApJ...780...21M], call this paradigm into question. The pre-supernova evolution of massive stars shapes their environments by winds and ionizing radiation. The interaction of the supernova ejecta with this circumstellar matter produces radio and x-ray emission.
Our ongoing program [@2012ApJ...761..100C; @2013ApJ...774...30C] is to observe these x-rays using various sensitive instruments and model their emission mechanism. In this work, we use Chandra and Swift x-ray observations of SN 2013ej to probe the history of mass loss from its progenitor during the last 400 years before explosion. At early times we find significant absorption of reverse shock emission by a cooling shell. We also determine the efficiency of cosmic ray acceleration from x-rays produced by Inverse Compton scattering of optical photons by relativistic electrons. Our results demonstrate that sensitive and timely x-ray observations of young nearby supernovae, coupled with modeling of the emission and absorption produced by shocked plasmas, provide stringent tests for models of pre-supernova massive stellar evolution. Observations of SN 2013ej ========================= SN 2013ej exploded in the nearby galaxy M74 [@2013CBET.3606....1K; @2013CBET.3609....1V; @2013CBET.3609....3W; @2013CBET.3609....4D] and was observed in multiple bands. It was initially classified as a type II-P supernova [@2013ATel.5275....1L; @2013ATel.5466....1L; @2014JAVSO..42..333R] with a slow rise [@2014MNRAS.438L.101V], but due to its fast decline [@2015ApJ...807...59H] it was later re-classified as a type II-L supernova [@2015MNRAS.448.2608V; @2015ApJ...806..160B]. @2015ApJ...799..208S have shown that supernovae of type II-P and II-L form a continuum of lightcurve properties like plateau duration. SN 2013ej falls somewhere along this continuum. In this work we adopt a distance of $d \sim 9.57 \pm 0.70$ Mpc and an explosion date of 23.8 July 2013 [@2015ApJ...806..160B]. Details of the x-ray observations, carried out by us and used in this work are given below and in Table \[obstab\]. Swift XRT Observations ---------------------- The Swift XRT observed SN 2013ej in x-ray bands starting from 2013 July 30 until 2013 July 31. @2013ATel.5243....1M analyzed and reported data collected during the first 15 ks of observations. The x-ray counterpart of SN2013ej was found to be separated from nearby sources. The significance of the x-ray source detection in the Swift observation was 5.2 sigma. In this work we use x-ray data collected over a longer duration of 73.4 ks by the XRT. The X-ray counterpart of SN2013ej, as seen by the XRT, is 45“ away from an ULX source M74 X-1 which is clearly detected and resolved. It is also 15” from an X-ray source J013649.2+154527 observed by Chandra. These circumstances make follow-up Chandra observations with superior angular resolution particularly important. Chandra X-ray Observations -------------------------- After the initial detection by Swift, we triggered our Target of Opportunity observations with the Chandra X-ray Observatory for 5 epochs. The first Chandra observation was approximately 10 ks and the subsequent four observations were all $\sim 40$ ks each. All exposures were carried out using Chandra ACIS-S CCDs without any grating. The details of observations of SN 2013ej with Swift and Chandra x-ray instruments are given in Table \[obstab\]. These data from each epoch of observations were processed separately, but identically. The spatial and spectral analyses were performed after this initial processing by following the prescription[^1] from the Chandra Science Center using CIAO 4.7 with CALDB 4.6.9. The initial data processing steps were identical to that of SN 2004dj [@2012ApJ...761..100C] and SN 2011ja [@2013ApJ...774...30C]. 
Photons recorded in level 2 events were filtered by energy to select only those above 0.3 keV and below 10 keV. The selected photons were projected back on to sky coordinates and the emission from the supernova was easily identified. The portion of the sky containing the supernova was masked and a light curve was generated from the remaining counts. Cosmic ray induced flares were identified in this light curve, using times where the count rate flared $3 \sigma$ above the mean. A good time interval table was generated by excluding these flares. This was used to further select photons from the useful exposure times reported in Table \[obstab\]. The spectra, response matrices and background count rates were then generated from these filtered photons. To retain the highest available spectral resolution, we did not bin these data. All subsequent steps use this processed data. ![X-ray fluxes observed from SN 2013ej. Total fluxes (red, with $1 \sigma$ uncertainties) can be split into soft (green) and hard (blue) components. Note that the hard component dominates at first. However, it drops off rapidly as the Inverse Compton flux dies off and the thermal plasmas become cooler. The soft component does not drop off as rapidly, because the reduced emission is somewhat offset by reduced absorption at late times.[]{data-label="fluxes"}](fluxes.eps){width="\columnwidth"} Modeling the X-Rays =================== The expansion of fast supernova ejecta drives a strong forward shock into circumstellar matter [@1974ApJ...188..501C], and heats it to $\sim 100$ keV. The expansion also causes rapid adiabatic cooling in the ejecta. However, an inward propagating reverse shock is generated by the deceleration of the ejecta by the ambient medium [@1974ApJ...188..335M], reheating it to $\sim 1$ keV or even higher. We use thermal and non-thermal emission processes, as well as absorption, occurring in these shocked regions to model x-rays observed (Fig \[fluxes\]) from SN 2013ej. Note that the hard x-ray flux, initially the dominant part, rapidly declines and beyond $\sim40$ days the total flux is dominated by the soft x-ray flux. The observed spectrum (Fig \[ufs\]) is represented as the sum of these emission components, passed through the appropriate absorption components and folded in with the relevant response matrices. The XSPEC model we used is $\mathit{tbabs(tbabs(apec)+bremss+powerlaw)}$. Here external absorption is modeled by the first $\mathit{tbabs}$ and internal by the second one. The $\mathit{apec}$ component represents thermal emission from reverse shock while the $\mathit{bremss}$ represents that from forward shock. The Inverse Compton component is represented by $\mathit{powerlaw}$. Thermal Emission ---------------- The reverse shock climbs up against the steep ejecta profile of the supernova and therefore encounters larger densities than the forward shock. The temperature of the reverse shock can in many cases be right where Chandra is most sensitive. Therefore thermal emission from the reverse shock is likely to be the dominant component at late times beyond a month [@2012ApJ...761..100C]. The thermal emission from the forward shock can become important if the emission from the reverse shock is absorbed. The thermal x-rays from the reverse shock are composed of bremsstrahlung and line emission. used time-dependent ionization balance and multilevel calculations to model the line emission from the reverse shock. 
@2012ApJ...761..100C have shown that it is safe to assume collisional ionization equilibrium while trying to model the line emission from the reverse shocked material. The strengths of lines from a plasma in equilibrium can be determined from its temperature and composition. We use the APEC code [@2001ApJ...556L..91S] to model the thermal emission from the reverse shock. The thermal emission from the forward shock is modeled simply as bremsstrahlung radiation with a normalization of $N_{\rm bremss}$. We expect it to be too tenuous and hot to produce any significant lines in the Chandra or XRT bands [@2012ApJ...761..100C]. Non-Thermal Emission -------------------- The forward shocks, apart from heating the circumstellar material, also accelerate cosmic rays. The relativistic electrons at the forward shock lose energy via synchrotron emission, which is detected in the radio [@1982ApJ...259..302C], and Inverse Compton scattering of optical photons into the x-rays [@2012ApJ...761..100C]. Here we model the Inverse Compton emission as a power law in XSPEC with a normalization of $N_{\rm IC}$. An electron population described by a power law with index $p$ generates Inverse Compton scattered x-rays with a photon index $(p + 1)/2$. ![Unfolded x-ray spectra of SN 2013ej from Swift XRT at early times (in red) and Chandra ACIS at later times (in black) with $1\sigma$ uncertainties. The Swift spectrum represents the earliest epoch and has comparable contribution from thermal reverse shock emission, thermal forward shock emission and Inverse Compton scattering. The later time Chandra spectra from 5 epochs listed in Table \[obstab\] are stacked together only for display but analyzed separately. Note that the late time Chandra spectra are softer than the early time Swift spectrum. The Chandra spectra are dominated by thermal emission from the reverse shock.[]{data-label="ufs"}](ufs_revised-v2.eps){width="\columnwidth"} Absorption components --------------------- We consider two absorption components, both modeled using the Tuebingen-Boulder ISM absorption model [@2000ApJ...542..914W]. We consider the external absorption to be a constant in time as it is likely produced by material far away from the supernova. Radiative cooling of the reverse shocked material leads to the formation of a dense cool shell [@2003LNP...598..171C] which can obscure the emission from the reverse shock. We model this as a time-varying internal absorption component. [l r c l r c]{} Date (2013) & Age (days) & $L_{\rm bol}$ (erg s$^{-1}$) & Instrument & Exposure (ks) & Flux (erg cm$^{-2}$ s$^{-1}$)\ Jul 30 - Aug 9 & 13.0 & $(3.89 \pm 0.58) \times10^{42}$ & Swift & 73.4 & $(7.1 \pm 1.2) \times10^{-14}$\ Aug 21 & 28.9 & $(2.19 \pm 0.27) \times10^{42}$ & Chandra& 9.8 & $(9.8 \pm 3.2) \times10^{-15}$\ Sep 21 & 59.7 & $(1.29 \pm 0.03) \times10^{42}$ & Chandra& 39.6 & $(1.0 \pm 0.2) \times10^{-14}$\ Oct 7 - 11 & 78.0 & $(1.00 \pm 0.02) \times10^{42}$ & Chandra& 38.4 & $(7.0 \pm 1.0) \times10^{-15}$\ Nov 14 & 114.3 & $(8.13 \pm 0.38) \times10^{40}$ & Chandra& 37.6 & $(7.0 \pm 1.3) \times10^{-15}$\ Dec 15 & 145.1 & $(5.13 \pm 0.24) \times10^{40}$ & Chandra& 40.4 & $(4.8 \pm 0.8) \times10^{-15}$\ \[obstab\] X-ray Spectral Fitting ====================== All x-ray data are loaded into XSPEC and fitted in the manner described in @2012ApJ...761..100C. Since these data are unbinned, individual spectral channels can have a low number of photons, disallowing the use of a $\chi^2$ statistic. We therefore adopt the $W$ statistic generalization of the @1979ApJ...228..939C statistic. We need to fit 6 epochs with 10 parameters each.
Since there is not enough information in the observed spectra to simultaneously determine all 60 parameters, it is necessary to hold some of them constant or constrained. We describe these restricted parameters below. All fitted parameters are reported in Table \[fitparms\]. Constant parameters {#constant} ------------------- @2015ApJ...806..160B find no excess reddenning in the optical emission from SN 2013ej beyond what is expected from the Galactic absorption. We therefore hold the external absorption column constant, at the Galactic value of $n_{\rm ext} = 4.8 \times 10^{20}$ atoms cm$^{-2}$ determined from the Leiden Argentine Bonn (LAB) Survey of Galactic HI . A visual inspection of the spectra reveals a bump at $\sim 1$ keV, which is likely produced by a blend of lines, but not enough resolved features to determine the metallicity of the plasma. We therefore set the relative metal abundances in APEC following . The overall metallicity is set to $Z=0.295 Z_\odot$, which is equal to that of the nearby HII region number 197 of . In the absence of prominent sharply resolved lines, the redshift cannot be determined from the spectra. We therefore fix it to the host galaxy redshift of $z=2.192\times 10^{-3}$ from NED. The early spectrum at the first epoch is hard, with a possible contribution from Inverse Compton scattering. But there is unlikely to be enough information to be able to determine the slope of this component. We therefore fix the photon index to $\alpha_{\rm IC}=2$, which is expected on theoretical grounds for an electron index of $p=3$ and has been observed in SN 2004dj [@2012ApJ...761..100C]. Constrained parameters {#constrainted} ---------------------- Here we constrain various parameters which determine how the shape of the spectra change in time. To derive these relations, we assume a steady mass loss rate from the progenitor. If the mass loss is significantly variable, the data will rule out the model. We allow the absorption column depth of the cool shell to be determined by the best-fit to these data. However, the value of relative depth of the column at 6 epochs are tied to each other using the relation $$n_{\rm cool} \propto t^{-1}$$ from @2003LNP...598..171C. This removes 5 free parameters. At each epoch the temperature of the forward shock can be related to that of the reverse shock. Using the self similar solution for a supernova ejecta interacting with a steady wind [@1982ApJ...258..790C], we find, $$T_{\rm cs} = (n-3)^2 T_{\rm rev},$$ where $n$ is the power law index of the ejecta profile. Following @1999ApJ...510..379M we use $n=12$, as is appropriate for a red supergiant progenitor. Fixing the forward shock temperature to be $81$ times the reverse shock temperature at each of the epochs removes 6 free parameters. The temperature of the reverse shocked plasma also goes down slowly in time, as $$T_{\rm rev} \propto t^{-\frac{2}{n-2}}.$$ The temperature of the $\mathit{apec}$ component at one epoch is therefore allowed to vary but its values at all other epochs are linked to each other using this proportionality. This removes 5 more free parameters. The emission measures of the plasma at the forward and reverse shocks can be similarly related. Self similar solutions [@2003LNP...598..171C] provide the physical relation between the emission measures as $$\int n_e n_H dV_{\rm rev} = \frac{(n-3)(n-4)^2}{4 (n-2)} \int n_e n_H dV_{\rm cs}.$$ Two more factors arise because we are forced to use two different models for the emissions, namely APEC and bremsstrahlung. 
In XSPEC, the APEC model [@2001ApJ...556L..91S] represents the emission measure as $\int n_e n_H dV$, whereas the $\mathit{bremss}$ model [@1975ApJ...199..299K] uses $\int n_e n_I dV$. To resolve this, we approximate $n_I=n_H+n_{He}$ with the Helium abundance from . Furthermore, there is an arbitrary numerical difference in the normalizations of the models. Accounting for these three issues, we set the $\mathit{bremss}$ norm $N_{\rm bremss}$ to be $0.0228$ times the $\mathit{apec}$ norm $N_{\rm APEC}$ at each epoch. This eliminates another 6 free parameters. Only the earliest epoch is likely to have significant contribution from Inverse Compton scattering of optical photons by relativistic electrons. @2006ApJ...651..381C have shown that the Inverse Compton flux, is expected to fall off as $$N_{\rm IC} \equiv E \frac{dL_{\rm IC}}{dE} \propto L_{\rm bol} t^{-1},$$ where $L_{\rm bol}$ is the bolometric luminosity of the supernova which provides the seed photons to be up-scattered. We relate the norm of the $\mathit{powerlaw}$ component, representing the Inverse Compton emission, at all later epochs to that of the first epoch using this relation. To estimate the bolometric luminosity before 30 days, we use the B and V band luminosities from @2014JAVSO..42..333R and the bolometric correction prescribed by @2009ApJ...701..200B. Beyond 30 days, we use the bolometric luminosity reported in @2015ApJ...806..160B by integrating the emission from the infrared to ultraviolet. All the bolometric luminosities used are reported in Table \[obstab\]. Goodness and Uncertainties -------------------------- Having obtained the best fit, we tested the goodness of the fit by generating a set of 1000 simulated spectra, at each epoch, with a parameter distribution that is derived from the covariance matrix of parameters at the best fit. We note that goodness testing is a misnomer for this process as it can never determine whether a particular fit is good, only if it is significantly bad or not. Only 60 percent of these sets of fake data have a fit statistic better than the fiducial fit. If the observations were indeed generated by the model the most likely percentage, of fake data that have a fit statistic better than the fiducial fit, is 50. However, the likelihood of the percentage lying outside the range of 40 to 60 percent, is $0.8$. Since the outcome of the goodness test is quite likely, our data do not rule out the model. We therefore consider the model to be acceptable. ![Correlation between cool shell absorption column depth and reverse shock plasma temperature at the first epoch. 1 (blue), 2 (green) and 3 (red) $\sigma$ uncertainty contours are obtained by marginalizing the results of our MCMC run. A larger absorbing column can hide lower energy emission from a cooler reverse shocked plasma, giving rise to the negative correlation. The closed $3 \sigma$ contour demonstrates that even with the uncertainty in the reverse shock temperature we need a non-zero column depth in the cooling shell absorption component at the $3 \sigma$.[]{data-label="revtemp_shellabs"}](nH_kT-v3.eps){width="\columnwidth"} In order to better understand the uncertainties in the determined parameters we ran a Markov Chain Monte Carlo simulation. 200 walkers were initiated using the fit covariance matrix as the proposal distribution. They were allowed to walk for 400,000 steps, after rejecting the first 40,000 steps. 
They were evolved following the Goodman-Weare algorithm [@2012ApJ...745..198H] implemented in XSPEC [@2013HEAD...1311704A]. The uncertainties for each parameter were determined by marginalizing over all other parameters. Two pairs of parameters were found to have noteworthy correlations and are discussed below. The uncertainty in the column depth of the cool shell influences the uncertainties in the reverse shock temperature (see Figure \[revtemp\_shellabs\]) and the Inverse Compton flux density (see Figure \[ic\_shellabs\]). A heavier absorbing column can hide the lower energy emission from a colder plasma, leading to the negative correlation with the reverse shock temperature. A heavier absorbing column, having hidden much of the reverse shock emission, also allows for a larger hot bremsstrahlung contribution from the forward shock. This explains away more of the harder photons, thus requiring a smaller contribution from the Inverse Compton component. The column depth is therefore also negatively correlated with the Inverse Compton flux. Results ======= We interpret the plasma parameters determined from the model fits to our observations in terms of a physical description of the supernova and its progenitor. ![Correlation between cool shell absorption column depth and Inverse Compton emission at the first epoch. 1 (blue), 2 (green) and 3 (red) $\sigma$ uncertainty contours are obtained by marginalizing the results of our MCMC run. A larger absorbing column can hide reverse shock emission allowing harder forward shock emission to dominate the spectrum. This makes the thermal contribution to the spectrum harder and therefore requires less Inverse Compton emission to explain the high energy photons, giving rise to the negative correlation. The closed $2 \sigma$ contour demonstrates that after marginalizing over the uncertainties in the thermal components, we need a non-zero contribution from the non-thermal Inverse Compton component at the $2 \sigma$ level.[]{data-label="ic_shellabs"}](nH_ic-v3.eps){width="\columnwidth"} Shock velocity -------------- @1982ApJ...259..302C related the temperature of the forward shocked material to the shock velocity. This relation was subsequently used to derive the temperature of the reverse-shocked material in terms of the forward shock velocity. The reverse shocked plasma is expected to be dense enough [@2012ApJ...761..100C] to reach ionization equilibrium. Under such conditions, @2012ApJ...761..100C have inverted this relation to express the forward shock velocity, a property of the supernova explosion, in terms of the reverse shock temperature, which is an observable: $$V_{\rm cs} = 10^4 \; \sqrt{\frac{kT_{\rm rev}}{1.19 \; {\rm keV}}} \; \; {\rm km \; s^{-1}}.$$ Since the best-fit temperature is $1.1\pm0.2$ keV (see Fig \[revtemp\_shellabs\]), the implied velocity at 12.96 days is $V_{\rm cs}=(9.7 \pm 1.1)\times 10^3$ km s$^{-1}$. This is faster than the shock velocity observed in SN 2004dj [@2012ApJ...761..100C]. Also, note that the forward shock is expected to be faster than the photosphere. As expected, $V_{\rm cs}$ here is faster than the velocities seen in optical line-widths [@2015ApJ...806..160B]. ![Pre supernova mass loss rate from the progenitor as a function of time before explosion with $1\sigma$ uncertainties. The mass loss rates are derived using thermal emission from shocked plasma measured in the x-rays.
Note that our measurements are consistent with a steady mass loss rate of $\dot{M}=(2.6 \pm 0.2) \times 10^{-6} {\rm \; M_\odot \; yr^{-1}}$ for the last 400 years of pre-supernova stellar evolution.[]{data-label="mdot_tminus"}](mdot_tminus.eps){width="\columnwidth"} Mass loss history ----------------- Mass loss from the progenitor sets up the circumstellar density with which the supernova ejecta interact. The circumstellar density determines the emission measure of the forward shocked material. This can be related to the emission measure of the reverse shocked material using self similar solutions [@2012ApJ...761..100C]. Considering only the contribution of Hydrogen and Helium to the mass and number density of the outermost shells of the supernova, $\rho= 1.17 {\rm\; amu} \times n_e = 1.34 {\rm\; amu} \times n_H$. Therefore we modify the emission measure derived by @2012ApJ...761..100C for the reverse-shocked material as $$\begin{aligned} \label{em} \int n_e n_H dV =\frac{ (144 / \pi) \left( \dot M / v_{\rm w}\right)^2 }{(1.17 {\rm\; amu})(1.34 {\rm\; amu})R_{\rm cs} }\end{aligned}$$ Only half of this emission measure contributes to the observed flux, as the other half is absorbed by the opaque unshocked ejecta. The norm of $\mathit{apec}$ in XSPEC is defined as $$N_{\rm APEC} = \frac{10^{-14}}{4 \pi \left( D_{\rm A} (1+z)\right)^2} \int n_e n_H dV,$$ where $D_{\rm A}$ is the angular diameter distance to the source. Therefore, the mass loss rate can now be determined from the emission measure as $$\begin{aligned} \dot{M}=&7.5\times10^{-7} \left(\frac{v_{\rm w}}{10 {\rm \; km \; s^{-1}}}\right) \left(\frac{D_{\rm A} (1+z)}{10 {\rm \; Mpc}}\right) \nonumber \\ &\times \left(\frac{N_{\rm APEC}}{10^{-5}}\right)^{1/2} \left(\frac{R_{\rm cs}}{10^{15} {\rm \; cm}}\right)^{1/2} {\rm M_\odot \; yr^{-1}}.\end{aligned}$$ We determine the norm of the APEC emission measure directly from our fit. We calculate the radius from the velocity determined in the last section and the time of observation. We assume a wind velocity $v_{\rm w} = 10$ km s$^{-1}$, as is appropriate for red supergiant progenitors. The emission measures determined at various times after the explosion point back to mass loss rates at different lookback times before the explosion. These are plotted in Figure \[mdot\_tminus\] as a function of the lookback time $t_{\rm look}$. Note that our observations are consistent with a $\propto r^{-2}$ density profile, as expected from a steady mass loss rate of $\dot{M}=(2.6 \pm 0.2) \times 10^{-6} \times (v_{\rm w} / 10 {\rm \; km \; s^{-1}}) {\rm \; M_\odot \; yr^{-1}}$ over the last $400 \times (v_{\rm w} / 10 {\rm \; km \; s^{-1}})^{-1}$ years of pre-supernova stellar evolution. We compare this observed mass loss rate of the progenitor of SN 2013ej with what is expected from theory. MESA [@2011ApJS..192....3P] was used to simulate stars with masses between 11 and 19 $M_{\odot}$, for half solar metallicity. @2011ApJS..192....3P Sec 6.6 describe the mass loss prescription used in our simulations as the [*Dutch*]{} Scheme. We expect the supernova ejecta to encounter the mass lost during the RGB phase of the progenitor wind. In Figure \[mmdot\] we compare the observed mass loss rate with those obtained from MESA and from the theoretical wind prescription for a progenitor size of $10^3R_\odot$. Note however that various modifications to this prescription have been suggested.
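To make the chain of estimates in this section concrete, the following sketch propagates a reverse-shock temperature into a shock velocity, a shock radius, and a mass loss rate using the relations above. The temperature, epoch, redshift and wind speed are the values quoted in the text; the distance and the APEC norm are representative stand-ins rather than the exact fitted values, so the output is only an order-of-magnitude check.

```python
import math

# Representative inputs (kT, epoch, z and v_w are quoted in the text; the
# distance and APEC norm are illustrative stand-ins, not the exact fit values).
kT_rev = 1.1          # keV, reverse-shock temperature at the first epoch
t_obs  = 12.96        # days since explosion
z      = 2.192e-3     # host-galaxy redshift
v_w    = 10.0         # km/s, assumed red-supergiant wind speed
D_A    = 9.6          # Mpc, assumed angular-diameter distance to the host
N_apec = 3.0e-4       # XSPEC apec norm, representative first-epoch value

# Forward-shock velocity from the reverse-shock temperature
V_cs = 1.0e4 * math.sqrt(kT_rev / 1.19)                  # km/s, ~9.6e3

# Forward-shock radius, assuming free expansion at V_cs
R_cs = V_cs * 1.0e5 * t_obs * 86400.0                    # cm, ~1.1e15

# Mass loss rate from the emission-measure scaling
Mdot = (7.5e-7 * (v_w / 10.0) * (D_A * (1.0 + z) / 10.0)
        * math.sqrt(N_apec / 1.0e-5) * math.sqrt(R_cs / 1.0e15))

print(f"V_cs ~ {V_cs:.2e} km/s, R_cs ~ {R_cs:.2e} cm, Mdot ~ {Mdot:.1e} Msun/yr")
```

With first-epoch numbers of this order the estimate lands at a few $\times 10^{-6} {\rm \; M_\odot \; yr^{-1}}$, consistent with the rates plotted in Figure \[mdot\_tminus\].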
We consider a progenitor mass ranging from 11 to 16 ${\rm M_\odot}$ combining pre-supernova progenitor identification [@2014MNRAS.439L..56F] and modeling of the supernova lightcurve [@2015ApJ...806..160B; @2015arXiv150901721D]. Within this mass range, the mass loss rate obtained from x-ray observations in this work are in agreement with the predictions from both MESA and . ![Zero Age Main Sequence Mass (ZAMS) and wind mass loss rate for MESA runs (green), and theoretical line (blue) from plotted for comparison. Shaded box represents $1\sigma$ confidence intervals for mass loss rate observed in SN 2013ej ($\dot{M}=(2.6 \pm 0.2) \times 10^{-6} {\rm \; M_\odot \; yr^{-1}}$, see Fig \[mdot\_tminus\] of this work) and the estimated progenitor mass (${\rm11-16\; M_\odot}$, from literature). Note that the observed mass loss is consistent with the theoretical expectations of mass loss rates from red supergiant stars . Our mass loss rate measurement points back to a more precise estimate of the progenitor mass, $M_{\rm ZAMS} = 13.7 \pm 0.3 {\rm \; M_\odot}$.[]{data-label="mmdot"}](mmdot.eps){width="\columnwidth"} [rccccccccc]{} $13.0\pm5.8$ & $(6.5\pm2.7)\times 10^{-6}$ & & & $4 \pm 1 $ & $1.13 \pm 0.25$ & $30.8 \pm 15.7$ & & $\phn 48 \pm 17$ & $4.38 \pm 1.53$\ $28.9\pm0.3$ & & & & & & $2.16 \pm 1.66 $ & & $\phn 79 \pm 5\phn$ & $1.67 \pm 0.65$\ $59.7\pm0.4$ & & & & & & $3.25 \pm 1.37 $ & & $152 \pm 9\phn$ & $2.83 \pm 0.64$\ $78.0\pm1.8$ & & & & & & $2.06 \pm 0.82$ & & $193 \pm 12$ & $2.54 \pm 0.55$\ $114.3\pm0.4$ & & & & & & $2.00 \pm 0.70 $ & & $272 \pm 16$ & $2.98 \pm 0.58$\ $145.1\pm0.4$ & & & & & & $0.82 \pm 0.43 $ & & $338 \pm 19$ & $2.61 \pm 0.50$\ & $\propto L_{\rm bol} t^{-1}$ & $=81T_{\rm rev}$ & $= 0.0228 N_{\rm APEC}$ & $\propto t^{-1}$ & $\propto t^{-1/5}$ & & & & \[fitparms\] Cooling shell absorption ------------------------ @2003LNP...598..171C proposed that a shell of material formed by the radiative cooling of shocked material may form between the reverse and forward shocked materials. Though more material is cooled with time, it gets diluted with the expansion of the ejecta. Also, as the density of the reverse shocked material falls, it does not cool as effectively as before. Therefore, this shell poses larger absorbing column densities at early times. Since the emission from the reverse shocked material is softer, hiding some of it makes the total spectrum harder. We determine the column density of this cold material at 12.96 days to be $n_{\rm cool} = (4 \pm 1) \times 10^{22} {\rm \; atoms \; cm^{-2}}$. This is enough to block most of the reverse shock emission at early times. This level of variable absorption is at tension ($\sim 2 \sigma$ level) with the expected value [@2003LNP...598..171C]. This could be the result of excess absorption from partially ionized wind in the circumstellar material. The amount of material in the cool shell depends upon the density of the ejecta which the reverse shock runs into. In a self similar explosion this depends on the circumstellar density and hence the mass loss rate from the progenitor [@2003LNP...598..171C]. We find that the observed column density of cool material may be explained by a mass loss rate of $\dot{M}=(6 \pm 3) \times 10^{-6} {\rm \; M_\odot \; yr^{-1}}$. This is less precise than, but consistent with, the mass loss rate derived from the emission measure. This provides a consistency check for the scenario in which the excess absorption at early times indeed arises from the cooling shell. 
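For reference, the tying relations introduced in the "Constrained parameters" section, and summarized in the last row of Table \[fitparms\], can be collected into a small helper that maps first-epoch quantities to any later epoch. The first-epoch numbers below are illustrative placeholders, not the exact fit values; the bolometric-luminosity ratio is meant to be read off Table \[obstab\].

```python
n  = 12        # ejecta density power-law index adopted for a red supergiant
t0 = 12.96     # days, first epoch

# Illustrative first-epoch quantities (placeholders, not the exact fit values)
T_rev0  = 1.1      # keV, reverse-shock temperature
n_cool0 = 4.0e22   # cm^-2, cool-shell column density
N_ic0   = 6.5e-6   # Inverse Compton (powerlaw) norm

def tied_parameters(t, lbol_ratio):
    """Tied values at epoch t (days); lbol_ratio = L_bol(t) / L_bol(t0)."""
    T_rev  = T_rev0 * (t / t0) ** (-2.0 / (n - 2))    # reverse shock, ~ t^(-1/5)
    T_cs   = (n - 3) ** 2 * T_rev                     # forward shock, = 81 T_rev for n = 12
    n_cool = n_cool0 * (t / t0) ** (-1.0)             # cool-shell column, ~ t^(-1)
    N_ic   = N_ic0 * lbol_ratio * (t / t0) ** (-1.0)  # Inverse Compton norm, ~ L_bol t^(-1)
    return T_rev, T_cs, n_cool, N_ic

# Self-similar emission-measure ratio between reverse- and forward-shock plasma
em_ratio = (n - 3) * (n - 4) ** 2 / (4.0 * (n - 2))   # = 14.4 for n = 12
```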
Particle acceleration --------------------- Electrons are accelerated in the strong forward shock produced by the supernova. The optical photons produced by the supernova are Inverse Compton scattered into the x-ray band by these relativistic electrons. Our measurement of the Inverse Compton flux density provides a direct probe of the particle acceleration efficiency. Following @2006ApJ...651..381C, we can express the Inverse Compton flux, for an electron index of $p=3$, as $$\begin{aligned} E\frac{dL_{\rm IC}}{dE}&\approx8.8\times10^{37} \gamma_{\rm min} \epsilon_{\rm e} \left( \frac{\dot{M}/(4 \pi v_{\rm w})}{5 \times 10^{11} {\rm \; g \; cm^{-1}}} \right) \nonumber \\ &\times \left( \frac{V_{\rm cs}}{10^4 {\rm \; km \; s^{-1}}} \right) \left(\frac{L_{\rm bol}}{10^{42} {\rm \; ergs \; s^{-1}}}\right) \nonumber \\ &\times \left(\frac{t}{10 {\rm \; days}} \right)^{-1} {\rm \; ergs \; s^{-1}},\end{aligned}$$ where $\gamma_{\rm min}$ is the minimum Lorentz factor of the relativistic electrons and $\epsilon_{\rm e}$ is the fraction of thermal energy given to relativistic electrons. Our measurement of the Inverse Compton flux density implies an electron acceleration efficiency of $\gamma_{\rm min}\epsilon_{\rm e}=0.02\pm0.01$. This shows that for a $\gamma_{\rm min}=2$, around $1\%$ of the thermal energy is used to accelerate relativistic electrons. Discussion ========== Explosions of massive stars with extended hydrogen envelopes produce Type II supernovae. The cores of these stars undergo rapid evolution during the final millennium before collapse, as they burn elements with progressively higher atomic numbers. The outer layers of these stars, supported against gravity by the energy generation in the core, can only slowly adjust to these changes over a much longer Kelvin-Helmholtz time scale. Therefore, conditions at the surface of the star, including luminosity and mass loss rate, are not expected to reflect the rapid evolution taking place in the core during the last stages of stellar evolution. This paradigm has been called into question by recent observations of luminous outbursts and massive outflows observed months to years before certain supernovae. Our x-ray observations of SN 2013ej indicate a mass loss rate from the progenitor which remained steady in the last 400 years before explosion. Within the best constraints the mass loss rate is consistent with stellar evolution models and theoretical mass loss prescriptions. If theoretical mass loss rate predictions are to be trusted, our precise measurement of the mass loss rate can be used to derive a mass of $M_{\rm ZAMS} = 13.7 \pm 0.3 {\rm \; M_\odot}$. The statistical uncertainty in such a measurement rivals the most precise progenitor mass measurements. However, we need to address gaps in our understanding of mass loss from massive stars before we can quantify systematic errors and rely on the accuracy of such a measurement. The mass loss rate inferred here is larger than that observed by us in the Type II-P SN 2004dj [@2012ApJ...761..100C]. The mass loss rate from the progenitor of the Type II-P SN 2011ja showed rapid variations in the final stages before explosion [@2013ApJ...774...30C; @2015arXiv150906379A]. No such variation is inferred for SN 2013ej and its steady mass loss rate is comparable to the higher end of mass loss rates inferred for SN 2011ja. 
Through our program of x-ray observations of nearby supernovae, we hope to shed light on details of mass loss from massive stars both as a function of progenitor mass and lookback time before explosion. SN 2013ej was caught much sooner after explosion than SN 2004dj or SN 2011ja thanks to timely Swift and Chandra observations. This allowed us to discover two interesting effects. @2003LNP...598..171C postulated the presence of a cool shell which may obscure the reverse shock emission at early times. We not only see this effect but measure the column depth of this shell and confirm that it is consistent with the circumstellar density seen in emission. If we can measure this effect more precisely in the future, the combination of the same mass loss rate measured using absorption and emission may allow an independent determination of the distance to nearby supernovae. @2006ApJ...651..381C had suggested that Inverse Compton scattering by relativistic electrons may be the dominant source of x-rays in some supernovae. At early times, when the light of the SN 2013ej provides a bright source of seed photons, emission from this non thermal process is found to be comparable to those from thermal processes (see early XRT spectrum in Fig \[ufs\]). We use this to measure the efficiency of relativistic electron acceleration. Our measurement provides a check for recent predictions of particle acceleration efficiencies in strong but non-relativistic shocks [@2007ApJ...661..879E; @2015ApJ...809...55B; @2015PhRvL.114h5003P]. We have also considered the detectability of core collapse supernovae in external galaxies in the harder X-ray bands. With the capability of NuSTAR [@2013ApJ...770..103H], SN 2013ej would have been detected in 6-10 keV band at a $3\sigma$ level with an exposure of 1 Ms, provided the SN was targeted immediately after discovery and classification. Thus only very young and very nearby supernovae, e.g. within $2-3\; \rm Mpc$ can be realistically targeted for detections in the high energy bands in the near future. We acknowledge the use of public data from the Swift data archive. This research has made use of data obtained using the Chandra X-ray Observatory through an advance Target of Opportunity program and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and ChIPS. Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Number G04-15076X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. We thank Naveen Yadav for MESA runs and the anonymous referee for useful suggestions. A.R. thanks the Fulbright Foundation for a Fulbright-Nehru Fellowship at Institute for Theory and Computation (ITC), Harvard University, and the Director and staff of ITC for their hospitality during his sabbatical leave from Tata Institute of Fundamental Research. At Tata Institute this research is part of 12th Five Year Plan Project 12P-0261. [^1]: The method for extraction of spectrum and response files for an unresolved source is described in `http://cxc.harvard.edu/ciao/threads/pointlike/`
--- abstract: 'We study the existence and propagation of multidimensional dark non-diffractive and non-dispersive spatiotemporal optical wave-packets in nonlinear Kerr media. We report analytically and confirm numerically the properties of spatiotemporal dark lines, X solitary waves and lump solutions of the $(2+1)D$ nonlinear Schrödinger equation (NLSE). Dark lines, X waves and lumps represent holes of light on a continuous wave background. These solitary waves are derived by exploiting the connection between the $(2+1)D$ NLSE and a well-known equation of hydrodynamics, namely the $(2+1)D$ Kadomtsev-Petviashvili (KP) equation. This finding opens a novel path for the excitation and control of spatiotemporal optical solitary and rogue waves, of hydrodynamic nature.' author: - Stefan Wabnitz - Yuji Kodama - Fabio Baronio title: Optical Kerr spatiotemporal dark extreme waves --- INTRODUCTION {#sec:intro} ============ Techniques to shape and control the propagation of electromagnetic radiation are of paramount importance in many fields of basic science and applied research, such as atomic physics, spectroscopy, communications, material processing, and medicine [@boyd08; @weiner]. Among these, of particular interest are methods that produce localized and distortion-free wave-packets, i.e. free from spatial and temporal spreading due to diffraction or material group velocity dispersion (GVD), respectively [@recami]. There are two main strategies to achieve propagation invariant electromagnetic wave packets [@majus14]. The first methodology is based on the spatiotemporal synthesis of a special input wave, so that diffractive and dispersive effects compensate for each other upon linear propagation in the material. Building blocks of these linear light bullets are Bessel beams and their linear combinations, along with Airy pulses [@Bessel]. These waveforms enable the generation spatiotemporal invariant packets such as the Airy-Bessel beams, and the so-called X-waves, obtained by a linear superposition of Bessel beams with different temporal frequencies. The second approach involves the generation of solitary waves, that exploit the nonlinear (quadratic or cubic) response of the material for compensating diffractive and dispersive wave spreading [@silber90; @wise99]. Although successfully exploited in $(1+1)D$ propagation models, that describe for example temporal solitons in optical fibers and spatial solitons in slab waveguides, in more than one dimension spatiotemporal solitons have so far largely eluded experimental observation, owing to their lack of stability associated with the presence of modulation instability (MI), collapse and filamentation. Here, we overview our recent contributions to the field of non-diffractive and non-dispersive wave-packets in Kerr media [@baro16; @baro16ol; @baro17odp], by deriving analytically and confirming numerically the existence and propagation of novel multidimensional $(2+1) D$ dark non-diffractive and non-dispersive spatiotemporal solitons propagating in i) self-focusing and normal dispersion Kerr media, and in ii) self-defocusing and anomalous dispersion Kerr media. The analytical dark solitary solutions are derived by exploiting the connection between the $(2+1)D$ NLSE and the $(2+1)D$ Kadomtsev-Petviashvili (KP) equation [@KP], a well-known equation of hydrodynamics. Our results extend and confirm the connection between nonlinear wave propagation in optics and hydrodynamics, that was first established in the $1990$’s [@turi88; @peli95; @kivs96]. 
Optical NLSE–hydrodynamic KP mapping {#sec:sc} ==================================== In the presence of group-velocity dispersion and one-dimensional diffraction, the dimensionless time-dependent paraxial wave equation in cubic Kerr media reads as [@conti03]: $$\label{2DNLS} iu_z+\frac{\alpha}{2}u_{tt}+\frac{\beta}{2}u_{yy}+\gamma |u|^2 u=0,$$ namely the (2+1)D, or more precisely (1+1+1)D, NLSE, where $u(t,y,z)$ stands for the complex wave envelope, and $t,y$ represent the retarded time, in the frame traveling at the natural group-velocity, and the spatial transverse coordinate, respectively, and $z$ is the longitudinal propagation coordinate. Each subscripted variable in Eq. (\[2DNLS\]) stands for partial differentiation. $\alpha, \beta, \gamma$ are normalized real constants that describe the effect of dispersion, diffraction and Kerr nonlinearity, respectively. We refer to as *elliptic* NLSE if $\alpha\beta>0$, and *hyperbolic* NLSE if $\alpha\beta<0$. In the case of weak nonlinearity, weak diffraction and slow modulation, the dynamics of optical NLSE dark envelopes $u(t,y,z)$ may be related to the hydrodynamic KP variable $\eta(\tau,\upsilon,\varsigma)$ as follows [@baro16; @baro16ol]: $$\begin{aligned} \label{NLSEKP} u(t,y,z)=\sqrt{\rho_0+\eta(\tau,\upsilon,\varsigma)} \ \ e^{i[\gamma\rho_0z +\phi(\tau,\upsilon,\varsigma)]} $$ where $\rho_0$ stands for a background continuous wave amplitude, $\eta(\tau,\upsilon,\varsigma)$ represents a small amplitude variation, say $\eta\sim \mathcal{O}(\epsilon)$ with $0<\epsilon\ll 1$ and the order one background $\rho_0$; $\phi=-(\gamma/c_0) \int \eta(\tau,\upsilon,\varsigma) d\tau$; $\eta(\tau,\upsilon,\varsigma)$ satisfies the KP equation, $$\label{KP} \left(-\eta_\varsigma+\frac{3\alpha\gamma}{2c_0}\eta\eta_\tau+\frac{\alpha^2}{8c_0}\eta_{\tau\tau\tau}\right)_\tau-\frac{c_0 \beta}{2\alpha}\eta_{\upsilon\upsilon}=0,$$ where $\tau=t-c_0z, \upsilon=y$, $\varsigma=z$ with $c_0=\sqrt{-\gamma\alpha\rho_0}$, $\alpha \gamma<0$ (see [@baro16] for further details). Of interest in the optical context, the *elliptic* anomalous dispersion and self-defocusing regime [@baro16] ($\alpha > 0$, $\beta>0, \gamma<0$) leads to the KP-I regime, while the *hyperbolic* normal dispersion and self-focusing regime [@baro16ol] ($\alpha <0$, $\gamma>0, \beta>0$), leads to the KP-II regime. Without loss of generality, we may set the following constraints to the coefficients of Eq. (\[2DNLS\]), $|\alpha|=4 \sqrt{2}, \beta= 6\sqrt{2}, |\gamma|=2\sqrt{2}$; moreover, we fix $\rho_0=1$ (thus $c_0=4$). Note that, with the previous relations among its coefficients, in the case ($\alpha > 0$, $\beta>0, \gamma<0$), Eq. (\[KP\]) reduces to the standard KP-I form: $(-\eta_{{\varsigma}}-6 \eta\eta_\tau+\eta_{\tau\tau\tau})_\tau-3\eta_{\upsilon\upsilon}=0$. Whereas in the case ($\alpha <0$, $\gamma>0, \beta>0$), Eq. (\[KP\]) reduces to the standard KP-II form $(-\eta_{{\varsigma}}-6 \eta\eta_\tau+\eta_{\tau\tau\tau})_\tau+3\eta_{\upsilon\upsilon}=0$. Moreover, the imposed constraints to the coefficients of Eq. (\[2DNLS\]) also fix the scaling between the dimensionless variables $z,t,y$ in Eq. (\[2DNLS\]) and the corresponding real-world quantities $Z=Z_0z,T=T_0t,Y=Y_0y$. The longitudinal scaling factor turns out to be $Z_0=2\sqrt{2} L_{nl}$, where $L_{nl}=(\gamma_{phys} I_0)^{-1}$ is the usual nonlinear length associated with the intensity $I_0$ of the background and $\gamma_{phys}=k_0 n_{2I}$, $n_{2I}$ being the Kerr nonlinear index and $k_0$ the vacuum wavenumber. 
The “transverse" scales read as $T_0=\sqrt{k'' L_{nl}/2}$ and $Y_0=\sqrt{L_{nl}/(3k_0 n)}$, where $k''$ and $n$ are the group-velocity dispersion and the linear refractive index, respectively. Normal dispersion and self-focusing regime: nonlinear lines and X-waves {#sec:sup} ======================================================================= At first, we consider the case of normal dispersion and self-focusing nonlinearity [@baro16ol; @baro17odp]. We proceed to consider the existence and propagation of $(2+1)D$ NLSE dark line solitary waves, which are predicted by the existence of $(2+1)D$ KP-II bright line solitons [@kod10; @kod11]. When considering the small amplitude regime, a formula for an exact line bright soliton of Eq. (\[KP\]) can be expressed as follows [@kod10; @kod11]: $ \eta (\tau, \upsilon, \varsigma) = - \epsilon \, \, sech^2 [\sqrt{ \epsilon/2} (\tau +tan \varphi \, \upsilon+ c \varsigma)], $ where $\epsilon$ rules the amplitude and width of the soliton, $\varphi$ is the angle measured from the $\upsilon$ axis in the counterclockwise, $c =2 \epsilon + 3 tan^2 \varphi$ is the velocity in $\tau$-direction. Notice that $c$ is of order $\epsilon$. Moreover we obtain $ \phi(\tau, \upsilon, \varsigma)=\sqrt{ \epsilon} \, \, tanh([\sqrt{ \epsilon/2} (\tau +tan \varphi \, \upsilon+ c \varsigma)]. $ The analytical spatiotemporal envelope intensity profile $u(t,y,z)$ of a NLSE dark line solitary wave is given by the mapping (\[NLSEKP\]) exploiting the KP bright soliton expression. The intensity dip of the dark line solitary wave is $-\epsilon$, the velocity $c_0-c- 3tan^2 \varphi =4-2\epsilon-3tan^2 \varphi$ in the $z$-direction. We numerically verified the accuracy of the analytically predicted dark line solitary waves of the NLSE. To this end, we made use of a standard split-step Fourier technique, which is commonly adopted in the numerical solution of the NLSE (\[2DNLS\]). Figure \[f1\] shows the numerical spatiotemporal envelope intensity profile $|u(t,y,z)|^2$ of a NLSE dark line solitary wave, which corresponds to the predicted analytical dynamics. ![\[f1\] Numerical spatio-temporal dark-line NLSE envelope intensity distribution $|u(t,y,z)|^2$, shown in the $y-t$ plane, at $z=0$, at $z=10$ and in the $t-z$ plane at $y=0$. Here, $\epsilon=0.1$, $\varphi=0.01$. ](N1.jpg "fig:"){width="8cm"} ![\[f1\] Numerical spatio-temporal dark-line NLSE envelope intensity distribution $|u(t,y,z)|^2$, shown in the $y-t$ plane, at $z=0$, at $z=10$ and in the $t-z$ plane at $y=0$. Here, $\epsilon=0.1$, $\varphi=0.01$. ](N2.jpg "fig:"){width="8cm"} As can be seen from the images, the numerical solutions of the NLSE show an excellent agreement with the analytical approximate NLSE solitary solutions. In the long wave context, the KP-II equation admits complex soliton solutions, mostly discovered and demonstrated in the last decade, which may describe non-trivial web patterns generated under resonances of line-solitons [@kod10; @kod11]. Here, we consider the resonances of four line solitons, which give birth to the so-called *O-type* bright X-shaped two-soliton solution of the KP-II (the name *O-type* is due to the fact that this solution was *originally* found by using the Hirota bilinear method). When considering the small amplitude regime, the formula of the *O-type* solution of Eq. 
(\[KP\]) can be expressed as follows, $\eta (\tau, \upsilon, \varsigma) = - 2 \left( \ln F\right)_{\tau\tau}, $ where the function $F(\tau,\upsilon,\varsigma)$ is given by $F=f_1+f_2$ with $ f_1= (\epsilon_1 + \epsilon_2) \, {\rm cosh}[ (\epsilon_1 - \epsilon_2) \tau+4 \,(\epsilon_1^3 - \epsilon_2^3) \varsigma], \ \ f_2=2 \sqrt{\epsilon_1 \epsilon_2} \,{\rm cosh}[ (\epsilon_1^2 - \epsilon_2^2)\upsilon]. $ $\epsilon_1, \epsilon_2$ are small real positive parameters which are related to the amplitude, width and the angle of the *O-type* X-soliton solutions. The corresponding (2+1)D NLSE dark X solitary wave $u(t,y,z)$ is directly given through the mapping Eq. (\[NLSEKP\]), by exploiting the soliton expression for $\eta(\tau,\upsilon,\varsigma)$. ![\[f2\] Numerical spatiotemporal NLSE envelope intensity distribution $|u(t,y,z)|^2$, in the $(y,t)$ plane, showing the dark X solitary wave dynamics, at $z=0$ and at $z=10$. Here, $\epsilon_1=0.2$, $\epsilon_2=0.001$. ](XN1.jpg "fig:"){width="8cm"} ![\[f2\] Numerical spatiotemporal NLSE envelope intensity distribution $|u(t,y,z)|^2$, in the $(y,t)$ plane, showing the dark X solitary wave dynamics, at $z=0$ and at $z=10$. Here, $\epsilon_1=0.2$, $\epsilon_2=0.001$. ](XN2.jpg "fig:"){width="8cm"} We numerically verified the accuracy of the analytically predicted *O-type* dark X solitary wave of the NLSE. Fig. \[f2\] shows the $(y,t)$ profile of the numerical solution of the hyperbolic NLSE at $z=0$, and at $z=10$. In this particular example we have chosen $\epsilon_1=0.2$, $\epsilon_2=0.001$. Specifically, Fig. \[f2\] illustrates a solitary solution which describes the X-interaction of four dark line solitons. The maximum value of the dip in the interaction region is $2 (\epsilon_1-\epsilon_2)^2 \, (\epsilon_1+\epsilon_2) / (\epsilon_1+\epsilon_2 +2 \sqrt{\epsilon_1 \epsilon_2})$. Asymptotically, the solution reduces to two line dark waves for $t\ll0$ and two for $t\gg0$, with intensity dips $\frac{1}{2}(\epsilon_1-\epsilon_2)^2$ and characteristic angles $\pm{\rm tan}^{-1}(\epsilon_1+\epsilon_2)$, measured from the $y$ axis. Numerical simulations and analytical predictions are in excellent agreement. We estimate the error between the asymptotic formula and the X solitary wave in the numerics to be lower than $2 \%.$ Anomalous dispersion and self-defocusing regime: dark lumps {#sec:sup} =========================================================== Next, we consider the case of anomalous dispersion and self-defocusing nonlinearity [@baro16]. We proceed to verify the existence of (2+1)D NLSE dark-lump solitary waves, as predicted by the solutions of KP-I through Eq.(\[NLSEKP\]) (see [@as81] for details). When considering the small amplitude regime ($\epsilon \ll 1$), a form of KP lump-soliton solution of Eq. (\[KP\]) can be expressed as $\eta (\tau, \upsilon, \varsigma) = -4 [ \epsilon^ {-1}-(\tau-3 \epsilon \varsigma)^2+\epsilon \upsilon^2 ] / [ \epsilon^ {-1}+(\tau-3 \epsilon \varsigma)^2+\epsilon\upsilon^2 ]^2$. The parameter $\epsilon$ rules the amplitude/width and velocity properties of the KP lump soliton. The lump peak amplitude in the $(\varsigma,\upsilon)$ plane is $-4 \epsilon$; the velocity in the $\tau$-direction is $3\epsilon$. 
Moreover, $\phi(\tau, \upsilon, \varsigma)= 2 \sqrt{2} \epsilon (\tau-3 \epsilon \varsigma)/ [1+ \epsilon (\tau-3 \epsilon \varsigma)+\epsilon^2 \upsilon^2].$ The analytical spatiotemporal envelope intensity profile $u(t,y,z)$ of a NLSE dark solitary wave is given by the mapping (\[NLSEKP\]), which exploits the KP bright lump expression. Then, we numerically verified the accuracy of the analytically predicted dark lumps solitary waves of the NLSE. Figure \[f3\] shows the numerical spatio-temporal envelope intensity profile $|u|^2$ of a NLSE dark lump solitary wave in the $y$-$t'$ plane ($t'=t-c_0 z$), at the input $z=0$ and after the propagation distance $z=100$, for $\epsilon=0.05$. In the numerics, the initial dark NLSE profile, of KP-I lump origin, propagates stably in the $z$-direction, with virtually negligible emission of dispersive waves, with the predicted velocity $c_0+3\epsilon$, and intensity dip of $4 \epsilon$. Thus, the predicted theoretical dark lump solitary waves of Eq. (\[NLSEKP\]) are well confirmed by numerical simulations. ![\[f3\] Numerical spatio-temporal dark-lump NLSE envelope intensity distribution $|u|^2$, shown in the $y$-$t'$ plane with $t'=t-c_0z$, at $z=0$, and $z=100$. Here, $\epsilon=0.05$. ](f1.jpg "fig:"){width="8cm"} ![\[f3\] Numerical spatio-temporal dark-lump NLSE envelope intensity distribution $|u|^2$, shown in the $y$-$t'$ plane with $t'=t-c_0z$, at $z=0$, and $z=100$. Here, $\epsilon=0.05$. ](f3.jpg "fig:"){width="8cm"} ![\[f5\] Numerical spatio-temporal NLSE envelope intensity distribution $|u|^2$, in the $y$-$t'$ plane, showing anomalous scattering of dark waves, at $z=0$, at $z=150$ . Here, $\epsilon=0.1$, $\tau_0=0, \upsilon_0=0, \varsigma_0=-50, \delta_1=0, \delta_2=0$ ](f4.jpg "fig:"){width="8cm"} ![\[f5\] Numerical spatio-temporal NLSE envelope intensity distribution $|u|^2$, in the $y$-$t'$ plane, showing anomalous scattering of dark waves, at $z=0$, at $z=150$ . Here, $\epsilon=0.1$, $\tau_0=0, \upsilon_0=0, \varsigma_0=-50, \delta_1=0, \delta_2=0$ ](f6.jpg "fig:"){width="8cm"} We remark that the KP-I equation admits other types of lump solutions which have several peaks with the same amplitude in the asymptotic stages $|z|\gg 0$. We call such lump solution *multi-pole lump*. Here we show that $(2+1)$D NLSE can also support such lump solutions. We consider multi-pole lump solution with two peaks, which is expressed as: $\eta (\tau, \upsilon, \varsigma) = -2 \partial_\tau ^2 {\rm log} F$, where $F=|f_1^2+i f_2 +f_1/ \epsilon+1 /2 \epsilon^ {2}|^2+ |f_1+1/ \epsilon|^2 / 2\epsilon^2+1/4\epsilon^ {4}$, and $f_1=\tau_1+2i \epsilon \upsilon_1-12 \epsilon^2 \varsigma_1 +\delta_1$, $f_2=-2 \upsilon-24i \epsilon \varsigma+\delta_2$. $\tau_1=\tau-\tau_{0}, \upsilon_1=\upsilon-\upsilon_{0}, \varsigma_1=\varsigma-\varsigma_0$ define the dislocation; $\delta_1, \delta_2$ are arbitrary complex parameters. The analytical spatiotemporal envelope intensity profile $u(t,y,z)$ of a NLSE dark solitary wave is again given by the mapping (\[NLSEKP\]). Figure \[f5\] shows the initial spatio-temporal envelope intensity profile $|u|^2$ of a two-peaked NLSE dark lump in the $y-t'$ plane, along with the numerically computed profiles after propagation distances $z=150$, for $\epsilon=0.1$ ($\tau_0=0, \upsilon_0=0, \varsigma_0=-50, \delta_1=0, \delta_2=0$). In particular, Fig. \[f5\] depicts the scattering interaction of the two-peaked waves: two dark lumps approach each other along the $t'$-axis, interact, and subsequently recede along the $y$-axis. 
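A minimal numerical sketch of the verification described above, for the single dark lump, may help: the initial condition is built from the mapping of Eq. (\[NLSEKP\]), with the phase obtained by integrating $\eta$ over $\tau$, and is then advanced with a standard split-step Fourier scheme. The grid, step size and propagation distance below are illustrative choices, not the values used for the figures.

```python
import numpy as np

# Normalization used in the text (anomalous dispersion, self-defocusing -> KP-I)
alpha, beta, gamma = 4*np.sqrt(2), 6*np.sqrt(2), -2*np.sqrt(2)
rho0, c0, eps = 1.0, 4.0, 0.05

# Grid (illustrative sizes)
Nt = Ny = 512
t = np.linspace(-150.0, 150.0, Nt, endpoint=False)
y = np.linspace(-150.0, 150.0, Ny, endpoint=False)
T, Y = np.meshgrid(t, y, indexing="ij")

# KP-I lump at z = 0 (tau = t - c0*z = t, upsilon = y, varsigma = 0)
den = 1.0/eps + T**2 + eps*Y**2
eta = -4.0*(1.0/eps - T**2 + eps*Y**2)/den**2

# Phase from the mapping: phi = -(gamma/c0) * integral of eta over tau
phi = -(gamma/c0)*np.cumsum(eta, axis=0)*(t[1] - t[0])
u = np.sqrt(rho0 + eta)*np.exp(1j*phi)           # dark-lump initial condition

# Split-step Fourier integration of i u_z + (alpha/2) u_tt + (beta/2) u_yy + gamma |u|^2 u = 0
kt = 2*np.pi*np.fft.fftfreq(Nt, d=t[1] - t[0])
ky = 2*np.pi*np.fft.fftfreq(Ny, d=y[1] - y[0])
KT, KY = np.meshgrid(kt, ky, indexing="ij")
dz, nsteps = 0.01, 1000                          # propagate to z = 10 (illustrative)
lin = np.exp(-0.5j*(alpha*KT**2 + beta*KY**2)*dz)
for _ in range(nsteps):
    u = np.fft.ifft2(np.fft.fft2(u)*lin)         # linear step: dispersion + diffraction
    u *= np.exp(1j*gamma*np.abs(u)**2*dz)        # nonlinear Kerr step

# |u|^2 should retain a dip of about 4*eps drifting at velocity c0 + 3*eps in t
intensity = np.abs(u)**2
```

The same scheme covers the normal-dispersion, self-focusing (KP-II) case of the previous section by flipping the signs of $\alpha$ and $\gamma$ and seeding it with the dark-line or X-wave profiles given there.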
These solutions exhibit anomalous (nonzero deflection angles) scattering due to multi-pole structure in the wave function of the inverse scattering problem. We remark that the numerical result of NLSE dynamics is in excellent agreement with analytical dark solitary solution Eq. (\[NLSEKP\]) with KP-I multi-pole lump solution. Instabilities and Experimental Feasibility ========================================== Let us discuss the important issue of the stability of the predicted dark line, X solitary waves and lumps. Two instability factors may affect the propagation of these waves. The first one is the modulation instability (MI) of the continuous wave background. In the case of normal dispersion and self-focusing, $\alpha <0$, $\beta, \gamma>0$, MI is of the conical type [@yuen80]. Generally speaking, MI can be advantageous to form X waves from arbitrary initial conditions both in the absence or in the presence of the background. However, for sufficiently long propagation distances the MI of the CW background may compete and ultimately destroy the propagation of dark solitary waves and their interactions. In the case of anomalous dispersion and in the self-defocusing regime, $\alpha > 0$, $\beta>0, \gamma<0$, MI is absent, thus lumps are not affected by MI. The second mechanism is related to the transverse instability of the line solitons that compose the asymptotic state of the X wave. We point out that such instability is known to occur for the NLSE, despite the fact that line solitons are transversally stable in the framework of the KP-II (unlike those of the KP-I) [@KP]. However, in our simulations of the NLSE, these transverse instabilities never appear, since they are extremely long-range, especially for shallow solitons. In fact, we found that the primary mechanism that affects the stability of dark line and X solitary waves is the MI of the CW background. Let us briefly discuss a possible experimental setting in nonlinear optics for the observation of cubic spatiotemporal solitary wave dynamics of hydrodynamic origin. As to (2+1)D spatiotemporal dynamics, one may consider optical propagation in a planar glass waveguide (e.g., see the experimental set-up of Ref. [@eise01]), or a quadratic lithium niobate crystal, in the regime of large phase-mismatch, which mimics an effective Kerr nonlinear regime (e.g., see the experimental set-up of Ref. [@baro06]). As far as the (2+1)D spatial dynamics is concerned, one may consider using a CW Ti:sapphire laser pulse propagating in a nonlinear medium composed of atomic-rubidium vapor (e.g., see the experimental set-up of Ref. [@kivs96]), or a bulk quadratic lithium niobate crystal, again in the regime of large phase-mismatch (e.g., see the experimental set-up of Ref. [@baro04; @krupa15]). Conclusions =========== We have analytically predicted a new class of dark solitary wave solutions, that describe non-diffractive and non-dispersive spatiotemporal localized wave packets propagating in optical Kerr media. We numerically confirmed the existence of nonlinear lines, X-waves, lumps and peculiar scattering interactions of the solitary waves of the (2+1)D NLSE. The key novel property of these solutions is that their existence and interactions are inherited from the hydrodynamic soliton solutions of the well known KP equation. Our findings open a new avenue for research in spatiotemporal extreme nonlinear optics. 
Given that deterministic rogue and shock wave solutions, so far, have been essentially restricted to (1+1)D models, future research on multidimensional spatiotemporal nonlinear waves will lead to a substantial qualitative enrichment of the landscape of extreme wave phenomena. We acknowledge the financial support of the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 740355). [99]{} R. Boyd, [*Nonlinear Optics*]{}, 3rd ed. (Academic Press, London, 2008). A. Weiner, [*Ultrafast Optics*]{} (Wiley, New York, 2009). H.E. Hernandez-Figueroa, E. Recami, and M. Zamboni-Rached, [*Localized Waves*]{} (Wiley, New York, 2008). D. Majus, G. Tamosauskas, I. Grazuleviciute, N. Garejev, A. Lotti, A. Couairon, D. Faccio, and A. Dubietis, “Nature of Spatiotemporal Light Bullets in Bulk Kerr Media," Phys. Rev. Lett. [**112**]{}, 193901 (2014). J. Durnin, J.J. Miceli, and J. H. Eberly, “Diffraction-free beams", Phys. Rev. Lett. [**58**]{}, 1499 (1987). Y. Silberberg, “Collapse of optical pulses," Opt. Lett. [**15**]{}, 1282 (1990). X. Liu, L. J. Qian, and F.W. Wise, “Generation of optical spatiotemporal solitons," Phys. Rev. Lett. [**82**]{}, 4631 (1999). F. Baronio, S. Wabnitz, and Y. Kodama, “Optical Kerr Spatiotemporal Dark-Lump Dynamics of Hydrodynamic Origin," Phys. Rev. Lett. [**116**]{}, 173901 (2016). F. Baronio, S. Chen, M. Onorato, S. Trillo, S. Wabnitz, and Y. Kodama, “Spatiotemporal optical dark X solitary waves," Opt. Lett. [**41**]{}, 5571-574 (2016). F. Baronio, M. Onorato, S. Chen, S. Trillo, Y. Kodama, and S. Wabnitz, “Optical-fluid dark line and X solitary waves in Kerr media," Opt. Data Process. and Storage [**3**]{}, 1-7 (2017). B.B. Kadomtsev and V.I. Petviashvili, “On the stability of solitary waves in weakly dispersing media," Sov. Phys. - Dokl. [**15**]{}, 539-541 (1970). E.A. Kuznetsov and S.K. Turitsyn, “Instability and collapse of solitons in media with a defocusing nonlinearity," Sov. Phys. JEPT [**67**]{}, 1583-1588 (1988). D.E. Pelinovsky, Y.A. Stepanyants, and Y.S. Kivshar, “Self-focusing of plane dark solitons in nonlinear defocusing media," Phys. Rev. E [**51**]{}, 5016 (1995). V. Tikhonenko, J. Christou, B. Luther-Davies, and Y. S. Kivshar, “Observation of vortex solitons created by the instability of dark soliton stripes," Opt. Lett. [**21**]{}, 1129-1131 (1996). C. Conti, S. Trillo, P. Di Trapani, G. Valiulis, A. Piskarskas, O. Jedrkiewicx, and J. Trull, “Nonlinear Electromagnetic X Waves", Phys. Rev. Lett. [**90**]{}, 170406 (2003). Y. Kodama, “KP Solitons in shallow water," J. Phys. A: Math. Theor. [**43**]{}, 434004 (2010). W. Li, H. Yeh, and Y. Kodama, “On the Mach reflection of a solitary wave: revisited," J. Fluid Mech. [**672**]{}, 326-357 (2011). M. J. Ablowitz and H. Segur, [*Solitons and the inverse scattering transform*]{}, SIAM Stud. in Appl. Math. (SIAM, Philadelphia, 1981). H. C. Yuen and B. M. Lake, “Instabilities of Waves on Deep Water," Ann. Rev. Fluid Mech. [**12**]{}, 303-334 (1980). H. S. Eisenberg, R. Morandotti, Y. Silberberg, S. Bar-Ad, D. Ross, and J. S. Aitchison, “Kerr Spatiotemporal Self-Focusing in a Planar Glass Waveguide," Phys. Rev. Lett. [**87**]{}, 043902 (2001). F. Baronio, C. De Angelis, M. Marangoni, C. Manzoni, R. Ramponi, and G. Cerullo, “Spectral shift of femtosecond pulses in nonlinear quadratic PPSLT Crystals," Opt. Express [**14**]{}, 4774-4779 (2006). F. Baronio, C. De Angelis, P. Pioger, V. Couderc, A. 
Barthelemy, “Reflection of quadratic solitons at the boundary of nonlinear media," Opt. Lett. [**29**]{}, 986-988 (2004). K. Krupa, A. Labruyere, A. Tonello, B. M. Shalaby, V. Couderc, F. Baronio, and A. B. Aceves, “Polychromatic filament in quadratic media: spatial and spectral shaping of light in crystals," Optica [**2**]{}, 1058-1064 (2015).
--- author: - | S. Khalil$^{1,2,\dagger}$ and E. Torrente-Lujan$^{3,\dagger}$\ $^1$ Centre for Theoretical Physics, University of Sussex, Brighton BN1 9QJ,U.K.\ $^2$ Ain Shams University, Faculty of Science, Cairo 11566, Egypt.\ $^3$ Departamento de Física Teórica, C-XI, Universidad Autónoma de Madrid, 28049 Cantoblanco, Madrid, Spain.\ $^\dagger$ E-mail: kafz8@pact.cpes.susx.ac.uk, e.torrente@cern.ch. title: 'Neutrino mass and oscillation as probes of physics beyond the Standard Model [^1]' --- Introduction ============ The existence of a so-called neutrino, a light, neutral, feebly interacting fermion, was first proposed by W. Pauli in 1930 to save the principle of energy conservation in nuclear beta decay [@pauli]. The idea was promptly adopted by the physics community; in 1933 E. Fermi takes the neutrino hypothesis, gives the neutrino its name and builds his theory of beta decay and weak interactions. But, it was only in 1956 that C. Cowan and F. Reines were able to discover the neutrino, more exactly the anti-neutrino, experimentally [@cowan]. Danby et al. [@danby] confirmed in 1962 that there exist, at least, two types of neutrinos, the $\nu_e$ and $\nu_\mu$. In 1989, the study of the Z boson lifetime allows to show with great certitude that only three light neutrino species do exist. Only in 2000, it has been confirmed by direct means [@tauneutrino] the existence of the third type of neutrino, the $\nu_\tau$ in addition to the $\nu_e$ and $\nu_\mu$. Until here the history, with the present perspective we can say that the neutrino occupies a unique place among all the fundamental particles in many ways and as such it has shed light on many important aspects of our present understanding of nature and is still believed to hold a key role to the physics beyond the Standard Model (SM). In what respects its mass, Pauli initially expected the mass of the neutrino to be small but not necessary zero: not very much more than the electron mass, F. Perrin in 1934 showed that its mass has to be less than that of the electron. After more than a half a century, the question of whether the neutrino has mass is still one open question, being one of the outstanding issues in particle physics, astrophysics, cosmology and theoretical physics in general. Presently, there are several theoretical, observational and experimental motivations which justify the searching for possible non-zero neutrino masses (see i.e. [@langacker1; @fuk2; @vallereview; @RAMOND2; @wilczek1; @bilenky98; @bilenky9812] for excellent older reviews on this matter). Understanding of fermion masses in general are one of the major problems of the SM and observation of the existence or confirmation of non-existence of neutrino masses could introduce useful new perspectives on the subject. If they are confirmed as massless they would be the only fermions with this property. A property which is not dictated by any known fundamental underlying principle, such as gauge invariance in the case of the photon. If it is concluded that they are massive then the question is why are their masses so much smaller than those of their charged partners. Although theory alone can not predict neutrino masses, it is certainly true that they are strongly suggested by present theoretical models of elementary particles and most extensions of the SM definitively require neutrinos to be massive. They therefore constitute a powerful probe of new physics at a scale larger than the electroweak scale. 
If massive, the superposition postulates of quantum theory predict that neutrinos, particles with identical quantum numbers, could oscillate in flavor space. If the absolute difference of masses among them is small enough then these oscillations could have important phenomenological consequences. Some hints at accelerator experiments as well as the observed indications of spectral distortion and deficit of solar neutrinos and the anomalies on the ratio of atmospheric $\nu_e/\nu_\mu$ neutrinos and their zenith distribution are naturally accounted by the oscillations of a massive neutrino. Recent claims of the high-statistics high-precision Super-Kamiokande (SK) experiment are unambiguous and left little room for the scepticism as we are going to see along this review. Moreover, neutrinos are basic ingredients of astrophysics and cosmology. There may be a hot dark matter component (HDM) to the Universe: simulations of structure formation fit the observations only when some significant quantity of HDM is included. If so, neutrinos would be, at least by weight, one of the most important ingredients in the Universe. Regardless of mass and oscillations, astrophysical interest in the neutrino and their properties arises from the fact that it is copiously produced in high temperature and/or high density environment and it often dominates the physics of those astrophysical objects. The interactions of the neutrino with matter is so weak that it passes freely through any ordinary matter existing in the Universe. This makes neutrinos to be a very efficient carrier of energy drain from optically thick objects becoming very good probes for the interior of such objects. For example, the solar neutrino flux is, together with heliosysmology, one of the two known probes of the solar core. A similar statement applies to objects as the type-II supernovas: the most interesting questions around supernovas, the explosion dynamics itself with the shock revival, and, the synthesis of the heaviest elements by the so-called r-processes, could be positively affected by changes in the neutrino flux, e.g. by MSW active or sterile conversions [@supernova]. Finally, ultra high energy neutrinos are called to be useful probes of diverse distant astrophysical objects. Active Galactic Nuclei (AGN) should be copious emitters of $\nu$’s, providing both detectable point sources and an observable diffuse background which is larger in fact than the atmospheric neutrino background in the very high energy range [@AGN]. This review is organized as follows, in section 2 we discuss the neutrino in the SM. Section 3 is devoted to the possible ways for generating neutrino mass terms and different models for these possibilities are presented. Neutrino oscillation in vacuum and in matter are studied in section 4. The cosmological and the astrophysical constraints on diverse neutrino properties are summarized in section 5. In section 6 we give an introduction to the phenomenological description of neutrino oscillations in vacuum and in matter. In section 7 we give an extensive description of the different neutrino experiments, their results and their interpretation. Finally we present some conclusions and final remarks in section 7. The neutrino in the Standard Model. =================================== The current Standard Model of particles and interactions supposes the existence of three neutrinos. The three neutrinos are represented by two-component Weyl spinors each describing a left-handed fermion. 
They are the neutral, upper components of doublets $L_i$ with respect the $SU(2)$ group, the weak interaction group, we have, $$L_i\equiv \left(\begin{array}{c} \nu_{i} \\ l_i \end{array} \right), \hspace{1.2cm} i = (e, \mu, \tau).$$ They have the third component of the weak isospin $I_{3W}=1/2$ and are assigned an unit of the global $i$th lepton number. The three right-handed charged leptons have however no counterparts in the neutrino sector and transform as singlets with respect the weak interaction. These SM neutrinos are strictly massless, the reason for this can be understood as follows. The only Lorenz scalar made out of them is the Majorana mass, of the form $\nu^t_i \nu_i$; it has the quantum number of a weak isotriplet, with $I_{3W}=1$ as well as two units of total lepton number. Thus to generate a renormalizable Majorana mass term at the tree level one needs a Higgs isotriplet with two units of lepton number. Since in the stricter version of the SM the Higgs sector is only constituted by a weak isodoublet, there are no tree-level neutrino masses. When quantum corrections are introduced we should consider effective terms where a weak isotriplet is made out of two isodoublets and which are not invariant under lepton number symmetry. The conclusion is that in the SM neutrinos are kept massless by a global chiral lepton number symmetry (and more general properties as renormalizability of the theory, see Ref.[@RAMOND2] for an applied version of this argument). However this is a rather formal conclusion, there is no any other independent, compelling theoretical argument in favor of such symmetry, or, with other words, there is no reason why we would like to keep it intact. Independent from mass and charge oddities, in any other respect neutrinos are very well behaved particles within the SM framework and some figures and facts are unambiguously known about them. The LEP Z boson line-shape measurements imply that are only three ordinary (weak interacting) light neutrinos [@SM; @PDG98]. Big Bang Nucleosynthesis (BBN) constrains the parameters of possible sterile neutrinos, non-weak interacting or those which interact and are produced only by mixing [@BBN]. [*All the existing*]{} data on the weak interaction processes in which neutrinos take part are perfectly described by the SM charged-current (CC) and neutral-current (NC) Lagrangians: $$\begin{aligned} L_I^{CC}&=&-\frac{g}{\surd 2} \sum_{i=e,\mu,\tau} \overline{\nu_{L}}_i\gamma_\alpha {l_{L}}_i W^\alpha+ h.c.\\ L_I^{NC}&=&-\frac{g}{2 \cos \theta_W} \sum_{i=e,\mu,\tau} \overline{\nu_{L}}_i\gamma_\alpha {\nu_{L}}_i Z^\alpha + h.c.\end{aligned}$$ where $Z^\alpha,W^\alpha$ are the neutral and charged vector bosons intermediaries of the weak interaction. The CC and NC interaction Lagrangians conserve three total additive quantum numbers, the lepton numbers $L_{e,\mu,\tau}$ while the structure of the CC interactions is what determine the notion of flavor neutrinos $\nu_{e,\mu,\tau}$. There are no indications in favor of the violation of the conservation of these lepton numbers in weak processes and very strong bounds on branching ratios of rare, lepton number violating, processes are obtained, for examples see Table \[tttt1\]. 
$$\begin{array}{|lcc|lcc|}\hline R(\mu\to e\gamma) &<& 4.9\times 10^{-11} & R(\tau\to e\gamma)&<& 2.7\times 10^{-6}\\ R(\mu\to 3 e) &<& 1.0\times 10^{-12} & R(\tau\to \mu\gamma)&<& 3.0\times 10^{-6}\\ R(\mu\to e(2\gamma))&<& 7.2\times 10^{-11} & R(\mu\to 3 e)&<& 2.9\times 10^{-6}.\\ \hline \end{array}$$ From the theoretical point of view, in the minimal extension of the SM where right-handed neutrinos are introduced and the neutrino gets a mass, the branching ratio of the $\mu\to e \gamma$ decay is given by (2 generations are assumed [@muegamma]), $$\begin{aligned} R(\mu\to e\gamma) & =& G_F \left (\frac{\sin 2\theta\ \Delta m_{1,2}^2}{2 M_W^2}\right )^2 \nonumber\end{aligned}$$ where $m_{1,2}$ are the neutrino masses, $M_W$ is the mass of the $W$ boson and $\theta$ is the mixing angle in the lepton sector. Using the experimental upper limit on the heaviest $\nu_\tau$ neutrino one obtains $R\sim 10^{-18}$, a value far from being measurable at present as we can see from table \[tttt1\] The $\mu\to e \gamma$ and similar processes are sensitive to new particles not contained in the SM. The value is highly model dependent and could change by several orders of magnitude if we modify the neutrino sector for example introducing an extra number of heavy neutrinos. Neutrino mass terms and models. =============================== Model independent neutrino mass terms ------------------------------------- Phenomenologically, Lagrangian mass terms can be viewed as terms describing transitions between right (R) and left (L)-handed states. For a given minimal, Lorenz invariant, set of four fields: $\psi_L,\psi_R,(\psi^c)_L,(\psi^c)_R$, would-be components of a generic Dirac Spinor, the most general mass part of the Lagrangian can be written as: $$\begin{aligned} L_{mass}&=& m_D \left ( \overline{\psi}_L \psi_R\right ) +\frac{1}{2} m_T \left ( \overline{(\psi_L)^c} \psi_L\right ) +\frac{1}{2} m_S \left ( \overline{(\psi_R)^c} \psi_R\right )+h.c. \label{e2001}\end{aligned}$$ In terms of the newly defined Majorana fields ($\nu^c=\nu,N^c=N$): $\nu=(1/\sqrt 2) (\psi_L+(\psi_L)^c)$, $N=(1/\sqrt 2) (\psi_R+(\psi_R)^c)$, the Lagrangian $L_{mass}$ can be rewritten as: $$\begin{aligned} L_{mass}&=& \pmatrix{ \overline{\nu} ,& \overline{N} } M \pmatrix{ \nu \cr N} \label{e2003}\end{aligned}$$ where $M$ is the neutrino mass matrix defined as: $$\begin{aligned} M&\equiv& \pmatrix{ m_T & m_D \cr m_D & m_S }. \label{e2003b}\end{aligned}$$ We proceed further and diagonalizing the matrix M one finds that the physical particle content is given by two Majorana mass eigenstates: the inclusion of the Majorana mass splits the four degenerate states of the Dirac field into two non-degenerate Majorana pairs. If we assume that the states $\nu,N$ are respectively active (belonging to weak doublets) and sterile (weak singlets), the terms corresponding to the ”Majorana masses” $m_T$ and $m_S$ transform as weak triplets and singlets respectively. While the term corresponding to $m_D$ is an standard, weak singlet in most cases, Dirac mass term. The neutrino mass matrix can easily be generalized to three or more families, in which case the masses become matrices themselves. The complete flavor mixing comes from two different parts, the diagonalization of the charged lepton Yukawa couplings and that of the neutrino masses. In most simple extensions of the SM, this CKM-like leptonic mixing is totally arbitrary with parameters only to be determined by experiment. 
Their prediction, as for the quark hierarchies and mixing, needs further theoretical assumptions (i.e. Ref.[@RAMOND; @RAMOND2] predicting $\nu_\mu-\nu_\tau$ maximal mixing). We can analyze different cases. In the case of a purely Dirac mass term, $m_T=m_S= 0$ in Eq.(\[e2003\]), the $\nu,N$ states are degenerate with mass $m_D$ and a four component Dirac field can be recovered as $\nu\equiv \nu+N$. It can be seen that, although violating individual lepton numbers, the Dirac mass term allows a conserved lepton number $L=L_\nu+L_N$. In the general case, pure Majorana mass transition terms, $m_T$ or $m_S$ terms in Lagrangian (\[e2003\]), describe in fact a particle-antiparticle transition violating lepton number by two units ($\Delta L=\pm 2$). They can be viewed as the creation or annihilation of two neutrinos leading therefore to the possibility of the existence of neutrinoless double beta decay. In the general case where all classes of terms are allowed, it is interesting to consider the so-called ”see-saw” limit in Eq.(\[e2003\]). In this limit taking $m_T\sim 1/m_S \sim 0, m_D<< m_S$, the two Majorana neutrinos acquire respectively masses $m_1 \sim m_D^2/m_S<< m_D$,$ m_2 \sim m_S$. There is one heavy neutrino and one neutrino much lighter than the typical Dirac fermion mass. One of neutrino mass has been automatically suppressed, balanced up (“see-saw”) by the heavy one. The ”see-saw” mechanism is a natural way of generating two well separated mass scales. Neutrino mass models -------------------- Any fully satisfactory model that generates neutrino masses must contain a natural mechanism that explains their small value, relative to that of their charged partners. Given the latest experimental indications it would also be desirable that includes any comprehensive justification for light sterile neutrinos and large, near maximal, mixing. Different models can be distinguished according to the new particle content or according to the scale. According to the particle content, of the different open possibilities, if we want to break lepton number and to generate neutrino masses without introducing new fermions in the SM, we must do it by adding to the SM Higgs sector fields carrying lepton numbers, one can arrange then to break lepton number explicitly or spontaneously through their interactions. But, possibly, the most straightforward approach to generate neutrino masses is to introduce for each neutrino an additional weak neutral singlet. This happens naturally in the framework of LR symmetric models where the origin of SM parity ($P$) violation is ascribed to the spontaneous breaking of a baryon-lepton (the $B-L$ quantum number) symmetry. In the $SO(10)$ GUT the Majorana neutral particle $N$ enters in a natural way in order to complete the matter multiplet, the neutral $N$ is a $SU(3)\times SU(2)\times U(1) $ singlet. According to the scale where the new physics have relevant effects, Unification (i.e. the aforementioned $SO(10)$ GUT) and weak-scale approaches (i.e. radiative models) are usually distinguished [@varios; @varios2]. The anomalies observed in the solar neutrino flux, atmospheric flux and low energy accelerator experiments cannot all be explained consistently without introducing a light, then necessarily sterile, neutrino. If all the Majorana masses are small, active neutrinos can oscillate into the sterile right handed fields. 
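As a numerical illustration of the see-saw limit discussed above, the sketch below diagonalizes the $2\times 2$ mass matrix of Eq. (\[e2003b\]) for one purely illustrative choice of scales, a Dirac mass of electroweak size and a heavy singlet Majorana mass near a GUT-like scale; arbitrary-precision arithmetic is used only because the two eigenvalues differ by some twenty-four orders of magnitude.

```python
from mpmath import mp, mpf, sqrt

mp.dps = 50                                      # enough digits to resolve the hierarchy

# Illustrative scales (not taken from the text): m_D ~ electroweak, m_S ~ GUT-like
m_T, m_D, m_S = mpf(0), mpf(100), mpf(10)**14    # GeV

# Eigenvalues of M = [[m_T, m_D], [m_D, m_S]] from the closed-form 2x2 expression
tr, det = m_T + m_S, m_T*m_S - m_D**2
disc = sqrt(tr**2 - 4*det)
lam_light, lam_heavy = (tr - disc)/2, (tr + disc)/2

print(abs(lam_light)*mpf(10)**9, "eV")           # ~0.1 eV: the suppressed light state
print(lam_heavy, "GeV")                          # ~1e14 GeV: essentially m_S
print(m_D**2/m_S*mpf(10)**9, "eV")               # see-saw estimate m_D^2/m_S, for comparison
```

The light eigenvalue reproduces the $m_D^2/m_S$ suppression quoted above, while the heavy state remains at essentially $m_S$.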
Light sterile neutrinos can appear in particular see-saw mechanisms if additional assumptions are considered ("singular see-saw" models), with some unavoidable fine tuning. The alternative to such fine tuning would be a see-saw-like suppression for sterile neutrinos involving new unknown interactions, e.g. family symmetries, resulting in substantial additions to the SM (e.g. some sophisticated superstring-inspired models, Ref.[@benakli]). Finally, as an example of weak-scale models, radiatively generated mass models, where the neutrino masses are zero at tree level, constitute a very different class: they explain in principle the smallness of $m_\nu$ for both active and sterile neutrinos. Different mass scales are generated naturally by the different number of loops involved in generating each of them. The actual implementation, however, generally requires the ad hoc introduction of new Higgs particles with nonstandard electroweak quantum numbers and lepton number-violating couplings [@valle2]. The origin of the different Dirac and Majorana mass terms $m_D,m_S,m_T$ appearing above is usually understood in terms of a dynamical mechanism where, at one scale or another, some symmetry is spontaneously broken, as follows. First we deal with the Dirac mass term. For the case of interest, $\nu_L$ and $\nu_R$ are SU(2) doublets and singlets respectively; the mass term then describes a $\Delta I=1/2$ transition and is generated from SU(2) breaking by a Yukawa coupling: $$\begin{aligned} L_{Yuk}&=&h_i \left (\overline{\nu}_i ,\overline{l_i}\right )_L \left(\begin{array}{c} \phi^0\\ \phi^- \end{array}\right) {N_R}_i + h.c. \label{e2005}\end{aligned}$$ where $\phi^0,\phi^-$ are the components of the Higgs doublet and the coefficient $h_i$ is the Yukawa coupling. After symmetry breaking one has $m_D=h_i v/2$, where $v$ is the vacuum expectation value of the Higgs doublet. A neutrino Dirac mass is qualitatively just like any other fermion mass, but this leads to the question of why it is so small compared with the rest of the fermion masses: one would require $h_{\nu_e}<10^{-10}$ in order to have $m_{\nu e}< 10 $ eV. In other words, $h_{\nu_e}/h_e\sim 10^{-5}$, while for the hadronic sector we have $h_{up}/h_{down}\sim O(1)$. We now turn to the Majorana mass terms. The $m_S$ term will appear if $N$ is a gauge singlet. In this case a renormalizable mass term of the type $L_N=m_S N^t N$ is allowed by the SM gauge group $SU(3)\times SU(2)\times U(1)$ symmetry. However, it would not in general be consistent with unified symmetries, e.g. with a full SO(10) symmetry, and some more complicated mechanism would then have to be invoked. An $m_S$ term is usually associated with the breaking of some larger symmetry; the expected scale for it lies in a range covering from $\sim$ TeV (LR models) to GUT scales $\sim 10^{15}-10^{17}$ GeV. Finally, the $m_T$ term will appear if $\nu_L$ is active, i.e. belongs to a gauge doublet. In this case we have $\Delta I=1$ and $m_T$ must be generated either a) by an elementary Higgs triplet or b) by an effective operator involving two Higgs doublets arranged to transform as a triplet. In case a), for an elementary triplet, $m_T\sim h_T v_T$, where $h_T$ is a Yukawa coupling and $v_T$ is the triplet VEV. The simplest implementation (the old Gelmini-Roncadelli model [@gelmini]) is excluded by the LEP data on the $Z$ width: the corresponding Majoron couples to the $Z$ boson, increasing its width significantly. 
Variant models involving explicit lepton number violation or in which the Majoron is mainly a weak singlet ([@chika], invisible Majoron models) could still be possible. In case b), for an effective operator originated mass, one expects $m_T\sim 1/M$ where $M$ is the scale of the new physics which generates the operator. A few words about the range of expected neutrino masses for different types of models depending on the values of $m_D,M_{S,T}$. For $m_S\sim 1$ TeV (LR models) and with typical $m_D$’s, one expects masses of order $10^{-1}$ eV, 10 keV, and 1 MeV for the $\nu_{e,\mu,\tau}$ respectively. GUT theories motivates a big range of intermediate scales $10^{12}-10^{16}$ GeV. In the lower end of this range, for $m_S\sim 10^{12}$ GeV (some superstring-inspired models, GUT with multiple breaking stages) one can obtain light neutrino masses of the order $(10^{-7}$ eV, $10^{-3}$ eV, 10 eV). At the upper end, for $m_S\sim 10^{16}$ (grand unified seesaw with large Higgs representations) one typically finds smaller masses around $(10^{-11}$, $10^{-7}$, $10^{-2}$) eV somehow more difficult to fit into the present known experimental facts. The magnetic dipole moment and neutrino masses ---------------------------------------------- The magnetic dipole moment is another probe of possible new interactions. Majorana neutrinos have identically zero magnetic and electric dipole moments. Flavor transition magnetic moments are allowed however in general for both Dirac and Majorana neutrinos. Limits obtained from laboratory experiments are of the order of a few $\times 10^{-10}\mu_B$ and those from stellar physics or cosmology are $O(10^{-11}-10^{-13})\mu_B$. In the SM electroweak theory, extended to allow for Dirac neutrino masses, the neutrino magnetic dipole moment is nonzero and given, as ([@PDG98] and references therein): $$\begin{aligned} \mu_\nu&=& \frac{3 e G_F m_\nu}{8 \pi^2 \surd 2}= 3\times 10^{-19} (m_\nu/1\ eV)\mu_B\end{aligned}$$ where $\mu_B$ is the Bohr magneton. The proportionality of $\mu_\nu$ to the neutrino mass is due to the absence of any interaction of $\nu_R$ other than its Yukawa coupling which generates its mass. In LR symmetric theories $\mu_\nu$ is proportional to the charged lepton mass: a value of $\mu_\nu\sim 10^{-13}-10^{-14}\mu_B$ can be reached still too small to have practical astrophysical consequences. Magnetic moment interactions arise in any renormalizable gauge theory only as finite radiative corrections. The diagrams which generate a magnetic moment will also contribute to the neutrino mass once the external photon line is removed. In the absence of additional symmetries a large magnetic moment is incompatible with a small neutrino mass. The way out suggested by Voloshin consists in defining a SU(2)$_\nu$ symmetry acting on the space $(\nu,\nu^c)$, magnetic moment terms are singlets under this symmetry. In the limit of exact SU(2)$_\nu$ the neutrino mass is forbidden but $\mu_\nu$ is allowed [@fukugita]. Diverse concrete models have been proposed where such symmetry is embedded into an extension of the SM (left-right symmetries, SUSY with horizontal gauge symmetries [@babu1]). 
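As a rough orientation, and using only the numbers already quoted above, the Dirac-mass-induced moment of the minimally extended SM can be compared directly with the laboratory limits: $$\mu_\nu(m_\nu=1\ {\rm eV})\simeq 3\times 10^{-19}\ \mu_B \ll {\rm few}\times 10^{-10}\ \mu_B,$$ i.e. some nine orders of magnitude below present laboratory sensitivity. Any magnetic moment observed near the current limits would therefore signal physics well beyond this minimal Dirac-mass picture, along the lines of the models just mentioned.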
Aspects of some theoretical models for neutrino mass ==================================================== Neutrino masses in LR models ---------------------------- A very natural way to generate neutrino mass is to minimally extend the SM including additional 2-spinors as right handed neutrinos and at the same time extend the, non-QCD, SM gauge symmetry group to $G_{LR}\equiv SU(2)_L \times SU(2)_R\times U(1)_{B-L}\times P $. The resulting model, initially proposed in 1973-1974, is known as the left-right (LR) symmetric model [@mohapatra]. This kind of models were first proposed with the goal of seeking a spontaneous origin for $P$ violation in weak interactions: CP and P are conserved at large energies; at low energies, however, the group $G_{LR}$ breaks down spontaneously at a scale $M_R$. Any new physics correction to the SM would be of order $(M_L/M_R)^2$ where $M_L\sim m_W$; if we choose $M_R>>M_L$ we obtain only small corrections, compatible with present known physics. We can satisfactorily explain in this case the small quantity of CP violation observed in present experiments and why the neutrino mass is so small, as we will see below. The quarks ($Q$) and leptons ($L$) in LR models transform as doublets under the group $ SU(2)_{L,R}$ as follows: $Q_L,L_L\sim (2,1)$ and $Q_R,L_R\sim (1,2)$. The gauge interactions are symmetric between left and right -handed fermions; therefore before symmetry spontaneous breaking, weak interactions, as the others, conserve parity The breaking of the gauge symmetry is implemented by multiplets of LR symmetric Higgs fields, the concrete choosing of these multiplets is not unique. It has been shown that in order to understand the smallness of the neutrino mass, it is convenient to choose respectively one doublet and two triplets as follows: $$\begin{aligned} \phi &\sim& (2,2,0)\\ \Delta_L \sim (3,1,2)& ,& \Delta_R \sim (1,3,2).\end{aligned}$$ The Yukawa couplings of these Higgs fields to the quarks and leptons are given by $$\begin{aligned} L_{yuk} &=& h_1 \bar{L}_L \phi L_R + h_2 \bar{L}_L \tilde{\phi} L_R + h_1' \bar{Q}_L \phi Q_R + h_2' \bar{Q}_L \tilde{\phi} Q_R \nonumber\\ &+& f(L_L L_L \Delta_L + L_R L_R \Delta_R) + h.c.\end{aligned}$$ The gauge symmetry breaking proceeds in two steps. The $SU(2)_R\times U(1)_{B-L}$ is broken down to $U(1)_Y$ by choosing $\langle \Delta_R^0 \rangle= v_R \neq 0$ since this carries both $SU(2)_R$ and $U(1)_{B-L}$ quantum numbers. It gives mass to charged and neutral right handed gauge bosons, , $$M_{W_R} = g v_R,\quad M_{Z'} = \sqrt{2} g v_R/\sqrt{1-\tan^2 \theta_W}.$$ Furthermore, as consequence of $f$-term in the Lagrangian above this stage of symmetry breaking also leads to a mass term for the right-handed neutrinos of the order $\sim f v_R$. Next, as we break the SM symmetry by turning on the vev’s for $\phi$ fields as $ \langle \phi \rangle =diag(v_\kappa, v_\kappa')$, with $v_R >> v_\kappa'>> v_\kappa$, we give masses to the $W_L$ and the $Z$ bosons and also to quarks and leptons ($m_e\sim h v_\kappa$). At the end of the process of spontaneous symmetry breaking the two $W$ bosons of the model will mix, the lowest physical mass eigenstate is identified as the observed W boson. Current experimental limits set the limit (see Ref.[@PDG98], at 90% CL) $m_{WR}> 550 $ GeV. In the neutrino sector the above Yukawa couplings after $SU(2)_L$ breaking by $\langle \phi \rangle \neq 0$ leads to the Dirac masses for the neutrino. 
The full process leads to the following mass matrix for the $\nu$, $N$, (the matrix M in eq.\[e2003b\]) $$M = \left( \begin{array}{cc} \sim 0 & h v_\kappa \\ h v_\kappa & f v_R \end{array}\right).$$ From the structure of this matrix we can see the [*see-saw*]{} mechanism at work. By diagonalizing M, we get a light neutrino corresponding to the eigenvalue $m_{\nu}\simeq (hv_\kappa)^2/f v_R$ and a heavy one with mass $m_N\simeq f v_R$. Variants of the basic $LR$ model include the possibility of having Dirac neutrinos as the expense of enlarging the particle content. The introduction of two new singlet fermions and a new set of carefully-chosen Higgs bosons, allows us to write the $4\times 4$ mass matrix [@mohapatrareview]: $$\begin{aligned} M&=&\pmatrix{ 0 &m_D &0 &0 \cr m_D &0 &0 &f v_R \cr 0 &0 &0 &\mu \cr 0 &f v_R &\mu &0 }. \label{matrix00}\end{aligned}$$ Matrix \[matrix00\] leads to two Dirac neutrinos, one heavy with mass $\sim f v_R$ and another light with mass $m_\nu\sim m_D \mu /f v_R$. This light four component spinor has the correct weak interaction properties to be identified as the neutrino. A variant of this model can be constructed by addition of singlet quarks and leptons. One can arrange these new particles in order that the Dirac mass of the neutrino vanishes at the tree level and arises at the one-loop level via $W_L-W_R$ mixing. Left-right symmetric models can be embedded in grand unification groups. The simplest GUT model that leads by successive stages of symmetry breaking to left-right symmetric models at low energies is the $SO(10)$-based model. A example of LR embedding GUT Supersymmetric theories will be discussed below in the context of Superstring-inspired models. SUSY models: Neutrino masses without right-handed neutrinos ----------------------------------------------------------- Supersymmetry (SUSY) models with explicit broken $R$-parity provide an interesting example of how we can generate neutrino masses without using a right-handed neutrino but incorporating new particles and enlarging the Higgs sector. In a generic SUSY model, due to the Higgs and lepton doublet superfields have the same $SU(3)_c\times SU(2)_L \times U(1)_Y$ quantum numbers, we have in the superpotential terms, bilinear or trilinear in the superfields, that violate baryon and lepton number explicitly. They lead to a mass for the neutrino but also to to proton decay with unacceptable high rates. One radical possibility is to introduce by hand a symmetry that rule out these terms, this is the role of the $R$-symmetry introduced in the MSSM. A less radical possibility is to allow for the existence in the of superpotential of a bilinear term, i.e. $W = \epsilon_3 L_3 H_2$. This is simplest way to illustrate the idea of generating neutrino mass without spoiling current limits on proton decay. The bilinear violation of $R$-parity implied by the $\epsilon_3$ term leads [@valle] by a minimization condition to a non-zero sneutrino vev, $v_3$. In such a model the $\tau$ neutrino acquire a mass, due to the mixing between neutrinos and neutralinos. The $\nu_{e}$ and $\nu_{\mu}$ neutrinos remain massless in this model, it is supposed that they get masses from scalar loop contributions. The model is phenomenologically equivalent to a three Higgs doublet model where one of these doublets (the sneutrino) carry a lepton number which is broken spontaneously. 
We have the following mass matrix for the neutralino-neutrino sector, in block form the $5\times 5$ matrix reads: $$M = \left[ \begin{array}{c|c} G & Q \vphantom{Q^t} \\[0.1cm] \hline Q^t & \begin{array}{ccc} 0 & -\mu & 0\\ -\mu & 0 &\epsilon_3\\ 0 & \epsilon_3 & 0 \end{array} \end{array} \right] \label{matrix}$$ where $G=diag(M_1,M_2)$ corresponding to the two gauginos masses. The $ Q$ is a $2\times3$ matrix containing $v_{u,d,3}$ the vevs of $H_1$, $H_2$ and the sneutrino. The next two rows are Higgsinos and the last one denotes the tau neutrino. Let us remind that gauginos and Higgsinos are the supersymmetric fermionic counterparts of the gauge and Higgs fields. In diagonalizing the mass matrix $M$, a “see-saw” mechanism is again at work, in which the role of $M_D, M_R$ scale masses are easily recognized. It turns out that $\nu_{\tau}$ mass is given by ($v_3'\equiv \epsilon_3 v_d + \mu v_3$), $$m_{\nu_\tau}\propto \frac{(v_3')^2}{M},$$ where $M$ is the largest gaugino mass. However, in an arbitrary SUSY model this mechanism leads to (although relatively small if $M$ is large) still too large $\nu_{\tau}$ masses. To obtain a realistically small $\nu_{\tau}$ mass we have to assume universality among the soft SUSY breaking terms at GUT scale. In this case the $\nu_{\tau}$ mass is predicted to be small due to a cancellation between the two terms which makes negligible the $v_3'$. We consider now the properties of neutrinos in superstring models. In a number of these models, the effective theory imply a supersymmetric $E_6$ grand unified model, with matter fields belonging to the $27$ dimensional representations of $E_6$ group plus additional $E_6$-singlet fields. The model contains additional neutral leptons in each generation and neutral $E_6$-singlets, gauginos and Higgsinos. As before but with a larger number of them, all of these neutral particles can mix, making the understanding of neutrino masses quite difficult if no simplifying assumptions are employed. Several of these mechanisms have been proposed to understand neutrino masses [@mohapatrareview]. In some of these mechanisms the huge neutral mixing mass matrix is reduced drastically down to a $3\times 3$ neutrino mass matrix result of the mixing of the $\nu,\nu^c$ with an additional neutral field $T$ whose nature depends on the particular mechanism. In the basis $(\nu, \nu^c, T)$ the mass matrix is of the form (with $\mu$ possibly being zero): $$M = \left( \begin{array}{ccc} 0 & m_D & 0 \\ m_D & 0 & \lambda_2 v_R\\ 0 & \lambda_2 v_R & \mu \end{array} \right). \label{matrix2}$$ We distinguish two important cases, the R-parity violating case and the mixing with a singlet, where the sneutrinos, superpartners of $\nu^c$, are assumed to acquire a v.e.v. of order $v_R$. In the first case the $T$ field corresponds to a gaugino with a Majorana mass $\mu$ that can arise at two-loop order. Usually $\mu \simeq 100$ GeV, if we assume $\lambda v_R\simeq 1$ TeV additional dangerous mixing with the Higgsinos can be neglected and we are lead to a neutrino mass $m_\nu\simeq 10^{-1}$ eV. Thus, smallness of neutrino mass is understood without any fine tuning of parameters. In the second case the field $T$ corresponds to one of the $E_6$-singlets presents in the model [@witten; @mohapatrae6]. One has to rely on symmetries that may arise in superstring models on specific Calabi-Yau space to conveniently restrict the Yukawa couplings. If we have $\mu\equiv 0$ in matrix \[matrix2\], this leads to a massless neutrino and a massive Dirac neutrino. 
There would be neutrino mixing even if the light neutrino remains strictly massless. If we include a possible Majorana mass term for the $S$-fermion of order $\mu\simeq 100$ GeV we get similar values of the neutrino mass as in the previous case. It is worthy to mention that mass matrices as that one appearing in expression \[matrix2\] have been proposed without embedding in a supersymmetric or any other deeper theoretical framework. In this case small tree level neutrino masses are obtained without making use of large scales. For example, the model proposed by Ref.[@caldwell1] (see also Ref.[@valle1]) which incorporates by hand additional iso-singlet neutral fermions. The smallness of neutrino masses is explained directly from the, otherwise left unexplained, smallness of the parameter $\mu$ in such a model. Neutrino masses and extra dimensions ------------------------------------ Recently, models where space-time is endowed with extra dimensions (4+$n$) have received some interest [@dvali]. It has been realized that the fundamental scale of gravity need not be the 4-dimensional “effective” Planck scale $M_P$ but a new scale $M_f$, as low as $M_f\sim$ TeV. The observed Planck scale $M_P$ is then related to $M_f$ in $4+n$ dimensions, by $$\eta^2\equiv \left (\frac{M_f}{M_P}\right )^2 \sim \frac{1}{ M_f^n R^n}$$ where $R$ is the typical length of the extra dimensions. , the coupling is $M_f/M_P \simeq 10^{-16}$ for $M_f \simeq 1$ TeV. For $n=2$, the radii $R$ of the extra dimensions are of the order of the millimeter, which could be hidden from many, extremely precise, measurements that exist at present but it would give hope to probe the concept of hidden space dimensions (and gravity itself) by experiment in the near future. According to current theoretical frameworks (see for example Ref. [@dvali]), all the SM group-charged particles are localized on a $3$-dimensional hyper-surface ‘brane’ embedded in the bulk of the $n$ extra dimensions. All the particles split in two categories, those that live on the brane and those which exist every where, as ‘bulk modes’. In general, any coupling between the brane and the bulk modes are suppressed by the geometrical factor $\eta$. Graviton and possible other neutral states belongs to the second category. The observed weakness of gravity can be then interpreted as a result of the new space dimensions in which gravity can propagate. The small coupling above can also be used to explain the smallness of the neutrino mass [@smirnov]. The left handed neutrino $\nu_L$ having weak isospin and hypercharge must reside on the brane. Thus it can get a naturally small Dirac mass through the mixing with some bulk fermion which can be interpreted as right handed neutrinos $\nu_R$: $$L_{mass,Dirac}\sim h \eta H \bar{\nu}_L \nu_R.$$ Here $ H,h$ are the Higgs doublet fields and a Yukawa coupling. After EW breaking this interaction will generate the Dirac mass $m_D = h v \eta \simeq 10^{-5}\ \mathrm{eV}$. The right handed neutrino $\nu_R$ has a whole tower of Kaluza-Klein relatives $\nu_{iR}$. The masses of these states are given by $m_{i} = i/R $, the $\nu_L$ couples with all with the same mixing mass. We can write the mass Lagrangian as $L=\bar{\nu}_L M \nu_R$ where $\nu_L=(\nu_L,\tilde{\nu}_{1L}, ...)$ , $\nu_R=(\nu_0R,\tilde{\nu}_{1R}, ...)$ and the resulting mass matrix $M$ being: $$M = \pmatrix{ m_D & \sqrt{2} m_D & \sqrt{2} m_D & .&\sqrt{2} m_D &. \cr 0 & 1/R & 0 & .&0 & .\cr 0 & 0 & 2/R & . & 0&.\cr . &. & . & . & k/R&.\cr . &. & . & . & . &. 
} \label{extra2}$$ The eigenvalues of the matrix $M M^{\dag}$ are given by a transcendental equation. In the limit, $m_D R \rightarrow 0$, or $m_D\rightarrow 0$, the eigenvalues are $\sim k/R$, $k\in Z$ with a doubly-degenerated zero eigenvalue. Other examples can be considered which incorporates a LR symmetry (see for example Ref. [@mohapatra2]), a $SU(2)_R$ right handed neutrino is assumed to live on the brane together with the standard one. In this class of models, it has been shown that the left handed neutrino is exactly massless whereas assumed bulk sterile neutrinos have masses related to the size of the extra dimensions. They are of order $10^{-3}$ eV, if there is at least one extra dimension with size in the micrometer range. Family symmetries and neutrino masses ------------------------------------- The observed mass and mixing interfamily hierarchy in the quark and, presumably in the lepton sector might be a consequence of the existence of a number of $U(1)_F$ family symmetries [@froggatt]. The observed intrafamily hierarchy, the fact that for each family $m_{up}>> m_{down}$, seem to require one of these to be anomalous [@familons1; @familons2]. A simple model with one family-dependent anomalous $U(1)$ beyond the SM was first proposed in Ref.[@familons1] to produce the observed Yukawa hierarchies, the anomalies being canceled by the Green-Schwartz mechanism which as a by-product is able to fix the Weinberg angle (see also Ref.[@familons2]). Recent developments includes the model proposed in Ref.[@RAMOND], which is inspired by models generated by the $E_6\times E_8$ heterotic string. The gauge structure of the model is that of the SM augmented by three Abelian $U(1)$ symmetries $X,Y^{1,2}$, the first one is anomalous and family independent. Two of the them, the non-anomalous ones, have specific dependences on the three chiral families designed to reproduce the Yukawa hierarchies. There are right handed neutrinos which trigger neutrino masses by the see-saw mechanism. The three symmetries $X,Y^{1,2}$ are spontaneously broken at a high scale M by stringy effects. It is assumed that three fields $\theta_i$ acquire a vacuum value. The $\theta_i$ fields are singlets under the SM symmetry but not under the $X$ and $Y^{1,2}$ symmetries. In this way, the Yukawa couplings appear as the effective operators after $U(1)_F$ spontaneous symmetry breaking. For neutrinos we have [@ramond3] the mass Lagrangian $$\begin{aligned} L_{mass}&\sim & h_{ij} L_i H_u N_j^c \lambda^{q_i+n_j} + M_N \xi_{ij} N_i^c N_j^c \lambda^{n_i+n_j} \nonumber\end{aligned}$$ where $h_{ij}, \xi_{ij} \simeq O(1)$. The parameter $\lambda$ determine the mass and mixing hierarchy, $\lambda~=~\langle\theta\rangle/M\sim\sin\theta_c$ where $\theta_c$ is the Cabibbo angle. The $q_i, n_i$ are the $U(1)$ charges assigned respectively to left handed leptons $L$ and right handed neutrinos $N$. 
These coupling generate the following mass matrices for neutrinos: $$\begin{aligned} m_{\nu}^D &= & diag(\lambda^{q_1}, \lambda^{q_2} , \lambda^{q_3})\ \hat{h}\ diag(\lambda^{n_1}, \lambda^{n_2} , \lambda^{n_3}) \langle H_u \rangle ,\nonumber\\ M_{\nu} & = & diag(\lambda^{n_1}, \lambda^{n_2} , \lambda^{n_3})\ \hat{\xi}\ diag(\lambda^{n_1}, \lambda^{n_2} , \lambda^{n_3}) M_N.\end{aligned}$$ From these matrices, the see-saw mechanism gives the formula for light neutrinos: $$m_{\nu} \simeq \frac{\langle H_u \rangle^2}{M} diag(\lambda^{q_1}, \lambda^{q_2}, \lambda^{q_3})\ \hat{h}\ \hat{\xi}^{-1}\ \hat{h}^{T}\ diag(\lambda^{q_1}, \lambda^{q_2}, \lambda^{q_3}).$$ The neutrino mass mixing matrix depends only on the charges assigned to the left handed neutrinos, by a cancellation of right handed neutrino charges by virtue of the see-saw mechanism. There is freedom in assigning charges $q_i$. If the charges of the second and the third generations of leptons are equal ($q_2 = q_3$), then one is lead to a mass matrix which have the following structure: $$m_\nu \sim \left( \begin{array}{ccc} \lambda^6 & \lambda^3 & \lambda^3 \\ \lambda^3 & a & b\\ \lambda^3 & b & c \end{array} \right) .$$ where $a, b, c \sim O(1)$. This matrix can be diagonalized by a large $\nu_2 - \nu_3 $ rotation, it is consistent with a large $\mu-\tau$ mixing. In this theory, explanation of the large neutrino mixing is reduced to a theory of prefactors in front of powers of the parameter $\lambda$. Cosmological Constraints {#sectioncosmos} ======================== Cosmological mass limits and Dark Matter ---------------------------------------- There are some indirect constraints on neutrino masses provided by cosmology. The most relevant is the constraint which follows from demanding that the energy density in neutrinos should not be too high. At the end of this section we will deal with some other limits as the lower mass limit obtained from galactic phase space requirements or limits on the abundance of additional weakly interacting light particles. Stable neutrinos with low masses ($m_\nu{\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle <}{\textstyle \sim}$}}}\ 1$ MeV) make a contribution to the total energy density of the universe which is given by: $$\begin{aligned} \rho_\nu&=& m_{tot}\ n_\nu\end{aligned}$$ where the total mass $m_{tot}=\sum_\nu (g_\nu/2) m_\nu$, with the number of degrees of freedom $g_\nu=4(2)$ for Dirac (Majorana) neutrinos. The number density of the neutrino sea is related to that one of photons by entropy conservation in the adiabatic expansion of the universe, $n_\nu=3/11\ n_\gamma$, and this last one is very accurately obtained from the CMBR measurements, $n_\gamma= 410.5$ cm$^{-3}$ (for a Planck spectrum with $T_0=2.725\pm 0.001\ K\ \simeq 2.35 \times 10^{-4}$ eV). Writing $\Omega_\nu=\rho_\nu/\rho_c$, where $\rho_c$ is the critical energy density of the universe ($\rho_c=3 H_{0}^2/8 \pi G_N$), we have ($m_\nu>> T_0$) $$\begin{aligned} \Omega_\nu h^2& =& 10^{-2}\ m_{tot}\ ( \mathrm{eV}), \label{cosmoneutrino}\end{aligned}$$ where $h$ is the reduced Hubble constant, recent analysis [@hubble] give the favored value: $h=0.71\pm0.08$. Constrained by requirements from BBN Nucleosynthesis, galactic structure formation and large scale observations, increasing evidence (luminosity-density relations, galactic rotation curves,large scale flows) suggests that [@darkmatter] $$\Omega_{M} h^2= 0.05 - 0.2,$$ where $\Omega_{M}$ is the total mass density of the universe, as a fraction of the critical density $\rho_c$. 
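Before breaking $\Omega_M$ into its components, it may be useful to check numerically the coefficient in Eq.(\[cosmoneutrino\]) from the numbers quoted above. This is only a consistency check; the value of the critical density used here, $\rho_c=3H_0^2/8\pi G_N\simeq 1.05\times 10^{4}\ h^2$ eV cm$^{-3}$, is the standard one and is not quoted explicitly in the text: $$\Omega_\nu h^2=\frac{m_{tot}\ n_\nu}{\rho_c/h^2}\simeq \frac{(3/11)\times 410.5\ {\rm cm^{-3}}}{1.05\times 10^{4}\ {\rm eV\ cm^{-3}}}\ m_{tot} \simeq \frac{m_{tot}}{94\ {\rm eV}}\simeq 10^{-2}\ m_{tot}\ ({\rm eV}),$$ in agreement with Eq.(\[cosmoneutrino\]).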
This $\Omega_{M}$ includes contributions from a variety of sources: photons, baryons, non-baryonic Cold Dark Matter (CDM) and Hot Dark Matter (HDM). The two first components are rather well known. The photon density is very well known to be quite small: $ \Omega_\gamma h^2= 2.471\times 10^{-5}$. The deuterium abundance BBN constraints [@deuterium] on the baryonic matter density ($\Omega_B$) of the universe $0.017 \leq \Omega_B h^2 \leq 0.021.$ The hot component, HDM is constituted by relativistic long-lived particles with masses much less than $\sim 1$ keV, in this category would enter the neutrinos. Detailed simulations of structure formation fit the observations only when one has some 20 % of HDM (plus $80\%$ CDM), the best fit being two neutrinos with a total mass of 4.7 eV. There seems to be however some kind of conflict within cosmology itself: observations of distant objects favor a large cosmological constant instead of HDM (see Ref.[@HDM] and references therein). One may conclude that the HDM part of $\Omega_M$ does not exceed 0.2. Requiring that $\Omega_\nu<\Omega_M$, we obtain $\Omega_{\nu} h^2 {\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle <}{\textstyle \sim}$}}}\ 0.1$. From here and from Eq.\[cosmoneutrino\], we obtain the cosmological upper bound on the neutrino mass $$m_{tot}{\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle <}{\textstyle \sim}$}}}\ 8\ \mathrm{eV}.$$ Mass limits, in this case lower limits, for heavy neutrinos ($\sim 1$ GeV) can also be obtained along the same lines. The situation gets very different if the neutrinos are unstable, one gets then joint bounds on mass and lifetime, then mass limits above can be avoided. There is a limit to the density of neutrinos (or weak interacting dark matter in general) which can be accumulated in the halos of astronomical objects (the [*Tremaine-Gunn*]{} limit): if neutrinos form part of the galactic bulges phase-space restrictions from the Fermi-Dirac distribution implies a lower limit on the neutrino mass [@peacock]: $$m_\nu{\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle >}{\textstyle \sim}$}}}\ 33 \ eV.$$ The abundance of additional weakly interacting light particles, such as a light sterile $\nu_s$, is constrained by BBN since it would enter into equilibrium with the active neutrinos via neutrino oscillations. A limit on the mass differences and mixing angle with another active neutrino of the type $\Delta m^2 \sin^2 2\theta{\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle <}{\textstyle \sim}$}}}3\times 10^{-6}$ eV$^2$ should be fulfilled in principle. From here is deduced that the effective number of neutrino species is $$N_\nu^{eff}< 3.5-4.5.$$ However systematical uncertainties in the derivation of the BBN bound make it too unreliable to be taken at face value and can eventually be avoided [@foot]. Neutrino masses and lepton asymmetry ------------------------------------- In supersymmetric LR symmetric models, inflation, baryogenesis (or leptogenesis) and neutrino oscillations can become closely linked. Baryosinthesis in GUT theories is in general inconsistent with an inflationary universe. The exponential expansion during inflation will wash out any baryon asymmetry generated previously at GUT scale. One way out of this difficulty is to generate the baryon or lepton asymmetry during the process of reheating at the end of the inflation. In this case the physics of the scalar field that drives the inflation, the inflaton, would have to violate CP (see Ref.[@peacock] and references therein). 
The challenge of any baryosinthesis model is to predict the observed asymmetry which is usually written as a baryon to photon (number or entropy) ratio. The baryon asymmetry is defined as $$\begin{aligned} n_B/s\equiv \left ( n_b-n_{\overline{b}}\right )/s.\end{aligned}$$ At present there is only matter and not known antimatter, $n_{\overline{b}}\sim 0$. The entropy density $s$ is completely dominated by the contribution of relativistic particles so is proportional to the photon number density which is very well known from CMBR measurements, at present $s=7.05\ n_\gamma$. Thus, $n_B/s\propto n_b/n_\gamma$. From BBN we know that $n_b/n_\gamma=(5.1\pm 0.3)\times 10^{-10}$ so we arrive to $n_B/s=(7.2\pm 0.4)\times 10^{-11}$ and from here we obtain equally the lepton asymmetry ratio. It was shown in Ref. [@khalil] that hybrid inflation can be successfully realized in a SUSY LR symmetric model with gauge group $G_{PS}=SU(4)_c\times SU(2)_L \times SU(2)_R$. The inflaton sector of this model consists of the two complex scalar fields $S$ and $\theta$ which at the end of inflation oscillate about the SUSY minimum and respectively decay into a pair of right-handed sneutrinos ($\nu^c_i$) and neutrinos. In this model, a primordial lepton asymmetry is generated [@yanagita] by the decay of the superfield $\nu^c_2$ which emerges as the decay product of the inflaton. The superfield $\nu^c_2$ decays into electroweak Higgs and (anti)lepton superfields. This lepton asymmetry is subsequently partially converted into baryon asymmetry by non-perturbative EW sphalerons. The resulting lepton asymmetry [@Laz3] can be written as a function of a number of parameters among them the neutrino masses and mixing angles and compared with the observational constraints above. It is highly non-trivial that solutions satisfying the constraints above and other physical requirements can be found with natural values of the model parameters. In particular, it is shown that the values of the neutrino masses and mixing angles which predict sensible values for the baryon or lepton asymmetry turn out to be also consistent with values required to solve the solar neutrino problem. Phenomenology of Neutrino Oscillations ====================================== Neutrino Oscillation in Vacuum ------------------------------ If the neutrinos have nonzero mass, by the basic postulates of the quantum theory there will be in general mixing among them as in the case of quarks. This mixing will be observable at macroscopic distances from the production point and therefore will have practical consequences only if the [*difference*]{} of masses of the different neutrinos is very small, typically $\Delta m{\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle <}{\textstyle \sim}$}}}1$ eV. In presence of masses, weak ($\nu_w$) and mass ($\nu_m$) basis of eigenstates are differentiated. To transform between them we need an unitary matrix $U$. Neutrinos can only be created and detected as a result of weak processes, at origin we have a weak eigenstate: $$\nu_w(0)= U \nu_m(0).$$ We can easily construct an heuristic theory of neutrino oscillations if we ignore spin effects as follows. After a certain time the system has evolved into $$\nu_m(t)=\exp (-i H t) \nu_m(0)$$ where $H$ is the Hamiltonian of the system, free evolution in vacuum is characterized by $H=diag(\dots E_i \dots)$ where $E_i^2=p^2+m_i^2$. In most cases of interest ($E\sim$MeV, $m\sim$eV), it is appropriated the ultrarelativistic limit: in this limit $p\simeq E$ and $E\simeq p+m^2/2p$. 
The effective neutrino Hamiltonian can then be written $H^{eff}=diag(\dots m_i^2\dots)/2E $ and $$\nu_w(t)= U \exp (-i H^{eff} t) U^\dagger \nu_w(0)=\exp(-i H^{eff}_w t) \nu_w(0).$$ In the last expression we have written the effective Hamiltonian in the weak basis $H^{eff}_w~\equiv~M^2/2E$ with $M^2\equiv U\ diag(\dots m_i^2 \dots) U^\dagger$. This derivation can be put on a firm basis, and one finds again the same expressions as the first terms of rigorous expansions in $E$; see for example the treatment using Foldy-Wouthuysen transformations in Ref.[@BILENKY]. The results of the neutrino oscillation experiments are usually analyzed under the simplest assumption of oscillations between two neutrino types; in this case the mixing matrix $U$ is the well known two-dimensional orthogonal rotation matrix depending on a single parameter $\theta$. If we repeat the computation above for this particular case, we find for example that the probability that a weak interaction eigenstate neutrino $(\nu_e)$ has oscillated into another weak interaction eigenstate neutrino $(\nu_{\mu})$ after traversing a distance $l\ (= c t)$ is $$P(\nu_e \rightarrow \nu_{\mu}; l)= \sin^2 2\theta \sin^2 \left (\frac{l}{l_{osc}} \right ) \label{oscil1}$$ where the oscillation length is defined by $1/l_{osc}\equiv \delta m^2/4 E$ and $\delta m^2 = m_1^2-m_2^2$. Numerically, in practical units, it turns out that $$\frac{\delta m^2l}{4 E} \simeq 1.27\ \frac{\delta m^2 (eV^2)\ l(m) }{E(MeV)}.$$ These probabilities depend on two factors: a mixing angle factor $\sin^2 2 \theta $ and a kinematical factor which depends on the distance traveled, on the momentum of the neutrinos, and on the difference in the squared masses of the two neutrinos. Both the mixing factor $\sin^2 2\theta$ and the kinematical factor should be of $O(1)$ to have significant oscillations. Neutrino Oscillations in Matter ------------------------------- When neutrinos propagate in matter, a subtle but potentially very important effect, the MSW effect, takes place which alters the way in which neutrinos oscillate into one another. In matter the neutrino experiences scattering and absorption; the latter is always negligible. At very low energies, coherent elastic forward scattering is the most important process. As in optics, the net effect is the appearance of a phase difference, a refractive index or, equivalently, a neutrino effective mass. This effective mass can change considerably depending on the density and composition of the medium; it also depends on the nature of the neutrino. In the neutrino case the medium is flavor-dispersive: the matter is usually nonsymmetric with respect to $e$ and $\mu,\tau$, and the effective mass is different for the different weak eigenstates [@msw2]. This is explained as follows for the simplest and most important case, the solar electron plasma. The electrons in the solar medium have charged current interactions with $\nu_e$ but not with $\nu_{\mu}$ or $\nu_{\tau}$. The resulting interaction energy is given by $ H_{int} = \sqrt{2} G_F N_e$, where $G_F$ and $N_e$ are the Fermi coupling and the electron density. The corresponding neutral current interactions are identical for all neutrino species and hence have no net effect on their propagation. Hypothetical sterile neutrinos would have no interaction at all. 
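As a rough numerical orientation before the matter Hamiltonian is constructed below, the size of the interaction energy $\sqrt{2} G_F N_e$ can be compared with the vacuum term $\delta m^2/2E$. The solar-core electron density ($\sim 6\times 10^{25}$ cm$^{-3}$) and the oscillation parameters used in this short Python sketch are illustrative assumptions, not fitted values:

``` python
# Compare the matter term sqrt(2) G_F N_e with the vacuum term dm^2/(2E).
# The solar-core electron density (~6e25 cm^-3) and the parameters
# dm^2 ~ 5e-6 eV^2, E ~ 1 MeV are assumptions used only for illustration.
import math

G_F   = 1.166e-5          # Fermi constant, GeV^-2
hbarc = 1.973e-14         # GeV*cm, to convert cm^-3 into GeV^3
N_e   = 6e25 * hbarc**3   # electron density in natural units (GeV^3)

V   = math.sqrt(2) * G_F * N_e       # matter potential, GeV
dm2 = 5e-6 * 1e-18                   # 5e-6 eV^2 expressed in GeV^2
E   = 1e-3                           # 1 MeV expressed in GeV

print("sqrt(2) G_F N_e  ~ %.1e eV" % (V * 1e9))            # ~ 8e-12 eV
print("dm^2 / (2E)      ~ %.1e eV" % (dm2 / (2*E) * 1e9))  # ~ 2.5e-12 eV
```

The two scales are comparable, which is why the matter term can compete with the vacuum term and produce the resonant behaviour described next.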
The effective global Hamiltonian in flavor space is now the sum of two terms, the vacuum part we have seen previously and the new interaction energy: $$H^{eff,mat}_w=H^{eff,vac}_w+ H_{int} \pmatrix{ 1 & 0 & 0\cr 0 & 0 & 0\cr 0 & 0 & 0} .$$ The practical consequence of this effect is that the oscillation probabilities of the neutrino in matter could largely increase due to resonance phenomena [@msw1]. In matter, for the two dimensional case and in analogy with vacuum oscillation, one defines an effective mixing angle as $$\sin 2\theta_M = \frac{ \sin2\theta/l_{osc}} {\left[(\cos 2\theta/l_{osc} - G_F N_e/\sqrt{2})^2 + ( \sin 2\theta/l_{osc} )^2 \right]^{1/2}}.$$ The presence of the term proportional to the electron density can give rise to a resonance. There is a critical density $N_e^{crit}$, given by $$N_e^{crit} = \frac{\delta m^2\cos 2\theta}{2\sqrt{2} E G_F},$$ for which the matter mixing angle $\theta_M$ becomes maximal $(\sin 2 \theta_M \rightarrow 1)$, irrespective of the value of mixing angle $\theta$. The probability that $\nu_e$ oscillates into a $\nu_{\mu}$ after traversing a distance $l$ in this medium is given by Eq.(\[oscil1\]), with two differences. First $\sin 2\theta \rightarrow \sin 2\theta_M$. Second, the kinematical factor differ by the replacement of $\delta m^2 \rightarrow \delta m^2 \sin 2\theta$. Hence it follows that, at the critical density, $$P_{\mathrm{matter}}(\nu_e \rightarrow \nu_{\mu}; l)_ {(N_e =N_e^{\mathrm{crit}})} = \sin^2 \left ( \sin2\theta\ \frac{l}{l_{osc}}\right ) . \label{oscil2}$$ This formula shows that one can get full conversion of a $\nu_e$ weak interaction eigenstate into a $\nu_{\mu}$ weak interaction eigenstate, provided that the length $l$ and the energy $E$ satisfy the relations $$\sin 2\theta\ \frac{l}{l_{osc}} = \frac{n \pi}{2} ; \quad n=1,2,..$$ There is a second interesting limit to consider. This is when the electron density $N_e$ is so large such that $\sin 2 \theta_M \rightarrow 0$ or $\theta_M \rightarrow \pi/2$. In this limit, there are no oscillations in matter because $\sin2\theta_M$ vanishes and we have $$P_{\mathrm{matter}}(\nu_e \rightarrow \nu_{\mu}; l)_{\left (N_e \gg \frac{\delta m^2}{2 \sqrt{2} E G_F}\right )} \rightarrow 0.$$ Experimental evidence and phenomenological analysis =================================================== In the second part of this review, we will consider the existing experimental situation. It is fair to say that at present there are at least an equal number of positive as negative (or better ”non-positive”) indications in favor of neutrino masses and oscillations. Laboratory, reactor and accelerator results. -------------------------------------------- No indications in favor of a non-zero neutrino masses have been found in direct kinematical searches for a neutrino mass. From the measurement of the high energy part of the tritium $\beta$ decay spectrum, upper limits on the electron neutrino mass are obtained. The two more sensitive experiments in this field, Troitsk [@troitsk] and Mainz [@mainz], obtain results which are plagued by interpretation problems: apparition of negative mass squared and bumps at the end of the spectrum. In the Troistk experiment, the shape of the observed spectrum proves to be in accordance with classical shape besides a region $\sim$ 15 eV below the end-point, where a small bump is observed; there are indications of a periodic shift of the position of this bump with a period of “exactly” $0.504\pm 0.003$ year [@troitsknew]. 
After accounting for the bump, they derive the limit $m_{\nu e}^2=-1.0\pm 3.0\pm 2.1$ eV$^2$, or $m_{\nu e}< 2.5$ eV (95% CL) [@troitsknew]. The latest published results by the Mainz group lead to $m_{\nu e}^2=-0.1\pm 3.8\pm 1.8$ eV$^2$ (1998 "Mainz data 1"), from which an upper limit of $m_{\nu e}< 2.9$ eV [@mainz] (95% CL, unified approach) is obtained. Preliminary data (1998 and 1999 measurements) provide a limit $m_{\nu e}< 2.3$ eV [@mainznew]. Some indication of the anomaly reported by the Troitsk group was found, but its postulated half-year period is not supported by their data. Diverse exotic explanations have been proposed to explain the Troitsk bump and its seasonal dependence. The main feature of the effect might be "phenomenologically" interpreted, not without problems, as capture on tritium (yielding $^3$He) of relic neutrinos present in a high density cloud around the Sun [@troitsk; @stephe]. The ultimate Mainz and Troitsk sensitivity, expected to be limited by systematics, lies at the $\sim 2$ eV level. In the near future a new large tritium $\beta$ decay experiment with a sensitivity of $0.6-1$ eV is planned [@mainznew]. Regarding the heavier neutrinos, other kinematical limits are the following:

- Limits for the muon neutrino mass have been derived using the decay channel $\pi^+\to\mu^+ \nu_\mu$ at intermediate energy accelerators (PSI, LANL). The present limits are $m_{\nu \mu}{\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle <}{\textstyle \sim}$}}}160 $ keV [@PSI].

- A tau neutrino mass of less than 30 MeV is well established and confirmed by several experiments: limits of 28, 30 and 31 MeV have also been obtained by the OPAL, CLEO and ARGUS experiments respectively (see Ref.[@OPAL] and references therein). The best upper limit for the $\tau$ neutrino mass has been derived using the decay mode $\tau\to 5 \pi^\pm \nu_\tau$ by the ALEPH collaboration [@ALEPH]: $m_{\nu\tau}<$ 18 MeV (95% CL).

Many experiments on the search for neutrinoless double-beta decay \[$(\beta\beta)_{0\nu}$\], $$(A,Z)\to (A,Z+2) + 2\ e^-,$$ have been performed. This process is possible only if neutrinos are massive and Majorana particles. The matrix element of the process is proportional to the effective Majorana mass $\langle m\rangle=\sum \eta_i U_{ei}^2 m_i$. Uncertainties in the precise value of the upper limits are relatively large, since they depend on theoretical calculations of nuclear matrix elements. From the non-observation of $(\beta\beta)_{0\nu}$, the Heidelberg-Moscow experiment gives the most stringent limit on the Majorana neutrino mass. After 24 kg yr of data [@heidelberg99] (see also earlier results in Ref.[@heidelberg]), they set a lower limit on the half-life of the neutrinoless double beta decay in $^{76}$Ge of $T_{1/2}>5.7\times 10^{25}$ yr at 90% CL, thus excluding an effective Majorana neutrino mass $\mid\langle m\rangle\mid >0.2 $ eV (90% CL). This result allows one to set strong constraints on degenerate neutrino mass models. In the next years an increase in sensitivity is expected, allowing limits down to the $\mid\langle m\rangle\mid \sim 0.02-0.006$ eV level (GENIUS I and II experiments [@genius]). Many short-baseline (SBL) neutrino oscillation experiments with reactor and accelerator neutrinos did not find any evidence of neutrino oscillations. For example, experiments looking for $\overline{\nu}_e\to\overline{\nu}_e$ or ${\nu}_\mu\to{\nu}_\mu$ disappearance (Bugey, CCFR [@Bugey; @CCFR]) or for $\overline{\nu}_\mu\to\overline{\nu}_e$ oscillations (CCFR, E776 [@CCFR; @E776]). 
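As a rough orientation for the long-baseline reactor results discussed next, the practical-units formula of the vacuum oscillation section indicates the $\Delta m^2$ scale to which a baseline of $\sim 1$ km and reactor antineutrino energies of a few MeV are naturally sensitive: requiring the oscillation phase to be of order one gives $$1.27\ \frac{\Delta m^2({\rm eV^2})\ l({\rm m})}{E({\rm MeV})}\sim 1 \quad\Rightarrow\quad \Delta m^2\sim \frac{3}{1.27\times 10^{3}}\ {\rm eV^2}\simeq 2\times 10^{-3}\ {\rm eV^2},$$ which is the range relevant for the atmospheric anomaly; with a rate measurement precise at the few percent level the exclusion extends somewhat below this first-maximum scale.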
The first reactor long-baseline (L$\sim$ 998-1115 m) neutrino oscillation experiment CHOOZ found no evidence for neutrino oscillations in the ${\bar{\nu}_{\rm e}}$ disappearance mode [@CHOOZ; @CHOOZ99]. CHOOZ results are important for the atmospheric deficit problem: as is seen in Fig.(\[fCHOOZ\]) they are incompatible with an ${{\nu}_{\rm e}}\to {\nu_{\rm \mu}}$ oscillation hypothesis for the solution of the atmospheric problem. Their latest results [@CHOOZ99] imply an exclusion region in the plane of the two-generation mixing parameters (with normal or sterile neutrinos) given approximately by $\Delta m^2 > 0.7~10^{-4}{\mbox{$\textstyle eV^2$}}$ for maximum mixing and ${{\sin^2 2\theta}}> 0.10$ for large ${{\Delta m^2}}$ (as shown approximately in Fig.(\[fCHOOZ\]) (left) which corresponds to early results). Lower sensitivity results, based only on the comparison of the positron spectra from the two different-distance nuclear reactors, has also been presented, they are shown in Fig.(\[fCHOOZ\]) (right). These are independent of the absolute normalization of the antineutrino flux, the cross section and the target and detector characteristics and are able alone to almost completely exclude the SK allowed oscillation region [@CHOOZ99]. The Palo Verde Neutrino Detector searches for neutrino oscillations via the disappearance of electron anti-neutrinos produced by a nuclear reactor at a distance $L\sim 750-890$ m. The experiment has been taking neutrino data since October 1998 and will continue taking data until the end of 2000 reaching its ultimate sensitivity. The analysis of the 1998-1999 data (first 147 days of operation) [@paloverdenew] yielded no evidence for the existence of neutrino oscillations. The ratio of observed to expected number of events: $$\frac{\overline{\nu}_{e,obs}}{\overline{\nu}_{e,MC}}=1.04\pm 0.03\pm0.08.$$ The resulting $\overline{\nu}_e\to\overline{\nu}_x$ exclusion plot is very similar to the CHOOZ one. Together with results from CHOOZ and SK, concludes that the atmospheric neutrino anomaly is very unlikely to be due to $\overline{\nu}_\mu\to\overline{\nu}_e$ oscillation. Los Alamos LSND experiment has reported indications of possible $\overline{\nu}_\mu\to\overline{\nu}_e$ oscillations [@LSND]. They search for $\overline{\nu}_e$’s in excess of the number expected from conventional sources at a liquid scintillator detector located 30 m from a proton beam dump at LAMPF. It has been claimed that a $\overline{\nu}_e$ signal has been detected via the reaction $\overline{\nu}_e p\to e^+ n$ with $e^+$ energy between 36 and 60 MeV, followed by a $\gamma$ from $n p\to d\gamma$ (2.2 MeV). The LSND experiment took its last beam on December, 1998. The analysis of the complete 1993-1998 data set (see Refs.[@yellin; @Mills; @LSNDnew]) yields a fitted-estimated excess of $\overline{\nu}_e$ of $90.9\pm 26.1$. If this excess is attributed to neutrino oscillations of the type $\overline{\nu}_\mu\to\overline{\nu}_e$, it corresponds to an oscillation probability of $3.3\pm 0.09\pm 0.05\times 10^{-3}$. The results of a similar search for $\nu_\mu\to \nu_e$ oscillations where the (high energy, $60 <E_\nu< 200$ MeV) $\nu_e$ are detected via the CC reaction $C(\nu_e,e^-) X$ provide a value for the corresponding oscillation probability of $2.6\pm 1.0\pm 0.5\times 10^{-3}$ (1993-1997 data). There are other exotic physics explanations of the observed antineutrino excess. 
One example is the lepton-number violating decay $\mu^+\to e^+ \overline{\nu}_e \nu_\mu$, which can explain these observations with a branching ratio $Br\sim 0.3 \%$, a value which is lower than, but not very far from, the existing upper limits ($Br< 0.2-1\%$, [@PDG98]). The surprisingly positive LSND result has not been confirmed by the KARMEN experiment (Rutherford-Karlsruhe Laboratories). This experiment, with an experimental setup similar to LSND, searches for $\bar\nu_e$ produced by $\bar\nu_\mu\to\bar\nu_e$ oscillations at a mean distance of 17.6 m. The time structure of the neutrino beam is important for the identification of the neutrino induced reactions and for the suppression of the cosmic ray background. Systematic time anomalies that are not completely understood have been reported, which detract from the credibility of any further KARMEN claim. They see an excess of events above the typical muon decay curve, which is $4.3$ sigmas off (1990-1999 data, see Ref.[@karmennew]) and which could represent an unknown instrumental effect. Exotic explanations, such as the existence of a weakly interacting particle "X" (for example a mixing of active and sterile neutrinos) with a mass $m_X=m_\pi-m_\mu\simeq$ 33.9 MeV, have been proposed as an alternative solution to these anomalies, and their consequences have been extensively studied [@karmennew; @relativekarmenanomaly]. This particle might be produced in the reaction $\pi^+\to \mu^+ + X$ and decay as $X\to e^+ e^-\nu$. KARMEN set upper limits on the visible branching ratio $\Gamma_X=\Gamma ( \pi^+\to \mu^+ + X)/\Gamma ( \pi^+\to \mu^+ + \nu_\mu)$ and on the lifetime $\tau_X$. From their results [@karmennew] one obtains the relation ($1<<\tau_X(\mu\ s){\mbox{\raisebox{-1.ex} {$\stackrel{\textstyle <}{\textstyle \sim}$}}}10^8$) $$\frac{\Gamma_X}{\tau_X (\mu\ s)}\sim 10^{-18}.$$ More concretely, the results are as follows. Regarding the antineutrino signal, the 1990-1995 and early 1997-1998 KARMEN data showed inconclusive results: they found no events, with an expected background of $ 2.88 \pm 0.13 $ events, for $\overline{\nu}_\mu\to\overline{\nu}_e$ oscillations [@KARMEN]. The results of the search from Feb. 1997 to Dec. 1999, which include a 40-fold improvement in the suppression of cosmic induced background, have been presented in a preliminary way [@karmennew; @steidl99]. They find this time 9.5 oscillation candidates, in agreement with the claimed, well known background expectation of $10.6\pm 0.6$ events. An upper limit for the mixing angle is deduced: $\sin^ 2 2\theta< 1.7\times 10^{-3}$ (90% CL) for large $\Delta m^2\ (= 100$ eV$^2$). The positive LSND result in this channel could not be completely excluded, but they are able to exclude the entire LSND favored regions above 2 eV$^2$ and most of the rest of its favored parameter space. In the present phase, the KARMEN experiment will take data until spring 2001. At the end of this period, the KARMEN sensitivity is expected to be able to exclude the whole parameter region of evidence suggested by LSND if no oscillation signal were found (Fig.\[fLSND\]). The first phase of a third pion beam dump experiment designed to settle the LSND-KARMEN controversy has been approved to run at Fermilab. Phase I of "BooNE" (MiniBooNE) expects a 10 $\sigma$ signal ($\sim 1000 $ events) and thus will make a decisive statement either confirming the LSND signal or ruling it out. Plans are to run early in 2001. Additionally, there is a letter of intent for a similar experiment to be carried out at the CERN PS [@BOON; @CERNPS]. 
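The same kind of order-of-magnitude estimate shows why the LSND geometry points to the eV$^2$ scale: with a source-detector distance of about 30 m and positron energies in the 36-60 MeV range quoted above, requiring the oscillation phase $1.27\,\Delta m^2\, l/E$ to be of order one gives $$\Delta m^2\sim \frac{E({\rm MeV})}{1.27\ l({\rm m})}\simeq \frac{45}{1.27\times 30}\ {\rm eV^2}\simeq 1\ {\rm eV^2},$$ which is why the favored regions discussed in connection with the KARMEN exclusion sit at and above the eV$^2$ scale.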
The K2K experiment started in 1999 the era of very long-baseline neutrino-oscillation experiment using a well-defined neutrino beam. In the K2K experiment ($L\sim$ 250 km), the neutrino beam generated by the KEK proton synchrotron accelerator is aimed at the near and far detectors, which are carefully aligned in a straight line. Then, by comparing the neutrino events recorded in these detectors, they are able to examine the neutrino oscillation phenomenon. Super-Kamiokande detector itself acts as the far detector. The K2K near detector complex essentially consists of a one kiloton water Cerenkov detector (a miniature Super-Kamiokande detector). A total intensity of $\sim 10^{19}$ protons on target, which is about 7% of the goal of the experiment, was accumulated in 39.4 days of data-taking in 1999 [@K2K]. They obtained 3 neutrino events in the fiducial volume of the Super-Kamiokande detector, whereas the expectation based on observations in the front detectors was $12.3^{ +1.7}_{-1.9}$ neutrino events. It corresponds to a ratio of data versus theory $0.84\pm 0.01$. Although the preliminary results are rather consistent with squared mass difference $8\times 10^{-3}$ eV$^2$ and maximal mixing, it is too early to draw any reliable conclusions about neutrino mixing. An complete analysis of oscillation searches from the view points of absolute event numbers, distortion of neutrino energy spectrum, and $\nu_e/\nu_\mu$ ratio is still in progress. Solar neutrinos --------------- Indications in the favor of neutrino oscillations were found in ”all” solar neutrino experiments (along this section and the following ones, we will make reference to results appeared in Refs. [@gallex; @sage; @homestake; @sk9812; @sk9805; @suz1]): The Homestake Cl radiochemical experiment with sensitivity down to the lower energy parts of the $^{8}$B neutrino spectrum and to the higher $^{7}$Be line [@homestake]. The two radiochemical $^{71}$Ga experiments, SAGE and GALLEX, which are sensitive to the low energy pp neutrinos and above [@sage; @gallex] and the water Cerenkov experiments Kamiokande and Super-Kamiokande (SK) which can observe only the highest energy $^{8}$B neutrinos. Water Cerenkov experiments in addition demonstrate directly that the neutrinos come from the Sun showing that recoil electrons are scattered in the direction along the sun-earth axis [@sk9812; @sk9805; @suz1]. Two important points to remark are: a) The prediction of the existence of a global neutrino deficit is hard to modify due to the constraint of the solar luminosity on pp neutrinos detected at SAGE-GALLEX. b) The different experiments are sensitive to neutrinos with different energy ranges and combined yield spectroscopic information on the neutrino flux. Intermediate energy neutrinos arise from intermediate steps of the thermonuclear solar cycle. It may not be impossible to reduce the flux from the last step ($^{8}$B), for example by reducing temperature of the center of the Sun, but it seems extremely hard to reduce neutrinos from $^7$Be to a large extent, while keeping a reduction of $^8$B neutrinos production to a modest amount. If minimal standard electroweak theory is correct, the shape of the $^8$B neutrino energy spectrum is independent of all solar influences to very high accuracy. Unless the experiments are seriously in error, there must be some problems with either our understanding of the Sun or neutrinos. 
Clearly, the SSM cannot account for the data (see Fig.\[SSM1\]) and possible highly nonstandard solar models are strongly constrained by helioseismology studies \[see Fig.(\[SSM2\])\]. There are at least two reasonable versions of the neutrino oscillation phenomenon which could account for the suppression of intermediate energy neutrinos. The first one, neutrino oscillations in vacuum, requires a large mixing angle and a seemingly unnatural fine tuning of the neutrino oscillation length with the Sun-Earth distance for intermediate energy neutrinos. The second possibility, level-crossing oscillations in the presence of solar matter and/or magnetic fields of regular and/or chaotic nature (MSW, RSFP), requires no fine tuning of either the mixing parameter or the neutrino mass difference to cause a selective large reduction of the neutrino flux. This mechanism explains naturally the suppression of intermediate energy neutrinos, leaving the low energy pp neutrino flux intact and the high energy $^8$B neutrinos only partially suppressed. The concrete range of parameters obtained, including the latest SK (Super-Kamiokande) data, will be shown in the next section. The SK detector and Results. ---------------------------- The high precision and high statistics Super-Kamiokande (SK) experiment began operation in April 1996. A few words about the detector itself. SK is a 50-kiloton water Cerenkov detector located near the old Kamiokande detector under a mean overburden of 2700 meter-water-equivalent. The effective fiducial volume is $22.5$ kt. It is a well understood, well calibrated detector. The accuracy of the absolute energy scale is estimated to be $\pm 2.4\%$ based on several independent calibration sources: cosmic ray through-going and stopping muons, muon decay electrons, the invariant mass of $\pi^0$'s produced by neutrino interactions, radioactive source calibration, and, as a novelty in neutrino experiments, a 5-16 MeV electron LINAC. In addition to the ability to record higher statistics in less time, due to the much larger dimensions of the detector, SK can contain multi-GeV muon events, making possible for the first time a measurement of the spectrum of $\mu$-like events up to $\sim 8-10 $ GeV. The results from SK, to be summarized below, combined with data from earlier experiments, provide important constraints on the MSW and vacuum oscillation solutions of the solar neutrino problem (SNP) [@nu98xxx; @smy99; @SKnew]: [*Total rates.*]{} The most robust results of the solar neutrino experiments so far are the total observed rates. Preliminary results corresponding to the first 825 days of operation of SK (presented in spring 2000, [@SKnew]), with a total number of events $N_{ev}= 11235\pm 180\pm 310$ in the energy range $E_{vis}=6.5-20$ MeV, predict the following flux of solar ${}^8$B neutrinos: $$\phi_{{}^8 B}=(2.45\pm 0.04\pm 0.07)\times 10^6\ cm^{-2}\ sec^{-1},$$ a flux which is clearly below the SSM expectations. The most recent data on rates from all existing experiments are summarized in Table (\[t1\]). Total rates alone indicate that the $\nu_e$ energy spectrum from the Sun is distorted. The SSM flux predictions are inconsistent with the observed rates in solar neutrino experiments at approximately the 20$\sigma$ level. Furthermore, there is no linear combination of neutrino fluxes that can fit the available data at the 3$\sigma$ level \[see Fig.(\[SSM1\])\]. 
[*Zenith angle: day-night effect.*]{} If MSW oscillations are effective, for a certain range of neutrino parameters the observed event rate will depend upon the zenith angle of the Sun (through an Earth matter regeneration effect). With present statistics, the most robust estimator of the zenith angle dependence is the day-night (or up-down) asymmetry, A. The experimental estimate is [@SKnew]: $$\begin{aligned}
A\equiv\frac{N-D}{N+D}=0.032\pm0.015\pm 0.006, \quad (E_{recoil}>6.5\ {\rm MeV}).\end{aligned}$$ The difference is small and not statistically significant, but it is in the direction that would be expected from regeneration in the Earth (the Sun is apparently brighter in neutrinos at night). Taken alone, the small value observed for A excludes a large part of the parameter region that would be allowed if only the total rates were considered \[see Fig.(\[SOLAR1\])\]. [*Spectrum Shape.*]{} The shape of the neutrino spectrum determines the shape of the recoil electron energy spectrum produced by neutrino-electron scattering in the detector and is independent of the astrophysical source. All the neutrino oscillation solutions (SMA, LMA, LOW and vacuum) provide acceptable, although not excellent, fits to the recoil energy spectrum. The simplest test is to investigate whether the ratio, R, of the observed to the standard energy spectrum is constant with increasing energy. The null flatness hypothesis is accepted at the 90% CL ($\chi^2\sim 1.5$, [@SKnew]). However, alternative fits of the ratio $R$ to a linear function of energy yield slope values that do not exclude the presence of a distortion at higher energies \[see Figs.(\[SOLAR1\]-\[SOLAR2\]) and the next paragraph\]. [*Spectrum shape: the hep neutrino problem.*]{} A small but significant discrepancy appears when comparing the predictions from the global best fits for the energy spectrum at high energies ($E_{\nu}\gtrsim 13$ MeV) with the SK results. From this discrepancy it has been speculated that uncertainties in the $hep$ neutrino flux may affect the higher energy part of the solar neutrino spectrum. Presently, low energy nuclear physics calculations of the rate of the hep reaction are uncertain by a factor of at least six. The agreement between expected and measured ratios improves when the hep flux is allowed to vary as a free parameter \[see Fig.(\[SOLAR6\]) and Ref.[@SKnew]\]. The best fit is obtained for a combination $\phi\sim 0.45\, {}^8\mbox{B}+16\, \mbox{hep}$ ($\chi^2\sim 1.2$). An upper limit on the ratio of the experimental to the SSM $hep$ flux is obtained: $$\phi_{hep}^{exp}/{ \phi_{hep}^{BP98}}< 15, \ (90\% CL).$$ [*Seasonal Variation.*]{} No evidence for an anomalous seasonal variation of the neutrino flux has been found. The results (SK 825 d, $E_{vis}=10-20$ MeV) are consistent with the geometrical variation expected from the eccentricity of the Earth's orbit ($\chi^2\sim 0.5$ for the null hypothesis, Ref.[@SKnew]). [*Analysis of data.*]{} From a two-flavor analysis (Ref.[@SKnew], see also Refs.[@bah2; @bahcall99]) of the total event rates in the ClAr, SAGE, GALLEX and SK experiments, the best $\chi^2$ fit considering active neutrino oscillations is obtained for $\Delta m^2=5.4\times 10^{-6}$ eV$^2$, $\sin^2 2\theta=5.0\times 10^{-3}$ (the so-called small mixing angle solution, SMA). Other local $\chi^2$ minima exist.
The large mixing angle solution (LMA) occurs at $\Delta m^2=3.2\times 10^{-5}$ eV$^2$, $\sin^2 2\theta=0.76$, and the LOW solution (lower probability, low mass) at $\Delta m^2=7.9\times 10^{-8}$ eV$^2$, $\sin^2 2\theta=0.96$. The vacuum oscillation solution occurs at $\Delta m^2=4.3\times 10^{-10}$ eV$^2$, $\sin^2 2\theta=0.79$. At this extremely low value of the mass difference the MSW effect is inoperative. For oscillations involving sterile neutrinos (the effective matter potential is modified in this case) the LMA and LOW solutions are not allowed, and only the (only slightly modified) SMA solution together with the vacuum solution remain possible. When all the data (the total rates, the zenith-angle dependence and the recoil energy spectrum) are combined, the best-fit solution is almost identical to the one obtained in the rates-only case. Among the other solutions, only the SMA and vacuum solutions survive (at the 99% CL); the LMA and the LOW solutions are, albeit marginally, ruled out [@bah2]. [*Solar magnetic fields and antineutrino flux bounds.*]{} Analyses which consider neutrino propagation in the presence of solar magnetic fields have also been presented. In this case a more complicated variant of the MSW effect, the so-called RSFP effect, could manifest itself. Typically, these analyses yield solutions with ${\Delta m^2}\sim 10^{-7}-10^{-8}$ eV$^2$ for both small and large mixing angles. Spin flavor or resonant spin flavor (RSFP) solutions are much more ambiguous than pure MSW solutions because of the necessity of introducing additional free parameters in order to model the largely unknown intensity and profile of the solar magnetic field. The recognition of the random nature of the solar convective fields and recent theoretical developments in the treatment of random Schroedinger equations have partially improved this situation, allowing SNP solutions to be obtained without the need for a detailed model description (see the recent analyses in [@tor2a; @tor2b; @tor2c; @bykov; @tor5]). In addition, random RSFP models predict the production of a sizeable quantity of electron antineutrinos in case the neutrino is a Majorana particle. Presently, antineutrino searches [@tor6] with negative results in Kamiokande and SK are welcome because they significantly restrict the uncomfortably large parameter space of RSFP models. A search [@tor6] for inverse beta decay electron antineutrinos has set limits on the absolute flux of solar antineutrinos originating from the solar ${}^8$B neutrino component: $$\Phi_{\overline{\nu}}({}^8 B)< 1.8\times 10^5 \mbox{cm}^{-2} \mbox{s}^{-1},\ (95\% \ \mbox{CL}),$$ a number which is equivalent to an averaged conversion probability bound (with respect to the SSM-BP98 model) of $$P<3.5\%\ ( 95\% \ \mbox{CL}).$$ In the future such antineutrinos could be identified in either the SK or the SNO experiments, establishing the Majorana nature of the neutrino. In Ref.[@tor2c] \[see Fig.(\[efig\]) for illustration\] it has been shown that, even for moderate levels of noise, it is possible to obtain a probability for $\nu_e\to \overline{\nu}_e$ conversion of about $1-3\%$ in the energy range 2-10 MeV for large regions of the mixing parameter space, while still satisfying the present SK antineutrino bounds and the observed total rates. On the other hand, it would be possible to obtain information about the internal solar magnetic field if the antineutrino bounds reach the $1\%$ level and a particle physics solution to the SNP is assumed.
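The two bounds just quoted are related essentially by a normalization to the expected ${}^8$B flux: assuming again a ${}^8$B flux of roughly $5.2\times 10^{6}\ {\rm cm^{-2}\,s^{-1}}$ (a round value close to the BP98 prediction),
$$P(\nu_e\to\overline{\nu}_e)\;\lesssim\;\frac{\Phi_{\overline{\nu}}({}^8 B)}{\phi^{BP98}_{{}^8 B}}\;\approx\;\frac{1.8\times 10^{5}}{5.2\times 10^{6}}\;\approx\;3.5\%,$$
which reproduces the averaged conversion probability limit given above.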
Atmospheric neutrinos
---------------------

Atmospheric neutrinos are the decay products of hadronic showers produced by cosmic ray interactions in the atmosphere. The double ratio $$R\equiv \left ( \mu/e\right )_{DATA}/\left ( \mu/e\right )_{MC},$$ where $\mu/e$ denotes the ratio of the number of $\mu$-like to $e$-like neutrino interactions observed in the experiment or predicted by the simulation, is considered as an estimator of the atmospheric neutrino flavor ratio $(\nu_\mu+\overline{\nu}_\mu)/(\nu_e+\overline{\nu}_e)$. The calculations of the individual absolute neutrino fluxes have large uncertainties, at the $\sim 20\%$ level [@atmfluxes]. However, the flavor flux ratio is known to an accuracy of better than $5\%$ in the GeV energy range. The calculated flux ratio has a value of about 2 for energies $<$ 1 GeV and increases with increasing neutrino energy, reaching a value $\sim 10$ at $100$ GeV. The angular distribution of the different fluxes is also an important ingredient in the existing evidence for atmospheric neutrino oscillations. Calculations show that, for neutrino energies higher than a few GeV, the fluxes of upward and downward going neutrinos are expected to be nearly equal; geomagnetic field effects at these energies are expected to be small because of the relatively large geomagnetic rigidity of the primary cosmic rays that produce these neutrinos [@atmfluxes]. Prior to the present era dominated by Super-Kamiokande results, anomalously low, statistically significant values of the ratio $R$ had been obtained repeatedly [@experiments; @experiments2] in the water Cerenkov detectors Kamiokande and IMB-3 and in the calorimeter-based Soudan-2 experiment for "sub-GeV" events ($E_{vis}< 1$ GeV). The NUSEX and Frejus experiments, however, reported results consistent with no deviation from unity, although with smaller data samples. The Kamiokande experiment observed a value of $R$ smaller than unity in the multi-GeV ($E_{vis}>1$ GeV) energy region, as well as a dependence of this ratio on the zenith angle. IMB-3, with a smaller data sample, reported inconclusive results in a similar energy range, not in contradiction with the Kamiokande observations [@experiments; @experiments2]. The Super-Kamiokande (SK) results are completely consistent with the previous results, at a much higher accuracy level. Especially significant improvements in accuracy have been obtained in measuring the zenith-angle dependence of the neutrino events: in summary, the single most significant result obtained by SK is that the flux of upward-going muon neutrinos is smaller than that of downward-going ones. As commented before, in addition to recording higher statistics in less time, thanks to the much larger dimensions of the detector, the SK detector can contain multi-GeV muon events, making possible for the first time a measurement of the spectrum of $\mu$-like events up to $\sim 8-10$ GeV. For experimental and phenomenological reasons, the SK experiment uses the following event classification nomenclature. According to their origin, events are classified as [*e-like*]{} (showering, $\nu_e$ or $\overline{\nu}_e$ events) or [*$\mu$-like*]{} (non-showering, $\nu_\mu$ or $\overline{\nu}_\mu$ events). According to the position of the neutrino interaction, one distinguishes [*contained events*]{} (vertex in the fiducial volume, $98\%$ muon induced), which, depending on their energy, are classified as [*sub-GeV*]{} ($E\lesssim 1$ GeV) or [*multi-GeV*]{} ($E\lesssim 10$ GeV) samples.
Non-contained events can be [*upward through-going muons*]{} (vertex outside the detector, muon induced, $E_\nu\sim 500$ GeV) or [*upward stopping muons*]{} (typically $E_\nu\lesssim 50$ GeV). In all cases, the neutrino path-length covers the full range, from $\sim 10^1$ km for [*down*]{} events to $\sim 10^4$ km for [*up*]{} events. In what follows we summarize the present results on total and zenith-angle dependent rates. [*Total rates.*]{} In the sub-GeV range ($E_{vis}< 1.33$ GeV), from an exposure of 61 kiloton-years (kty) (990 days of operation) of the SK detector, the measured ratio $R$ is: $$R_{sub\,GeV}=0.66\pm0.02\pm 0.05.$$ It is not possible to determine from the data whether the observed deviation of $R$ is due to an electron excess or a muon deficit. The distribution of $R$ with momentum in the sub-GeV range is consistent with being flat within the statistical error, as happens with the zenith angle distributions \[see the right plots in Fig.(\[ATM8\])\]. In the multi-GeV range, a ratio $R$ slightly higher than at lower energies has been obtained (for a similar exposure): $$R_{multi\,GeV}=0.66\pm0.04\pm 0.08.$$ For e-like events, the data are apparently consistent with the MC. For $\mu$-like events there is a clear discrepancy between measurement and simulation. [*Zenith Angle.*]{} A strong distortion in the shape of the $\mu$-like event zenith angle distribution was observed \[Plots (\[ATM4\]-\[ATM8\])\]. The angular correlation between the neutrino direction and the direction of the produced charged lepton is much better at higher energies ($\sim 15^{\circ}-20^{\circ}$): the zenith angle distribution of the leptons reflects rather accurately that of the neutrinos in this case. At lower energies, the ratio of the number of upward to downward $\mu$-like events was found to be $$(N_{up}/N_{down})^\mu_{Data}=0.52\pm 0.07$$ while the expected value is practically one: $$(N_{up}/N_{down})^\mu_{MC}=0.98\pm 0.03.$$ The validity of the results has been tested by measuring the azimuthal angle distribution of the incoming neutrinos, which is insensitive to a possible influence from neutrino oscillations. This distribution agreed with the MC predictions, which are nearly flat. Another signal of the presence of neutrino oscillations could appear in the ratio of neutrino events for two well separated energy ranges. This is the case for the ratio of upward through-going to upward stopping muon events; both classes correspond to very high energy events. The results and expected values are the following ([@SKmoriond2000; @atmfluxes]) $$\begin{aligned}
\left (N_{stop}/N_{through}\right )_{Data}^\mu& =& 0.23\pm0.02 \\
\left (N_{stop}/N_{through}\right )_{MC}^\mu &=& 0.37\pm 0.05.\end{aligned}$$ The ratio of data to MC is $\sim 0.6$. With these results, the probability that they correspond to the no-oscillation scenario is rather low, $P\sim 10^{-4}-10^{-3}$ [@SKmoriond2000]. [*Analysis.*]{} Oscillation parameters are measured from several samples (FC, PC, up-stop, up-through), and all samples are found to be consistent with each other. The oscillation hypothesis fits the angular distribution well, since there is a large difference in the neutrino path-length between upward-going ($\sim 10^{4}$ km) and downward-going ($\sim 20$ km) neutrinos: a zenith angle dependence of $R$ can be interpreted as clear-cut evidence for neutrino oscillations. Among the different possibilities, the most obvious solution to the observed discrepancy is $\nu_\mu\to\nu_\tau$ flavor oscillations.
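A rough two-flavor estimate, with purely illustrative parameter values rather than fitted ones, shows why this interpretation works. The standard vacuum survival probability is
$$P(\nu_\mu\to\nu_\mu)=1-\sin^2 2\theta\,\sin^2\!\left(1.27\,\frac{\Delta m^2[{\rm eV^2}]\; L[{\rm km}]}{E[{\rm GeV}]}\right).$$
Taking for illustration $\Delta m^2\approx 3\times 10^{-3}$ eV$^2$, $\sin^2 2\theta\approx 1$ and $E\approx 1$ GeV, downward-going neutrinos ($L\sim 20$ km) have an oscillation phase of only $\sim 0.08$, so $P\simeq 1$, while for upward-going neutrinos ($L\sim 10^{4}$ km) the phase is large, the oscillatory term averages to $1/2$, and $P\simeq 0.5$. The expected up-down ratio is then $\sim 0.5$, in agreement with the measured value $0.52\pm 0.07$ quoted above.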
$\nu_\mu\to\nu_e$ oscillations, however, do not fit so well, and they would also conflict with laboratory measurements \[CHOOZ, see Figs.(\[fCHOOZ\]-\[ATM6\])\]. Oscillations into sterile neutrinos, $\nu_\mu\to \nu_s$, could also in principle be a good explanation consistent with the data. Different tests have been performed to distinguish $\nu_\mu\to\nu_\tau$ from $\nu_\mu\to\nu_s$ oscillations: a possible test of $\nu_\mu\to \nu_s$ vs $\nu_\mu\to \nu_\tau$ oscillations is provided by the study of the $\pi^0/e$ ratio [@nakahata]. In the $\mu-\tau$ case, the $\pi^0$ production due to neutral current interactions does not change, so the $\pi^0/e$ ratio is the same as the expectation without neutrino oscillations. In the sterile case this ratio should be smaller ($\sim$ 83%) than expected, because of the absence of $\nu_s$ neutral current interactions. The experimental identification of $\pi^0$ events can be performed by studying their invariant mass distributions and comparing with Monte Carlo simulations. Present results conclude that the $\nu_\mu\to\nu_s$ oscillation hypothesis is disfavored at the $99\%$ CL. Evidence for oscillations amounts to evidence for non-zero neutrino mass within the standard neutrino theory. The allowed neutrino oscillation parameter regions obtained by Kamiokande and SK from different analyses are shown in Fig.(\[ATM6\]). Under the interpretation as $\nu_\mu\to\nu_\tau$ oscillations, the best fit gives $\Delta m^2\sim 2-5 \times 10^{-3}$ eV$^2$ and a very large mixing angle, $\sin^2 2\theta> 0.88$. Barring fine tuning, this suggests a neutrino mass of the order of 0.1 eV. Such a mass implies a neutrino energy density in the universe of about 0.001 of the critical density, which is too small to have cosmological consequences. This is of course a very rough argument: specific models may quite naturally allow larger neutrino masses.

Global multi-fold analysis and the necessity for sterile neutrinos.
-------------------------------------------------------------------

From the individual analyses of the data available from neutrino experiments, it follows that there exist three different scales of neutrino mass squared differences and two different ranges of small and maximal mixing angles, namely: $$\begin{aligned}
\Delta m_{sun}^2&\sim 10^{-5}-10^{-8} \ eV^2\ ,& \sin^2 2\theta\sim 7\times 10^{-3} \ (MSW,RSFP), \\
&\sim 10^{-10} \ eV^2,& \sin^2 2\theta\sim 0.8-0.9\ (Vac.); \\
\Delta m_{Atm}^2&\sim 5\times 10^{-3} \ eV^2,&\ \sin^2 2\theta\sim 1,\\
\Delta m_{LSND}^2&\sim 3\times 10^{-1}-2\ eV^2,&\ \sin^2 2\theta\sim 10^{-3}-10^{-2}.
\label{e1001}\end{aligned}$$ Fortunately, for the sake of simplicity, the neutrino mass scale relevant for HDM is roughly similar to the LSND one, so its introduction would not change any further conclusion. But for the same reason, a definitive refutation of the LSND results by KARMEN or future experiments would not by itself completely simplify the task of finding a consistent framework for all the neutrino phenomenology. Any combination of experimental data which involves only two of the mass scales can be fitted within a three-family scenario, but solving the solar and atmospheric problems simultaneously generally requires some unwelcome fine tuning of parameters at the $10^{-2}$ level. The detailed analysis of Ref.[@BILENKY] finds, for example, that solutions with three neutrino families which are compatible with the results from SBL inclusive experiments, LSND and the solar neutrino experiments are possible.
Moreover, it has been shown that it is possible to obtain, under simple assumptions but without a detailed fit of all possible parameters, very concrete expressions for the $3\times 3$ mixing matrix (see for example the early Ref.[@tor-quasi]), of which the so-called bi-maximal model of Ref.[@bimaximal] is a particular case. The real problem arises when one adds the results from CHOOZ, which rule out large atmospheric $\nu_\mu\to \nu_e$ transitions, and the zenith-angle dependence of the SK atmospheric data: one is then led to consider schemes with four massive neutrinos, including a light sterile neutrino. Among the numerous possibilities, a complete mass hierarchy of four neutrinos is not favored by existing data [@BILENKY], nor are four-neutrino mass spectra with one neutrino mass separated from a group of three close masses by the "LSND gap" ($\sim$ 1 eV). One is left with two possible options in which two pairs of close masses are separated by a $\sim 1$ eV gap: $$\begin{aligned}
&(A)&\ \underbrace{\overbrace{\nu_e\to\nu_s: \ m_1< m_2}^{sun}<< \overbrace{\nu_\mu\to \nu_\tau:\ m_3< m_4}^{atm}}_{LSND\sim 1 eV} \\[0.1cm]
&(B)&\ \underbrace{\overbrace{\nu_e\to \nu_\tau: \ m_1< m_2}^{sun}<< \overbrace{\nu_\mu\to\nu_s:\ m_3< m_4}^{atm}}_{LSND\sim 1 eV}.\end{aligned}$$ The two models would be distinguishable through a detailed analysis of future solar and atmospheric experiments. For example, they may be tested by combining the future precise recoil electron spectrum in $\nu e\to \nu e$ scattering measured in SK and SNO (see Ref.[@SNO] for experimental details and Refs.[@bah10] for the expected performance) with the SNO spectrum measured in CC absorption. The SNO experiment (a 1000 t heavy water underground detector) will measure the rates of the charged current (CC) and neutral current (NC) reactions induced by solar neutrinos in deuterium: $$\begin{aligned}
&& \nu_e + d \rightarrow p+p+e^-\quad({\rm CC\ absorption})\nonumber \\
&& \nu_x + d \rightarrow p+n+\nu_x\quad({\rm NC\ dissociation}),
\label{reactionNC}\end{aligned}$$ including the determination of the electron recoil energy in the CC reaction. Only the more energetic $^8$B solar neutrinos are expected to be detected, since the expected SNO threshold for CC events is an electron kinetic energy of about 5 MeV and the physical threshold for NC dissociation is the binding energy of the deuteron, $E_b= 2.225$ MeV. If model (B) is true one expects $\phi^{CC}/\phi^{NC}\sim 0.5$, while in model (A) the ratio would be $\sim 1$. The schemes (A) and (B) give different predictions for the neutrino mass measured in tritium $\beta$-decay and for the effective Majorana mass observed in neutrinoless double $\beta$ decay. Respectively, we have $|\langle m\rangle| < m_4$ (A) or $|\langle m\rangle| \ll m_4$ (B). Thus, if scheme (A) is realized in nature, this kind of experiment can see the effect of the LSND neutrino mass. From the classical LEP requirement $N_\nu^{act}=2.994\pm 0.012$ [@PDG98], it is clear that the fourth neutrino should be an $SU(2)\otimes U(1)$ singlet in order to ensure that it does not affect the invisible Z decay width. The presence of additional weakly interacting light particles, such as a light sterile $\nu_s$, is constrained by BBN, since they would enter into equilibrium with the active neutrinos via neutrino oscillations (see Section \[sectioncosmos\]). The limit $\Delta m^2 \sin^2 2\theta< 3\times 10^{-6}$ eV$^2$ should be fulfilled in principle.
However, systematic uncertainties in the derivation of the BBN bound make it too unreliable to be taken at face value, and it can eventually be evaded [@foot]. Taking the most restrictive option (giving $N_\nu^{eff}< 3.5$), only scheme (A) is allowed, the one where the sterile neutrino is mainly mixed with the electron neutrino. In the least restrictive case ($N_\nu^{eff}< 4.5$) both types of models would be allowed.

Conclusions and future perspectives.
====================================

The theoretical challenges that the present phenomenological situation offers are at least two: to understand the origin and, very particularly, the lightness of the sterile neutrino (apparently requiring a radiatively generated mass), and to account for the maximal neutrino mixing indicated by the atmospheric data, which is at odds with what one would expect from considerations of the mixing in the quark sector. Actually, the existence of light sterile neutrinos could even be beneficial in diverse astrophysical and cosmological scenarios (supernova nucleosynthesis, hot dark matter, lepton and baryon asymmetries, for example). In recent years different indications in favor of nonzero neutrino masses and mixing angles have been found. This evidence includes four solar experiments clearly demonstrating an anomaly compared to the predictions of the Standard Solar Model (SSM), and a number of atmospheric experiments, including a high-statistics, well-calibrated one, demonstrating a quite different anomaly at the Earth scale. One could argue that, while we are already beyond the stage of having only "circumstantial evidence for new physics", we are still a long way from having "conclusive proof of new physics". Evidence for new physics does not mean the same as evidence for neutrino oscillations, but there exists a significant case for neutrino oscillations, and hence neutrino masses and mixing, as "one", indeed the most serious, candidate explanation of the data. Non-oscillatory alternative explanations of the neutrino anomalies are also possible, but none of them is especially elegant or economical (see Ref.[@pakvasa] for a recent summary and references therein): in any case they involve non-zero neutrino masses and mixing. As a result, even if neutrinos have masses and do mix, the observed neutrino anomalies may be a manifestation of a complicated mixture of effects due to oscillations and effects due to other exotic new physics. The dominant effect would not necessarily be the same in each energy or experimental domain. The list of effects due to exotic physics which have been investigated to some degree in the literature includes [@pakvasa]: oscillations of massless neutrinos via FCNC and non-universal neutral currents (NUNC), which have been considered as a feasible explanation for the solar neutrino observations [@exotic1] and for atmospheric neutrinos [@exotic3]. The possibility of decaying neutrinos has also been studied as a possible solution in the solar case [@exotic2] and in the atmospheric case [@exotic4]. The LSND results could be accounted for without oscillations provided that the conventional muon decay modes are accompanied by rare modes including standard and/or sterile neutrinos. In this case the energy or distance dependence, typical of the oscillation explanation, would be absent [@exotic0].
Finally, explanations of the neutrino experimental data which involve alterations of the basic framework of known physics (quantum theory and relativity) have been proposed. A signature similar to that of neutrino decay would be produced by a huge, non-standard quantum decoherence rate along the neutrino propagation [@exotic5]. Proposed explanations involving relativity effects include gravitationally induced oscillations (see, for example, Ref.[@fogligrav]) or violation of Lorentz invariance [@exotic6]. In the first case it has been suggested that different flavors couple differently to the gravitational potential; in the second case the existence of a different maximum speed for each neutrino species is claimed. In both cases, rather than the usual dependence on $L/E$, one finds an $L\times E$ dependence as a characteristic signature. Of course, one possible alternative is that one or more of the experiments will turn out to be wrong. This is possible, probable and even desirable from a phenomenologist's point of view, because his/her task would be considerably simplified, as we have seen above. What is rather improbable, with all the evidence accumulated by now, is that all the experiments turn out to be simultaneously wrong. Many neutrino experiments are taking data, are about to start, or are under preparation: solar neutrino experiments (SNO and Borexino are of major interest, also HERON, HELLAZ, ICARUS, GNO and others); LBL reactor (CHOOZ, Palo Verde, KamLand) and accelerator experiments (K2K, MINOS, ICARUS and others); SBL experiments (LSND, KARMEN, BooNe and many others). The important problem for any next-generation experiment is to find specific and unambiguous experimental probes showing that the "anomalies" which have been found are indeed signals of neutrino oscillations, and to distinguish among the different neutrino oscillation possibilities (this is especially important in the solar case). Among these probes, we could include:

- Perhaps the most direct test of a deviation from the SM: to measure the ratio of the flux of $\nu_e$'s (via CC interactions) to the flux of neutrinos of all types ($\nu_e + \nu_\mu + \nu_\tau$, determined by NC interactions). This measurement will hopefully be done by the SNO experiment in the near future \[see Fig.(\[FUT3\])\].

- A statistically significant demonstration of an energy-dependent modification of the shape of the electron neutrino spectrum arriving at Earth. Besides observing a distortion in the shape of the $^8$B neutrino spectrum, it will be very important to make direct measurements of the $^7$Be (Borexino experiment) and pp (HERON, HELLAZ) neutrinos.

- An improved observation of a zenith angle effect in atmospheric experiments or its equivalent, a day-night effect, in solar experiments.

- And last, but by no means least, independent confirmation by one or more accelerator experiments.

There is a high probability that in the near future we will know much more than we do now about the fundamental properties of neutrinos: their masses, their mixings and their nature, whether Dirac or Majorana. The authors are supported by research grants from the Spanish Ministerio de Educacion y Cultura. [10]{} W. Pauli, in Neutrino Physics, p.1, edited by K. Winter, Cambridge University press, 1991. For further details in neutrino history see the D. Verkindt web page, http://wwwlapp.in2p3.fr/neutrinos/aneut.html. A very complete account of neutrino physics including an always-updated compilation of experimental results appears in J. Peltoniemi, http://cupp.oulu.fi/neutrino//index.html C. L. Cowan, F.
Reines, F.B. Harrison, H.W. Kruse and A.D. McGuire, [*Science*]{} [**124**]{} (1956) 103. Also in Neutrino Physics, p.41 edited by K. Winter; F. Reines and C. L. Cowan, Phys. Rev. [**113**]{}, (1959) 273 G. Danby, J. M. Gaillard, K. Goulianos, L. M. Lederman, N. Mistry, M. Schwartz and J. Steinberger, . E872 DONUT collaboration, Fermilab press release, July 21, 2000. P. Langacker. Talk given at 4th Intl. Conf. on Physics Beyond the Standard Model, Lake Tahoe, CA, 13-18 Dec. 1994, hep-ph/9503327; Published in Trieste HEP Cosmol.1992:0487-522. [*Ibid.*]{}, , hep-ph/9811460. [*Ibid.*]{}., Talk given at 1th Int. Workshop on Weak Interactions and neutrinos (WIN99), Cape Town, SA, 24-30 Jan 1999, hep-ph/9905428. M. Fukugita. YITP/K-1086. Invited Talk presented at Oji International Seminar ”Elementary Processes in Dense Plasmas”. July 1994. J.W.F. Valle, hep-ph/9809234. Published in “Proc. of New Trends in Neutrino Physics”, May 1999, Ringberg castle, Tegernsee, Germany. P. Ramond, hep-ph/9809401, . F. Wilczek, hep-ph/9809509, . S.M. Bilenky et al. Summary of the Europhysics neutrino Oscillation Workshop Amsterdam, The Netherlands, 7-9 Sep 1998, hep-ph/9906251. S.M. Bilenky, C. Giunti and C.W. Kim, hep-ph/9902462, Int.J.Mod.Phys.[**A15**]{}(2000) 625. S.M. Bilenky, C. Giunti and W. Grimus, hep-ph/9812360, Prog. Part. Nucl. Phys. 43 (1999) 1. . J.M. Lattimer, Nucl. Phys.[**A478**]{} (1988) 199c. B. Jegerlehner, F. Neubig, G. Raffelt,. H.T. Janka, E. Mueller, . H.T. Janka, Frontier objects in astrophysics and particle physics, Vulcano 18-23 May 1992. Procs. 345-374 (Edited by F. Giovannelli). L. Nellen, K. Mannheim, P.L. Biermann, . F. Halzen, E. Zas, . S. Sahu, V.M. Bannur, hep-ph/9803487, . W. Bednarek, R.J. Protheroe, astro-ph/9802288. S.L. Glashow, Nucl. Phys. [**22**]{} (1961) 597. S. Weinberg, . R. Bertlmann and H. Pietschmann, . J. Ellis et Al., Ann. Rev. Nucl. Part. Sci. 32 (1982) 443. C. Caso et al., Eur. Phys. J. C 3, 1 (1998). D. E. Groom [*et al.*]{}, Eur. Phys. J.  [**C15**]{} (2000) 1. B. D. Fields and K.A. Olive, . G. Steigman, D.N. Schramm and J. Gunn, . K.A. Olive and D. Thomas, Astro. Part. Phys. 7 (1997) 27. C.J. Copi, D.N. Schramm and M.S. Turner, . G. Steigman, K.A. Olive and D.N. Schramm, . K.A. Olive, D.N. Schramm and G. Steigman, . P. Depommier article in [@klapdor]. S.M. Bilenky et al., . N. Irges, S. Lavignac and P. Ramond, . J. Pati and A. Salam, . H. Georgi and S. Glashow, . H. Georgi, in Particles and Fields, 1974, ed. C. Carlson (AIP press, NY, 1975) S. Dimopoulos and F. Wilczek, Proc. 19th Course of the Intl. School of Subnuclear Phys., Erice, Italy, 1981, ed. A. Zichichi (Plenum, NY, 1983). K.S. Babu and S.M. Barr, . K.S. Babu, J. Pati and F. Wilczek, hep-ph/9812538, . M. Gell-mann, P. Ramond and R. Slansky, in Supergravity, ed. F. van Nieuwenhuizen and D. Freedman (North Holland, Amsterdam, 1979). S. Weinberg, . R. N. Mohapatra and G. Senjanovic, . S. A. Bludman, D.C. Kennedy and P. Langacker, . [*Ibid.*]{}, . A. Zee, ; [**B161**]{} (1985) 141; . K. Benakli, Y. Smirnov. . J.T. Peltoniemi, D. Tommasin and J.W.F. Valle. . J.T. Peltoniemi, J.W.F. Valle. . G.B. Gelmini and M. Roncadelli, . H. Georgi et Al., . Y. Chikashige, R.N. Mohapatra and R.D. Peccei, . M. Fukugita, T. Yanagida, . K. S. Babu, R. N. Mohapatra, . [*Ibid.*]{}, . [*Ibid.*]{}, R. N. Mohapatra. . J. C. Pati and A. Salam, ; R. N. Mohapatra and J. C.Pati, ; R. N. Mohapatra and G. Senjanovic, . See also reviews in: R. N. Mohapatra, Unification and Supersymmetry: The frontiers of Quark-lepton physics. 
Springer Verlag. N.Y. 1986. Advance Series on Directions in High Energy Physics.- Vol.3 CP violation. Editor: C. Jarlskog. World Scientific, Singapore, 1989. R.N. Mohapatra article in [@klapdor]. M. A. Diaz, hep-ph/9711435; hep-ph/9712213; J. C. Ramao, hep-ph/9712362. J. W. F. Valle, hep-ph/9808292; hep-ph/9906378. E. Witten, . R. N. Mohapatra and J. W. F. Valle, . D.O. Caldwell and R.N. Mohapatra, . Z.G. Berezhiani and R.N. Mohapatra, . J.W.F. Valle. . N. Arkani-Hamed, S. Dimopoulos and G. Dvali, ;\ I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, ;\ N. Arkani-Hamed, S. Dimopoulos and G. Dvali, . N. Arkani-Hamed, S. Dimopoulos, G. Dvali and J. Marchel-Russell, hep-ph/9811448.\ G. Dvali and A. Yu. Smirnove, . R. N. Mohapatra, S. Nandi, and A. Pérez-Lorenzana, . C. Froggatt and H. B. Nielson, ;\ M. Leurer, Y. Nir and N. Seiberg, . L. Ibanez, G.G. Ross, . P. Binetruy, P. Ramond, . J. K. Elwood, N. Irges and P. Ramond, . M. Fukugita, C.J. Hogan, in [*Structure Formation in the Universe*]{}, Proc. of the NATO ASI, Cambridge, 1999. astro-ph/0005060 M. Fukugita, C.J. Hogan, in [@pdg98] (2000) pp.136-138. J.R. Primack, M.A.K. Gross, astro-ph/0007165; astro-ph/0007187. S. Dodelson and L. Knox, Phys. Rev. Lett.  [**84**]{} (2000) 3523 \[astro-ph/9909454\]. W. L. Freedman, Phys. Scripta [**T85**]{} (2000) 37 \[astro-ph/9905222\]. N. Kaiser, astro-ph/9809341. S. Burles and D. Tytler, Astrophys. J. 499 (1998) 699; ibid. 507 (1998) 732. G.F. Smoot et Al., Astropys. J. 396 (1992) L1. E.L. Wright et Al., Astropys. J. 396 (1992) L13. J.R. Primack, astro-ph/9707285, astro-ph/9610078. E. Gawiser, J. Silk, astro-ph/9806197, Science, 280, 1405 (1998). J. Bond, G. Efstathiou, . S. Dodelson, G. Gyuk and M. Turner, . H.B. Kim and J.E. Kim, . M. White, G. Gelmini and J. Silk, . J. A. Peacock, “Cosmological Physics”, Cambridge University Press, 1999. R. Foot, R.R. Volkas, . R. Jeannerot, S. Khalil, G. Lazarides, and Q. Shafi, hep-ph/0002151, JHEP (2000) 0010. M. Fukugita and T. Yanagita, ;\ W. Buchmüller and M. Plümacher, ;\ G. Lazarides and Q. Shafi, .\ For a recent review in the context of SUSY hybrid inflation see G. Lazarides, hep-ph/9904428. G. Lazarides, Q. Shafi and N. D. Vlachos, . L. Wolfenstein, ; . S. P. Mikheyev and A. Yu. Smirnov, Yad. Fiz. 42 (1985) 1442; Nuovo Cim. 9C (1986)17 V. M. Lobashev [*et al.*]{}, . V. M. Lobashev [*et al.*]{}, [*Prepared for 2nd International Conference on Dark Matter in Astro and Particle Physics (DARK98), Heidelberg, Germany, 20-25 Jul 1998*]{}. V. M. Lobashev, Talk presented at Neutrino ’98 [@nu98]. C. Weinheimer [*et al.*]{}, . V. M. Lobashev, in [*NONE*]{} Phys. Atom. Nucl.  [**63**]{} (2000) 962. J. Bonn [*et al.*]{}, in [*NONE*]{} Phys. Atom. Nucl.  [**63**]{} (2000) 969. J. Bonn [*et al.*]{}, in [*NONE*]{} Nucl. Phys. Proc. Suppl.  [**87**]{} (2000) 271. L. Stephenson et Al., Int. Jour. Modern Physics A13-16 (1998) 2765. C. Daum et Al.. ; . OPAL Coll. K. Ackerstaff et al., hep-ex/9806035, Eur. Phys. J. C5 (1998) 229-237. ALEPH Collaboration. R. Barate et Al.. Eur. Phys. J. C2 (1998) 395-406. L. Baudis et al. (Heidelberg-Moscow collaboration), hep-ex/9902014, . M. Gunther et al., . H.V. Klapdor-Kleigrothaus, hep-ex/9907040; L. Baudis et Al., GENIUS collab., hep-ph/9910205. L. Baudis, A. Dietz, G. Heusser, H. V. Klapdor-Kleingrothaus, B. Majorovits and H. Strecker \[GENIUS Collaboration\], in [*NONE*]{} astro-ph/0005568. B. Achkar et al., . A. Romosan et al., . D. Naples et Al. hep-ex/9809023, . L. Borodovsky et al., Phys. Rev. Lett. 68, 274 (1992). M. 
Apollonio [*et al.*]{}, . M. Apollonio et Al. (CHOOZ coll.), hep-ex/9907037, . F. Boehm [*et al.*]{}, Phys. Rev.  [**D62**]{} (2000) 072002 \[hep-ex/0003022\]. C. Athanassopoulos et al. (LSND Coll.) ; ; Phys. Rev. [**C54**]{} (1996) 2685; Phys. Rev. [**C58**]{} (1998) 2489 (nucl-ex/9706006); (nucl-ex/9709006). D.H. White, Talk presented at Neutrino ’98 [@nu98]. S.J. Yellin, hep-ex/9902012. G. Mills, Proc. to Les Rencontres de Moriond 1999, 13. - 20. March 1999, Les Arc 1800. I. Stancu \[LSND Collaboration\], in [*NONE*]{} Nucl. Phys. Proc. Suppl.  [**85**]{} (2000) 78. E. D. Church \[LSND Collaboration\], in [*NONE*]{} Nucl. Phys.  [**A663**]{} (2000) 799. J. Kleinfeller \[KARMEN Collaboration\], in [*NONE*]{} Nucl. Phys. Proc. Suppl.  [**87**]{} (2000) 281. K. Eitel \[KARMEN Collaboration\], in [*NONE*]{} hep-ex/0008002. C. Oehler \[KARMEN Collaboration\], in [*NONE*]{} Nucl. Phys. Proc. Suppl.  [**85**]{} (2000) 101. T. E. Jannakos \[KARMEN Collaboration\], in [*NONE*]{} Nucl. Phys. Proc. Suppl.  [**85**]{} (2000) 84. Some additional references relative to the KARMEN anomaly. Theoretical descriptions: Barger et Al., . Govaerts et Al., . Gninenko and Krasnikov, . PSI measurements: Daum et Al., . Bilger et Al., . Bilger et Al., . B. Armbruster et al. (KARMEN Collaboration), ; Phys. Rev. [**C 57**]{} (1998) 3414; . M. Steidl (KARMEN Collaboration). Internal report unpublished. BOONe proposal: http://www.neutrino.lanl.gov/BooNE. M. Guler et al. ”Letter of intent search for oscillation $\nu_\mu\to\nu_e$ at the CERN PS”. CERN-SPSC/97-21, SPSC/I 216, October 10, 1997. Y. Itow \[Super-Kamiokande and K2K Collaborations\], [*In \*La Thuile 1999, Results and perspectives in particle physics\* 3-20*]{}. A. Suzuki [*et al.*]{} \[K2K Collaboration\], hep-ex/0004024, Nucl.Instrum.Meth. [**A453**]{}(2000) 165. Y. Oyama \[K2K Collaboration\], hep-ex/0004015. H. W. Sobel \[K2K Collaboration\], [*In \*Venice 1999, Neutrino telescopes, vol. 1\* 351-360*]{}. S. Mine \[K2K Collaboration\], [*Given at International Workshop on JHF Science (JHF 98), Tsukuba, Japan, 4-7 Mar 1998*]{}. M. Sakuda \[K2K Collaboration\], KEK-PREPRINT-97-254 [*Submitted to APCTP Workshop: Pacific Particle Physics Phenomenology (P4 97), Seoul, Korea, 31 Oct - 2 Nov 1997*]{}. Y. Oyama \[K2K collaboration\], hep-ex/9803014. P. Anselmann et al., GALLEX Coll., . W. Hampel et al., GALLEX Coll., . T.A. Kirsten, Prog. Part. Nucl. Phys. 40 (1998) 85-99. W. Hampel et al., (GALLEX Coll.) . M. Cribier, . W. Hampel et al., (GALLEX Coll.) . W. Hampel et al., (GALLEX Coll.) . A.I. Abazov et al. (SAGE Coll.), . D.N. Abdurashitov et al. (SAGE Coll.), . J.N. Abdurashitov et al., (SAGE Coll.), Phys. Rev. [**C60**]{} (1999) 055801; astro-ph/9907131. J.N. Abdurashitov et al., (SAGE Coll.), ; astro-ph/9907113. R. Davis, Prog. Part. Nucl. Phys. 32 (1994) 13. B.T. Cleveland et al., (HOMESTAKE Coll.) . B.T. Cleveland et al., (HOMESTAKE Coll.) Astrophys. J. 496 (1998) 505-526. Y. Fukuda et Al. (SK Collaboration), hep-ex/9812011, . Y. Fukuda et Al. (SK Collaboration), hep-ex/9805021, , Erratum-. Y. Suzuki (Kamiokande Collaboration), Talk given at the 6th International Workshop on Neutrino Telescopes, Venice, February 22-24,1994. Vistas on XXIst Century Particle Physics, Aspen Winter Conference on Particle Physics, January 21, 2000. Les Rencontres de Moriond 1999, 13.-20. Rencontres de Moriond: Electroweak Interactions and Unified Theories, Les Arcs 1800 (France), March 10-17 2000. 31th Intnl. 
Conference on High-Energy Physics (ICHEP2000), Vancouver, British Columbia, Canada, 24-30 Jul 2000. K. Martens in . Y. Takeuchi in [@ichep00]. K. Eitel, Proc. to Lake Louise Winter Institute 1999 14. - 20. Feb. 1999, Lake Louise. T. Jannakos, Proc. to Les Rencontres de Moriond 1999, 13.-20. March 1999, Les Arc 1800. M. Steidl, Proc. to Les Rencontres de Physique de la Valle Aoste 1999, 28. Feb.- 06. March 1999, La Thuile. For a up-date list of references see KARMEN WWW page: http://www-ik1.fzk.de/www/karmen/karmen\_e.html. M.B. Smy. DPF’99 conference; hep-ex/9903034. M. Shiozawa in [@moriond00]. T. Toshito, Atmospheric neutrino Results from SK (unpublished). J.N. Bahcall, P.I. Krastev and A.Y. Smirnov, hep-ph/9807216,. J.N. Bahcall, P.I. Krastev and A.Y. Smirnov, hep-ph/9905220, . E. Torrente-Lujan, . E. Torrente-Lujan,. E. Torrente-Lujan, . A.A. Bykov, V.Y. Popov, A.I. Rez, V.B. Semikoz, D.D. Sokoloff, hep-ph/9808342, . V.B. Semikoz, E. Torrente-Lujan, . E. Torrente-Lujan, Phys. Lett.  [**B494**]{} (2000) 255 \[hep-ph/9911458\]. T.K. Gaisser et al., hep-ph/9608225, . T.K. Gaisser, hep-ph/0001027, . T.K. Gaisser, Talk given at NEUTRINO98 (see Ref.[@nu98]), hep-ph/9811315, . M. Honda, Talk given at NEUTRINO98 (see Ref.[@nu98]), hep-ph/9811504, . K. Kasahara et al., Prepared for ICRC99 (See Ref.[@ICRC99]). Frejus Collaboration, Ch. Berger et al., . IMB Collaboration, D. Casper et al., . NUSEX collaboration, M. Aglietta et al., Europhys. Lett. 8 (1989) 611. Kamiokande Collaboration, H.S. Hirata et al., . Kamiokande Collaboration, Y. Fukuda et al., . Soudan Collaboration, W.W.M. Allison et al., . M. Ambrosio et al., MACRO coll., hep-ex/9807005, . Y. Fukuda et al. (SuperKamiokande Coll.), hep-ex/9803006,; hep-ex/9807003, . See also Refs. [@sk9812; @sk9805; @suz1]. S.M Bilenky, C. Giunti, W. Grimus. hep-ph/9805411. E. Torrente-Lujan, . V. Barger, S. Pakvasa, T.J. Weiler and K. Whisnant, . C. Giunti, hep-ph/9810272, . J. R. Klein \[SNO Collaboration\], [*In \*Venice 1999, Neutrino telescopes, vol. 1\* 115-125*]{}. J. Boger [*et al.*]{} \[SNO Collaboration\], Nucl. Instrum. Meth.  [**A449**]{} (2000) 172 \[nucl-ex/9910016\]. A. B. McDonald \[SNO Collaboration\], Nucl. Phys. Proc. Suppl.  [**77**]{} (1999) 43. J. N. Bahcall, P. I. Krastev, and A. Yu. Smirnov, hep-ph/0002293, Phys. Rev. [**D62**]{}(2000) 093004.\ N. Bahcall, P. I. Krastev, and A. Yu. Smirnov, hep-ph/9911248, . S. Pakvasa, hep-ph/9905426. invited talk at the “8th Intl. Symposium on Neutrino telescopes”, Venice, Feb. 1999. E. Roulet, . M.M. Guzzo, A. Masiero and S. Petcov, . J.N. Bahcall and P. Krastev, hep-ph/9703267. S. Pakvasa and K. Tennakone, . Z. Berezhiani, G. Fiorentini, A. Rossi and M. Moretti, JETP Lett 55 (1992) 151. A. Acker and S. Pakvasa, . P.F. Harrison, D.H. Perkins and W.G. Scott, hep-ph/9904297, . V. Barger, J.G. Learned, S. Pakvasa and T.J. Weiler, . A. Joshipura and S. Rindani, . S. Bergmann and Y. Grossman, . L.M. Johnson and D. McKay, . S. Tremaine, J.E. Gunn , . O.E. Gerhard, D.N. Spergel, Astrophys. J. 389, L9, (1992). Y. Grossman and M. P. Worah, hep-ph/9807511. G.L. Fogli, E. Lisi, A. Marrone and G. Scioscia, . S. Coleman and S.L. Glashow, . S. Glashow, A. Halprin, P.I. Krastev, C.N. Leung and J. Pantaleone, . Neutrinos. Edited by H.V. Klapdor, Springer-Verlag, Berlin, 1988. 26th International Cosmic Ray Conference (ICRC 99), Salt Lake City, Utah, 17-25 Aug 1999. 18th International Conference on Neutrino Physics and Astrophysics (NEUTRINO 98), Takayama, Japan, 4-9 Jun 1998. 
6th International Workshop on Topics in Astroparticle and Underground Physics (TAUP99) Paris, France, 6-10 Sept. 1999. E. Ma and P. Roy, ; E. Ma, . G. Lazarides, R. Schaefer and Q. Shafi, Phys. Rev. D 56 (1997) 1324. M. Yu. Khlopov and A. D. Linde, Phys. Lett. B 138 (1984) 265 ;\ J. Ellis, J. E. Kim and D. Nanopoulos, Phys. Lett. B 145 (1984) 181. C. Giunti, hep-ph/9802201 (unpublished). Y. F. Wang, L. Miller and G. Gratta, Phys. Rev. [**D62**]{} (2000) 013012 \[hep-ex/0002050\]. F. Boehm [*et al.*]{}, Phys. Rev. Lett. [**84**]{} (2000) 3764 \[hep-ex/9912050\]. J. Busenitz \[Palo Verde Collaboration\], [*Prepared for 29th International Conference on High-Energy Physics (ICHEP 98), Vancouver, British Columbia, Canada, 23-29 Jul 1998*]{}. F. Boehm [*et al.*]{} \[Palo Verde Collaboration\], Nucl. Phys. Proc. Suppl. [**77**]{}, 166 (1999). F. Boehm [*et al.*]{} \[Palo Verde Collaboration\], Prog. Part. Nucl. Phys. [**40**]{}, 253 (1998). F. Boehm [*et al.*]{} \[Palo Verde Collaboration\], STANFORD-HEP-96-04 [*Talk given at 17th International Conference on Neutrino Physics and Astrophysics, Helsinki, Finland, 13-20 Jun 1996*]{}. T. Kajita, talk given at XVIIIth International Conference on [@nu98]. H. Barth [*et al.*]{}, Nucl. Phys. Proc. Suppl. [**77**]{} (1999) 321. C. Weinheimer, Talk presented at Neutrino ’98 [@nu98]. B. Armbruster, Talk presented at the XXXIII$^{nd}$ Rencontres de Moriond: Electroweak Interactions and Unified Theories, Les Arcs 1800 (France), March 14-21 1998. K. Eitel and B. Zeitnitz, Talk presented at Neutrino ’98, [@nu98] (Nucl. Phys. Proc. Suppl. 77:212-219 (1999), hep-ex/9809007). B. Zeitnitz, Talk presented at Neutrino ’98, [@nu98]. M. Nakahata (for the SK collab.), Nucl. Phys. B (Proc. Suppl.) 76 (1999) 425-434. N. Hata and P. Langacker, Phys. Rev. [**D56**]{} (1997) 6107 \[hep-ph/9705339\]. J.N. Bahcall and M.H. Pinsonneault, Rev. Mod. Phys. [**67**]{} (1995) 781. J. Christensen-Dalsgaard, Proc. of the 18th Texas Symposium on Relativistic Astrophysics, Chicago, 15-20 Dec 1996, astro-ph/9702094. V. Berezinsky, 25th Intl. Cosmic Ray Conference, Durban, 28 July - 8 August 1997; astro-ph/9710126. J. N. Bahcall and P. I. Krastev, Phys. Lett. [**B436**]{} (1998) 243 \[hep-ph/9807525\]. G. Fiorentini, V. Berezinsky, S. Degl’Innocenti and B. Ricci, Phys. Lett. [**B444**]{} (1998) 387 \[astro-ph/9810083\]. Y. Fukuda [*et al.*]{} \[Super-Kamiokande Collaboration\], Phys. Rev. Lett. [**81**]{} (1998) 1562 \[hep-ex/9807003\]. M. C. Gonzalez-Garcia, H. Nunokawa, O. L. Peres and J. W. Valle, Nucl. Phys. [**B543**]{} (1999) 3 \[hep-ph/9807305\]. T. Kajita \[Super-Kamiokande Collaboration\], in [*NONE*]{} Nucl. Phys. Proc. Suppl. [**77**]{} (1999) 123 \[hep-ex/9810001\]. J.N. Bahcall, E. Lisi, Phys. Rev. D 54, 5417 (1996). M.C. Gonzalez-Garcia, M.M. Guzzo, P.I. Krastev, H. Nunokawa, O. Peres, V. Pleitez, J. Valle and R. Zukanovich Funchal, Phys. Rev. Lett. 82 (1999) 3202. G.L. Fogli, E. Lisi and A. Marrone, Phys. Rev. D59 (1999) 117303. R. Foot, C.N. Leung and O. Yasuda, Phys. Lett. B443 (1998) 185.

Figures {#figures .unnumbered}
=======

[^1]: *Invited article prepared for the Journal of the Egyptian Mathematical Society.*
---
abstract: 'In this paper, we introduce a graph structure, called the non-zero component graph, on finite dimensional vector spaces. We show that the graph is connected and find its domination number and independence number. We also study the inter-relationship between vector space isomorphisms and graph isomorphisms, and it is shown that two graphs are isomorphic if and only if the corresponding vector spaces are so. Finally, we determine the degree of each vertex in case the base field is finite.'
address: |
    Department of Mathematics,\
    St.Xavier’s College, Kolkata, India.\
    angsumandas@sxccal.edu
author:
- Angsuman Das
title: 'Non-Zero Component Graph of a Finite Dimensional Vector Space[^1]'
---

basis, independent set, graph 05C25, 05C69

Introduction
============

The study of graph theory, apart from its combinatorial implications, also lends itself to the characterization of various algebraic structures. The benefit of studying these graphs is that one may obtain results about the algebraic structures from the associated graphs and vice versa. There are three major problems in this area: (1) characterization of the resulting graphs, (2) characterization of the algebraic structures with isomorphic graphs, and (3) realization of the connections between the structures and the corresponding graphs. The first instance of such work is due to Beck [@beck], who introduced the idea of the zero divisor graph of a commutative ring with unity. Though his key goal was to address the issue of colouring, this work initiated the formal study of the relationship between algebra and graph theory and of applications of one to the other. Since then, a lot of research, e.g., [@survey2; @zero-divisor-survey; @anderson-livingston; @graph-ideal; @power1; @power2; @mks-ideal; @badawi], has been done connecting graph structures to various algebraic objects. Recently, intersection graphs associated with subspaces of vector spaces were studied in [@int-vecsp-2; @int-vecsp-1]. However, as those works were a follow-up of intersection graphs, the specifically linear algebraic flavour of characterizing the graph was missing. Throughout this paper, vector spaces are finite dimensional over a field $\mathbb{F}$ and $n=dim_{\mathbb{F}}(\mathbb{V})$. In this paper, we define a graph structure on a finite dimensional vector space $\mathbb{V}$ over a field $\mathbb{F}$, called the Non-Zero Component Graph of $\mathbb{V}$ with respect to a basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ of $\mathbb{V}$, and study the algebraic characterization of isomorphic graphs and other related concepts.

Definitions and Preliminaries
=============================

In this section, for the convenience of the reader and also for later use, we recall some definitions, notations and results concerning elementary graph theory. For undefined terms and concepts the reader is referred to [@west-graph-book]. By a graph $G=(V,E)$, we mean a non-empty set $V$ and a symmetric binary relation (possibly empty) $E$ on $V$. The set $V$ is called the set of vertices and $E$ is called the set of edges of $G$. Two elements $u$ and $v$ in $V$ are said to be adjacent if $(u,v) \in E$. $H=(W,F)$ is called a [*subgraph*]{} of $G$ if $H$ itself is a graph and $\phi \neq W \subseteq V$ and $F \subseteq E$. If $V$ is finite, the graph $G$ is said to be finite, otherwise it is infinite. The open neighbourhood of a vertex $v$, denoted by $N(v)$, is the set of all vertices adjacent to $v$. A subset $I$ of $V$ is said to be [*independent*]{} if any two vertices in that subset are pairwise non-adjacent.
The [*independence number*]{} of a graph is the maximum size of an independent set of vertices in $G$. A subset $D$ of $V$ is said to be a [*dominating set*]{} if any vertex in $V \setminus D$ is adjacent to at least one vertex in $D$. The [*domination number*]{} of $G$, denoted by $\gamma(G)$, is the minimum size of a dominating set in $G$. A subset $D$ of $V$ is said to be a [*minimal dominating set*]{} if $D$ is a dominating set and no proper subset of $D$ is a dominating set. Two graphs $G=(V,E)$ and $G'=(V',E')$ are said to be [*isomorphic*]{} if $\exists$ a bijection $\phi: V \rightarrow V'$ such that $(u,v) \in E \mbox{ iff } (\phi(u),\phi(v)) \in E'$. A [*path*]{} of length $k$ in a graph is an alternating sequence of vertices and edges, $v_0,e_0,v_1,e_1,v_2,\ldots, v_{k-1},e_{k-1},v_k$, where the $v_i$’s are distinct (except possibly the first and last vertices) and $e_i$ is the edge joining $v_i$ and $v_{i+1}$. We call this a path joining $v_0$ and $v_{k}$. A graph is [*connected*]{} if for any pair of vertices $u,v \in V,$ there exists a path joining $u$ and $v$. The [*distance*]{} between two vertices $u,v \in V$, $d(u,v)$, is defined as the length of the shortest path joining $u$ and $v$, if it exists. Otherwise, $d(u,v)$ is defined as $\infty$. The [*diameter*]{} of a graph is defined as $diam(G)=\max_{u,v \in V}~ d(u,v)$, the largest distance between pairs of vertices of the graph, if it exists. Otherwise, $diam(G)$ is defined as $\infty$.

Non-Zero Component Graph of a Vector Space
==========================================

Let $\mathbb{V}$ be a vector space over a field $\mathbb{F}$ with $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ as a basis and $\theta$ as the null vector. Then any vector $\mathbf{a} \in \mathbb{V}$ can be expressed uniquely as a linear combination of the form $\mathbf{a}=a_1\alpha_1+a_2\alpha_2+\cdots+a_n\alpha_n$. We denote this representation of $\mathbf{a}$ as its basic representation w.r.t. $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. We define a graph $\Gamma(\mathbb{V}_\alpha)=(V,E)$ (or simply $\Gamma(\mathbb{V})$) with respect to $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ as follows: $V=\mathbb{V}\setminus \{\theta\}$ and for $\mathbf{a},\mathbf{b} \in V$, $\mathbf{a} \sim \mathbf{b}$ or $(\mathbf{a},\mathbf{b}) \in E$ if $\mathbf{a}$ and $\mathbf{b}$ share at least one $\alpha_i$ with non-zero coefficient in their basic representations, i.e., there exists at least one $\alpha_i$ along which both $\mathbf{a}$ and $\mathbf{b}$ have non-zero components. Unless otherwise mentioned, we take the basis on which the graph is constructed to be $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. [\[basis-independent-theorem\] Let $\mathbb{V}$ be a vector space over a field $\mathbb{F}$. Let $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{V}_\beta)$ be the graphs associated with $\mathbb{V}$ w.r.t two bases $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ and $\{\beta_1,\beta_2,\ldots,\beta_n\}$ of $\mathbb{V}$. Then $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{V}_\beta)$ are graph isomorphic.]{}\ [[**Proof:** ]{}]{}Since $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ and $\{\beta_1,\beta_2,\ldots,\beta_n\}$ are two bases of $\mathbb{V}$, $\exists$ a vector space isomorphism $T: \mathbb{V} \rightarrow \mathbb{V}$ such that $T(\alpha_i)=\beta_i, \forall i \in \{1,2,\ldots,n\}$. We show that the restriction of $T$ to the non-null vectors of $\mathbb{V}$, $\mathbf{T}:\Gamma(\mathbb{V}_\alpha) \rightarrow \Gamma(\mathbb{V}_\beta)$, is a graph isomorphism. Clearly, $\mathbf{T}$ is a bijection.
Now, let $\mathbf{a}=a_1\alpha_1+a_2\alpha_2+\cdots+a_n\alpha_n; \mathbf{b}=b_1\alpha_1+b_2\alpha_2+\cdots+b_n\alpha_n$ with $\mathbf{a}\sim \mathbf{b}$ in $\Gamma(\mathbb{V}_\alpha)$. Then, $\exists~ i \in \{1,2,\ldots,n\}$ such that $a_i\neq 0,b_i\neq 0$. Also, $\mathbf{T}(\mathbf{a})=a_1\beta_1+a_2\beta_2+\cdots+a_n\beta_n$ and $\mathbf{T}(\mathbf{b})=b_1\beta_1+b_2\beta_2+\cdots+b_n\beta_n$. Since $a_i\neq 0,b_i\neq 0$, therefore $\mathbf{T}(\mathbf{a}) \sim \mathbf{T}(\mathbf{b})$ in $\Gamma(\mathbb{V}_\beta)$. Similarly, it can be shown that if $\mathbf{a}$ and $\mathbf{b}$ are not adjacent in $\Gamma(\mathbb{V}_\alpha)$, then $\mathbf{T}(\mathbf{a})$ and $\mathbf{T}(\mathbf{b})$ are not adjacent in $\Gamma(\mathbb{V}_\beta)$. [The above theorem shows that the graph properties of $\Gamma$ do not depend on the choice of the basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. However, two vertices may be adjacent with respect to one basis but non-adjacent with respect to another, as shown in the following example: Let $\mathbb{V}=\mathbb{R}^2,\mathbb{F}=\mathbb{R}$ with two bases $\{\alpha_1=(1,0),\alpha_2=(0,1)\}$ and $\{\beta_1=(1,1),\beta_2=(-1,1)\}$. Consider $\mathbf{a}=(1,1)$ and $\mathbf{b}=(-1,1)$. Clearly $\mathbf{a}\sim \mathbf{b}$ in $\Gamma(\mathbb{V}_\alpha)$, but $\mathbf{a}\not\sim \mathbf{b}$ in $\Gamma(\mathbb{V}_\beta)$.]{}

Basic Properties of $\Gamma(\mathbb{V})$
========================================

In this section, we investigate some of the basic properties of $\Gamma(\mathbb{V})$, such as connectedness, completeness, independence number and domination number. [$\Gamma(\mathbb{V}_\alpha)$ is connected and $diam(\Gamma)=2$.]{}\ [[**Proof:** ]{}]{}Let $\mathbf{a},\mathbf{b} \in V$. If $\mathbf{a}$ and $\mathbf{b}$ are adjacent, then $d(\mathbf{a},\mathbf{b})=1$. If $\mathbf{a}$ and $\mathbf{b}$ are not adjacent, then since $\mathbf{a},\mathbf{b} \neq \theta$, $\exists~ \alpha_i, \alpha_j$ which have non-zero coefficients in the basic representations of $\mathbf{a}$ and $\mathbf{b}$ respectively. Moreover, as $\mathbf{a}$ and $\mathbf{b}$ are not adjacent, $\alpha_i \neq \alpha_j$. Consider $\mathbf{c}=\alpha_i + \alpha_j$. Then $\mathbf{a}\sim \mathbf{c}$ and $\mathbf{b} \sim \mathbf{c}$, and hence $d(\mathbf{a},\mathbf{b})=2$. Thus, $\Gamma$ is connected and $diam(\Gamma)=2$. [$\Gamma(\mathbb{V})$ is complete if and only if $\mathbb{V}$ is one-dimensional.]{}\ [[**Proof:** ]{}]{}Let $\Gamma(\mathbb{V})$ be complete. If possible, let $dim(\mathbb{V})>1$. Therefore, $\exists~ \alpha_1,\alpha_2 \in \mathbb{V}$ which form a basis or can be extended to a basis of $\mathbb{V}$. Then $\alpha_1$ and $\alpha_2$ are two non-adjacent vertices in $\Gamma(\mathbb{V})$, a contradiction. Therefore, $dim(\mathbb{V})=1$. Conversely, let $\mathbb{V}$ be a one-dimensional vector space generated by $\alpha$. Then any two non-null vectors $\mathbf{a}$ and $\mathbf{b}$ can be expressed as $c_1\alpha$ and $c_2\alpha$ respectively, for non-zero $c_1,c_2 \in \mathbb{F}$, and hence $\mathbf{a}\sim \mathbf{b}$, thereby rendering the graph complete. [The domination number of $\Gamma(\mathbb{V}_\alpha)$ is 1.]{}\ [[**Proof:** ]{}]{}The proof follows from the simple observation that $\alpha_1+\alpha_2+\cdots+\alpha_n$ is adjacent to all the other vertices of $\Gamma(\mathbb{V}_\alpha)$. [The set $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ is a minimal dominating set of $\Gamma(\mathbb{V}_\alpha)$. Now the question arises: what is the maximum possible number of vertices in a minimal dominating set? The answer is given as $n$ in the next theorem.]{}
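For a concrete illustration of these notions, consider the two-dimensional space $\mathbb{V}=\mathbb{Z}_2^2$ with the standard basis $\alpha_1=(1,0),\alpha_2=(0,1)$. The graph $\Gamma(\mathbb{V}_\alpha)$ has the three vertices $\alpha_1,\alpha_2,\alpha_1+\alpha_2$, and its only edges are $\alpha_1\sim\alpha_1+\alpha_2$ and $\alpha_2\sim\alpha_1+\alpha_2$, since $\alpha_1$ and $\alpha_2$ share no basis vector with non-zero coefficient. Thus $\Gamma(\mathbb{V}_\alpha)$ is connected but not complete, $d(\alpha_1,\alpha_2)=2$, the singleton $\{\alpha_1+\alpha_2\}$ is a dominating set, and $\{\alpha_1,\alpha_2\}$ is a minimal dominating set of cardinality $2=dim(\mathbb{V})$, in agreement with the results above and with the theorem that follows.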
[If $D=\{\beta_1,\beta_2,\ldots,\beta_l\}$ is a minimal dominating set of $\Gamma(\mathbb{V}_\alpha)$, then $l \leq n$, i.e., the maximum cardinality of a minimal dominating set is $n$.]{}\ [[**Proof:** ]{}]{}Since $D$ is a minimal dominating set, $\forall i \in \{1,2,\ldots,l\}, D_i=D \setminus \{\beta_i\}$ is not a dominating set. Therefore, $\forall i \in \{1,2,\ldots,l\}, \exists~ \gamma_i \in \Gamma(\mathbb{V}_\alpha)$ which is not adjacent to any element of $D_i$ but is adjacent to $\beta_i$. Since $\gamma_i \neq \theta$, $\exists~ \alpha_{t_i}$ such that $\gamma_i$ has a non-zero component along $\alpha_{t_i}$. Now, as $\gamma_i$ is not adjacent to any element of $D_i$, neither is $\alpha_{t_i}$: if some $\delta \in D_i$ were adjacent to $\alpha_{t_i}$, then $\delta$ would have a non-zero component along $\alpha_{t_i}$ and hence would be adjacent to $\gamma_i$, a contradiction. Thus, $\forall i \in \{1,2,\ldots,l\}, \exists~ \alpha_{t_i}$ such that $\alpha_{t_i} \sim \beta_i$, but $\alpha_{t_i} \not\sim \beta_k, \forall k\neq i$. Claim: $i \neq j \Rightarrow \alpha_{t_i} \neq \alpha_{t_j}$. Let, if possible, $i \neq j$ but $\alpha_{t_i} = \alpha_{t_j}$. As $\beta_i \sim \alpha_{t_i}$ and $\alpha_{t_i}=\alpha_{t_j}$, therefore $\beta_i \sim \alpha_{t_j}$. However, this contradicts the fact that $\alpha_{t_j} \not\sim \beta_k, \forall k\neq j$. Hence the claim. As $\alpha_{t_1},\alpha_{t_2},\ldots,\alpha_{t_l}$ are all distinct, it follows that $l\leq n$. [\[independence-number-theorem\] The independence number of $\Gamma$ is $dim(\mathbb{V})$.]{}\ [[**Proof:** ]{}]{}Since $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ is an independent set in $\Gamma$, the independence number of $\Gamma$ is $\geq n=dim(\mathbb{V})$. Now, we show that an independent set cannot have more than $n$ elements. Let, if possible, $\{\beta_1,\beta_2,\ldots,\beta_l\}$ be an independent set in $\Gamma$, where $l>n$. Since $\beta_i \neq \theta, \forall i \in \{1,2,\ldots,l\}$, $\beta_i$ has at least one non-zero component along some $\alpha_{t_i}$, where $t_i \in \{1,2,\ldots,n\}$.\ [**Claim:**]{} $i \neq j \Rightarrow t_i \neq t_j$.\ If $\exists i \neq j$ with $t_i=t_j=t$ (say), then $\beta_i$ and $\beta_j$ have non-zero components along $\alpha_t$. This implies that $\beta_i \sim \beta_j$, a contradiction to the independence of $\beta_i$ and $\beta_j$. Hence, the claim is valid. However, as there are exactly $n$ distinct $\alpha_i$’s, this forces $l \leq n$, which is a contradiction. Thus, the independence number of $\Gamma$ is $n=dim(\mathbb{V})$. [\[ind-imply-lin-ind\] Let $I$ be an independent set in $\Gamma(\mathbb{V}_\alpha)$; then $I$ is a linearly independent subset of $\mathbb{V}$.]{}\ [[**Proof:** ]{}]{}Let $I=\{\beta_1,\beta_2,\ldots,\beta_k\}$ be an independent set in $\Gamma$. By Theorem \[independence-number-theorem\], $k \leq n$. If possible, let $I$ be linearly dependent in $\mathbb{V}$. Then $\exists~ i \in \{1,2,\dots,k\}$ such that $\beta_i$ can be expressed as a linear combination of $\beta_1,\beta_2,\ldots,\beta_{i-1},\beta_{i+1},\ldots,\beta_k$, i.e., $$\label{ind-set-equation} \beta_i=c_1\beta_1+c_2\beta_2+\cdots+c_{i-1}\beta_{i-1}+c_{i+1}\beta_{i+1}+\cdots+ c_k\beta_k=\sum^{k}_{s=1,s\neq i}c_s\beta_s$$ Now, since $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ is a basis of $\mathbb{V}$, let $\beta_j=\sum^{n}_{t=1} d_{tj}\alpha_t$ for $j=1,2,\ldots,i-1,i+1,\ldots, k$. Thus, the expression for $\beta_i$ becomes $$\beta_i=\sum^{k}_{s=1,s\neq i}c_s\sum^{n}_{t=1} d_{ts}\alpha_t=\sum^{n}_{t=1}d_t\alpha_t \mbox{ for some scalars }d_t \in \mathbb{F}.$$ Since $\beta_i \neq \theta$, $\beta_i$ has a non-zero component along some $\alpha_{t^*}$.
Also, $\exists$ some $\beta_j, j\neq i$ such that $\beta_j$ has a non-zero component along $\alpha_{t^*}$. (as otherwise, if all $\beta_j,j\neq i$ has zero component along $\alpha_{t^*}$, then by Equation \[ind-set-equation\], $\beta_i$ has zero component along $\alpha_{t^*}$, which is not the case.) Thus, $\beta_j \sim \beta_i$, a contradiction to the independence of $I$. Thus, $I$ is a linearly independent set in $\mathbb{V}$. [Converse of Lemma \[ind-imply-lin-ind\] is not true, in general. Consider a vector space $\mathbb{V}$, its basis $\{\alpha_1,\alpha_2,\alpha_3,\ldots,\alpha_n\}$ and the set $L=\{\alpha_1+\alpha_2,\alpha_2,\alpha_3,\ldots,\alpha_n\}$. Clearly $L$ is linearly independent in $\mathbb{V}$, but it is not an independent set in $\Gamma(\mathbb{V}_\alpha)$ as $\alpha_1+\alpha_2 \sim \alpha_2$.]{} Non-Zero Component Graph and Graph Isomorphisms =============================================== In this section, we study the inter-relationship between the isomorphism of two vector spaces with the isomorphism of the two corresponding graphs. It is proved that two vector spaces are isomorphic if and only if their graphs are isomorphic. However, it is noted that a vector space isomorphism is a graph isomorphism (ignoring the null vector $\theta$), but a graph isomorphism may not be vector space isomorphism as shown in Example \[graph-iso-but-not-vecsp-iso\]. [\[graph-isomorphic-implies-equal-dimension\] Let $\mathbb{V}$ and $\mathbb{W}$ be two finite dimensional vector spaces over a field $\mathbb{F}$. If $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{W}_\beta)$ are isomorphic as graphs with respect to some basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ and $\{\beta_1,\beta_2,\ldots,\beta_k\}$ of $\mathbb{V}$ and $\mathbb{W}$ respectively, then $dim(\mathbb{V})=dim(\mathbb{W})$, i.e., $n=k$.]{}\ [[**Proof:** ]{}]{}Let $\varphi:\Gamma(\mathbb{V}_\alpha)\rightarrow \Gamma(\mathbb{W}_\beta)$ be a graph isomorphism. Since, $\alpha_1, \alpha_2,\ldots,\alpha_n$ is an independent set in $\Gamma(\mathbb{V}_\alpha)$, therefore $\varphi(\alpha_1),\varphi(\alpha_2),\ldots,\varphi(\alpha_n)$ is an independent set in $\Gamma(\mathbb{W}_\beta)$. Now, as in Theorem \[independence-number-theorem\] it has been shown that cardinality of an independent set is less than or equal to the dimension of the vector space, it follows that $n \leq k$. Again, $\varphi^{-1}:\Gamma(\mathbb{W}_\beta)\rightarrow \Gamma(\mathbb{V}_\alpha)$ is also a graph isomorphism. Then, by similar arguments, it follows that $k \leq n$. Hence the lemma. [Let $\mathbb{V}$ and $\mathbb{W}$ be two finite dimensional vector spaces over a field $\mathbb{F}$. If $\mathbb{V}$ and $\mathbb{W}$ are isomorphic as vector spaces, then for any basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ and $\{\beta_1,\beta_2,\ldots,\beta_n\}$ of $\mathbb{V}$ and $\mathbb{W}$ respectively, $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{W}_\beta)$ are isomorphic as graphs.]{}\ [[**Proof:** ]{}]{}Let $\varphi: \mathbb{V}\rightarrow \mathbb{W}$ be a vector space isomorphism. Then $\{\varphi(\alpha_1),\varphi(\alpha_2),\ldots,\varphi(\alpha_n)\}$ is a basis of $\mathbb{W}$. 
Consider the restriction $\overline{\varphi}$ of $\varphi$ on the non-null vectors of $\mathbb{V}$, i.e., $\overline{\varphi}: \Gamma(\mathbb{V}_\alpha) \rightarrow \Gamma(\mathbb{W}_{\varphi(\alpha)})$ given by $$\overline{\varphi}(a_1\alpha_1+a_2\alpha_2+\cdots+a_n\alpha_n)=a_1\varphi(\alpha_1)+a_2\varphi(\alpha_2)+\cdots +a_n\varphi(\alpha_n)$$ where $a_i \in \mathbb{F}$ and $(a_1,a_2,\ldots,a_n)\neq(0,0,\ldots,0)$. Clearly, $\overline{\varphi}$ is a bijection. Now,\ $\mathbf{a}\sim \mathbf{b} \mbox{ in }\Gamma(\mathbb{V}_\alpha) \Leftrightarrow \exists~ i \mbox{ such that }a_i \neq 0, b_i \neq 0 \Leftrightarrow \overline{\varphi}(\mathbf{a})\sim \overline{\varphi}(\mathbf{b})$. Therefore, $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{W}_{\varphi(\alpha)})$ are graph isomorphic. Now, by Theorem \[basis-independent-theorem\], $\Gamma(\mathbb{W}_{\varphi(\alpha)})$ and $\Gamma(\mathbb{W}_{\beta})$ are isomorphic as graphs. Thus, $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{W}_{\beta})$ are isomorphic as graphs. [Let $\mathbb{V}$ and $\mathbb{W}$ be two finite dimensional vector spaces over a field $\mathbb{F}$. If for any basis $\{\alpha_1,\alpha_2,\ldots,$ $\alpha_n\}$ and $\{\beta_1,\beta_2,\ldots,\beta_k\}$ of $\mathbb{V}$ and $\mathbb{W}$ respectively, $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{W}_\beta)$ are isomorphic as graphs, then $\mathbb{V}$ and $\mathbb{W}$ are isomorphic as vector spaces.]{}\ [[**Proof:** ]{}]{}Since $\Gamma(\mathbb{V}_\alpha)$ and $\Gamma(\mathbb{W}_\beta)$ are isomorphic as graphs, by Lemma \[graph-isomorphic-implies-equal-dimension\], $n=k$. Now, as $\mathbb{V}$ and $\mathbb{W}$ are finite dimensional vector spaces having same dimension over the same field $\mathbb{F}$, $\mathbb{V}$ and $\mathbb{W}$ are isomorphic as vector spaces. [\[graph-iso-but-not-vecsp-iso\] Consider an one-dimensional vector space $\mathbb{V}$ over $\mathbb{Z}_5$ generated by $\alpha$ (say). Then $\Gamma(\mathbb{V}_\alpha)$ is a complete graph of 4 vertices with $V=\{\alpha,2\alpha,3\alpha,4\alpha\}$. Consider the map $T:\Gamma(\mathbb{V}_\alpha) \rightarrow \Gamma(\mathbb{V}_\alpha)$ given by $T(\alpha)=2\alpha,T(2\alpha)=\alpha,T(3\alpha)=4\alpha,T(4\alpha)=3\alpha$. Clearly, $T$ is a graph isomorphism, but as $T(2\alpha)=\alpha\neq 4\alpha=2(2\alpha)=2T(\alpha)$, $T$ is not linear.]{} Automorphisms of Non-Zero Component Graph ========================================= In this section, we investigate the form of automorphisms of $\Gamma(\mathbb{V}_\alpha)$. It is shown that an automorphism maps $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ to a basis of $\mathbb{V}$ of a special type, namely non-zero scalar multiples of a permutation of $\alpha_i$’s. [\[main-auto-theorem\] Let $\varphi: \Gamma(\mathbb{V}_\alpha) \rightarrow \Gamma(\mathbb{V}_\alpha)$ be a graph automorphism. Then, $\varphi$ maps $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ to another basis $\{\beta_1,\beta_2,\ldots,\beta_n\}$ such that there exists $\sigma \in S_n$, where each $\beta_i$ is of the form $c_i \alpha_{\sigma(i)}$ and each $c_i$’s are non-zero.]{}\ [[**Proof:** ]{}]{}Let $\varphi: \Gamma(\mathbb{V}_\alpha) \rightarrow \Gamma(\mathbb{V}_\alpha)$ be a graph automorphism. Since, $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ is an independent set of vertices in $\Gamma(\mathbb{V}_\alpha)$, therefore $\beta_i=\varphi(\alpha_i): i=1,2,\ldots,n$ is also an independent set of vertices in $\Gamma(\mathbb{V}_\alpha)$. 
Let $$\begin{array}{c} \varphi(\alpha_1)=\beta_1=c_{11}\alpha_1+c_{12}\alpha_2+\cdots+c_{1n}\alpha_n\\ \varphi(\alpha_2)=\beta_2=c_{21}\alpha_1+c_{22}\alpha_2+\cdots+c_{2n}\alpha_n \\ \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \\ \varphi(\alpha_n)=\beta_n=c_{n1}\alpha_1+c_{n2}\alpha_2+\cdots+c_{nn}\alpha_n \end{array}$$ Since, $\beta_1 \neq \theta$ i.e., $\beta_1$ is not an isolated vertex, $\exists~ j_1 \in \{1,2,\ldots,n\}$ such that $c_{1j_1}\neq 0$. Therefore, $c_{ij_1}= 0, \forall i \neq 1$ (as $\beta_i$ is not adjacent to $\beta_1, \forall i \neq 1$.) Similarly, for $\beta_2$, $\exists~ j_2 \in \{1,2,\ldots,n\}$ such that $c_{2j_2}\neq 0$ and $c_{ij_2}= 0, \forall i \neq 2$. Moreover, $j_1 \neq j_2$ as $\beta_1$ and $\beta_2$ are not adjacent. Continuing in this manner, for $\beta_n$, $\exists~ j_n \in \{1,2,\ldots,n\}$ such that $c_{nj_n}\neq 0$ and $c_{ij_n}= 0, \forall i \neq n$ and $j_1,j_2,\ldots,j_n$ are all distinct numbers from $\{1,2,\ldots,n\}$. Thus, $c_{kj_l}=0$ for $k \neq l$ and $c_{kj_k}\neq 0$, where $k,l \in \{1,2,\ldots,n\}$ and $j_1,j_2,\ldots,j_n$ is a permutation on $\{1,2,\ldots,n\}$. Set $\sigma=\left( \begin{array}{cccc} 1 & 2 & \cdots & n\\ j_1 & j_2 & \cdots & j_n \end{array}\right)$. Therefore, $$\beta_i=c_{ij_i}\alpha_{j_i}=c_{ij_i}\alpha_{\sigma(i)},\mbox{ with }c_{ij_i}\neq 0,~~~~ \forall i \in \{1,2,\ldots,n\}.$$ As $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ is a basis, $\{\beta_1,\beta_2,\ldots,\beta_n\}$ is also a basis and hence the theorem. [Although $\varphi$ maps the basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ to another basis $\{\beta_1,\beta_2,\ldots,\beta_n\}$, it may not be a vector space isomorphism. It is because linearity of $\varphi$ is not guaranteed as shown in Example \[graph-iso-but-not-vecsp-iso\]. However, the following result is true.]{} [\[special-auto-theorem\] Let $\varphi$ be a graph automorphism, which maps $\alpha_i\mapsto c_{ij_i}\alpha_{\sigma(i)}$ for some $\sigma \in S_n$. Then, if $c \neq 0$, $\varphi(c\alpha_i)=d\alpha_{\sigma(i)}$ for some non-zero $d$. More generally, $\forall k \in \{1,2,\ldots,n\}$ if $c_1\cdot c_2 \cdots c_k \neq 0$, then $$\varphi(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k})=d_1\alpha_{\sigma(i_1)}+d_2\alpha_{\sigma(i_2)}+\cdots+d_k \alpha_{\sigma(i_k)}$$ for some $d_i$’s with $d_1\cdot d_2\cdots d_k \neq 0$.]{}\ [[**Proof:** ]{}]{}Since, $c\alpha_i \sim \alpha_i$, therefore $\varphi(c\alpha_i)\sim \varphi(\alpha_i)$ i.e., $\varphi(c\alpha_i)\sim c_{ij_i}\alpha_{\sigma(i)}$. Thus, $\varphi(c\alpha_i)$ has $\alpha_{\sigma(i)}$ as a non-zero component. If possible, let $\varphi(c\alpha_i)$ has a non-zero component along some other $\alpha_{\sigma(j)}$ for some $j\neq i$. Then $\varphi(c\alpha_i) \sim \alpha_{\sigma(j)}$ i.e., $\varphi(c\alpha_i) \sim \varphi(\alpha_j)$, which in turn implies $c\alpha_i \sim \alpha_j$ for $j\neq i$, a contradiction. Therefore, $\varphi(c\alpha_i)=d\alpha_{\sigma(i)}$ for some non-zero $d$. 
For the general case, since $$c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k} \sim \alpha_{i_1}$$ $$\Rightarrow \varphi(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}) \sim \varphi(\alpha_{i_1})=c\alpha_{\sigma(i_1)} \mbox{ for some non-zero }c$$ $$\Rightarrow \varphi(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k})\mbox{ has a non-zero component along }\alpha_{\sigma(i_1)}$$ $$\Rightarrow \varphi(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}) \sim \alpha_{\sigma(i_1)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ Similarly, $$\varphi(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}) \sim \alpha_{\sigma(i_2)},\ldots,\varphi(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}) \sim \alpha_{\sigma(i_k)}$$ Therefore, $\varphi(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k})=d_1\alpha_{\sigma(i_1)}+d_2\alpha_{\sigma(i_2)}+\cdots+d_k \alpha_{\sigma(i_k)} $ for some $d_i$’s with $d_1\cdot d_2\cdots d_k \neq 0$. [$\Gamma(\mathbb{V}_\alpha)$ is not vertex transitive if $dim(\mathbb{V})>1$.]{}\ [[**Proof:** ]{}]{}Since $dim(\mathbb{V})\geq 2$, by Theorem \[special-auto-theorem\], there does not exist any automorphism which maps $\alpha_1$ to $\alpha_1+\alpha_2$. Hence, the result. The Case of Finite Fields ========================= In this section, we find the degree of each vertices of $\Gamma(\mathbb{V})$ if the base field is finite. For more results, in the case of finite fields, please refer to [@angsu-jaa]. [\[nbd-remark\] The set of vertices adjacent to $\alpha_{i_1}+ \alpha_{i_2}+\cdots+ \alpha_{i_k}$ is same as the set of vertices adjacent to $c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}$ i.e., $N(\alpha_{i_1}+ \alpha_{i_2}+\cdots+ \alpha_{i_k})=N(c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k})$ for $c_1 c_2 \cdots c_k \neq 0$.]{} [Let $\mathbb{V}$ be a vector space over a finite field $\mathbb{F}$ with $q$ elements and $\Gamma$ be its associated graph with respect to a basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. Then, the degree of the vertex $c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}$, where $c_1 c_2 \cdots c_k \neq 0$, is $(q^k -1)q^{n-k}-1$.]{}\ [[**Proof:** ]{}]{}The number of vertices with $\alpha_{i_1}$ as non-zero component is $(q-1)q^{n-1}$ (including $\alpha_{i_1}$ itself). Therefore, $deg(\alpha_{i_1})=(q-1)q^{n-1}-1$. The number of vertices with $\alpha_{i_1}$ or $\alpha_{i_2}$ as non-zero component is equal to number of vertices with $\alpha_{i_1}$ as non-zero component $+$ number of vertices with $\alpha_{i_2}$ as non-zero component $-$ number of vertices with both $\alpha_{i_1}$ and $\alpha_{i_2}$ as non-zero component $$=(q-1)q^{n-1}+(q-1)q^{n-1}-(q-1)^2 q^{n-2}=(q^2-1)q^{n-2}.$$ As this count includes the vertex $\alpha_{i_1}+\alpha_{i_2}$, $deg(\alpha_{i_1}+\alpha_{i_2})=(q^2-1)q^{n-2} - 1$. 
Similarly, for finding the degree of $\alpha_{i_1}+\alpha_{i_2}+\alpha_{i_3}$, the number of vertices with $\alpha_{i_1}$ or $\alpha_{i_2}$ or $\alpha_{i_3}$ as non-zero component is equal to $$[(q-1)q^{n-1}+(q-1)q^{n-1}+(q-1)q^{n-1}]-[(q-1)^2 q^{n-2}+(q-1)^2 q^{n-2}+(q-1)^2 q^{n-2}]+(q-1)^3 q^{n-3}$$ $$=(q^3-1) q^{n-3}, \mbox{ and hence }deg(\alpha_{i_1}+\alpha_{i_2}+\alpha_{i_3})=(q^3-1) q^{n-3}-1.$$ Proceeding in this way, we get $$deg(\alpha_{i_1}+ \alpha_{i_2}+\cdots+ \alpha_{i_k})=(q^k -1)q^{n-k}-1.$$ Now, from Remark \[nbd-remark\], it follows that $$deg(c_1 \alpha_{i_1}+ c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k})=(q^k -1)q^{n-k}-1.$$ Conclusion ========== In this paper, we represent a finite dimensional vector space as a graph and study various inter-relationships among $\Gamma(\mathbb{V})$ as a graph and $\mathbb{V}$ as a vector space. The main goal of these discussions was to study the nature of the automorphisms and establish the equivalence between the corresponding graph and vector space isomorphisms. Apart from this, we also study basic properties of completeness, connectedness, domination and independence number. As a topic of further research, one can look into the structure of maximal cliques and chromatic number of such graphs. Acknowledgement {#acknowledgement .unnumbered} =============== The author is thankful to Bedanta Bose of University of Calcutta, Kolkata for bringing the manuscript in the final form. A special thanks goes to Dr. Usman Ali of Bahauddin Zakariya University, Pakistan for pointing out a mistake in an earlier version of the paper. The research is partially funded by NBHM Research Project Grant, (Sanction No. 2/48(10)/2013/ NBHM(R.P.)/R&D II/695), Govt. of India. [20]{} A. Amini, B. Amini, E. Momtahan and M. H. Shirdareh Haghighi: *On a Graph of Ideals*, Acta Math. Hungar., 134 (3) (2012), 369-384. D.F. Anderson, M. Axtell, J. Stickles: *Zero-divisor graphs in commutative rings*, in Commutative Algebra Noetherian and Non-Noetherian Perspectives, ed. by M. Fontana, S.E. Kabbaj, B.Olberding, I. Swanson (Springer, New York, 2010), pp.23-45 D. F. Anderson and P. S. Livingston: *The zero-divisor graph of a commutative ring*, Journal of Algebra, 217 (1999), 434-447. A. Badawi: *On the Dot Product Graph of a Commutative Ring*, Comm. Algebra 43(1), 43-50 (2015). I. Beck: *Coloring of commutative rings*, Journal of Algebra, 116 (1988), 208-226. P.J. Cameron, S. Ghosh: *The power graph of a finite group*, Discrete Mathematics 311 (2011) 1220-1222. I. Chakrabarty, S. Ghosh, T.K. Mukherjee, and M.K. Sen: *Intersection graphs of ideals of rings*, Discrete Mathematics 309, 17 (2009): 5381-5392. I. Chakrabarty, S. Ghosh, M.K. Sen: *Undirected power graphs of semigroups*, Semigroup Forum 78 (2009) 410-426. A. Das: *On Non-Zero Component Graph of Vector Spaces over Finite Fields*, to appear in Journal of Algebra and its Applications. DOI: 10.1142/S0219498817500074 N. Jafari Rad, S.H. Jafari: *Results on the intersection graphs of subspaces of a vector space*, http://arxiv.org/abs/1105.0803v1 H.R. Maimani, M.R. Pournaki, A. Tehranian, S. Yassemi: *Graphs Attached to Rings Revisited*, Arab J Sci Eng (2011) 36: 997-1011. Y. Talebi, M.S. Esmaeilifar, S. Azizpour: *A kind of intersection graph of vector space*, Journal of Discrete Mathematical Sciences and Cryptography 12, no. 6 (2009): 681-689. D.B. West: *Introduction to Graph Theory*, Prentice Hall, 2001. [^1]: Dedicated to Professor Mridul Kanti Sen
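As a complement to the results established above, they can be sanity-checked by brute force on small examples. The sketch below is not part of the original text; it assumes $\mathbb{V}=\mathbb{F}_q^n$ for a prime $q$, with the standard basis playing the role of $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$, and checks the degree formula $(q^k-1)q^{n-k}-1$, the fact that the all-ones vector dominates the graph, and the independence of the basis vectors (the helper names `component_graph` and `check` are illustrative only).

```python
# Brute-force construction of the non-zero component graph of F_q^n (q prime),
# with the standard basis as {alpha_1,...,alpha_n}. Vertices are the non-null
# vectors; a ~ b iff they share a coordinate in which both are non-zero.
from itertools import product

def component_graph(q, n):
    vertices = [v for v in product(range(q), repeat=n) if any(v)]
    adj = {v: set() for v in vertices}
    for i, a in enumerate(vertices):
        for b in vertices[i + 1:]:
            if any(x != 0 and y != 0 for x, y in zip(a, b)):
                adj[a].add(b)
                adj[b].add(a)
    return vertices, adj

def check(q, n):
    vertices, adj = component_graph(q, n)
    # degree formula: deg = (q^k - 1) q^(n-k) - 1, where k = number of non-zero coordinates
    for v in vertices:
        k = sum(1 for x in v if x != 0)
        assert len(adj[v]) == (q ** k - 1) * q ** (n - k) - 1
    # the all-ones vector is adjacent to every other vertex (domination number 1)
    ones = tuple([1] * n)
    assert all(u == ones or u in adj[ones] for u in vertices)
    # the standard basis vectors are pairwise non-adjacent (independent set of size n)
    basis = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    assert all(basis[j] not in adj[basis[i]] for i in range(n) for j in range(i + 1, n))
    print(f"q={q}, n={n}: degree, domination and independence checks passed")

check(3, 2)
check(2, 3)
```

Running `check(5, 1)` additionally illustrates the one-dimensional case: the graph is the complete graph on the four non-null vectors of $\mathbb{Z}_5$, as in Example \[graph-iso-but-not-vecsp-iso\].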
--- abstract: 'A microscopic theory of current partition in fractional quantum Hall liquids, described by chiral Luttinger liquids, is developed to compute the noise correlations, using the Keldysh technique. In this Hanbury-Brown and Twiss geometry, at Laughlin filling factors $\nu=1/m$, the real time noise correlator exhibits oscillations which persist over larger time scales than that of an uncorrelated Hall fluid. The zero frequency noise correlations are negative at filling factor $1/3$ as for bare electrons (anti-bunching), but are strongly reduced in amplitude. These correlations become positive (bunching) for $\nu\leq 1/5$, suggesting a tendency towards bosonic behavior.' address: - ' $^1$ Centre de Physique Théorique, Case 907 Luminy, 13288 Marseille Cedex 9, France' - '$^2$ Université de Provence, 13331 Marseille Cedex 03, France' - '$^3$ Université de la Méditerranée, 13288 Marseille Cedex 9, France' author: - 'I. Safi$^1$, P. Devillard$^{1,2}$ and T. Martin$^{1,3}$' title: Partition noise and statistics in the fractional quantum Hall effect --- 0.9truecm [2]{} Transport experiments in the fractional quantum Hall effect (FQHE) [@laughlin] have provided a direct measurement of the fractional charge of the quasi-particles [@saminad; @picciotto] associated with these correlated electron fluids. These results constitute a preliminary test of the Luttinger liquid models [@kane_fisher_noise; @chamon_noise] based on chiral edge Lagrangians [@wen] which describe the low-lying edge excitations. However, the discussion has centered on the charge of the quasiparticles, rather than the statistics. On the other hand, noise correlation experiments [@henny; @oliver] in branched mesoscopic devices, i.e. fermion analogs of the Hanbury–Brown and Twiss experiments for photons [@hbt], have detected the negative noise correlations predicted by theory [@martin_landauer]. Statistical features in transport are quite explicit in such experiments. So far in the FQHE, the measurement of the noise reduction [@saminad] – smaller than that of fermions – constitutes the only hint that the statistic is not fermionic. Here, it is suggested that the statistics of the underlying excitations of the FQHE can be monitored via a Hanbury–Brown experiment where quasiparticles emitted from one edge and tunneling through the correlated Hall fluid are collected into two receiving edges (see Fig. \[fig1\]). This constitutes a mesoscopic analogue of a collision process which involves many quasi-particles, and therefore provides a direct probe of their underlying statistics. The Luttinger edge state theory [@wen] is used to compute the current and noise with the Keldysh technique. The analytic results for the noise in this partition experiment show that: a) upon increasing the magnetic field from the integer quantum Hall effect (IQHE) to filling factor $1/3$, the (negative) correlations are strongly reduced in amplitude; b) these correlations change sign and are positive at $\nu\leq 1/5$. This work attempts to go further than a recent proposal where statistics and scattering properties were dissociated [@isakov], which correlations are fermionic [@torres_martin]. The suggested geometry is depicted in Fig. \[fig1\]: it requires [*three*]{} edges (two of which are assumed to be decoupled), in contrast to previous noise correlation measurements [@saminad; @henny] in the IQHE and in the FQHE where a single constriction controlled the transmission between two edge states. 
There, negative noise correlations between the receiving ends of two edge states (inset Fig \[fig1\]a) are the consequence of a noiseless injecting channel together with current conservation, for arbitrary $\nu$. On the other hand, removing excitations from one edge state and redistributing them to two other edges (Fig.\[fig1\]) is clearly relevant for uncovering the bunching/anti-bunching properties of quasi-particles. This setup can be considered as a detector of partition noise between edge $1$ and $3$, but in the presence of a “noisy” injecting current (due to backscattering between $2$ and $3$). The edge modes running along each gate, characterized by chiral bosonic fields $\phi_l$ ($l=1,2,3$) are described by a Hamiltonian $H_{0}=(v_F\hbar/4\pi)\sum_{l=1,2,3}\int ds (\partial_{s}\phi_l)^2$ with $s$ the curvilinear abscissa, and with a current $v_F\sqrt{\nu}\partial_{s}\phi_l/2\pi$, which is conserved in the absence of scattering ($v_F$ is the Fermi velocity). $\phi_l$ satisfy the commutation relation $[\phi_l(s),\phi_{l'}(s')]=i\pi \delta_{ll'}sgn(s-s')$ [@geller_loss]. The quasi-particle operators are expressed as $\psi_l^\dagger(s)=(2\pi\alpha)^{-1/2}F_le^{ik_F s} e^{i\sqrt{\nu}\phi_l(s)~}$, where $\alpha$ is a cutoff. Both the above commutation relation and the (unitary) Klein factors $F_l$ guarantee fractional exchange statistics provided that: $$F_lF_{l'}=e^{-i\pi p_{ll'}\nu}F_{l'}F_l \label{klein}$$ with $3$ possible statistical phases $p_{ll'}=-p_{l'l}=\pm (1-\delta_{ll'})$. In particular, this insures that the fields $\psi_l$ anti-commute for $\nu=1$ and commute for $\nu\rightarrow 0$. Tunneling of quasi-particles occurs at two locations $s=\pm a/2$ on edge $3$, and at $s=0$ on edges $1$ and $2$. A non–equilibrium situation is achieved by imposing a bias $\hbar\omega_l/e^*=-\partial\chi_l/\partial t$ ($l=1,2$) between $3$ and $l$, where $\chi_l$ denotes the gauge parameter which appears in the tunneling Hamiltonian $H_B=H_{B1}+H_{B2}$, with: $$H_{Bl}=\Gamma_le^{-ie^*\chi_l/\hbar c}\psi_l^\dagger(0)\psi_3 \left((-1)^la/2\right)+H.c. ~.\label{tunneling Hamiltonian}$$ $H_{B1}$ and $H_{B2}$ are required to commute [@nayak], which imposes the constraint $p_{12}+p_{23}+p_{31}=1$ on the statistical phases of Eq. (\[klein\]). Non–equilibrium averages are extracted from the Keldysh partition function. The perturbation theory is analogous to the Coulomb gas models which have been proposed to study transport in Luttinger models [@chamon_noise]. Here two contours $K_l$ ($l=1,2$) contain $m_l$ (even) charges ($\pm$) which account for the quasi-particle transfer to/from edge $3$ to $l$ at time $t_{lk}$ attached to the upper/lower branch of $K_l$. Expanding the exponential, the partition function reads: $$\begin{aligned} \label{partition} &&Z_K=\sum_{m_1,m_2=0}^\infty \left({-i\over \hbar}\right)^{m_1+m_2}\sum_{\mathcal{C}} \int\! {dt_{11}...dt_{1m_1}dt_{21}...dt_{2m_2} \over m_1! m_2!} \nonumber\\ && \times\langle T_K H_{B1}(t_{11})...H_{B1}(t_{1m_1}) H_{B2}(t_{21})...H_{B2}(t_{2m_2})\rangle_{\mathcal{C}} ~.\end{aligned}$$ where the subscript $\mathcal{C}$ identifies charge configurations. Relevant charge configurations which contribute to lowest order to the noise correlations are depicted in Fig. \[fig1\]b): charge neutrality is imposed on each contour. The terms in the nonequilibrium average Eq.
(\[partition\]) can in general be decoupled into four contributions: one Keldysh ordered product of bosonic fields for each edge, which dynamics are specified by the Green’s function of the chiral boson fields [@chamon_freed] $G_{\eta\eta'}(s,t)$ (with contour branch labels $\eta,\eta'=\pm$), and a fourth contribution which arises from the Klein factors, which have no dynamics of their own, yet which are essential in order to specify the tunneling operators. The quasi-particle current operator between leads $3$ and $l=1,2$ is $I_l= -c\,\partial H_B/\partial \chi_l$. The symmetrized real time current–current correlator between edges $1$ and $2$ which is used to compute the noise contains two time arguments, which are assigned to different branches of the Keldysh contour (thus the notation $\chi_l^\eta(t)$ below). Performing functional derivatives on $Z_K$, $$S_{12}(t-t')=-(\hbar c)^2\sum_{\eta=\pm} {\delta^2 Z_K\over \delta \chi_1^\eta(t) \delta\chi_2^{-\eta}(t')}\biggl|_{\omega_{1,2}=\omega_0}~, \label{Keldysh time correlator}$$ assuming equal biases on $1$ and $2$. The leading term in Eq. (\[Keldysh time correlator\]) is of fourth order in the tunneling amplitudes, corresponding to $m_1=m_2=2$ in Eq. (\[partition\]), in contrast to the leading contribution to the individual currents and noises (second order “dipole” contributions). Exploiting the symmetry property of the Green’s function $G_{-\eta,-\eta'}(s,t)=\left[G_{\eta,\eta'}(s,t)\right]^*$: $$\begin{aligned} \label{real time Sbis} S_{12}(t)&=&4\frac{|e^*\tau_0\Gamma_1\Gamma_2|^2} {(h\alpha)^4}Re \int_{-\infty}^{\infty}\!\!dt_1\int_{-\infty}^{\infty}\!\!dt_2 \sum_{\epsilon,\eta_1,\eta_2=\pm}\epsilon\eta_1\eta_2 \\\nonumber && \times \cos\left(\omega_0(t_1+\epsilon t_2)\right) e^{2\nu\left[G_{+,\eta_1}(0,t_1)+G_{-,\eta_2}(0,t_2)\right]} \\\nonumber && \times \frac{e^{\nu\epsilon\left[\widetilde{G}_{+\eta_2}(-a,t+t_2) +\widetilde{G}_{\eta_1,-}(-a,t-t_1)\right]}}{e^{\nu\epsilon \left[\widetilde{G}_{+-}(-a,t) +\widetilde{G}_{\eta_1\eta_2}(-a,t+t_2-t_1)\right]}}~,\end{aligned}$$ where $\epsilon$ represents the product of the two charge transfer processes: $\epsilon=-/+$ when the quasiparticles tunnel in the same/opposite direction (left/right hand side of Fig. \[fig1\]b). In Eq. \[real time Sbis\], the Green’s function for edge $3$, which mediates the coupling between $K_1$ and $K_2$, has been translated due to the Klein factors: $\widetilde{G}_{\eta\eta'}(-a,t)=G_{\eta\eta'} (-a,t)+i\pi/4\left[(\eta+\eta')sgn(t)-\eta+\eta'\right]$. The integrand in the double integral in Eq. (\[real time Sbis\]) for $\nu<1$ decays slowly with both time arguments. At large times, the last factor in the integrand is equal to $1$, thus corresponding to the product of the current averages $2\langle I_1\rangle\langle I_2\rangle$. Absolute convergence is obtained for $\nu>1/2$ from the power law decay in time. For $\nu<1/2$ convergence is due to the oscillatory terms. Zero temperature, $a=0$, and a symmetric bias $\omega_{1,2}=\omega_0$ are chosen in order to enhance statistical signatures. Experimentally this implies that the two tunneling paths lie within a few Fermi wave-lengths from each other. The overlap between quasiparticles in edge $3$ is then more prominent. Here, only $\eta_2=-\eta_1=1$ is retained because first, it provides the large time behavior and second, it corresponds to the contribution of the zero frequency noise correlations (to be computed later on). 
Using the explicit expressions of the Green’s function at equal abscissa, $G_{\eta\eta'}(t)=-\ln[1+it(\eta\theta(t)-\eta'\theta(-t))/\tau_0]$ and rescaling the times by the short time cutoff $\tau_0\sim \alpha/v_F$, this contribution reads: $$\begin{aligned} \label{real time Sbisbis} &&S_{12}^{-+}(t)=-4\frac{|e^*\tau_0\Gamma_1\Gamma_2|^2} {(h\alpha)^4}Re \int_{-\infty}^{\infty}\!\!dt_1\int_{-\infty}^{\infty}\!\!dt_2 \sum_{\epsilon=\pm}\epsilon \nonumber\\ && \times \cos\left[\omega_0(t_1+\epsilon t_2)\right] \left(1-it_1\right)^{-2\nu} \left(1+it_2\right)^{-2\nu} (1-it)^{\epsilon\nu}\nonumber\\ && \times \left[1+i(t+t_2-t_1)\right]^{\epsilon\nu} \left(1+i|t+t_2|\right)^{-\epsilon\nu} \left(1-i|t_1-t|\right)^{-\epsilon\nu}\nonumber\\ && \times \exp\left(i\epsilon\frac{\pi}2\nu\left[sgn(t+t_2)-sgn(t-t_1)\right]\right).\end{aligned}$$ A leading contribution to $S_{12}^{-+}(t)$ is plotted in Fig. \[fig2\], as well as the excess noise at $\nu=1$ for comparison. The latter oscillates with a frequency $\omega_0$, and decays as $t^{-2}$. $S_{12}^{-+}(t)$ scales as $|\omega_0|^{4\nu-2}f(\omega_0 t)$, with $f(x)$ an oscillatory function which decays at least as $x^{-2\nu}$, thus a slower decay than that of electrons. At large times, the frequency of the oscillations stabilizes as $\omega_0=e^*V/\hbar$. The result in Eq. (\[real time Sbis\]) is now integrated over $t$ after subtracting the average current products: $\tilde{S}_{12}(\omega)\equiv \int dt\, e^{-i\omega t}[S_{12}(t)-2\langle I_1\rangle \langle I_2\rangle]$. The sign and magnitude of the $\omega=0$ correlations tell us the tendency for the quasiparticle to exhibit bunching or antibunching. Turning now to the charge configurations of Fig. \[fig1\]b, at zero temperature only $\epsilon=-1$ in $\tilde{S}_{12}(0)$ contributes, which gives the information that an “exclusion principle” prohibits the excitations to be transfered from the collectors to the emitter. The zero frequency noise correlations have the general form: $$\tilde{S}_{12}(0)= (e^{*2}|\omega_0|/ \pi) T_1^rT_2^r R(\nu) \label{general correlation}$$ where the renormalized transmission probabilities are $T_l^r=(\tau_0|\omega_0|)^{2\nu-2}\left[\tau_0 \Gamma_l/\hbar \alpha\right]^2/\Gamma(2\nu)$, and the dimensionless function $R(\nu)$ characterizes the statistical correlations. At $\nu=1$, it is shown explicitly that $R(1)=-1$ using contour integration, so that $\tilde{S}_{12}$ coincides exactly with the scattering theory result [@martin_landauer]. This issue represents a crucial test of the implementation of the Klein factors. Moreover, for arbitrary $\nu$, $R(\nu)$ could in principle be directly measured in an experiment. Indeed one can rescale the noise correlation $\tilde{S}_{12}$ by the individual shot noises $\tilde{S}_l\simeq 2e^*\langle I_l\rangle$ or equivalently (at this order) by the individual currents: $R(\nu)=|\omega_0|\tilde{S}_{12}/[4\pi\langle I_1\rangle\langle I_2\rangle]$. A central result of this letter is the analytical expression for the function $R(\nu)$ in Eq. (\[general correlation\]). It is obtained by performing a change of variables of the $3$ time integrals in $\tilde{S}_{12}(0)$, neglecting the short-time cutoff in the diagonal elements of the Keldysh Green’s function in Eq. (\[real time Sbis\]). This procedure is consistent with the limit of small biases which is assumed here $|\omega_0|\tau_0\ll 1$, but strictly speaking it is valid in the range $1/2<\nu\leq 1$. 
One then obtains the asymptotic series: $$\begin{aligned} \label{ratio result} R(\nu)&=&{-\sin(\pi\nu)\Gamma^2(2\nu) \over 2\sqrt{\pi}\Gamma(2\nu-1)\Gamma(2\nu-1/2)\Gamma(-\nu)} \nonumber\\ &&\sum_{n=0}^{\infty}{\Gamma(n-\nu)\Gamma(n+1-\nu)\Gamma(\nu+n-1/2) \over n! \Gamma(n+\nu)\Gamma(n+3/2-\nu)}~,\end{aligned}$$ which converges as $n^{-\nu-2}$. Here, $\nu$ can be treated as a continuous variable, whereas it has a physical meaning when it is a Laughlin fraction $1/m$ ($m$ odd). At first glance the only physical filling factor which one can reach with this series is $\nu=1$. Yet, it is possible to extend $R(\nu)$ to the range $[0,1/2]$: the zero frequency noise correlations do not contain any true divergence (it would require the introduction of an infrared cutoff with a physical origin), but the integration method which is used here breaks down at $\nu=1/2$, a feature which can already be seen in computing the average product $\langle I_1\rangle\langle I_2\rangle$ in a similar manner (although $\langle I_1\rangle$ converges for all $\nu$). It is still possible to extract a meaningful result for $\tilde{S}_{12}$ from this integration procedure: $R(\nu)$ having no poles in $[1/2,1]$, it can be analytically continued to the interval $[0,1/2]$. Indeed, note that the terms of the series of Eq. (\[ratio result\]) are still well defined for $\nu<1/2$). The continuation procedure could be jeopardized if other tunneling operators generated by the renormalization group (RG) procedure happened to be more relevant at $\nu=1/2$. Consider a higher order tunneling operator $V_{\vec{n}} e^{i\sqrt{\nu}\vec{n}.\vec{\phi}}$, where $\vec{n}=(n_1,n_2,n_3)$ ($n_l$ integer) satisfies quasi-particle conservation $\sum_{l=1}^3 n_l=0$ and $\vec{\phi}=(\phi_1,\phi_2,\phi_3)$ contains the fields of the three edges. The RG flow is then : $${dV_{\vec{n}}\over dl} =\left( 1-{\nu\over 2}\sum_{l=1}^3n_l^2 \right) V_{\vec{n}}~. \label{flow}$$ The bare tunneling terms are relevant at $\nu<1$, and always dominates all other $V_{\vec{n}}$, which become relevant below $\nu=1/3$ at most. For $\nu\simeq 1$, a check is obtained by direct numerical integration of $S_{12}(t)$. The comparison between the series solution of Eq. (\[ratio result\]) and the numerical data shows a fair agreement for $0.7\leq \nu \leq 1$. Starting from the IQHE and decreasing $\nu$ (Fig. \[fig3\]), the noise correlations between the two collector edges are reduced in amplitude at any $\nu=1/m$. When a quasi-particle is detected, in $1$, one is less likely to observe a depletion of quasi-particles in $2$ than in the case of noninteracting fermions. The reduction of the (normalized) noise correlations constitutes a direct prediction of the statistical features associated with fractional quasiparticles in transport experiments, and should be detectable at $\nu=1/3$. At $\nu=1/4$, $\tilde{S}_{12}$ vanishes and becomes positive for lower physical filling factors ($1/5,1/7,...$), which is reminiscent of bosons bunching up together. Positive correlations have been predicted in superconductor–normal metal junctions [@torres_martin], and bosonic behavior was attributed to the presence of Cooper pairs – effective bosons – leaking on the normal side. Here the positive correlations can be either attributed to the fact that the fractional statistics are bosonic at $\nu\rightarrow 0$, or to the eventual presence of composite bosons resulting from attachment of an odd number of flux tubes [@zang_girvin]. 
On the one hand, one is dealing with a fermionic system where [*large*]{} negative correlations are the norm. On the other hand, the presence of an external magnetic field and the (resulting) collective modes of the edge excitations favor bosonic behavior. The competition between these two tendencies yields a statistical signature which is close to zero – analogous to the noise correlations of “classical” particles. Independent of this sign issue, for $\nu\leq 1/3$, the amplitude of the normalized correlations is strongly reduced and this effect could be checked experimentally for Laughlin fractions. The tendency for the noise correlation ratio to be reduced compared to its non interacting value is consistent with the existing data for two-terminal devices [@saminad], as a connection exists between the two types of measurements [@martin_landauer; @torres_martin]. There, shot noise suppression was observed to be weaker than that of bare electrons, which then multiplies the shot noise by $1-T$ [@martin_landauer; @reznikov_prl], the reflection amplitude. However, a qualitative analysis of noise reduction in this situation is rendered difficult because of the nonlinear current–voltage characteristics. In contrast, an HBT experiment constitutes a direct and crucial test of the Luttinger liquid models used to describe the edge excitations in the FQHE, as it addresses the role of fractional statistics in transport experiments. These experiments could also probe hierachical fractions of the FQHE, as well as non-chiral Luttinger systems such as carbon nanotubes. We thank H. Saleur for pointing out the relevance of the Klein factors. Discussions with D.C. Glattli and G. Lesovik are gratefully acknowledged. Two of us (I.S and T.M) are deeply indebted to their mentors R. Landauer and H. J. Schulz. [10]{} R. B. Laughlin, Rev. Mod. Phys. [**71**]{}, 863 (1999); H.L. Stormer, [*ibid*]{}, 875 (1999). L. Saminadayar, D. C. Glattli, Y. Jin, and B. Etienne, Phys. Rev. Lett. [**79**]{}, 2526 (1997). , Nature [**389**]{}, 162 (1997). C. L. Kane and M. P. A. Fisher, Phys. Rev. Lett. [**72**]{}, 724 (1994); P. Fendley, A. W. W. Ludwig, and H. Saleur, [*ibid.*]{} [**75**]{}, 2196 (1995). C. de C. Chamon, D. E. Freed, and X. G. Wen, Phys. Rev. B [**51**]{}, 2363 ([1995-II]{}). X.G. Wen, Int. J. Mod. Phys. B [**6**]{}, 1711 (1992); Adv. Phys. [**44**]{}, 405 (1995). M. Henny [*et al.*]{}, Science [**296**]{}, 284 (1999). W. Oliver [*et al.*]{}, Science [**296**]{}, 299 (1999). Hanbury-Brown and Q. R. Twiss, Nature [**177**]{}, 27 (1956). T. Martin and R. Landauer, Phys. Rev. B [**45**]{}, 1742 (1992); M. [Büttiker]{}, Phys. Rev. Lett. [**65**]{}, 2901 (1990); Phys. Rev. B [**46**]{}, 12485 (1992). S. Isakov, T. Martin, and S. Ouvry, Phys. Rev. Lett. [**83**]{}, 580 (1999). J. Torrès and T. Martin, Eur. Phys. J. B [**12**]{}, 319 (1999). M. Geller and D. Loss, Phys. Rev. B [**56**]{}, 9692 (1997). C. Nayak [*et al.*]{}, Phys. Rev. B [**59**]{}, 15694 (1999). I. Safi, P. Devillard and T. Martin, [*in preparation*]{}. C. Chamon and D. Freed, Phys. Rev. B [**60**]{}, 1842 (1999). S. C. Zang [*et al.*]{}, Phys. Rev. Lett. [**62**]{}, 82 (1988); S. M. Girvin and A. H. MacDonald, [*ibid.*]{} [**58**]{}, 1252 (1987). M. Reznikov [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 3340 (1995); A. Kumar [*et al.*]{}, [*ibid.*]{} [**76**]{}, 2778 (1996).
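As a numerical companion to Eq. (\[ratio result\]), the sketch below (added here for illustration, not part of the original letter) sums the asymptotic series for $R(\nu)$ by updating the ratio of consecutive terms, which follows from the recurrence $\Gamma(z+1)=z\Gamma(z)$ and keeps the individual Gamma factors from overflowing. The sampled filling factors bracket the behavior discussed in the text ($R(1)=-1$, strongly reduced negative correlations at $\nu=1/3$, a zero at $\nu=1/4$, positive correlations for $\nu\leq 1/5$); the point $\nu=1-10^{-3}$ stands in for $\nu=1$ to avoid the pole of $\Gamma(-\nu)$.

```python
# Illustrative evaluation of R(nu) from Eq. (ratio result).
# t_n = G(n-v) G(n+1-v) G(v+n-1/2) / [n! G(n+v) G(n+3/2-v)] is summed via the
# term ratio t_{n+1}/t_n = (n-v)(n+1-v)(n+v-1/2) / [(n+1)(n+v)(n+3/2-v)].
from math import gamma, pi, sin, sqrt

def R(nu, n_terms=5000):
    prefactor = -sin(pi * nu) * gamma(2 * nu) ** 2 / (
        2 * sqrt(pi) * gamma(2 * nu - 1) * gamma(2 * nu - 0.5) * gamma(-nu)
    )
    term = gamma(-nu) * gamma(1 - nu) * gamma(nu - 0.5) / (gamma(nu) * gamma(1.5 - nu))
    total = term
    for n in range(n_terms):
        term *= (n - nu) * (n + 1 - nu) * (n + nu - 0.5) / ((n + 1) * (n + nu) * (n + 1.5 - nu))
        total += term
    return prefactor * total

# nu = 1 - 1e-3 stands in for the integer quantum Hall point; 1/3 and 1/5 are the
# Laughlin fractions discussed in the text, and 1/4 is where the correlations vanish.
for nu in (1 - 1e-3, 1/3, 1/4, 1/5):
    print(f"nu = {nu:.4f}   R(nu) = {R(nu):+.4f}")
```

Since the terms decay only as $n^{-\nu-2}$, truncating after a few thousand terms is adequate for a qualitative picture but not for high-precision values of $R(\nu)$.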
--- abstract: 'We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.' author: - 'Shai Shalev-Shwartz[^1]' - 'Tong Zhang[^2] [^3]' bibliography: - 'curRefs.bib' title: Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization --- Introduction ============ We consider the following generic optimization problem associated with regularized loss minimization of linear predictors: Let $X_1,\ldots,X_n$ be matrices in ${\mathbb{R}}^{d \times k}$ (referred to as instances), let $\phi_1,\ldots,\phi_n$ be a sequence of vector convex functions defined on ${\mathbb{R}}^k$ (referred to as loss functions), let $g(\cdot)$ be a convex function defined on ${\mathbb{R}}^d$ (referred to as a regularizer), and let $\lambda \geq 0$ (referred to as a regularization parameter). Our goal is to solve: $$\label{eqn:PrimalProblem} \min_{w \in {\mathbb{R}}^d} P(w) ~~~~\textrm{where}~~~~ P(w) = \left[ \frac{1}{n} \sum_{i=1}^n \phi_i( X_i^\top w) + \lambda g(w) \right] .$$ For example, in ridge regression the regularizer is $g(w) = \frac{1}{2} \|w\|_2^2$, the instances are column vectors, and for every $i$ the $i$’th loss function is $\phi_i(a) = \frac{1}{2} (a-y_i)^2$, for some scalar $y_i$. Let $w^* = \operatorname*{argmin}_w P(w)$ (we will later make assumptions that imply that $w^*$ is unique). We say that $w$ is $\epsilon$-accurate if $P(w) - P(w^*) \le \epsilon$. Our main result is a new algorithm for solving . If $g$ is $1$-strongly convex and each $\phi_i$ is $(1/\gamma)$-smooth (meaning that its gradient is $(1/\gamma)$-Lipschitz), then our algorithm finds, with probability of at least $1-\delta$, an $\epsilon$-accurate solution to in time $$\begin{aligned} &O\left(d\left(n+ \min\left\{\frac{1}{\lambda\,\gamma},\sqrt{\frac{n}{\lambda\,\gamma}}\right\}\right)\log(1/\epsilon)\, \log(1/\delta)\,\max\{1,\log^2(1/(\lambda\,\gamma\,n))\}\right) \\ &~~=~~ \tilde{O}\left(d\left(n+ \min\left\{\frac{1}{\lambda\,\gamma},\sqrt{\frac{n}{\lambda\,\gamma}}\right\}\right)\right) ~.\end{aligned}$$ This applies, for example, to ridge regression and to logistic regression with $L_2$ regularization. The $O$ notation hides constants terms and the $\tilde{O}$ notation hides constants and logarithmic terms. We make these explicit in the formal statement of our theorems. Intuitively, we can think of $\frac{1}{\lambda \gamma}$ as the condition number of the problem. If the condition number is $O(n)$ then our runtime becomes $\tilde{O}(dn)$. This means that the runtime is nearly linear in the data size. This matches the recent result of @ShalevZh2013 [@LSB12-sgdexp], but our setting is significantly more general. When the condition number is much larger than $n$, our runtime becomes $\tilde{O}(d \sqrt{\frac{n}{\lambda\,\gamma}})$. This significantly improves over the result of [@ShalevZh2013; @LSB12-sgdexp]. It also significantly improves over the runtime of accelerated gradient descent due to @nesterov2007gradient, which is $\tilde{O}(d\,n\,\sqrt{\frac{1}{\lambda\,\gamma}})$. 
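To get a feel for the gap between these bounds, the toy computation below (an illustration added here, with arbitrary parameter values rather than anything taken from the paper's experiments) evaluates the factors multiplying $d$ in the three runtimes for a well-conditioned and an ill-conditioned case.

```python
from math import sqrt

def iteration_factors(n, lam, gamma):
    """Factors multiplying d in the runtime bounds (constants and logs dropped)."""
    cond = 1.0 / (lam * gamma)                      # condition number 1/(lambda*gamma)
    sdca = n + cond                                 # plain SDCA: n + 1/(lambda*gamma)
    agd = n * sqrt(cond)                            # accelerated gradient descent
    acc_prox_sdca = n + min(cond, sqrt(n * cond))   # this paper
    return cond, sdca, agd, acc_prox_sdca

# n = 10^5 examples; gamma = 1 (smooth loss); lambda chosen so that the condition
# number is either comparable to n or much larger than n.
for lam in (1e-5, 1e-8):
    cond, sdca, agd, acc = iteration_factors(n=10**5, lam=lam, gamma=1.0)
    print(f"lambda={lam:.0e}: cond={cond:.0e}, SDCA~{sdca:.1e}, AGD~{agd:.1e}, accelerated~{acc:.1e}")
```

When the condition number is comparable to $n$, the plain and accelerated bounds coincide up to constants, matching the discussion above; in the ill-conditioned case only the accelerated bound stays close to linear in $n$.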
By applying a smoothing technique to $\phi_i$, we also derive a method that finds an $\epsilon$-accurate solution to (\[eqn:PrimalProblem\]) assuming that each $\phi_i$ is $O(1)$-Lipschitz, and obtain the runtime $$\tilde{O}\left(d\left(n+ \min\left\{\frac{1}{\lambda\,\epsilon},\sqrt{\frac{n}{\lambda\,\epsilon}}\right\}\right)\right) ~.$$ This applies, for example, to SVM with the hinge-loss. It significantly improves over the rate $\frac{d}{\lambda \epsilon}$ of SGD (e.g. [@ShalevSiSr07]), when $\frac{1}{\lambda \epsilon} \gg n$. We can also apply our results to non-strongly convex regularizers (such as the $L_1$ norm regularizer), or to non-regularized problems, by adding a slight $L_2$ regularization. For example, for $L_1$ regularized problems, and assuming that each $\phi_i$ is $(1/\gamma)$-smooth, we obtain the runtime of $$\tilde{O}\left(d\left(n+ \min\left\{\frac{1}{\epsilon\,\gamma},\sqrt{\frac{n}{\epsilon\,\gamma}}\right\}\right)\right) ~.$$ This applies, for example, to the Lasso problem, in which the goal is to minimize the squared loss plus an $L_1$ regularization term. To put our results in context, in the table below we specify the runtime of various algorithms (while ignoring constants and logarithmic terms) for three key machine learning applications: SVM, in which $\phi_i(a) = \max\{0,1-a\}$ and $g(w) = \frac{1}{2} \|w\|_2^2$; Lasso, in which $\phi_i(a) = \frac{1}{2}(a-y_i)^2$ and $g(w) = \sigma \|w\|_1$; and Ridge Regression, in which $\phi_i(a) = \frac{1}{2}(a-y_i)^2$ and $g(w) = \frac{1}{2} \|w\|_2^2$. Additional applications, and a more detailed runtime comparison to previous work, are given in [Section \[sec:applications\]]{}. In the table below, SGD stands for Stochastic Gradient Descent, and AGD stands for Accelerated Gradient Descent.

  ---------------------------------------------------------------------------------------------------------------------------------------------------
  Problem            Algorithm                                                                  Runtime
  ------------------ -------------------------------------------------------------------------- -----------------------------------------------------
  SVM                SGD [@ShalevSiSr07]                                                        $\frac{d}{\lambda \epsilon}$

                     AGD [@nesterov2005smooth]                                                  $dn \sqrt{\frac{1}{\lambda\,\epsilon}}$

                     **This paper**                                                             $d\left(n + \min\{\frac{1}{\lambda\,\epsilon},\sqrt{\frac{n}{\lambda \epsilon}}\}\right)$

  Lasso              SGD and variants (e.g. [@Zhang02-dual; @Xiao10; @shalev2011stochastic])    $\frac{d}{\epsilon^2}$

                     Stochastic Coordinate Descent [@ShalevTe09; @Nesterov10]                   $\frac{dn}{\epsilon}$

                     FISTA [@nesterov2007gradient; @beck2009fast]                               $dn \sqrt{\frac{1}{\epsilon}}$

                     **This paper**                                                             $d\left(n + \min\{\frac{1}{\epsilon},\sqrt{\frac{n}{\epsilon}}\}\right)$

  Ridge Regression   Exact                                                                      $d^2n + d^3$

                     SGD [@LSB12-sgdexp], SDCA [@ShalevZh2013]                                  $d\left(n + \frac{1}{\lambda}\right)$

                     AGD [@nesterov2007gradient]                                                $dn \sqrt{\frac{1}{\lambda}}$

                     **This paper**                                                             $d\left(n + \min\{\frac{1}{\lambda},\sqrt{\frac{n}{\lambda}}\}\right)$
  ---------------------------------------------------------------------------------------------------------------------------------------------------

#### Technical contribution: Our algorithm combines two ideas. The first is a proximal version of stochastic dual coordinate ascent (SDCA).[^4] In particular, we generalize the recent analysis of [@ShalevZh2013] in two directions. First, we allow the regularizer, $g$, to be a general strongly convex function (and not necessarily the squared Euclidean norm). This allows us to consider non-smooth regularization functions, such as the $L_1$ regularization.
Second, we allow the loss functions, $\phi_i$, to be vector valued functions which are smooth (or Lipschitz) with respect to a general norm. This generalization is useful in multiclass applications. As in [@ShalevZh2013], the runtime of this procedure is $\tilde{O}\left(d\left(n + \frac{1}{\lambda \gamma}\right)\right)$. This would be a nearly linear time (in the size of the data) if $\frac{1}{\lambda \gamma} = O(n)$. Our second idea deals with the case $\frac{1}{\lambda \gamma} \gg n$ by iteratively approximating the objective function $P$ with objective functions that have a stronger regularization. In particular, each iteration of our acceleration procedure involves approximate minimization of $P(w) + \frac{\kappa}{2} \|w-y\|_2^2$, with respect to $w$, where $y$ is a vector obtained from previous iterates and $\kappa$ is order of $1/(\gamma n)$. The idea is that the addition of the relatively strong regularization makes the runtime of our proximal stochastic dual coordinate ascent procedure be $\tilde{O}(dn)$. And, with a proper choice of $y$ at each iteration, we show that the sequence of solutions of the problems with the added regularization converge to the minimum of $P$ after $\sqrt{\frac{1}{\lambda \gamma n}}$ iterations. This yields the overall runtime of $d \sqrt{\frac{n}{\lambda \gamma}}$. #### Additional related work: As mentioned before, our first contribution is a proximal version of the stochastic dual coordinate ascent method and extension of the analysis given in @ShalevZh2013. Stochastic dual coordinate ascent has also been studied in @CollinsGlKoCaBa08 but in more restricted settings than the general problem considered in this paper. One can also apply the analysis of stochastic coordinate descent methods given in @richtarik2012iteration on the dual problem. However, here we are interested in understanding the primal sub-optimality, hence an analysis which only applies to the dual problem is not sufficient. The generality of our approach allows us to apply it for multiclass prediction problems. We discuss this in detail later on in [Section \[sec:applications\]]{}. Recently, [@lacoste2012stochastic] derived a stochastic coordinate ascent for structural SVM based on the Frank-Wolfe algorithm. Although with different motivations, for the special case of multiclass problems with the hinge-loss, their algorithm ends up to be the same as our proximal dual ascent algorithm (with the same rate). Our approach allows to accelerate the method and obtain an even faster rate. The proof of our acceleration method adapts Nesterov’s estimation sequence technique, studied in @OlGlNe11 [@ScRoBa11], to allow approximate and stochastic proximal mapping. See also [@baes2009estimate; @d2008smooth]. In particular, it relies on similar ideas as in Proposition 4 of [@ScRoBa11]. However, our specific requirement is different, and the proof presented here is different and significantly simpler than that of [@ScRoBa11]. There have been several attempts to accelerate stochastic optimization algorithms. See for example [@hu2009accelerated; @ghadimi2012optimal; @cotter2011better] and the references therein. However, the runtime of these methods have a polynomial dependence on $1/\epsilon$ even if $\phi_i$ are smooth and $g$ is $\lambda$-strongly convex, as opposed to the logarithmic dependence on $1/\epsilon$ obtained here. As in [@LSB12-sgdexp; @ShalevZh2013], we avoid the polynomial dependence on $1/\epsilon$ by allowing more than a single pass over the data. 
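Before setting up the notation, it may help to see the template of the objective instantiated for the three applications in the table above. The snippet below is purely illustrative (scalar losses, so the instances $X_i$ are column vectors with $k=1$; for the SVM row the label is folded into the instance so that $\phi_i(a)=\max\{0,1-a\}$, and $\sigma$ is folded into $\lambda$ for the Lasso row); it is not part of the algorithms analyzed in the paper.

```python
import numpy as np

def P(w, X, y, lam, loss, g):
    """P(w) = (1/n) * sum_i phi_i(X_i^T w) + lam * g(w), for scalar losses."""
    return np.mean(loss(X @ w, y)) + lam * g(w)

hinge   = lambda a, y: np.maximum(0.0, 1.0 - y * a)   # SVM (label folded into the instance)
squared = lambda a, y: 0.5 * (a - y) ** 2             # Lasso / ridge regression
l2      = lambda w: 0.5 * np.dot(w, w)                # g(w) = ||w||_2^2 / 2
l1      = lambda w: np.sum(np.abs(w))                 # g(w) = ||w||_1

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = np.sign(X @ np.ones(5) + 0.1 * rng.standard_normal(50))
w = np.zeros(5)
print("SVM   :", P(w, X, y, 0.1, hinge, l2))
print("Lasso :", P(w, X, y, 0.1, squared, l1))
print("Ridge :", P(w, X, y, 0.1, squared, l2))
```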
Preliminaries ============= All the functions we consider in this paper are proper convex functions over a Euclidean space. We use ${\mathbb{R}}$ to denote the set of real numbers and to simplify our notation, when we use ${\mathbb{R}}$ to denote the range of a function $f$ we in fact allow $f$ to output the value $+\infty$. Given a function $f : {\mathbb{R}}^d \to {\mathbb{R}}$ we denote its **conjugate** function by $$f^*(y) = \sup_x~ [ y^\top x - f(x)] ~.$$ Given a norm $\|\cdot\|_P$ we denote the **dual norm** by $\|\cdot\|_D$ where $$\|y\|_D = \sup_{x:\|x\|_P=1} y^\top x.$$ We use $\|\cdot\|$ or $\|\cdot\|_2$ to denote the $L_2$ norm, $\|x\| = x^\top x$. We also use $\|x\|_1 = \sum_i |x_i|$ and $\|x\|_\infty = \max_i |x_i|$. The **operator norm** of a matrix $X$ with respect to norms $\|\cdot\|_P,\|\cdot\|_{P'}$ is defined as $$\|X\|_{P \to P'} = \sup_{u: \|u\|_P =1} \|X u\|_{P'} ~.$$ A function $f: {\mathbb{R}}^k \to {\mathbb{R}}^d$ is $L$-**Lipschitz** with respect to a norm $\|\cdot\|_P$, whose dual norm is $\|\cdot\|_D$, if for all $a, b \in {\mathbb{R}}^d$, we have $$\|f(a)- f(b)\|_D \leq L\,\|a-b\|_P .$$ A function $f: {\mathbb{R}}^d \to {\mathbb{R}}$ is $(1/\gamma)$-**smooth** with respect to a norm $\|\cdot\|_P$ if it is differentiable and its gradient is $(1/\gamma)$-Lipschitz with respect to $\|\cdot\|_P$. An equivalent condition is that for all $a, b \in {\mathbb{R}}^d$, we have $$f(a) \leq f(b) + \nabla f(b)^\top (a-b) + \frac{1}{2\gamma} \|a-b\|_{P}^2 .$$ A function $f : {\mathbb{R}}^d \to {\mathbb{R}}$ is $\gamma$-**strongly convex** with respect to $\|\cdot\|_{P}$ if $$f(w+ v) \geq f(w) + \nabla f(w)^\top v + \frac{\gamma}{2} \|v\|_{P}^2 .$$ It is well known that $f$ is $\gamma$-strongly convex with respect to $\|\cdot\|_P$ if and only if $f^*$ is $(1/\gamma)$-smooth with respect to the dual norm, $\|\cdot\|_D$. The **dual problem** of is $$\label{eqn:DualProblem} \max_{\alpha \in {\mathbb{R}}^{k \times n}} D(\alpha) ~~~\textrm{where}~~~ D(\alpha) = \left[ \frac{1}{n} \sum_{i=1}^n -\phi_i^*(-\alpha_i) - \lambda g^*\left( \tfrac{1}{\lambda n} \sum_{i=1}^n X_i \alpha_i \right) \right] ~,$$ where $\alpha_i$ is the $i$’th column of the matrix $\alpha$, which forms a vector in ${\mathbb{R}}^k$. We will assume that $g$ is strongly convex which implies that $g^*(\cdot)$ is continuous differentiable. If we define $$\label{eqn:walpha} v(\alpha)= \frac{1}{\lambda n} \sum_{i=1}^n X_i \alpha_i \qquad \text{and} \qquad w(\alpha) = \nabla g^*(v(\alpha)),$$ then it is known that $w(\alpha^*)=w^*$, where $\alpha^*$ is an optimal solution of (\[eqn:DualProblem\]). It is also known that $P(w^*)=D(\alpha^*)$ which immediately implies that for all $w$ and $\alpha$, we have $P(w) \geq D(\alpha)$, and hence the **duality gap** defined as $$P(w(\alpha))-D(\alpha)$$ can be regarded as an upper bound on both the **primal sub-optimality**, $P(w(\alpha))-P(w^*)$, and on the **dual sub-optimality**, $D(\alpha^*)-D(\alpha)$. Main Results ============ In this section we describe our algorithms and their analysis. We start in [Section \[sec:rdca\]]{} with a description of our proximal stochastic dual coordinate ascent procedure (Prox-SDCA). Then, in [Section \[sec:accelerate\]]{} we show how to accelerate the method by calling Prox-SDCA on a sequence of problems with a strong regularization. Throughout the first two sections we assume that the loss functions are smooth. Finally, we discuss the case of Lipschitz loss functions in [Section \[sec:Lipschitz\]]{}. 
The proofs of the main acceleration theorem ([Theorem \[thm:acceleratedThmMain\]]{}) is given in [Section \[sec:acceleratedThmMain\]]{}. The rest of the proofs are provided in the appendix. Proximal Stochastic Dual Coordinate Ascent {#sec:rdca} ------------------------------------------ We now describe our proximal stochastic dual coordinate ascent procedure for solving . Our results in this subsection holds for $g$ being a $1$-strongly convex function with respect to some norm $\|\cdot\|_{P'}$ and every $\phi_i$ being a $(1/\gamma)$-smooth function with respect to some other norm $\|\cdot\|_P$. The corresponding dual norms are denoted by $\|\cdot\|_{D'}$ and $\|\cdot\|_D$ respectively. The dual objective in has a different dual vector associated with each example in the training set. At each iteration of dual coordinate ascent we only allow to change the $i$’th column of $\alpha$, while the rest of the dual vectors are kept intact. We focus on a *randomized* version of dual coordinate ascent, in which at each round we choose which dual vector to update uniformly at random. At step $t$, let $v^{(t-1)} = (\lambda n)^{-1} \sum_i X_i \alpha_i^{(t-1)}$ and let $w^{(t-1)} = \nabla g^*(v^{(t-1)})$. We will update the $i$-th dual variable $\alpha_i^{(t)} = \alpha_i^{(t-1)} + \Delta \alpha_i$, in a way that will lead to a sufficient increase of the dual objective. For the primal problem, this would lead to the update $v^{(t)} = v^{(t-1)} + (\lambda n)^{-1} X_i \Delta \alpha_i$, and therefore $w^{(t)} = \nabla g^*(v^{(t)})$ can also be written as $$w^{(t)}= \operatorname*{argmax}_{w} \left[w^\top v^{(t)} - g(w) \right] ~=~ \operatorname*{argmin}_w \left[ - w^\top \left(n^{-1}\sum_{i=1}^n X_i \alpha_i^{(t)}\right) + \lambda g(w)\right] ~.$$ Note that this particular update is rather similar to the update step of proximal-gradient dual-averaging method (see for example @Xiao10). The difference is on how $\alpha^{(t)}$ is updated. The goal of dual ascent methods is to increase the dual objective as much as possible, and thus the optimal way to choose $\Delta \alpha_i$ would be to maximize the dual objective, namely, we shall let $$\Delta \alpha_i = \operatorname*{argmax}_{\Delta \alpha_i \in {\mathbb{R}}^k} \left[ -\frac{1}{n} \phi^*_i(-(\alpha_i + \Delta \alpha_i)) - \lambda g^*( v^{(t-1)} + (\lambda n)^{-1} X_i \Delta \alpha_i) \right] ~.$$ However, for a complex $g^*(\cdot)$, this optimization problem may not be easy to solve. To simplify the optimization problem we can rely on the smoothness of $g^*$ (with respect to a norm $\|\cdot\|_{D'}$) and instead of directly maximizing the dual objective function, we try to maximize the following proximal objective which is a lower bound of the dual objective: $$\begin{aligned} &~\operatorname*{argmax}_{\Delta \alpha_i \in {\mathbb{R}}^k} \left[ - \frac{1}{n} \phi^*_i(-(\alpha_i + \Delta \alpha_i)) - \lambda \left(\nabla g^*(v^{(t-1)})^\top (\lambda n)^{-1} X_i \Delta \alpha_i + \frac{1}{2} \| (\lambda n)^{-1} X_i \Delta \alpha_i\|_{D'}^2 \right) \right] \\ =&~\operatorname*{argmax}_{\Delta \alpha_i \in {\mathbb{R}}^k} \left[ -\phi^*_i(-(\alpha_i + \Delta \alpha_i)) - w^{(t-1)\,\top} X_i \Delta \alpha_i - \frac{1}{2\lambda n} \| X_i \Delta \alpha_i\|_{D'}^2 \right] .\end{aligned}$$ In general, this optimization problem is still not necessarily simple to solve because $\phi^*$ may also be complex. 
We will thus also propose alternative update rules for $\Delta \alpha_i$ of the form $\Delta \alpha_i = s (- \nabla \phi_i(X_i^\top w^{(t-1)}) - \alpha_i^{(t-1)})$ for an appropriately chosen step size parameter $s>0$. Our analysis shows that an appropriate choice of $s$ still leads to a sufficient increase in the dual objective. It should be pointed out that we can always pick $\Delta \alpha_i$ so that the dual objective is non-decreasing. In fact, if for a specific choice of $\Delta \alpha_i$, the dual objective decreases, we may simply set $\Delta \alpha_i=0$. Therefore throughout the proof we will assume that the dual objective is non-decreasing whenever needed. [Procedure Proximal Stochastic Dual Coordinate Ascent: Prox-SDCA($P,\epsilon,\alpha^{(0)}$)]{} **Goal:** Minimize $P(w) = \frac{1}{n} \sum_{i=1}^n \phi_i(X_i^\top w) + \lambda g(w)$\ **Input:** Objective $P$, desired accuracy $\epsilon$, initial dual solution $\alpha^{(0)}$ (default: $\alpha^{(0)}=0$)\ **Assumptions:**\ $\forall i$, $\phi_i$ is $(1/\gamma)$-smooth w.r.t. $\|\cdot\|_P$ and let $\|\cdot\|_D$ be the dual norm of $\|\cdot\|_P$\ $g$ is $1$-strongly convex w.r.t. $\|\cdot\|_{P'}$ and let $\|\cdot\|_{D'}$ be the dual norm of $\|\cdot\|_{P'}$\ $\forall i$, $\|X_i\|_{D \to D'} \leq R$\ **Initialize** $v^{(0)}=\frac{1}{\lambda n} \sum_{i=1}^n X_i \alpha_i^{(0)}$, $w^{(0)}=\nabla g^*(0)$\ **Iterate:** for $t=1,2,\dots$\ Randomly pick $i$\ Find $\Delta \alpha_i$ using any of the following options\ (or any other update that achieves a larger dual objective):\ **Option I:**\ $\displaystyle \Delta \alpha_i = \operatorname*{argmax}_{\Delta \alpha_i} \left[-\phi_i^*(-(\alpha_i^{(t-1)} + \Delta \alpha_i) ) - w^{(t-1)^\top} X_i \Delta \alpha_i - \frac{1}{2\lambda n} \left\| X_i \Delta \alpha_i \right\|_{D'}^2\right]$\ **Option II:**\ Let $u = - \nabla \phi_i(X_i^\top w^{(t-1)})$ and $q = u- \alpha_i^{(t-1)} $\ Let $\displaystyle s = \operatorname*{argmax}_{s \in [0,1]} \left[-\phi_i^*(-(\alpha_i^{(t-1)} + sq) ) - s\,w^{(t-1)^\top} X_i q - \frac{s^2}{2\lambda n} \left\| X_i q \right\|_{D'}^2\right]$\ Set $\Delta \alpha_i = s q$\ **Option III:**\ Same as Option II but replace the definition of $s$ as follows:\ Let $s = \min\left(1,\frac{\phi_i(X_i^\top w^{(t-1)})+\phi_i^*(-\alpha_i^{(t-1)})+ w^{(t-1)^\top} X_i \alpha^{(t-1)}_i + \frac{\gamma}{2} \|q\|_D^2}{ \|q\|_D^2 (\gamma + \frac{1}{\lambda n} \|X_i\|_{D \to D'}^2 )}\right)$\ **Option IV:**\ Same as Option III but replace $\|X_i\|_{D \to D'}^2$ in the definition of $s$ with $R^2$\ **Option V:**\ Same as Option II but replace the definition of $s$ to be $s = \frac{\lambda n \gamma}{R^2 + \lambda n \gamma}$\ $\alpha^{(t)}_i \leftarrow \alpha^{(t-1)}_i + \Delta \alpha_i$ and for $j \neq i$, $\alpha^{(t)}_j \leftarrow \alpha^{(t-1)}_j$\ $v^{(t)} \leftarrow v^{(t-1)} + (\lambda n)^{-1} X_i \Delta \alpha_i$\ $w^{(t)} \leftarrow \nabla g^*(v^{(t)})$\ **Stopping condition**:\ Let $T_0 < t$ (default: $T_0 = t - n - \lceil \frac{R^2}{\lambda \gamma} \rceil$ )\ **Averaging option:**\ Let $\bar{\alpha} = \frac{1}{t-T_0} \sum_{i=T_0+1}^t \alpha^{(i-1)}$ and $\bar{w} = \frac{1}{t-T_0} \sum_{i=T_0+1}^t w^{(i-1)}$\ **Random option:**\ Let $\bar{\alpha}=\alpha^{(i)}$ and $\bar{w} = w^{(i)}$ for some random $i \in T_0+1,\ldots,t$\ Stop if $P(\bar{w})-D(\bar{\alpha}) \le \epsilon$ and output $\bar{w},\bar{\alpha},$ and $P(\bar{w})-D(\bar{\alpha})$\ The theorems below provide upper bounds on the number of iterations required by our prox-SDCA procedure. 
\[thm:smooth\] Consider Procedure Prox-SDCA as given in [Figure \[fig:sdca\]]{}. Let $\alpha^*$ be an optimal dual solution and let $\epsilon > 0$. For every $T$ such that $$T \geq \left(n + \frac{R^2}{\lambda \gamma}\right) \, \log\left( \left(n + \frac{R^2}{\lambda \gamma}\right) \cdot \frac{D(\alpha^*)-D(\alpha^{(0)})}{\epsilon}\right) ,$$ we are guaranteed that ${\mathbb{E}}[P(w^{(T)})-D(\alpha^{(T)})] \leq \epsilon$. Moreover, for every $T$ such that $$T \ge \left(n + \left\lceil \frac{R^2}{\lambda \gamma}\right\rceil \right) \cdot \left(1 + \log\left(\frac{D(\alpha^*)-D(\alpha^{(0)})}{\epsilon}\right) \right) ~,$$ let $T_0 = T - n - \lceil \frac{R^2}{\lambda \gamma}\rceil$, then we are guaranteed that ${\mathbb{E}}[P(\bar{w})-D(\bar{\alpha})] \leq \epsilon$. We next give bounds that hold with high probability. \[thm:HighProbsmooth\] Consider Procedure Prox-SDCA as given in [Figure \[fig:sdca\]]{}. Let $\alpha^*$ be an optimal dual solution, let $\epsilon_D,\epsilon_P > 0$, and let $\delta \in (0,1)$. 1. For every $T$ such that $$T \geq \left\lceil\left(n + \frac{R^2}{\lambda \gamma}\right) \, \log\left(\frac{2(D(\alpha^*)-D(\alpha^{(0)}))}{\epsilon_D}\right)\right\rceil \,\cdot\,\left\lceil \log_2\left(\frac{1}{\delta}\right)\right\rceil ~,$$ we are guaranteed that with probability of at least $1-\delta$ it holds that $D(\alpha^*)-D(\alpha^{(T)}) \le \epsilon_D$. 2. For every $T$ such that $$T \geq \left\lceil\left(n + \frac{R^2}{\lambda \gamma}\right) \,\left( \log\left(n + \frac{R^2}{\lambda \gamma}\right) + \log\left( \frac{2(D(\alpha^*)-D(\alpha^{(0)}))}{\epsilon_P}\right)\right)\right\rceil \,\cdot\,\left\lceil \log_2\left(\frac{1}{\delta}\right)\right\rceil ~,$$ we are guaranteed that with probability of at least $1-\delta$ it holds that $P(w^{(T)}) - D(\alpha^{(T)}) \le \epsilon_P$. 3. Let $T$ be such that $$T \ge \left(n + \left\lceil \frac{R^2}{\lambda \gamma}\right\rceil \right) \cdot \left(1 + \left\lceil \log\left(\frac{2(D(\alpha^*)-D(\alpha^{(0)}))}{\epsilon_P}\right)\right\rceil \right) \,\cdot\,\left\lceil \log_2\left(\frac{2}{\delta}\right)\right\rceil ~,$$ and let $T_0 = T - n - \lceil \frac{R^2}{\lambda \gamma}\rceil$. Suppose we choose $\lceil \log_2(2/\delta) \rceil$ values of $t$ uniformly at random from $T_0+1,\ldots,T$, and then choose the single value of $t$ from these $\lceil \log_2(2/\delta) \rceil$ values for which $P(w^{(t)})-D(\alpha^{(t)})$ is minimal. Then, with probability of at least $1-\delta$ we have that $P(w^{(t)})-D(\alpha^{(t)}) \le \epsilon_P$. The above theorem tells us that the runtime required to find an $\epsilon$ accurate solution, with probability of at least $1-\delta$, is $$\label{eqn:totalRuntimeHighProb} O\left(d\, \left(n + \frac{R^2}{\lambda \gamma} \right) \cdot \log\left(\frac{D(\alpha^*)-D(\alpha^{(0)})}{\epsilon}\right) \,\cdot\,\log\left(\frac{1}{\delta}\right) \right) ~.$$ This yields the following corollary. \[cor:smooth\] The expected runtime required to minimize $P$ up to accuracy $\epsilon$ is $$O\left(d\, \left(n + \frac{R^2}{\lambda \gamma} \right) \cdot \log\left(\frac{D(\alpha^*)-D(\alpha^{(0)})}{\epsilon}\right)\right) ~.$$ We have shown that with a runtime of $O\left(d\, \left(n + \frac{R^2}{\lambda \gamma} \right) \cdot \log\left(\frac{2(D(\alpha^*)-D(\alpha^{(0)}))}{\epsilon}\right)\right)$ we can find an $\epsilon$ accurate solution with probability of at least $1/2$. Therefore, we can run the procedure for this amount of time and check if the duality gap is smaller than $\epsilon$. If yes, we are done. 
Otherwise, we would restart the process. Since the probability of success is at least $1/2$, the expected number of restarts is at most $2$, which concludes the proof. Acceleration {#sec:accelerate} ------------ The Prox-SDCA procedure described in the previous subsection has the iteration bound of $\tilde{O}\left(n + \tfrac{R^2}{\lambda \gamma}\right)$. This is a nearly linear runtime whenever the condition number, $R^2/(\lambda \gamma)$, is $O(n)$. In this section we show how to improve the dependence on the condition number by an acceleration procedure. In particular, throughout this section we assume that $10\,n < \frac{R^2}{\lambda \gamma}$. We further assume throughout this subsection that the regularizer, $g$, is $1$-strongly convex with respect to the Euclidean norm, i.e. $\|\cdot\|_{P'}=\|\cdot\|_2$. This also implies that $\|\cdot\|_{D'}$ is the Euclidean norm. A generalization of the acceleration technique to regularizers that are strongly convex with respect to general norms is left to future work. The main idea of the acceleration procedure is to iteratively run the Prox-SDCA procedure, where at iteration $t$ we call Prox-SDCA with the modified objective, $\tilde{P}_t(w) = P(w) + \frac{\kappa}{2} \|w-y^{(t-1)}\|^2$, where $\kappa$ is a relatively large regularization parameter and the regularization is centered around the vector $$y^{(t-1)} = w^{(t-1)} + \beta(w^{(t-1)}-w^{(t-2)})$$ for some $\beta \in (0,1)$. That is, our regularization is centered around the previous solution plus a “momentum term” $\beta(w^{(t-1)}-w^{(t-2)})$. A pseudo-code of the algorithm is given in [Figure \[fig:acc-SDCA\]]{}. Note that all the parameters of the algorithm are determined by our theory. [Procedure Accelerated Prox-SDCA]{} **Goal:** Minimize $P(w) = \frac{1}{n} \sum_{i=1}^n \phi_i(X_i^\top w) + \lambda g(w)$\ **Input:** Target accuracy $\epsilon$ (only used in the stopping condition)\ **Assumptions:**\ $\forall i$, $\phi_i$ is $(1/\gamma)$-smooth w.r.t. $\|\cdot\|_P$ and let $\|\cdot\|_D$ be the dual norm of $\|\cdot\|_P$\ $g$ is $1$-strongly convex w.r.t. $\|\cdot\|_2$\ $\forall i$, $\|X_i\|_{D\to 2} \le R$\ $\frac{R^2}{\gamma \lambda} > 10\,n$ (otherwise, solve the problem using vanilla Prox-SDCA)\ **Define** $\kappa =\frac{R^2}{\gamma n}-\lambda$, $\mu = \lambda/2$, $\rho = \mu+\kappa$, $\eta= \sqrt{\mu/\rho}$, $\beta = \frac{1-\eta}{1+\eta}$\ **Initialize** $y^{(1)}=w^{(1)}=0$, $\alpha^{(1)}=0$, $\xi_1 = (1+\eta^{-2})(P(0)-D(0))$\ **Iterate:** for $t=2,3,\ldots$\ **Let** $\tilde{P}_t(w) = \frac{1}{n} \sum_{i=1}^n \phi_i(X_i^\top w) + \tilde{\lambda} \tilde{g}_t(w)$\ where $\tilde{\lambda} \tilde{g}_t(w) = \lambda g(w) + \frac{\kappa}{2} \|w\|_2^2 - \kappa\, w^\top y^{(t-1)}$\ **Call** $(w^{(t)},\alpha^{(t)},\epsilon_t) = \textrm{Prox-SDCA}\left(\tilde{P}_t,\frac{\eta}{2(1+\eta^{-2})} \xi_{t-1},\alpha^{(t-1)}\right)$\ **Let** $y^{(t)} = w^{(t)} + \beta( w^{(t)} - w^{(t-1)})$\ **Let** $\xi_t = (1-\eta/2)^{t-1}\,\xi_1$\ **Stopping conditions:** break and return $w^{(t)}$ if one of the following conditions holds:\ 1.   $t \ge 1 + \frac{2}{\eta} \log(\xi_1/\epsilon) $\ 2.   $(1+\rho/\mu)\epsilon_t + \frac{\rho\kappa}{2\mu} \|w^{(t)}-y^{(t-1)}\|^2 \le \epsilon$\ In the pseudo-code, the parameters are specified based on our theoretical derivation. In our experiments we found that this choice of parameters also works very well in practice. However, we also found that the algorithm is not very sensitive to the choice of parameters.
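To make the structure of the outer loop concrete, the following is a schematic sketch (our own illustration, with our own variable and function names) of the acceleration wrapper on a small ridge-regression instance. For brevity the inner call to Prox-SDCA is replaced by an exact minimization of the $\kappa$-regularized objective, which is available in closed form for the squared loss; in the actual procedure this subproblem is only solved approximately, to accuracy $\frac{\eta}{2(1+\eta^{-2})}\xi_{t-1}$.

```python
import numpy as np

# Schematic of the accelerated outer loop: at each outer iteration the
# kappa-regularized objective P(w) + (kappa/2)*||w - y_center||^2 is minimized,
# and the regularization center is moved along a momentum direction.

rng = np.random.default_rng(1)
n, d = 200, 20
X = rng.standard_normal((d, n))           # column i is example x_i
y = rng.standard_normal(n)                # regression targets
lam, gamma = 1e-4, 1.0
R = np.linalg.norm(X, axis=0).max()

def P(w):
    # Ridge objective: (1/(2n)) sum (x_i^T w - y_i)^2 + (lam/2)*||w||^2
    return 0.5 * np.mean((X.T @ w - y) ** 2) + 0.5 * lam * w @ w

def solve_inner(center, kappa):
    # argmin_w P(w) + (kappa/2)*||w - center||^2 (closed form for the squared loss);
    # stands in for the approximate Prox-SDCA call of the real procedure.
    A = X @ X.T / n + (lam + kappa) * np.eye(d)
    b = X @ y / n + kappa * center
    return np.linalg.solve(A, b)

kappa = R**2 / (gamma * n) - lam          # parameters as dictated by the analysis
mu, rho = lam / 2.0, lam / 2.0 + kappa
eta = np.sqrt(mu / rho)
beta = (1 - eta) / (1 + eta)

w_prev = w = np.zeros(d)
for _ in range(50):
    y_center = w + beta * (w - w_prev)    # previous solution plus a momentum term
    w_prev, w = w, solve_inner(y_center, kappa)

print("final primal objective:", P(w))
```

As noted above, the behavior of the method is not very sensitive to these parameter choices.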
For example, we found out that running $5n$ iterations of Prox-SDCA (that is, $5$ epochs over the data), without checking the stopping condition, also works very well. The main theorem is the following. \[thm:acceleratedThmMain\] Consider the accelerated Prox-SDCA algorithm given in [Figure \[fig:acc-SDCA\]]{}. - Correctness: When the algorithm terminates we have that $P(w^{(t)})-P(w^*)\le \epsilon$. - Runtime: - The number of outer iterations is at most $$1 + \frac{2}{\eta} \log(\xi_1/\epsilon) ~\le~ 1 + \sqrt{\frac{8R^2}{\lambda \gamma n}} \left(\log\left(\frac{2R^2}{\lambda \gamma n}\right) + \log\left(\frac{P(0)-D(0)}{\epsilon}\right) \right) ~.$$ - Each outer iteration involves a single call to Prox-SDCA, and the averaged runtime required by each such call is $$O\left(d\,n \log\left(\frac{R^2}{\lambda \gamma n}\right)\right) ~.$$ By a straightforward amplification argument we obtain that for every $\delta \in (0,1)$ the total runtime required by accelerated Prox-SDCA to guarantee an $\epsilon$-accurate solution with probability of at least $1-\delta$ is $$O\left(d\,\sqrt{\frac{nR^2}{\lambda\,\gamma}}\, \log\left(\frac{R^2}{\lambda\,\gamma\,n}\right)\,\left(\log\left(\frac{R^2}{\lambda \gamma n}\right) + \log\left(\frac{P(0)-D(0)}{\epsilon}\right) \right) \, \log\left(\frac{1}{\delta}\right) \right) ~.$$ Non-smooth, Lipschitz, loss functions {#sec:Lipschitz} ------------------------------------- So far we have assumed that for every $i$, $\phi_i$ is a $(1/\gamma)$-smooth function. We now consider the case in which $\phi_i$ might be non-smooth, and even non-differentiable, but it is $L$-Lipschitz. Following @nesterov2005smooth, we apply a “smoothing” technique. We first observe that if $\phi$ is $L$-Lipschitz function then the domain of $\phi^*$ is in the ball of radius $L$. \[lem:LipConjDom\] Let $\phi : {\mathbb{R}}^k \to {\mathbb{R}}$ be an $L$-Lipschitz function w.r.t. a norm $\|\cdot\|_P$ and let $\|\cdot\|_D$ be the dual norm. Then, for any $\alpha \in {\mathbb{R}}^k$ s.t. $\|\alpha\|_D > L$ we have that $\phi^*(\alpha) = \infty$. Fix some $\alpha$ with $\|\alpha\|_D > L$. Let $x_0$ be a vector such that $\|x_0\|_P = 1$ and $\alpha^\top x_0 = \|\alpha\|_D$ (this is a vector that achieves the maximal objective in the definition of the dual norm). By definition of the conjugate we have $$\begin{aligned} \phi^*(\alpha) &= \sup_x [\alpha^\top\,x - \phi(x)] \\ &= -\phi(0) + \sup_{x } [\alpha^\top\,x - (\phi(x) - \phi(0))] \\ &\ge -\phi(0) + \sup_{x } [\alpha^\top\,x - L \|x-0\|_P] \\ &\ge -\phi(0) + \sup_{c > 0 } [\alpha^\top\,(cx_0) - L \|cx_0\|_P] \\ &= -\phi(0) + \sup_{c > 0} (\|\alpha\|_D-L)\,c = \infty ~.\end{aligned}$$ This observation allows us to smooth $L$-Lipschitz functions by adding regularization to their conjugate. In particular, the following lemma generalizes Lemma 2.5 in [@shalev2010trading]. \[lem:smoothingLemma\] Let $\phi$ be a proper, convex, $L$-Lipschitz function w.r.t. a norm $\|\cdot\|_P$, let $\|\cdot\|_D$ be the dual norm, and let $\phi^*$ be the conjugate of $\phi$. Assume that $\|\cdot\|_2 \le \|\cdot\|_D$. Define $\tilde{\phi}^*(\alpha) = \phi^*(\alpha) + \frac{\gamma}{2} \|\alpha\|_2^2$ and let $\tilde{\phi}$ be the conjugate of $\tilde{\phi}^*$. Then, $\tilde{\phi}$ is $(1/\gamma)$-smooth w.r.t. the Euclidean norm and $$\forall a,~~ 0 \le \phi(a) - \tilde{\phi}(a) \le \gamma L^2/2 ~.$$ The fact that $\tilde{\phi}$ is $(1/\gamma)$-smooth follows directly from the fact that $\tilde{\phi}^*$ is $\gamma$-strongly convex. 
For the second claim note that $$\tilde{\phi}(a) = \sup_b \left[b a - \phi^*(b) - \frac{\gamma}{2} \|b\|_2^2 \right] \le \sup_b \left[b a - \phi^*(b) \right] = \phi(a)$$ and $$\begin{aligned} \tilde{\phi}(a) &= \sup_b \left[b a - \phi^*(b) - \frac{\gamma}{2} \|b\|_2^2 \right] = \sup_{b:\|b\|_D \le L} \left[b a - \phi^*(b) - \frac{\gamma}{2} \|b\|_2^2 \right] \\ &\ge \sup_{b:\|b\|_D \le L} \left[b a - \phi^*(b) - \frac{\gamma}{2} \|b\|_D^2 \right] \ge \sup_{b:\|b\|_D \le L} \left[b a - \phi^*(b) \right] - \frac{\gamma}{2} L^2 \\ &= \phi(a) - \frac{\gamma}{2} L^2 ~.\end{aligned}$$ It is also possible to smooth using different regularization functions which are strongly convex with respect to other norms. See @nesterov2005smooth for discussion. Proof of [Theorem \[thm:acceleratedThmMain\]]{} {#sec:acceleratedThmMain} =============================================== The first claim of the theorem is that when the procedure stops we have $ P(w^{(t)})-P(w^*) \le \epsilon$. We therefore need to show that each stopping condition guarantees that $ P(w^{(t)})-P(w^*) \le \epsilon$. For the second stopping condition, recall that $w^{(t)}$ is an $\epsilon_t$-accurate minimizer of $P(w) + \frac{\kappa}{2} \|w-y^{(t-1)}\|^2$, and hence by [Lemma \[lem:lowerBoundQ\]]{} below (with $z=w^*$, $w^+=w^{(t)}$, and $y=y^{(t-1)}$): $$\begin{aligned} P(w^*) &\ge P(w^{(t)}) + Q_{\epsilon}(w^*;w^{(t)},y^{(t-1)}) \\ &\ge P(w^{(t)}) - \frac{\rho\kappa}{2\mu}\|y^{(t-1)}-w^{(t)}\|^2 - (1+\rho/\mu)\epsilon_t ~.\end{aligned}$$ It is left to show that the first stopping condition is correct, namely, to show that after $1 + \frac{2}{\eta} \log(\xi_1/\epsilon)$ iterations the algorithm must converge to an $\epsilon$-accurate solution. Observe that the definition of $\xi_t$ yields that $ \xi_{t} = (1-\eta/2)^{t-1} \, \xi_1 \le e^{-\eta(t-1)/2} \xi_1$. Therefore, to prove that the first stopping condition is valid, it suffices to show that for every $t$, $ P(w^{(t)})-P(w^*) \le \xi_t$. Recall that at each outer iteration of the accelerated procedure, we approximately minimize an objective of the form $$P(w;y) = P(w) + \frac{\kappa}{2} \|w-y\|^2 ~.$$ Of course, minimizing $P(w;y)$ is not the same as minimizing $P(w)$. Our first lemma shows that for every $y$, if $w^+$ is an $\epsilon$-accurate minimizer of $P(w;y)$ then we can derive a lower bound on $P(w)$ based on $P(w^+)$ and a convex quadratic function of $w$. \[lem:lowerBoundQ\] Let $\mu=\lambda/2$ and $\rho = \mu+\kappa$. Let $w^+$ be a vector such that $P(w^+;y) \le \min_w P(w,y) + \epsilon$. Then, for every $z$, $$P(z) \ge P(w^+) + Q_{\epsilon}(z;w^+,y) ~,$$ where $$Q_{\epsilon}(z;w^+,y) = \frac{\mu}{2} \left\| z - \left(y - \tfrac{\rho}{\mu}(y-w^+)\right)\right\|^2 - \frac{\rho\kappa}{2\mu}\|y-w^+\|^2 - (1+\rho/\mu)\epsilon ~.$$ Denote $$\Psi(w) = P(w) - \frac{\mu}{2} \|w\|^2 ~.$$ We can write $$\frac{1}{2} \|w\|^2 = \frac{1}{2} \|y\|^2 + y^\top (w-y) + \frac{1}{2} \|w-y\|^2 ~.$$ It follows that $$P(w) = \Psi(w) + \frac{\mu}{2} \|w\|^2 = \Psi(w) + \frac{\mu}{2} \|y\|^2 + \mu\,y^\top (w-y) + \frac{\mu}{2} \|w-y\|^2 ~.$$ Therefore, we can rewrite $P(w;y)$ as: $$P(w;y) = \Psi(w) + \frac{\mu}{2} \|y\|^2 + \mu \,y^\top (w-y) + \frac{\rho}{2} \|w-y\|^2 ~.$$ Let $\tilde{w} = \operatorname*{argmin}_w P(w;y)$. Therefore, the gradient[^5] of $P(w;y)$ w.r.t. 
$w$ vanishes at $\tilde{w}$, which yields $$\nabla \Psi(\tilde{w}) + \mu y + \rho(\tilde{w}-y) = 0 ~~\Rightarrow~~ \nabla \Psi(\tilde{w}) + \mu y = \rho(y-\tilde{w}) ~.$$ By the $\mu$-strong convexity of $\Psi$ we have that for every $z$, $$\Psi(z) \ge \Psi(\tilde{w}) + \nabla \Psi(\tilde{w})^\top (z-\tilde{w}) + \frac{\mu}{2}\|z-\tilde{w}\|^2 ~.$$ Therefore, $$\begin{aligned} P(z) &= \Psi(z) + \frac{\mu}{2} \|y\|^2 + \mu \,y^\top (z-y) + \frac{\mu}{2} \|z-y\|^2 \\ &\ge \Psi(\tilde{w}) + \nabla \Psi(\tilde{w})^\top (z-\tilde{w}) + \frac{\mu}{2}\|z-\tilde{w}\|^2 + \frac{\mu}{2} \|y\|^2 + \mu \,y^\top (z-y) + \frac{\mu}{2} \|z-y\|^2 \\ &= P(\tilde{w};y) - \frac{\rho}{2}\|\tilde{w}-y\|^2+ \nabla \Psi(\tilde{w})^\top (z-\tilde{w}) + \mu \,y^\top (z-\tilde{w}) + \frac{\mu}{2}\left(\|z-\tilde{w}\|^2 +\|z-y\|^2 \right) \\ &= P(\tilde{w};y) - \frac{\rho}{2}\|\tilde{w}-y\|^2 + \rho(y-\tilde{w})^\top(z-\tilde{w}) + \frac{\mu}{2}\left(\|z-\tilde{w}\|^2 +\|z-y\|^2 \right) \\ &= P(\tilde{w};y) + \frac{\rho}{2}\|\tilde{w}-y\|^2 + \rho(y-\tilde{w})^\top(z-y) + \frac{\mu}{2}\left(\|z-\tilde{w}\|^2 + \|z-y\|^2 \right) ~.\end{aligned}$$ In addition, by standard algebraic manipulations, $$\begin{aligned} &\frac{\rho}{2}\|\tilde{w}-y\|^2 + \rho(y-\tilde{w})^\top(z-y) + \frac{\mu}{2}\|z-\tilde{w}\|^2 - \left( \frac{\rho}{2}\|w^+-y\|^2 + \rho(y-w^+)^\top(z-y) + \frac{\mu}{2}\|z-w^+\|^2\right)\\ &= \left(\rho(w^+-y)-\rho(z-y)+\mu(w^+-z)\right)^\top(\tilde{w}-w^+) + \frac{\rho+\mu}{2} \|\tilde{w}-w^+\|^2\\ &= (\rho+\mu)(w^+-z)^\top(\tilde{w}-w^+) + \frac{\rho+\mu}{2} \|\tilde{w}-w^+\|^2\\ &= \frac{1}{2}\left\| \sqrt{\mu}(w^+-z) + \frac{\rho+\mu}{\sqrt{\mu}} (\tilde{w}-w^+) \right\|^2 - \frac{\mu}{2} \|z-w^+\|^2 - \frac{(\rho+\mu)^2}{2\mu}\|\tilde{w}-w^+\|^2 + \frac{\rho+\mu}{2} \|\tilde{w}-w^+\|^2 \\ &\ge - \frac{\mu}{2} \|z-w^+\|^2 - \frac{\rho(\rho+\mu)}{2\mu}\|\tilde{w}-w^+\|^2 ~.\end{aligned}$$ Since $P(\cdot;y)$ is $(\rho+\mu)$-strongly convex and $\tilde{w}$ minimizes $P(\cdot;y)$, we have that for every $w^+$ it holds that $\frac{\rho+\mu}{2} \|\tilde{w}-w^+\|^2 \le P(w^+;y)-P(\tilde{w};y)$. Combining all the above and using the fact that for every $w,y$, $P(w;y) \ge P(w)$, we obtain that for every $w^+$, $$P(z) \ge P(w^+) + \frac{\rho}{2}\|w^+-y\|^2 + \rho(y-w^+)^\top(z-y) + \frac{\mu}{2}\|z-y\|^2 - \left(1+\frac{\rho}{\mu}\right)\left(P(w^+;y)-P(\tilde{w};y)\right) ~.$$ Finally, using the assumption $P(w^+;y) \le \min_w P(w;y) + \epsilon$ we conclude our proof. We saw that the quadratic function $P(w^+) + Q_{\epsilon}(z;w^+,y)$ lower bounds the function $P$ everywhere. Therefore, any convex combination of such functions would form a quadratic function which lower bounds $P$. In particular, the algorithm (implicitly) maintains a sequence of quadratic functions, $h_1,h_2,\ldots$, defined as follows. Choose $\eta \in (0,1)$ and a sequence $y^{(1)},y^{(2)},\ldots$ that will be specified later. Define, $$h_1(z) = P(0) + Q_{P(0)-D(0)}(z;0,0) = P(0) + \frac{\mu}{2}\|z\|^2 - (1+\rho/\mu)(P(0)-D(0)) ~,$$ and for $t \ge 1$, $$h_{t+1}(z) = (1-\eta) h_{t}(z) + \eta (P(w^{(t+1)}) + Q_{\epsilon_{t+1}}(z;w^{(t+1)},y^{(t)})) ~.$$ The following simple lemma shows that for every $t \ge 1$ and $z$, $h_t(z)$ lower bounds $P(z)$. \[lem:boundPbyh\] Let $\eta \in (0,1)$ and let $y^{(1)},y^{(2)},\ldots$ be any sequence of vectors. Assume that $w^{(1)}=0$ and for every $t \ge 1$, $w^{(t+1)}$ satisfies $P(w^{(t+1)};y^{(t)}) \le \min_w P(w;y^{(t)}) + \epsilon_{t+1}$. 
Then, for every $t \ge 1$ and every vector $z$ we have $$h_t(z) \le P(z) ~.$$ The proof is by induction. For $t=1$, observe that $P(0;0) = P(0)$ and that for every $w$ we have $P(w;0) \ge P(w) \ge D(0)$. This yields $P(0;0) - \min_w P(w;0) \le P(0)-D(0)$. The claim now follows directly from [Lemma \[lem:lowerBoundQ\]]{}. Next, for the inductive step, assume the claim holds for some $t-1 \ge 1$ and let us prove it for $t$. By the recursive definition of $h_t$ and by using [Lemma \[lem:lowerBoundQ\]]{} we have $$h_t(z) = (1-\eta) h_{t-1}(z) + \eta (P(w^{(t)}) + Q_{\epsilon_{t}}(z;w^{(t)},y^{(t-1)})) \le (1-\eta) h_{t-1}(z) + \eta P(z) ~.$$ Using the inductive assumption we obtain that the right-hand side of the above is upper bounded by $(1-\eta)P(z)+\eta P(z) = P(z)$, which concludes our proof. The more difficult part of the proof is to show that for every $t \ge 1$, $$P(w^{(t)}) \le \min_w h_t(w) + \xi_t ~.$$ If this holds true, then we would immediately get that for every $w^*$, $$P(w^{(t)})-P(w^*) \le P(w^{(t)}) - h_t(w^*) \le P(w^{(t)}) - \min_w h_t(w) \le \xi_t ~.$$ This will conclude the proof of the first part of [Theorem \[thm:acceleratedThmMain\]]{}, since $\xi_t = \xi_1 (1-\eta/2)^{t-1} \le \xi_1\,e^{-(t-1)\eta/2}$, and therefore, $1 + \frac{2}{\eta} \log(\xi_1/\epsilon)$ iterations suffice to guarantee that $P(w^{(t)})-P(w^*) \le \epsilon$. Define $$v^{(t)} = \operatorname*{argmin}_w h_t(w) ~.$$ Let us construct an explicit formula for $v^{(t)}$. Clearly, $v^{(1)} = 0$. Assume that we have calculated $v^{(t)}$ and let us calculate $v^{(t+1)}$. Note that $h_t$ is a quadratic function which is minimized at $v^{(t)}$. Furthermore, it is easy to see that for every $t$, $h_t$ is $\mu$-strongly convex quadratic function. Therefore, $$h_t(z) = h_t(v^{(t)}) + \frac{\mu}{2} \|z - v^{(t)}\|^2 ~.$$ By the definition of $h_{t+1}$ we obtain that $$h_{t+1}(z) = (1-\eta) (h_t(v^{(t)}) + \frac{\mu}{2} \|z - v^{(t)}\|^2 ) + \eta (P(w^{(t+1)}) + Q_{\epsilon_{t+1}}(z;w^{(t+1)},y^{(t)})) ~.$$ Since the gradient of $h_{t+1}(z)$ at $v^{(t+1)}$ should be zero, we obtain that $v^{(t+1)}$ should satisfy $$(1-\eta)\mu(v^{(t+1)} -v^{(t)}) + \eta \mu \left( v^{(t+1)} - (y^{(t)} - \tfrac{\rho}{\mu}(y^{(t)}-w^{(t+1)}) ) \right) = 0$$ Rearranging, we obtain $$\label{eqn:vtp1exp} v^{(t+1)} = (1-\eta)v^{(t)} + \eta (y^{(t)} - \tfrac{\rho}{\mu}(y^{(t)}-w^{(t+1)}) ) ~.$$ Getting back to our second phase of the proof, we need to show that for every $t$ we have $P(w^{(t)}) \le h_t(v^{(t)}) + \xi_t$. We do so by induction. For the case $t=1$ we have $$P(w^{(1)}) - h_1(v^{(1)}) = P(0) - h_1(0) = (1+\rho/\mu)(P(0)-D(0)) = \xi_1 ~.$$ For the induction step, assume the claim holds for $t \ge 1$ and let us prove it for $t+1$. We use the shorthands, $$Q_t(z) = Q_{\epsilon_t}(z;w^{(t)},y^{(t-1)}) ~~~\text{and}~~~ \psi_t(z) = Q_t(z) + P(w^{(t)}) ~~.$$ Let us rewrite $h_{t+1}(v^{(t+1)})$ as $$\begin{aligned} h_{t+1}(v^{(t+1)}) &= (1-\eta)h_t(v^{(t+1)}) + \eta \psi_{t+1}(v^{(t+1)}) \\ &= (1-\eta)(h_t(v^{(t)})+\frac{\mu}{2} \|v^{(t)}-v^{(t+1)}\|^2) + \eta \psi_{t+1}(v^{(t+1)}) ~.\end{aligned}$$ By the inductive assumption we have $h_t(v^{(t)}) \ge P(w^{(t)}) -\xi_t$ and by [Lemma \[lem:lowerBoundQ\]]{} we have $P(w^{(t)}) \ge \psi_{t+1}(w^{(t)})$. 
Therefore, $$\begin{aligned} \label{eqn:inductivebeg} h_{t+1}(v^{(t+1)}) &\ge (1-\eta)(\psi_{t+1}(w^{(t)})-\xi_t+\frac{\mu}{2} \|v^{(t)}-v^{(t+1)}\|^2) + \eta \psi_{t+1}(v^{(t+1)}) \\ \nonumber &= \frac{(1-\eta)\mu}{2} \|v^{(t)}-v^{(t+1)}\|^2 + \eta \psi_{t+1}(v^{(t+1)}) + (1-\eta)\psi_{t+1}(w^{(t)}) - (1-\eta)\xi_t ~.\end{aligned}$$ Next, note that we can rewrite $$Q_{t+1}(z) =\frac{\mu}{2} \|z-y^{(t)}\|^2 + \rho(z-y^{(t)})^\top(y^{(t)}-w^{(t+1)}) + \frac{\rho}{2} \|y^{(t)}-w^{(t+1)}\|^2 - (1+\rho/\mu)\epsilon_{t+1}~.$$ Therefore, $$\begin{aligned} \label{eqn:inductivepsi} &\eta \psi_{t+1}(v^{(t+1)}) + (1-\eta)\psi_{t+1}(w^{(t)}) - P(w^{(t+1)}) + (1+\rho/\mu)\epsilon_{t+1}\\ \nonumber &= \frac{\eta\mu}{2} \|v^{(t+1)}-y^{(t)}\|^2 +\frac{(1-\eta)\mu}{2} \|w^{(t)}-y^{(t)}\|^2 + \rho(\eta v^{(t+1)} + (1-\eta)w^{(t)}-y^{(t)})^\top(y^{(t)}-w^{(t+1)}) \\ \nonumber &+ \frac{\rho}{2} \|y^{(t)}-w^{(t+1)}\|^2\end{aligned}$$ So far we did not specify $\eta$ and $y^{(t)}$ (except $y^{(0)}=0$). We next set $$\eta = \sqrt{\mu/\rho} ~~~\text{and}~~~ \forall t \ge 1,~y^{(t)} = (1+\eta)^{-1}(\eta v^{(t)} + w^{(t)}) ~.$$ This choices guarantees that (see ) $$\begin{aligned} \eta v^{(t+1)} + (1-\eta)w^{(t)} &= \eta(1-\eta)v^{(t)}+\eta^2(1-\frac{\rho}{\mu})y^{(t)} + \eta^2 \frac{\rho}{\mu} w^{(t+1)} + (1-\eta)w^{(t)} \\ &= w^{(t+1)} + (1-\eta) \left[ \eta v^{(t)} + \frac{\eta^2(1-\frac{\rho}{\mu})}{1-\eta}y^{(t)} + w^{(t)} \right] \\ &= w^{(t+1)} + (1-\eta) \left[ \eta v^{(t)} - \frac{1-\eta^2}{1-\eta}y^{(t)} + w^{(t)} \right] \\ &= w^{(t+1)} + (1-\eta) \left[ \eta v^{(t)} - (1+\eta)y^{(t)} + w^{(t)} \right] \\ &= w^{(t+1)} ~.\end{aligned}$$ We also observe that $\epsilon_{t+1} \le \frac{\eta \xi_t}{2(1+\eta^{-2})}$ which implies that $ (1+\rho/\mu)\epsilon_{t+1} + (1-\eta) \xi_t \le (1-\eta/2)\xi_t = \xi_{t+1}$. Combining the above with and , and rearranging terms, we obtain that $$\begin{aligned} &h_{t+1}(v^{(t+1)}) - P(w^{(t+1)}) + \xi_{t+1} - \frac{(1-\eta)\mu}{2} \|w^{(t)}-y^{(t)}\|^2\\ &\ge \frac{(1-\eta)\mu}{2} \|v^{(t)}-v^{(t+1)}\|^2 + \frac{\eta\mu}{2} \|v^{(t+1)}-y^{(t)}\|^2 - \frac{\rho}{2} \|y^{(t)}-w^{(t+1)}\|^2 ~.\end{aligned}$$ Next, observe that $\rho \eta^2= \mu$ and that by we have $$y^{(t)}-w^{(t+1)} = \eta\left[ \eta y^{(t)} + (1-\eta)v^{(t)} - v^{(t+1)}\right] ~.$$ We therefore obtain that $$\begin{aligned} &h_{t+1}(v^{(t+1)}) - P(w^{(t+1)}) + \xi_{t+1} - \frac{(1-\eta)\mu}{2} \|w^{(t)}-y^{(t)}\|^2 \\ &\ge \frac{(1-\eta)\mu}{2} \|v^{(t)}-v^{(t+1)}\|^2 + \frac{\eta\mu}{2} \|y^{(t)} -v^{(t+1)}\|^2 - \frac{\mu}{2} \|\eta y^{(t)} + (1-\eta)v^{(t)} - v^{(t+1)} \|^2 ~.\end{aligned}$$ The right-hand side of the above is non-negative because of the convexity of the function $f(z) = \frac{\mu}{2} \|z-v^{(t+1)}\|^2$, which yields $$P(w^{(t+1)}) \le h_{t+1}(v^{(t+1)}) + \xi_{t+1} - \frac{(1-\eta)\mu}{2} \|w^{(t)}-y^{(t)}\|^2 \le h_{t+1}(v^{(t+1)}) + \xi_{t+1} ~.$$ This concludes our inductive argument. #### Proving the “runtime” part of [Theorem \[thm:acceleratedThmMain\]]{}: We next show that each call to Prox-SDCA will terminate quickly. By the definition of $\kappa$ we have that $$\frac{R^2}{(\kappa+\lambda)\gamma} = n ~.$$ Therefore, based on [Corollary \[cor:smooth\]]{} we know that the averaged runtime at iteration $t$ is $$O\left(d\,n \log\left( \frac{\tilde{D}_{t}(\alpha^*)-\tilde{D}_{t}(\alpha^{(t-1)})}{\frac{\eta}{2(1+\eta^{-2})}\xi_{t-1}} \right)\right) ~.$$ The following lemma bounds the initial dual sub-optimality at iteration $t \ge 4$. 
Similar arguments will yield a similar result for $t < 4$. $$\tilde{D}_{t}(\alpha^*)-\tilde{D}_{t}(\alpha^{(t-1)}) \le \epsilon_{t-1} + \frac{36\kappa}{\lambda} \xi_{t-3} ~.$$ Define $\tilde{\lambda} = \lambda + \kappa$, $f(w) = \frac{\lambda}{\tilde{\lambda}}g(w) + \frac{\kappa}{2\tilde{\lambda}}\|w\|^2$, and $\tilde{g}_t(w) = f(w) - \frac{\kappa}{\tilde{\lambda}} w^\top y^{(t-1)}$. Note that $\tilde{\lambda}$ does not depend on $t$ and therefore $v(\alpha) = \frac{1}{n \tilde{\lambda}} \sum_i X_i \alpha_i$ is the same for every $t$. Let, $$\tilde{P}_t(w) = \frac{1}{n} \sum_{i=1}^n \phi_i(X_i^\top w) + \tilde{\lambda} \tilde{g}_t(w) ~.$$ We have $$\label{eqn:tildePbso} \tilde{P}_t(w^{(t-1)}) = \tilde{P}_{t-1}(w^{(t-1)}) + \kappa w^{(t-1) \top} (y^{(t-2)}-y^{(t-1)}) ~.$$ Since $$\tilde{g}_t^*(\theta) = \max_w w^\top (\theta+\frac{\kappa}{\tilde{\lambda}} y^{(t-1)}) - f(w) = f^*(\theta+\frac{\kappa}{\tilde{\lambda}} y^{(t-1)}) ~,$$ we obtain that the dual problem is $$\tilde{D}_t(\alpha) = -\frac{1}{n} \sum_i \phi^*_i(-\alpha_i) - \tilde{\lambda} f^*(v(\alpha) +\frac{\kappa}{\tilde{\lambda}} y^{(t-1)})$$ Let $z = \frac{\kappa}{\tilde{\lambda}} (y^{(t-1)} - y^{(t-2)})$, then, by the smoothness of $f^*$ we have $$f^*(v(\alpha) +\frac{\kappa}{\tilde{\lambda}} y^{(t-1)}) = f^*(v(\alpha) +\frac{\kappa}{\tilde{\lambda}} y^{(t-2)} + z) \le f^*(v(\alpha) +\frac{\kappa}{\tilde{\lambda}} y^{(t-2)}) + \nabla f^*(v(\alpha) +\frac{\kappa}{\tilde{\lambda}} y^{(t-2)})^\top z + \frac{1}{2} \|z\|^2 ~.$$ Applying this for $\alpha^{(t-1)}$ and using $w^{(t-1)} = \nabla \tilde{g}_{t-1}^*(v(\alpha^{(t-1)})) = \nabla f^*(v(\alpha^{(t-1)}) + \frac{\kappa}{\tilde{\lambda}}y^{(t-2)})$, we obtain $$f^*(v(\alpha^{(t-1)} ) +\frac{\kappa}{\tilde{\lambda}} y^{(t-1)}) \le f^*(v(\alpha^{(t-1)} ) +\frac{\kappa}{\tilde{\lambda}} y^{(t-2)}) + w^{(t-1)\,\top} z + \frac{1}{2} \|z\|^2 ~.$$ It follows that $$-\tilde{D}_t(\alpha^{(t-1)}) + \tilde{D}_{t-1}(\alpha^{(t-1)}) \le \kappa w^{(t-1) \top} (y^{(t-1)}-y^{(t-2)}) + \frac{\kappa^2}{2\tilde{\lambda}} \|y^{(t-1)}-y^{(t-2)}\|^2 ~.$$ Combining the above with , we obtain that $$\tilde{P}_t(w^{(t-1)}) - \tilde{D}_t(\alpha^{(t-1)}) \le \tilde{P}_{t-1}(w^{(t-1)}) - \tilde{D}_{t-1}(\alpha^{(t-1)}) + \frac{\kappa^2}{2 \tilde{\lambda}} \|y^{(t-1)}-y^{(t-2)}\|^2 ~.$$ Since $\tilde{P}_t(w^{(t-1)}) \ge \tilde{D}_t(\alpha^*)$ and since $\tilde{\lambda} \ge \kappa$ we get that $$\tilde{D}_t(\alpha^*) - \tilde{D}_t(\alpha^{(t-1)}) \le \epsilon_{t-1} + \frac{\kappa}{2} \|y^{(t-1)}-y^{(t-2)}\|^2 ~.$$ Next, we bound $\|y^{(t-1)}-y^{(t-2)}\|^2$. We have $$\begin{aligned} \|y^{(t-1)}-y^{(t-2)}\| &= \|w^{(t-1)} - w^{(t-2)} + \beta(w^{(t-1)}-w^{(t-2)} - w^{(t-2)} + w^{(t-3)})\| \\ &\le 3 \max_{i \in \{1,2\}} \|w^{(t-i)} - w^{(t-i-1)}\| ~,\end{aligned}$$ where we used the triangle inequality and $\beta < 1$. 
By strong convexity of $P$ we have, for every $i$, $$\|w^{(i)}-w^*\| \le \sqrt{\frac{P(w^{(i)})-P(w^*)}{\lambda/2}} \le \sqrt{\frac{\xi_i}{\lambda/2}} ~,$$ which implies $$\|w^{(t-i)}-w^{(t-i-1)}\| \le \|w^{(t-i)}-w^*\| + \|w^* - w^{(t-i-1)}\| \le 2 \sqrt{\frac{\xi_{t-i-1}}{\lambda/2}} ~.$$ This yields the bound $$\|y^{(t-1)}-y^{(t-2)}\|^2 \le 72 \frac{\xi_{t-3}}{\lambda} ~.$$ All in all, we have obtained that $$\tilde{D}_t(\alpha^*) - \tilde{D}_t(\alpha^{(t-1)}) \le \epsilon_{t-1} + \frac{36\kappa}{\lambda} \xi_{t-3} ~.$$ Getting back to the proof of the second claim of [Theorem \[thm:acceleratedThmMain\]]{}, we have obtained that $$\begin{aligned} \frac{\tilde{D}_{t}(\alpha^*)-\tilde{D}_{t}(\alpha^{(t-1)})}{\frac{\eta}{2(1+\eta^{-2})}\xi_{t-1}} &\le \frac{\epsilon_{t-1}}{\frac{\eta}{2(1+\eta^{-2})}\xi_{t-1}} + \frac{36\kappa \xi_{t-3}}{\lambda \frac{\eta}{2(1+\eta^{-2})}\xi_{t-1}} \\ &\le (1-\eta/2)^{-1} + \frac{36\kappa 2(1+\eta^{-2}) }{\lambda \eta} (1-\eta/2)^{-2} \\ &\le (1-\eta/2)^{-4}\left(1 + \frac{72\kappa (1+\eta^{-2}) }{\lambda \eta} \right) \\ &\le (1-\eta/2)^{-2}\left(1 + 36\eta^{-5}\right) ~~,\end{aligned}$$ where in the last inequality we used $\eta^{-2} - 1 = \frac{2\kappa}{\lambda}$, which implies that $\frac{2\kappa}{\lambda}(1+\eta^{-2}) \le \eta^{-4}$. Using $1 < \eta^{-5}$, $1-\eta/2 \ge 0.5$, and taking log to both sides, we get that $$\log\left(\frac{\tilde{D}_{t}(\alpha^*)-\tilde{D}_{t}(\alpha^{(t-1)})}{\frac{\eta}{2(1+\eta^{-2})}\xi_{t-1}}\right) \le 2\log(2) + \log(37) - 5 \log(\eta) \le 7 + 2.5 \log\left(\frac{R^2}{\lambda \gamma n}\right) ~.$$ All in all, we have shown that the average runtime required by Prox-SDCA$(\tilde{P}_t,\frac{\eta}{2(1+\eta^{-2})}\xi_{t-1},\alpha^{(t-1)})$ is upper bounded by $$O\left(d\,n \log\left(\frac{R^2}{\lambda \gamma n}\right)\right) ~,$$ which concludes the proof of the second claim of [Theorem \[thm:acceleratedThmMain\]]{}. Applications {#sec:applications} ============ In this section we specify our algorithmic framework to several popular machine learning applications. In [Section \[sec:appLossFunc\]]{} we start by describing several loss functions and deriving their conjugate. In [Section \[sec:appRegularizers\]]{} we describe several regularization functions. Finally, in the rest of the subsections we specify our algorithm for Ridge regression, SVM, Lasso, logistic regression, and multiclass prediction. Loss functions {#sec:appLossFunc} -------------- #### Squared loss: $\phi(a) = \frac{1}{2} (a-y)^2$ for some $y \in {\mathbb{R}}$. The conjugate function is $$\phi^*(b) = \max_a a b - \frac{1}{2} (a-y)^2 = \frac{1}{2} b^2 + yb$$ #### Logistic loss: $\phi(a) = \log(1+e^{a})$. The derivative is $\phi'(a) = 1/(1+e^{-a})$ and the second derivative is $\phi''(a) = \frac{1}{(1+e^{-a})(1+e^a)} \in [0,1/4]$, from which it follows that $\phi$ is $(1/4)$-smooth. The conjugate function is $$\phi^*(b) = \max_a a b - \log(1+e^{a}) = \begin{cases} b \log(b) + (1-b)\log(1-b) & \textrm{if}~ b \in [0,1] \\ \infty & \textrm{otherwise} \end{cases}$$ #### Hinge loss: $\phi(a) = [1-a]_+ := \max\{0,1-a\}$. The conjugate function is $$\phi^*(b) = \max_a a b - \max\{0,1-a\} = \begin{cases} b & \textrm{if}~ b \in [-1,0] \\ \infty & \textrm{otherwise} \end{cases}$$ #### Smooth hinge loss: This loss is obtained by smoothing the hinge-loss using the technique described in [Lemma \[lem:smoothingLemma\]]{}. 
This loss is parameterized by a scalar $\gamma > 0$ and is defined as: $$\label{eqn:smoothhinge} \tilde{\phi}_\gamma(a) = \begin{cases} 0 & a \ge 1 \\ 1-a-\gamma/2 & a \le 1-\gamma \\ \frac{1}{2\gamma}(1-a)^2 & \textrm{o.w.} \end{cases}$$ The conjugate function is $$\tilde{\phi}_\gamma^*(b) = \begin{cases} b + \frac{\gamma}{2} b^2 & \textrm{if}~ b \in [-1,0] \\ \infty & \textrm{otherwise} \end{cases}$$ It follows that $\tilde{\phi}_\gamma^*$ is $\gamma$ strongly convex and $\tilde{\phi}$ is $(1/\gamma)$-smooth. In addition, if $\phi$ is the vanilla hinge-loss, we have for every $a$ that $$\phi(a)-\gamma/2 \le \tilde{\phi}(a) \le \phi(a) ~.$$ #### Max-of-hinge: The max-of-hinge loss function is a function from ${\mathbb{R}}^{k}$ to ${\mathbb{R}}$, which is defined as: $$\phi(a) = \max_j \, [c_j + a_j]_+ ~,$$ for some $c \in {\mathbb{R}}^k$. This loss function is useful for multiclass prediction problems. To calculate the conjugate of $\phi$, let $$\label{eqn:Sdef} S = \{\beta \in {\mathbb{R}}_+^k : \|\beta\|_1 \le 1\}$$ and note that we can write $\phi$ as $$\phi(a) = \max_{\beta \in S} \sum_j \beta_j (c_j + a_j) ~.$$ Hence, the conjugate of $\phi$ is $$\begin{aligned} \phi^*(b) &= \max_{a} \left[ a^\top b - \phi(a) \right] = \max_{a} \min_{\beta \in S} \left[ a^\top b - \sum_j \beta_j (c_j + a_j) \right] \\ &= \min_{\beta \in S} \max_{a} \left[ a^\top b - \sum_j \beta_j (c_j + a_j) \right] = \min_{\beta \in S} \left[ - \sum_j \beta_j c_j + \sum_j \max_{a_j} a_j (b_j - \beta_j)\right] .\end{aligned}$$ Each inner maximization over $a_j$ would be $\infty$ unless $\beta_j = b_j$. Therefore, $$\label{eqn:maxOfHingeConj} \phi^*(b) = \begin{cases} - c^\top b & ~\textrm{if}~ b \in S\\ \infty & ~\textrm{otherwise} \end{cases}$$ #### Smooth max-of-hinge This loss obtained by smoothing the max-of-hinge loss using the technique described in [Lemma \[lem:smoothingLemma\]]{}. This loss is parameterized by a scalar $\gamma > 0$. We start by adding regularization to the conjugate of the max-of-hinge given in and obtain $$\label{eqn:SmoothMaxOfHingeConj} \tilde{\phi}^*_\gamma(b) = \begin{cases} \frac{\gamma}{2} \|b\|^2 - c^\top b & ~\textrm{if}~ b \in S\\ \infty & ~\textrm{otherwise} \end{cases}$$ Taking the conjugate of the conjugate we obtain $$\begin{aligned} \nonumber \tilde{\phi}_\gamma(a) &= \max_{b} b^\top a - \tilde{\phi}^*_\gamma(b) \\ \nonumber &= \max_{b \in S} b^\top (a+c) - \frac{\gamma}{2} \|b\|^2 \\ &= \frac{\gamma}{2} \|(a+c)/\gamma\|^2 - \frac{\gamma}{2} \min_{b \in S} \|b - (a+c)/\gamma\|^2 \label{eqn:SmoothMaxOfHingeConj}\end{aligned}$$ While we do not have a closed form solution for the minimization problem over $b$ in the definition of $\tilde{\phi}_\gamma$ above, this is a problem of projecting onto the intersection of the $L_1$ ball and the positive orthant, and can be solved efficiently using the following procedure, adapted from [@duchi2008efficient]. [Project$(\mu)$]{} **Goal:** solve $\operatorname*{argmin}_b \|b-\mu\|^2 ~\textrm{s.t.}~ b \in {\mathbb{R}}_+^k, \|b\|_1 \le 1$\ **Let:** $\forall i, ~\tilde{\mu}_i = \max\{0,\mu_i\}$\ **If:** $\|\tilde{\mu}\|_1 \le 1$ stop and return $b = \tilde{\mu}$\ **Sort:** let $i_1,\ldots,i_k$ be s.t. $\mu_{i_1} \ge \mu_{i_2} \ge \ldots \ge \mu_{i_k}$\ **Find:** $j^* = \max\left\{ j : j\,\tilde{\mu}_{i_j} + 1 - \sum_{r=1}^j \tilde{\mu}_{i_r} > 0 \right\}$\ **Define:** $\theta = -1 + \sum_{r=1}^{j^*} \tilde{\mu}_{i_r} $\ **Return:** $b$ s.t. 
$\forall i,~b_i = \max\{\mu_i - \theta/j^*, 0 \}$ It also holds that $\nabla \tilde{\phi}_\gamma(a) = \operatorname*{argmin}_{b \in S} \|b - (a+c)/\gamma\|^2$, and therefore the gradient can also be calculated using the above projection procedure. Note that when $\phi$ is the max-of-hinge loss, $\phi^*(b)+\gamma/2 \ge \tilde{\phi}^*_\gamma(b) \ge \phi^*(b)$ and hence $\phi(a) - \gamma/2 \le \tilde{\phi}_\gamma(a) \le \phi(a)$. Observe that the negative elements of $a+c$ do not contribute to $\tilde{\phi}_\gamma$. This immediately implies that if $\phi(a) = 0$ then we also have $\tilde{\phi}_\gamma(a)=0$. #### Soft-max-of-hinge loss function: Another approach to smoothing the max-of-hinge loss function is to use a soft-max instead of a max. The resulting soft-max-of-hinge loss function is defined as $$\label{eqn:soft-max-loss} \phi_\gamma(a) = \gamma \log\left( 1 + \sum_{i=1}^k e^{(c_i+a_i)/\gamma} \right) ~,$$ where $\gamma > 0$ is a parameter. We have $$\max_i [c_i+a_i]_+ \le \phi_\gamma(a) \le \max_i [c_i+a_i]_+ + \gamma\,\log(k+1)~.$$ The $j$’th element of the gradient of $\phi_\gamma$ is $$\nabla_j \phi_\gamma(a) = \frac{e^{(c_j+a_j)/\gamma}}{1 + \sum_{i=1}^k e^{(c_i+a_i)/\gamma} } ~.$$ By the definition of the conjugate we have $\phi_\gamma^*(b) = \max_a a^\top b - \phi_\gamma(a)$. The vector $a$ that maximizes the above must satisfy $$\forall j,~~ b_j = \frac{e^{(c_j+a_j)/\gamma}}{1 + \sum_{i=1}^k e^{(c_i+a_i)/\gamma} } ~.$$ This can be satisfied only if $b_j \ge 0$ for all $j$ and $\sum_j b_j \le 1$. That is, $b \in S$. Denote $Z = \sum_{i=1}^k e^{(c_i+a_i)/\gamma}$ and note that $$(1+Z) \|b\|_1 = Z ~~\Rightarrow~~ Z = \frac{\|b\|_1}{1-\|b\|_1} ~~\Rightarrow~~ 1+Z = \frac{1}{1-\|b\|_1} ~.$$ It follows that $$a_j = \gamma(\log(b_j) + \log(1+Z)) - c_j = \gamma(\log(b_j) - \log(1-\|b\|_1)) - c_j$$ which yields $$\begin{aligned} \phi_\gamma^*(b) &= \sum_j \left(\gamma(\log(b_j) - \log(1-\|b\|_1)) - c_j\right) b_j + \gamma \log( 1 - \|b\|_1) \\ &= -c^\top b + \gamma \left((1-\|b\|_1)\log(1-\|b\|_1) + \sum_j b_j \log(b_j) \right) ~.\end{aligned}$$ Finally, if $b \notin S$ then the gradient of $a^\top b - \phi_\gamma(a)$ does not vanish anywhere, which means that $\phi_\gamma^*(b) = \infty$. All in all, we obtain $$\label{eqn:soft-max-loss-conjugate} \phi_\gamma^*(b) = \begin{cases} -c^\top b + \gamma \left((1-\|b\|_1)\log(1-\|b\|_1) + \sum_j b_j \log(b_j) \right) & ~\textrm{if}~ b \in S \\ \infty & ~\textrm{otherwise} \end{cases}$$ Since the entropic function $\sum_j b_j \log(b_j)$ is $1$-strongly convex over $S$ with respect to the $L_1$ norm, we obtain that $\phi^*_\gamma$ is $\gamma$-strongly convex with respect to the $L_1$ norm, from which it follows that $\phi_\gamma$ is $(1/\gamma)$-smooth with respect to the $L_\infty$ norm. Regularizers {#sec:appRegularizers} ------------ #### $L_2$ regularization: The simplest regularization is the squared $L_2$ regularization $$g(w) = \frac{1}{2} \|w\|_2^2 ~.$$ This is a $1$-strongly convex regularization function whose conjugate is $$g^*(\theta) = \frac{1}{2} \|\theta\|_2^2 ~.$$ We also have $$\nabla g^*(\theta) = \theta ~.$$ For our acceleration procedure, we also use the $L_2$ regularization plus a linear term, namely, $$g(w) = \frac{1}{2} \|w\|^2 - w^\top z ~,$$ for some vector $z$.
The conjugate of this function is $$g^*(\theta) = \max_w \left[w^\top (\theta+z) - \frac{1}{2} \|w\|^2 \right] = \frac{1}{2} \|\theta+z\|^2 ~.$$ We also have $$\nabla g^*(\theta) = \theta + z~.$$ #### $L_1$ regularization: Another popular regularization we consider is the $L_1$ regularization, $$f(w) = \sigma\, \|w\|_1 ~.$$ This is not a strongly convex regularizer and therefore we will add a slight $L_2$ regularization to it and define the $L_1$-$L_2$ regularization as $$\label{eqn:gdefl1l2} g(w) = \frac{1}{2} \|w\|_2^2 + \sigma'\, \|w\|_1 ~,$$ where $\sigma' = \frac{\sigma}{\lambda}$ for some small $\lambda$. Note that $$\lambda g(w) = \frac{\lambda}{2} \|w\|_2^2 + \sigma \|w\|_1 ~,$$ so if $\lambda$ is small enough (as will be formalized later) we obtain that $\lambda g(w) \approx \sigma \|w\|_1$. The conjugate of $g$ is $$\begin{aligned} g^*(v) &= \max_{w} \left[w^\top v - \frac{1}{2} \|w\|_2^2 - \sigma' \|w\|_1 \right] ~.\end{aligned}$$ The maximizer is also $\nabla g^*(v)$ and we now show how to calculate it. We have $$\begin{aligned} \nabla g^*(v) &= \operatorname*{argmax}_{w} \left[w^\top v - \frac{1}{2} \|w\|_2^2 - \sigma' \|w\|_1 \right] \\ &= \operatorname*{argmin}_w \left[ \frac{1}{2} \|w-v\|_2^2 + \sigma' \|w\|_1 \right]\end{aligned}$$ A sub-gradient of the objective of the optimization problem above is of the form $w-v + \sigma' z = 0$, where $z$ is a vector with $z_i = {{\mathrm {sign}}}(w_i)$, where if $w_i=0$ then $z_i \in [-1,1]$. Therefore, if $w$ is an optimal solution then for all $i$, either $w_i=0$ or $w_i = v_i - \sigma' {{\mathrm {sign}}}(w_i)$. Furthermore, it is easy to verify that if $w$ is an optimal solution then for all $i$, if $w_i \neq 0$ then the sign of $w_i$ must be the sign of $v_i$. Therefore, whenever $w_i \neq 0$ we have that $w_i = v_i - \sigma' {{\mathrm {sign}}}(v_i)$. It follows that in that case we must have $|v_i| > \sigma'$. And, the other direction is also true, namely, if $|v_i| > \sigma'$ then setting $w_i = v_i - \sigma' {{\mathrm {sign}}}(v_i)$ leads to an objective value whose $i$’th component is $$\frac{1}{2} \left(\sigma'\right)^2 + \sigma' (|v_i| - \sigma') \le \frac{1}{2} |v_i|^2 ~,$$ where the right-hand side is the $i$’th component of the objective value we will obtain by setting $w_i=0$. This leads to the conclusion that $$\nabla_i g^*(v) = {{\mathrm {sign}}}(v_i)\left[ |v_i| - \sigma'\right]_+ = \begin{cases} v_i - \sigma' {{\mathrm {sign}}}(v_i) & \textrm{if}~ |v_i| > \sigma' \\ 0 & \textrm{o.w.} \end{cases}$$ It follows that $$\begin{aligned} g^*(v) &= \sum_i {{\mathrm {sign}}}(v_i)\left[ |v_i| - \sigma'\right]_+ \, v_i - \frac{1}{2} \sum_i (\left[ |v_i| - \sigma'\right]_+)^2 - \sigma' \sum_i \left[ |v_i| - \sigma'\right]_+ \\ &= \sum_i \left[ |v_i| - \sigma'\right]_+ \left( |v_i| - \sigma' - \frac{1}{2}\left[ |v_i| - \sigma'\right]_+ \right) \\ &= \frac{1}{2} \sum_i \left(\left[ |v_i| - \sigma'\right]_+\right)^2 ~.\end{aligned}$$ Another regularization function we’ll use in the accelerated procedure is $$\label{eqn:gdefl1l2acc} g(w) = \frac{1}{2} \|w\|_2^2 + \sigma'\, \|w\|_1 - z^\top w ~.$$ The conjugate function is $$g^*(v) = \frac{1}{2} \sum_i \left(\left[ |v_i + z_i| - \sigma'\right]_+\right)^2 ~,$$ and its gradient is $$\nabla_i g^*(v) = {{\mathrm {sign}}}(v_i + z_i)\left[ |v_i + z_i| - \sigma'\right]_+$$ Ridge Regression ---------------- In ridge regression, we minimize the squared loss with $L_2$ regularization. 
That is, $g(w) = \frac{1}{2} \|w\|^2$ and for every $i$ we have that $x_i \in {\mathbb{R}}^d$ and $\phi_i(a) = \frac{1}{2} (a-y_i)^2$ for some $y_i \in {\mathbb{R}}$. The primal problem is therefore $$P(w) = \frac{1}{2n} \sum_{i=1}^n (x_i^\top w - y_i)^2 + \frac{\lambda}{2} \|w\|^2 ~.$$ Below we specify Prox-SDCA for ridge regression. We use Option I since it is possible to derive a closed form solution to the maximization of the dual with respect to $\Delta \alpha_i$. Indeed, since $-\phi_i^*(-b) = -\frac{1}{2} b^2 + y_i b $ we have that the maximization problem is $$\begin{aligned} \Delta \alpha_i &=~ \operatorname*{argmax}_{b} - \frac{1}{2} (\alpha^{(t-1)}_i + b)^2 + y_i (\alpha^{(t-1)}_i + b) - w^{(t-1) \top} x_i b - \frac{b^2 \|x_i\|^2}{2\lambda n} \\ &=~\operatorname*{argmax}_{b} - \frac{1}{2} \left(1 + \frac{\|x_i\|^2}{\lambda n} \right) b^2 - \left( \alpha^{(t-1)}_i + w^{(t-1) \top} x_i - y_i\right) \,b\\ &=~ - \frac{\alpha^{(t-1)}_i + w^{(t-1) \top} x_i - y_i}{1 + \frac{\|x_i\|^2}{\lambda n} } ~.\end{aligned}$$ Applying the above update and using some additional tricks to improve the running time we obtain the following procedure. [Prox-SDCA($(x_i,y_i)_{i=1}^n,\epsilon,\alpha^{(0)},z$) for solving ridge regression]{} **Goal:** Minimize $P(w) = \frac{1}{2n} \sum_{i=1}^n (x_i^\top w -y_i)^2 + \lambda \left(\frac{1}{2} \|w\|^2 - w^\top z\right)$\ **Initialize** $v^{(0)}=\frac{1}{\lambda n} \sum_{i=1}^n \alpha_i^{(0)} x_i$, $\forall i,~ \tilde{y}_i = y_i - x_i^\top z$\ **Iterate:** for $t=1,2,\dots$\ Randomly pick $i$\ $\Delta \alpha_i = - \frac{\alpha^{(t-1)}_i + v^{(t-1) \top} x_i - \tilde{y}_i}{1 + \frac{\|x_i\|^2}{\lambda n} } $\ $\alpha^{(t)}_i \leftarrow \alpha^{(t-1)}_i + \Delta \alpha_i$ and for $j \neq i$, $\alpha^{(t)}_j \leftarrow \alpha^{(t-1)}_j$\ $v^{(t)} \leftarrow v^{(t-1)} + \frac{\Delta \alpha_i}{\lambda n}x_i $\ **Stopping condition**:\ Let $w^{(t)} = v^{(t)} + z$\ Stop if $\frac{1}{2n} \sum_{i=1}^n \left( ( x_i^\top w^{(t)}-y_i)^2 + (\alpha^{(t)}_i + y_i)^2 - y_i^2\right) + \lambda w^{(t)\,^\top} v^{(t)} \le \epsilon$ The runtime of Prox-SDCA for ridge regression becomes $$\tilde{O}\left(d\left(n+ \frac{R^2}{\lambda}\right)\right) ~,$$ where $R = \max_i \|x_i\|$. This matches the recent results of [@LSB12-sgdexp; @ShalevZh2013]. If $R^2/\lambda \gg n$ we can apply the accelerated procedure and obtain the improved runtime $$\tilde{O}\left(d\sqrt{\frac{nR^2}{\lambda}}\right) ~.$$ Logistic Regression ------------------- In logistic regression, we minimize the logistic loss with $L_2$ regularization. That is, $g(w) = \frac{1}{2} \|w\|^2$ and for every $i$ we have that $x_i \in {\mathbb{R}}^d$ and $\phi_i(a) = \log(1+e^a)$. The primal problem is therefore[^6] $$P(w) = \frac{1}{n} \sum_{i=1}^n \log(1+e^{x_i^\top w}) + \frac{\lambda}{2} \|w\|^2 ~.$$ The dual problem is $$D(\alpha) = \frac{1}{n} \sum_{i=1}^n (\alpha_i\log(-\alpha_i) - (1+\alpha_i)\log(1+\alpha_i)) - \frac{\lambda}{2} \|v(\alpha)\|^2 ~,$$ and the dual constraints are $\alpha \in [-1,0]^n$. Below we specify Prox-SDCA for logistic regression using Option III.
[Prox-SDCA($(x_i)_{i=1}^n,\epsilon,\alpha^{(0)},z$) for logistic regression]{} **Goal:** Minimize $P(w) = \frac{1}{n} \sum_{i=1}^n \log(1 + e^{x_i^\top w}) + \lambda \left(\frac{1}{2} \|w\|^2 - w^\top z\right)$\ **Initialize** $v^{(0)}=\frac{1}{\lambda n} \sum_{i=1}^n \alpha_i^{(0)} x_i$, and $\forall i,~~ p_i = x_i^\top z$\ **Define:** $\phi^*(b) = b\log(b)+(1-b)\log(1-b)$\ **Iterate:** for $t=1,2,\dots$\ Randomly pick $i$\ $p = x_i^\top w^{(t-1)}$\ $q = -1/(1+e^{-p}) -\alpha_i^{(t-1)}$\ $s = \min\left(1,\frac{\log(1+e^p) + \phi^*(-\alpha_i^{(t-1)}) + p \alpha^{(t-1)}_i + 2 q^2}{ q^2 (4 + \frac{1}{\lambda n} \|x_i\|^2 )}\right)$\ $\Delta \alpha_i = sq $\ $\alpha^{(t)}_i = \alpha^{(t-1)}_i + \Delta \alpha_i$ and for $j \neq i$, $\alpha^{(t)}_j = \alpha^{(t-1)}_j$\ $v^{(t)} = v^{(t-1)} + \frac{\Delta \alpha_i}{\lambda n}x_i $\ **Stopping condition**:\ let $w^{(t)} = v^{(t)} + z$\ Stop if $\frac{1}{n} \sum_{i=1}^n \left( \log(1 + e^{x_i^\top w^{(t)}}) + \phi^*(-\alpha_i^{(t-1)}) \right) + \lambda w^{(t) \top} v^{(t)} \le \epsilon$ The runtime analysis is similar to the analysis for ridge regression. Lasso ----- In the Lasso problem, the loss function is the squared loss but the regularization function is $L_1$. That is, we need to solve the problem: $$\label{eqn:Lasso} \min_w \left[ \frac{1}{2n} \sum_{i=1}^n ( x_i^\top w - y_i)^2 + \sigma \|w\|_1 \right] ~,$$ with a positive regularization parameter $\sigma \in {\mathbb{R}}_+$. Let $\bar{y} = \frac{1}{2n} \sum_{i=1}^n y_i^2$, and let $\bar{w}$ be an optimal solution of . Then, the objective at $\bar{w}$ is at most the objective at $w=0$, which yields $$\sigma \|\bar{w}\|_1 \le \bar{y} ~~\Rightarrow~~ \|\bar{w}\|_2 \le \|\bar{w}\|_1 \le \frac{\bar{y}}{\sigma} ~.$$ Consider the optimization problem $$\label{eqn:LassoL1L2} \min_w P(w) ~~~\textrm{where}~~~ P(w) = \frac{1}{2n} \sum_{i=1}^n ( x_i^\top w - y_i)^2 + \lambda \left( {\frac{1}{2}}\|w\|_2^2 + \frac{\sigma}{\lambda} \|w\|_1 \right) ~,$$ for some $\lambda > 0$. This problem fits into our framework, since now the regularizer is strongly convex. Furthermore, if $w^*$ is an $(\epsilon/2)$-accurate solution to the problem in , then $P(w^*) \le P(\bar{w}) +\epsilon/2$ which yields $$\left[\frac{1}{2n} \sum_{i=1}^n ( x_i^\top w^* - y_i)^2 + \sigma \|w^*\|_1 \right] \le \left[ \frac{1}{2n} \sum_{i=1}^n ( x_i^\top \bar{w} - y_i)^2 + \sigma \|\bar{w}\|_1 \right] + \frac{\lambda}{2} \|\bar{w}\|_2^2 + \epsilon/2~.$$ Since $\|\bar{w}\|_2^2 \le \left({\bar{y}}/{\sigma} \right)^2$, we obtain that setting $\lambda = \epsilon (\sigma/\bar{y})^2$ guarantees that $w^*$ is an $\epsilon$ accurate solution to the original problem given in . In light of the above, from now on we focus on the problem given in . As in the case of ridge regression, we can apply Prox-SDCA with Option I. The resulting pseudo-code is given below. Applying the above update and using some additional tricks to improve the running time we obtain the following procedure. 
[Prox-SDCA($(x_i,y_i)_{i=1}^n,\epsilon,\alpha^{(0)},z$) for solving $L_1-L_2$ regression]{} **Goal:** Minimize $P(w) = \frac{1}{2n} \sum_{i=1}^n (x_i^\top w -y_i)^2 + \lambda \left(\frac{1}{2} \|w\|^2 + \sigma' \|w\|_1 - w^\top z\right)$\ **Initialize** $v^{(0)}=\frac{1}{\lambda n} \sum_{i=1}^n \alpha_i^{(0)} x_i$, and $\forall j,~w^{(0)}_j = {{\mathrm {sign}}}(v_j^{(0)}+z_j)[ |v_j^{(0)}+z_j| - \sigma']_+$\ **Iterate:** for $t=1,2,\dots$\ Randomly pick $i$\ $\Delta \alpha_i = - \frac{\alpha^{(t-1)}_i + w^{(t-1) \top} x_i - y_i}{1 + \frac{\|x_i\|^2}{\lambda n} } $\ $\alpha^{(t)}_i = \alpha^{(t-1)}_i + \Delta \alpha_i$ and for $j \neq i$, $\alpha^{(t)}_j = \alpha^{(t-1)}_j$\ $v^{(t)} = v^{(t-1)} + \frac{\Delta \alpha_i}{\lambda n}x_i $\ $\forall j,~w^{(t)}_j = {{\mathrm {sign}}}(v_j^{(t)}+z_j)[ |v_j^{(t)}+z_j| - \sigma']_+$\ **Stopping condition**:\ Stop if $\frac{1}{2n} \sum_{i=1}^n \left( (x_i^\top w^{(t)}-y_i)^2 - 2y_i\alpha^{(t)}_i + (\alpha^{(t)}_i)^2 \right) + \lambda w^{(t) \top} v^{(t)} \le \epsilon$ Let us now discuss the runtime of the resulting method. Denote $R=\max_i \|x_i\|$ and for simplicity, assume that $\bar{y} = O(1)$. Choosing $\lambda = \epsilon (\sigma/\bar{y})^2$, the runtime of our method becomes $$\tilde{O}\left(d\left(n+ \min\left\{\frac{R^2}{\epsilon\,\sigma^2},\sqrt{\frac{nR^2}{\epsilon\,\sigma^2}}\right\}\right)\right) ~.$$ It is also convenient to write the bound in terms of $B = \|\bar{w}\|_2$, where, as before, $\bar{w}$ is the optimal solution of the $L_1$ regularized problem. With this parameterization, we can set $\lambda = \epsilon/B^2$ and the runtime becomes $$\tilde{O}\left(d\left(n+ \min\left\{\frac{R^2B^2}{\epsilon},\sqrt{\frac{n\,R^2B^2}{\epsilon}}\right\}\right)\right) ~.$$ The runtime of standard SGD is $O(dR^2 B^2 / \epsilon^2)$ even in the case of smooth loss functions such as the squared loss. Several variants of SGD that lead to sparser intermediate solutions have been proposed (e.g. [@LangfordLiZh09; @shalev2011stochastic; @Xiao10; @duchi2009efficient; @DuchiShSiTe10]). However, all of these variants share the runtime of $O(dR^2 B^2 / \epsilon^2)$, which is much slower than our runtime when $\epsilon$ is small. Another relevant approach is the FISTA algorithm of [@beck2009fast]. The shrinkage operator of FISTA is the same as the gradient of $g^*$ used in our approach. It is a batch algorithm using Nesterov’s accelerated gradient technique. For the squared loss function, the runtime of FISTA is $$O\left( d\,n\, \sqrt{\frac{R^2B^2}{\epsilon }} \right) ~.$$ This bound is worse than our bound by a factor of at least $\sqrt{n}$. Another approach to solving is stochastic coordinate descent over the primal problem. [@shalev2011stochastic] showed that the runtime of this approach is $$O\left(\frac{dnB^2}{\epsilon}\right) ~,$$ under the assumption that $\|x_i\|_\infty \le 1$ for all $i$. Similar results can also be found in [@Nesterov10]. For our method, the runtime depends on $R^2 = \max_i \|x_i\|_2^2$. If $R^2 = O(1)$ then the runtime of our method is much better than that of [@shalev2011stochastic]. In the general case, if $\max_i \|x_i\|_\infty \le 1$ then $R^2 \le d$, which yields the runtime of $$\tilde{O}\left(d\left(n+ \min\left\{\frac{dB^2}{\epsilon},\sqrt{\frac{n\,dB^2}{\epsilon}}\right\}\right)\right) ~.$$ This is the same as or better than [@shalev2011stochastic] whenever $d = O(n)$. Linear SVM ---------- Support Vector Machines (SVM) is an algorithm for learning a linear classifier.
Linear SVM (i.e., SVM with linear kernels) amounts to minimizing the objective $$P(w) = \frac{1}{n} \sum_{i=1}^n [1 - x_i^\top w]_+ + \frac{\lambda}{2} \|w\|^2 ~,$$ where $[a]_+ = \max\{0,a\}$, and for every $i$, $x_i \in {\mathbb{R}}^d$. This can be cast as the objective given in by letting the regularization be $g(w) = \frac{1}{2} \|w\|_2^2$, and for every $i$, $\phi_i(a) = [1-a]_+$, is the hinge-loss. Let $R =\max_i \|x_i\|_2$. SGD enjoys the rate of $O\left(\frac{1}{\lambda \epsilon}\right)$. Many software packages apply SDCA and obtain the rate $\tilde{O}\left(n + \frac{1}{\lambda \epsilon}\right)$. We now show how our accelerated proximal SDCA enjoys the rate $\tilde{O}\left(n + \sqrt{\frac{n}{\lambda \epsilon}}\right)$. This is significantly better than the rate of SGD when $\lambda \epsilon < 1/n$. We note that a default setting for $\lambda$, which often works well in practice, is $\lambda = 1/n$. In this case, $\lambda \epsilon = \epsilon/n \ll 1/n$. Our first step is to smooth the hinge-loss. Let $\gamma = \epsilon$ and consider the smooth hinge-loss as defined in . Recall that the smooth hinge-loss satisfies $$\forall a,~~\phi(a)-\gamma/2 \le \tilde{\phi}(a) \le \phi(a) ~.$$ Let $\tilde{P}$ be the SVM objective while replacing the hinge-loss with the smooth hinge-loss. Therefore, for every $w'$ and $w$, $$P(w')- P(w) \le \tilde{P}(w') - \tilde{P}(w) + \gamma/2 ~.$$ It follows that if $w'$ is an $(\epsilon/2)$-optimal solution for $\tilde{P}$, then it is $\epsilon$-optimal solution for $P$. For the smoothed hinge loss, the optimization problem given in Option I of Prox-SDCA has a closed form solution and we obtain the following procedure: [Prox-SDCA($(x_1,\ldots,x_n),\epsilon,\alpha^{(0)},z$) for solving SVM (with smooth hinge-loss as in )]{} **Define:** $\tilde{\phi}_\gamma$ as in\ **Goal:** Minimize $P(w) = \frac{1}{n} \sum_{i=1}^n \tilde{\phi}_\gamma(x_i^\top w) + \lambda \left(\frac{1}{2} \|w\|^2 - w^\top z\right)$\ **Initialize** $w^{(0)}=z + \frac{1}{\lambda n} \sum_{i=1}^n \alpha_i^{(0)} x_i$\ **Iterate:** for $t=1,2,\dots$\ Randomly pick $i$\ $\Delta \alpha_i = \max\left(-\alpha^{(t-1)}_i~,~ \min\left(1 -\alpha^{(t-1)}_i~,~ \frac{1 - x_i^\top w^{(t-1)} - \gamma\,\alpha^{(t-1)}_i}{\|x_i\|^2/(\lambda n)+\gamma} \right) \right) $\ $\alpha^{(t)}_i \leftarrow \alpha^{(t-1)}_i + \Delta \alpha_i$ and for $j \neq i$, $\alpha^{(t)}_j \leftarrow \alpha^{(t-1)}_j$\ $w^{(t)} \leftarrow w^{(t-1)} + \frac{\Delta \alpha_i}{\lambda n}x_i $\ **Stopping condition**:\ Stop if $\frac{1}{n} \sum_{i=1}^n \left( \tilde{\phi}_\gamma(x_i^\top w^{(t)}) - \alpha^{(t)}_i + \frac{\gamma}{2} (\alpha^{(t)}_i)^2 \right) + \lambda w^{(t)^\top} (w^{(t)}-z) \le \epsilon$ Denote $R=\max_i \|x_i\|$. Then, the runtime of the resulting method is $$\tilde{O}\left(d\left(n+ \min\left\{\frac{R^2}{\gamma\,\lambda},\sqrt{\frac{nR^2}{\gamma\,\lambda}}\right\}\right)\right) ~.$$ In particular, choosing $\gamma = \epsilon$ we obtain a solution to the original SVM problem in runtime of $$\tilde{O}\left(d\left(n+ \min\left\{\frac{R^2}{\epsilon\,\lambda},\sqrt{\frac{nR^2}{\epsilon\,\lambda}}\right\}\right)\right) ~.$$ As mentioned before, this is better than SGD when $\frac{1}{\lambda \epsilon} \gg n$. Multiclass SVM -------------- Next we consider Multiclass SVM using the construction described in @CrammerSi01a. Each example consists of an instance vector $x_i \in {\mathbb{R}}^d$ and a label $y_i \in \{1,\ldots,k\}$. 
The goal is to learn a matrix $W \in {\mathbb{R}}^{d,k}$ such that $W^\top x_i$ is a $k$’th dimensional vector of scores for the different classes. The prediction is the coordinate of $W^\top x_i$ of maximal value. The loss function is $$\max_{j \neq y_i} (1 + (W^\top x_i)_j - (W^\top x_i)_{y_i} ) ~.$$ This can be written as $\phi((W^\top x_i) - (W^\top x_i)_{y_i})$ where $$\phi_i(a) = \max_j [c_{i,j} + a_j]_+ ~,$$ with $c_i$ being the all ones vector except $0$ in the $y_i$ coordinate. We can model this in our framework as follows. Given a matrix $M$ let $\textrm{vec}(M)$ be the column vector obtained by concatenating the columns of $M$. Let $e_j$ be the all zeros vector except $1$ in the $j$’th coordinate. For every $i$, let $c_i = \mathbf{1} - e_{y_i}$ and let $X_i \in {\mathbb{R}}^{dk,k}$ be the matrix whose $j$’th column is $\textrm{vec}(x_i (e_j-e_{y_i})^\top)$. Then, $$X_i^\top \textrm{vec}(W) = W^\top x_i - (W^\top x_i)_{y_i} ~.$$ Therefore, the optimization problem of multiclass SVM becomes: $$\min_{w \in {\mathbb{R}}^{dk}} P(w) ~~~\textrm{where}~~~ P(w) = \frac{1}{n} \sum_{i=1}^n \phi_i(X_i^\top w) + \frac{\lambda}{2} \|w\|^2 ~.$$ As in the case of SVM, we will use the smooth version of the max-of-hinge loss function as described in . If we set the smoothness parameter $\gamma$ to be $\epsilon$ then an $(\epsilon/2)$-accurate solution to the problem with the smooth loss is also an $\epsilon$-accurate solution to the original problem with the non-smooth loss. Therefore, from now on we focus on the problem with the smooth max-of-hinge loss. We specify Prox-SDCA for multiclass SVM using Option I. We will show that the optimization problem in Option I can be calculated efficiently by sorting a $k$ dimensional vector. Such ideas were explored in [@CrammerSi01a] for the non-smooth max-of-hinge loss. Let $\hat{w} = w - \frac{1}{\lambda n} X_i \alpha^{(t-1)}_i$. Then, the optimization problem over $\alpha_i$ can be written as $$\label{eqn:OptDualForMultiVector} \operatorname*{argmax}_{\alpha_i : -\alpha_i \in S} ~~(-c_i^\top - \hat{w}^\top X_i) \alpha_i - \frac{\gamma}{2} \|\alpha_i\|^2 - \frac{1}{2\lambda n} \|X_i \alpha_i \|^2 ~.$$ As shown before, if we organize $\hat{w}$ as a $d \times k$ matrix, denoted $\hat{W}$, we have that $X_i^\top \hat{w} = \hat{W}^\top x_i - (\hat{W}^\top x_i)_{y_i}$. We also have that $$X_i \alpha_i = \sum_j \textrm{vec}(x_i (e_j-e_{y_i})^\top) \alpha_{i,j} = \textrm{vec}(x_i \sum_j \alpha_{i,j} (e_j - e_{y_i})^\top) = \textrm{vec}(x_i (\alpha_i - \|\alpha_i\|_1 e_{y_i})^\top) ~.$$ It follows that an optimal solution to must set $\alpha_{i,y_i} = 0$ and we only need to optimize over the rest of the dual variables. 
This also yields, $$\|X_i\alpha_i\|^2 = \|x_i\|^2 \|\alpha_i\|_2^2 + \|x_i\|^2 \|\alpha_i\|_1^2 ~.$$ So, becomes: $$\label{eqn:OptDualForMultiVector2} \operatorname*{argmax}_{\alpha_i : -\alpha_i \in S, \alpha_{i,y_i}=0} ~~(-c_i^\top - \hat{w}^\top X_i) \alpha_i - \frac{\gamma}{2} \|\alpha_i\|_2^2 - \frac{\|x_i\|^2}{2\lambda n} \|\alpha_i \|_2^2 - \frac{\|x_i\|^2}{2\lambda n} \|\alpha_i \|_1^2 ~.$$ This is equivalent to a problem of the form: $$\label{eqn:aProxForMC} \operatorname*{argmin}_{a \in {\mathbb{R}}_+^{k-1}, \beta} \|a - \mu\|_2^2 + C \beta^2 ~~~\textrm{s.t.}~~~ \|a\|_1 = \beta \le 1 ~,$$ where $$\mu = \frac{c_i^\top + \hat{w}^\top X_i}{\gamma + \frac{\|x_i\|^2}{\lambda n}} ~~~\textrm{and}~~~ C = \frac{\frac{\|x_i\|^2}{\lambda n}}{\gamma + \frac{\|x_i\|^2}{\lambda n}} = \frac{1}{ \frac{\gamma \lambda n}{\|x_i\|^2} + 1} ~.$$ The equivalence is in the sense that if $(a,\beta)$ is a solution of then we can set $\alpha_i = -a$. Assume for simplicity that $\mu$ is sorted in a non-increasing order and that all of its elements are non-negative (otherwise, it is easy to verify that we can zero the negative elements of $\mu$ and sort the non-negative, without affecting the solution). Let $\bar{\mu}$ be the cumulative sum of $\mu$, that is, for every $j$, let $\bar{\mu}_j = \sum_{r=1}^j \mu_r$. For every $j$, let $z_j = \bar{\mu}_j - j \mu_j$. Since $\mu$ is sorted we have that $$z_{j+1} = \sum_{r=1}^{j+1} \mu_r - (j+1) \mu_{j+1} = \sum_{r=1}^j \mu_r - j \mu_{j+1} \ge \sum_{r=1}^j \mu_r - j \mu_{j} = z_j ~.$$ Note also that $z_1 = 0$ and that $z_{k} = \bar{\mu}_k = \|\mu\|_1$ (since the coordinate of $\mu$ that corresponds to $y_i$ is zero). By the properties of projection onto the simplex (see [@duchi2008efficient]), for every $z \in (z_j,z_{j+1})$ we have that the projection of $\mu$ onto the set $\{b \in {\mathbb{R}}_+^k : \|b\|_1=z\}$ is of the form $a_r = \max\{0,\mu_r - \theta/j\}$ where $\theta = (-z + \bar{\mu}_j)/j$. Therefore, the objective becomes (ignoring constants that do not depend on $z$), $$j \theta^2 + C z^2 = (-z + \bar{\mu}_j)^2/j + C z^2 ~.$$ The first order condition for minimality w.r.t. $z$ is $$-(-z + \bar{\mu}_j)/j + Cz = 0 ~~\Rightarrow~~ z = \frac{\bar{\mu}_j}{1 + jC} ~.$$ If this value of $z$ is in $(z_j,z_{j+1})$, then it is the optimal $z$ and we’re done. Otherwise, the optimum should be either $z=0$ (which yields $\alpha=0$ as well) or $z=1$. [$a = \textrm{OptimizeDual}(\mu,C)$ ]{} **Solve** the optimization problem given in\ **Initialize:** $\forall i, ~ \hat{\mu}_i = \max\{0,\mu_i\}$, and sort $\hat{\mu}$ s.t. $\hat{\mu}_1 \ge \hat{\mu}_2 \ge \ldots \ge \hat{\mu}_k$\ **Let:** $\bar{\mu}$ be s.t. $\bar{\mu}_j = \sum_{i=1}^j \hat{\mu}_i$\ **Let:** $z$ be s.t. $z_j = \min\{\bar{\mu}_j - j \hat{\mu}_j,1\}$ and $z_{k+1} = 1$\ **If:** $\exists j$ s.t. $\frac{\bar{\mu}_j}{1 + jC} \in [z_j,z_{j+1}]$\ return $a$ s.t. $\forall i,~ a_i = \max\left\{0,\mu_i - \left(-\frac{\bar{\mu}_j}{1 + jC} + \bar{\mu}_j\right)/j\right\}$\ **Else:**\ Let $j$ be the minimal index s.t. $z_j=1$\ set $a$ s.t. $\forall i,~~a_i = \max\{0,\mu_i - (-z_j + \bar{\mu}_j)/j\}$\ **If:** $\|a-\mu\|^2 + C \le \|\mu\|^2$\ return $a$\ **Else:**\ return $(0,\ldots,0)$ The resulting pseudo-codes for Prox-SDCA is given below. We specify the procedure while referring to $W$ as a matrix, because it is the more natural representation. For convenience of the code, we also maintain in $\alpha_{i,y_i}$ the value of $-\sum_{j \neq y_i} \alpha_{i,j}$ (instead of the optimal value of $0$). 
[Prox-SDCA($(x_i,y_i)_{i=1}^n,\epsilon,\alpha,Z$) for solving Multiclass SVM (with smooth hinge-loss as in )]{}
**Define:** $\tilde{\phi}_\gamma$ as in
**Goal:** Minimize $P(W) = \frac{1}{n} \sum_{i=1}^n \tilde{\phi}_\gamma((W^\top x_i) - (W^\top x_i)_{y_i}) + \lambda \left(\frac{1}{2} \textrm{vec}(W)^\top \textrm{vec}(W) - \textrm{vec}(W)^\top \textrm{vec}(Z)\right)$
**Initialize:** $W=Z + \frac{1}{\lambda n} \sum_{i=1}^n x_i \alpha_i^\top$
**Iterate:** for $t=1,2,\dots$
  Randomly pick $i$
  $\hat{W} = W - \frac{1}{\lambda n} x_i \alpha^{\top}_i$
  $p = x_i^\top \hat{W}$,  $p = p - p_{y_i}$,  $c = \mathbf{1} - e_{y_i}$,  $\mu = \frac{c + p}{\gamma + \|x_i\|^2/(\lambda n)}$,  $C = \frac{1}{1 + \gamma \lambda n / \|x_i\|^2}$
  $a = \textrm{OptimizeDual}(\mu,C)$
  $\alpha_i = -a$,  $\alpha_{i,y_i} = \|a\|_1$
  $W = \hat{W} + \frac{1}{\lambda n} x_i \alpha^{\top}_i$
**Stopping condition:**
  let $G = 0$
  for $i=1,\ldots,n$
    $a = W^\top x_i$,  $a = a-a_{y_i}$,  $c = \mathbf{1} - e_{y_i}$,  $b = \textrm{Project}((a+c)/\gamma)$
    $G = G + \frac{\gamma}{2} (\|(a+c)/\gamma\|^2 - \|b-(a+c)/\gamma\|^2) + c^\top \alpha^{(t)}_i + \frac{\gamma}{2} (\|\alpha^{(t)}_i\|^2 - (\alpha^{(t)}_{i,y_i})^2)$
  Stop if $G/n + \lambda \textrm{vec}(W)^\top \textrm{vec}(W-Z) \le \epsilon$

Experiments
===========

In this section we compare Prox-SDCA, its accelerated version Accelerated-Prox-SDCA, and the FISTA algorithm of [@beck2009fast], on $L_1-L_2$ regularized loss minimization problems. The experiments were performed on three large datasets with very different feature counts and sparsity, which were kindly provided by Thorsten Joachims (the datasets were also used in [@ShZh12-sdca]). The astro-ph dataset classifies abstracts of papers from the physics ArXiv according to whether they belong in the astro-physics section; CCAT is a classification task taken from the Reuters RCV1 collection; and cov1 is class 1 of the covertype dataset of Blackard, Jock & Dean. The following table provides details of the dataset characteristics.

  Dataset    Training Size   Testing Size   Features   Sparsity
  ---------- --------------- -------------- ---------- -----------
  astro-ph   $29882$         $32487$        $99757$    $0.08\%$
  CCAT       $781265$        $23149$        $47236$    $0.16\%$
  cov1       $522911$        $58101$        $54$       $22.22\%$

These are binary classification problems, with each $x_i$ being a vector which has been normalized to be $\|x_i\|_2=1$, and $y_i$ being a binary class label of $\pm 1$. We multiplied each $x_i$ by $y_i$ and, following [@ShZh12-sdca], we employed the smooth hinge loss, $\tilde{\phi}_\gamma$, as in , with $\gamma=1$. The optimization problem we need to solve is therefore $$\min_w P(w) ~~~\textrm{where}~~~ P(w) = \frac{1}{n} \sum_{i=1}^n \tilde{\phi}_\gamma(x_i^\top w) + \frac{\lambda}{2} \|w\|_2^2 + \sigma \|w\|_1 ~.$$ In the experiments, we set $\sigma=10^{-5}$ and vary $\lambda$ in the range $\{10^{-6}, 10^{-7}, 10^{-8}, 10^{-9}\}$. The convergence behaviors are plotted in Figure \[fig:conv\]. In all the plots we depict the primal objective as a function of the number of passes over the data (often referred to as “epochs”). For FISTA, each iteration involves a single pass over the data. For Prox-SDCA, each $n$ iterations are equivalent to a single pass over the data. And, for Accelerated-Prox-SDCA, each $n$ inner iterations are equivalent to a single pass over the data.
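For concreteness, the primal objective just described can be evaluated as in the short Python sketch below. The exact definition of $\tilde{\phi}_\gamma$ is the one in the equation referenced above (not reproduced here); the sketch uses the standard smoothed hinge from the SDCA literature, which we assume matches it, and all names are ours.

```python
import numpy as np

def smoothed_hinge(a, gamma=1.0):
    """Standard smoothed hinge (assumed to match the tilde{phi}_gamma referenced
    in the text): 0 for a >= 1, linear for a <= 1 - gamma, quadratic in between."""
    return np.where(a >= 1.0, 0.0,
           np.where(a <= 1.0 - gamma, 1.0 - a - gamma / 2.0,
                    (1.0 - a) ** 2 / (2.0 * gamma)))

def primal_objective(w, X, lam, sigma, gamma=1.0):
    """P(w) = (1/n) sum tilde{phi}_gamma(x_i^T w) + (lam/2)||w||_2^2 + sigma*||w||_1,
    where the rows of X are the label-folded examples x_i <- y_i * x_i."""
    margins = X @ w
    return (smoothed_hinge(margins, gamma).mean()
            + 0.5 * lam * np.dot(w, w)
            + sigma * np.abs(w).sum())
```

This is the quantity reported on the vertical axes of the convergence plots, as a function of the number of epochs.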
For Prox-SDCA and Accelerated-Prox-SDCA we implemented their corresponding stopping conditions and terminated the methods once an accuracy of $10^{-3}$ was guaranteed. It is clear from the graphs that Accelerated-Prox-SDCA yields the best results, and often significantly outperforms the other methods. Prox-SDCA behaves similarly when $\lambda$ is relatively large, but it converges much more slowly when $\lambda$ is small. This is consistent with our theory. Finally, the relative performance of FISTA and Prox-SDCA depends on the ratio between $\lambda$ and $n$, but in all cases, Accelerated-Prox-SDCA is much faster than FISTA. This is again consistent with our theory. Discussion and Open Problems ============================ We have described and analyzed a proximal stochastic dual coordinate ascent method and have shown how to accelerate the procedure. The overall runtime of the resulting method improves state-of-the-art results in many cases of interest. There are two main open problems that we leave to future research. When $\frac{1}{\lambda \gamma}$ is larger than $n$, the runtime of our procedure becomes $\tilde{O}\left(d\sqrt{\frac{n}{\lambda \gamma}}\right)$. Is it possible to derive a method whose runtime is $\tilde{O}\left(d\left(n+\sqrt{\frac{1}{\lambda \gamma}}\right)\right)$ ? Our Prox-SDCA procedure and its analysis work for regularizers which are strongly convex with respect to an arbitrary norm. However, our acceleration procedure is designed for regularizers which are strongly convex with respect to the Euclidean norm. Is it possible to extend the acceleration procedure to more general regularizers? Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank Fen Xia for careful proof-reading of the paper which helped us to correct numerous typos. Shai Shalev-Shwartz is supported by the following grants: Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) and ISF 598-10. Tong Zhang is supported by the following grants: NSF IIS-1016061, NSF DMS-1007527, and NSF IIS-1250985. Proofs of Iteration Bounds for Prox-SDCA ======================================== The proof technique follows that of @ShalevZh2013, but with the required generality for handling general strongly convex regularizers and smoothness/Lipschitzness with respect to general norms. We prove the theorems for running Prox-SDCA while choosing $\Delta \alpha_i$ as in Option I. A careful examination of the proof easily reveals that the results hold for the other options as well. More specifically, Lemma \[lem:key\] only requires choosing $\Delta \alpha_i = s (u_i^{(t-1)}-\alpha_i^{(t-1)})$ as in , and Option III chooses $s$ to optimize the bound on the right hand side of , and hence ensures that the choice can do no worse than the result of Lemma \[lem:key\] with any $s$. The simplifications in Options IV and V employ the specific simplification of the bound in Lemma \[lem:key\] in the proof of the theorems. The key lemma is the following: \[lem:key\] Assume that $\phi^*_i$ is $\gamma$-strongly-convex. For any iteration $t$, let ${\mathbb{E}}_{t}$ denote the expectation with respect to the randomness in choosing $i$ at round $t$, conditional on the value of $\alpha^{(t-1)}$.
Then, for any iteration $t$ and any $s \in [0,1]$ we have $${\mathbb{E}}_t[D(\alpha^{(t)})-D(\alpha^{(t-1)})] \ge \frac{s}{n}\, [P(w^{(t-1)})-D(\alpha^{(t-1)})] - \left(\frac{s}{n}\right)^2 \frac{G^{(t)}}{2\lambda} ~,$$ where $$G^{(t)} = \frac{1}{n} \sum_{i=1}^n \left(\|X_i\|_{D\to D'}^2 - \frac{\gamma(1-s)\lambda n}{s}\right) \; {\mathbb{E}}_t \left[\|u^{(t-1)}_i-\alpha^{(t-1)}_i\|_D^2\right] ,$$ and $-u^{(t-1)}_i = \nabla \phi_i(X_i^\top w^{(t-1)})$. Since only the $i$’th element of $\alpha$ is updated, the improvement in the dual objective can be written as $$\begin{aligned} & n[D(\alpha^{(t)}) - D(\alpha^{(t-1)})] \\ = & \left(-\phi^*(-\alpha^{(t)}_i) - \lambda n g^*\left(v^{(t-1)} + (\lambda n)^{-1} X_i \Delta \alpha_i\right) \right) - \left(-\phi^*(-\alpha^{(t-1)}_i) - \lambda n g^*\left(v^{(t-1)}\right) \right) \end{aligned}$$ The smoothness of $g^*$ implies that $g^*(v+ \Delta v) \leq h(v;\Delta v)$, where $h(v;\Delta v) := g^*(v) + \nabla g^*(v)^\top \Delta v + \frac{1}{2} \|\Delta v\|_{D'}^2$. Therefore, $$\begin{aligned} & n[D(\alpha^{(t)}) - D(\alpha^{(t-1)})] \\ \geq & \underbrace{\left(-\phi^*(-\alpha^{(t)}_i) - \lambda n h\left(v^{(t-1)}; (\lambda n)^{-1} X_i \Delta \alpha_i\right)\right) }_A - \underbrace{\left(-\phi^*(-\alpha^{(t-1)}_i) - \lambda n g^*\left(v^{(t-1)}\right) \right)}_B .\end{aligned}$$ By the definition of the update we have for all $s \in [0,1]$ that $$\begin{aligned} \nonumber A &= \max_{\Delta \alpha_i} -\phi^*(-(\alpha^{(t-1)}_i + \Delta\alpha_i)) - \lambda n h\left(v^{(t-1)}; (\lambda n)^{-1} X_i \Delta \alpha_i\right) \\ &\ge -\phi^*(-(\alpha^{(t-1)}_i + s(u^{(t-1)}_i - \alpha^{(t-1)}_i) )) - \lambda n h(v^{(t-1)}; (\lambda n)^{-1} s X_i (u^{(t-1)}_i -\alpha^{(t-1)}_i)) . \label{eqn:PC1}\end{aligned}$$ From now on, we omit the superscripts and subscripts. Since $\phi^*$ is $\gamma$-strongly convex, we have that $$\label{eqn:PC2} \phi^*(-(\alpha+ s(u - \alpha) )) = \phi^*(s (-u) + (1-s) (-\alpha)) \le s \phi^*(-u) + (1-s) \phi^*(-\alpha) - \frac{\gamma}{2} s (1-s) \|u-\alpha\|_D^2$$ Combining this with and rearranging terms we obtain that $$\begin{aligned} A &\ge -s \phi^*(-u) - (1-s) \phi^*(-\alpha) + \frac{\gamma}{2} s (1-s) \|u-\alpha\|_D^2 - \lambda n h(v; (\lambda n)^{-1} s X(u - \alpha) ) \\ &= -s \phi^*(-u) - (1-s) \phi^*(-\alpha) + \frac{\gamma}{2} s (1-s) \|u-\alpha\|_{D}^2 - \lambda n g^*(v) - s w^\top X (u-\alpha) - \frac{s^2 \|X(u-\alpha)\|_{D'}^2}{2\lambda n} \\ &\ge -s(\phi^*(-u)+w^\top X u) + (-\phi^*(-\alpha) - \lambda n g^*(v)) \\ &~~~~~~~~~~+ \frac{s}{2}\left(\gamma(1-s)-\frac{s \|X \|_{D\to D'}^2}{\lambda n}\right)\|u-\alpha\|_D^2 + s(\phi^*(-\alpha)+ w^\top X \alpha) .\end{aligned}$$ Since $-u = \nabla \phi(X^\top w)$ we have $\phi^*(-u) + w^\top X u = - \phi(X^\top w)$, which yields $$\label{eqn:PC3} A-B \ge s\left[\phi(X^\top w) + \phi^*(-\alpha) + w^\top X \alpha + \left(\frac{\gamma(1-s)}{2} - \frac{s \|X\|_{D\to D'}^2}{2\lambda n}\right) \|u-\alpha\|_D^2 \right] ~.$$ Next note that with $w=\nabla g^*(v)$, we have $g(w)+g^*(v)= w^\top v$. Therefore: $$\begin{aligned} P(w)-D(\alpha) &= \frac{1}{n} \sum_{i=1}^n \phi_i(X_i^\top w) + \lambda g(w) - \left(-\frac{1}{n} \sum_{i=1}^n \phi^*_i(-\alpha_i) - \lambda g^*(v) \right) \\ &= \frac{1}{n} \sum_{i=1}^n \phi_i(X_i^\top w) + \frac{1}{n} \sum_{i=1}^n \phi^*_i(-\alpha_i) + \lambda w^\top v \\ &= \frac{1}{n} \sum_{i=1}^n \left( \phi_i(X_i^\top w) + \phi^*_i(-\alpha_i) + w^\top X_i \alpha_i \right) .\end{aligned}$$ Therefore, if we take expectation of w.r.t. 
the choice of $i$ we obtain that $$\frac{1}{s}\, {\mathbb{E}}_t[A-B] \ge [P(w)-D(\alpha)] - \frac{s}{2\lambda n} \cdot \underbrace{\frac{1}{n} \sum_{i=1}^n \left(\|X_i\|_{D\to D'}^2 - \frac{\gamma(1-s)\lambda n}{s}\right) {\mathbb{E}}_t[\|u_i-\alpha_i\|_D^2] }_{= G^{(t)}} .$$ We have obtained that $$\label{eqn:DualSObyGap} \frac{n}{s}\, {\mathbb{E}}_t[D(\alpha^{(t)})-D(\alpha^{(t-1)})] \ge [P(w^{(t-1)})-D(\alpha^{(t-1)})] - \frac{s\,G^{(t)}}{2\lambda n} ~.$$ Multiplying both sides by $s/n$ concludes the proof of the lemma. Equipped with the above lemmas we are ready to prove [Theorem \[thm:smooth\]]{} and [Theorem \[thm:HighProbsmooth\]]{}. The assumption that $\phi_i$ is $(1/\gamma)$-smooth implies that $\phi_i^*$ is $\gamma$-strongly-convex. We will apply [Lemma \[lem:key\]]{} with $$s = \frac{n}{n + R^2/(\lambda \gamma)} = \frac{\lambda n \gamma}{R^2 + \lambda n \gamma } \in [0,1] ~.$$ Recall that $\|X_i\|_{D\to D'} \le R$. Therefore, the choice of $s$ implies that $$\|X_i\|_{D\to D'}^2 - \frac{\gamma(1-s)\lambda n}{s} \le R^2 - \frac{1-s}{s/(\lambda n \gamma )} = R^2 - R^2 = 0 ~,$$ and hence $G^{(t)} \le 0$ for all $t$. This yields, $$\label{eqn:lem1CorForSmooth} {\mathbb{E}}_t[D(\alpha^{(t)})-D(\alpha^{(t-1)})] \ge \frac{s}{n}\, (P(w^{(t-1)})-D(\alpha^{(t-1)})) ~.$$ Taking expectation of both sides with respect to the randomness at previous rounds, and using the law of total expectation, we obtain that $$\label{eqn:Ialsoneedthis} {\mathbb{E}}[D(\alpha^{(t)})-D(\alpha^{(t-1)})] \ge \frac{s}{n}\, {\mathbb{E}}[P(w^{(t-1)})-D(\alpha^{(t-1)})] ~.$$ But since $\epsilon_D^{(t-1)} := D(\alpha^*)-D(\alpha^{(t-1)}) \le P(w^{(t-1)})-D(\alpha^{(t-1)})$ and $D(\alpha^{(t)})-D(\alpha^{(t-1)}) = \epsilon_D^{(t-1)} - \epsilon_D^{(t)}$, we obtain that $${\mathbb{E}}[ \epsilon_D^{(t)} ] \le \left(1 - \tfrac{s}{n}\right){\mathbb{E}}[\epsilon_D^{(t-1)}] \le \left(1 - \tfrac{s}{n}\right)^t \,\epsilon_D^{(0)} \le \epsilon_D^{(0)}\,e^{-\frac{st}{n}}~.$$ Therefore, whenever $$t \ge \frac{n}{s}\,\log(\epsilon_D^{(0)}/\epsilon_D) = \left(n + \tfrac{R^2}{\lambda \gamma}\right) \, \log(\epsilon_D^{(0)}/\epsilon_D) ~,$$ we are guaranteed that ${\mathbb{E}}[ \epsilon_D^{(t)} ]$ would be smaller than $\epsilon_D$. Using again , we can also obtain that $${\mathbb{E}}[P(w^{(t)})-D(\alpha^{(t)})] \le \frac{n}{s} {\mathbb{E}}[D(\alpha^{(t+1)})-D(\alpha^{(t)})] = \frac{n}{s} {\mathbb{E}}[\epsilon_D^{(t)} - \epsilon_D^{(t+1)}] \le \frac{n}{s} {\mathbb{E}}[\epsilon_D^{(t)}] . \label{eqn:dgap-bound-smooth}$$ So, requiring ${\mathbb{E}}[\epsilon_D^{(t)}] \le \frac{s}{n} \epsilon_P$ we obtain an expected duality gap of at most $\epsilon_P$. This means that we should require $$t \ge \left(n + \tfrac{R^2}{\lambda \gamma}\right) \, \log( (n + \tfrac{R^2}{\lambda \gamma}) \cdot \tfrac{\epsilon_D^{(0)}}{\epsilon_P}) ~,$$ which proves the first part of [Theorem \[thm:smooth\]]{}. Next, we sum the first inequality of over $t=T_0+1,\ldots,T$ to obtain $${\mathbb{E}}\left[ \frac{1}{T-T_0} \sum_{t=T_0+1}^{T} (P(w^{(t)})-D(\alpha^{(t)}))\right] \le \frac{n}{s(T-T_0)} {\mathbb{E}}[D(\alpha^{(T+1)})-D(\alpha^{(T_0+1)})] .$$ Now, if we choose $\bar{w},\bar{\alpha}$ to be either the average vectors or a randomly chosen vector over $t \in \{T_0+1,\ldots,T\}$, then the above implies $$\begin{aligned} {\mathbb{E}}[ P(\bar{w})-D(\bar{\alpha})] &\le \frac{n}{s(T-T_0)} {\mathbb{E}}[D(\alpha^{(T+1)})-D(\alpha^{(T_0+1)})] \\ &\le \frac{n}{s(T-T_0)} {\mathbb{E}}[\epsilon_D^{(T_0+1)})] \\ &\le \frac{n}{s(T-T_0)} \epsilon_D^{(0)} e^{-\frac{sT_0}{n}}. 
\end{aligned}$$ It follows that in order to obtain a result of ${\mathbb{E}}[ P(\bar{w})-D(\bar{\alpha})] \le \epsilon_P$, we need to have $$T_0 \ge \frac{n}{s} \log\left( \frac{n \epsilon_D^{(0)}}{s (T-T_0) \epsilon_P} \right) ~.$$ In particular, the choice of $T-T_0 = \frac{n}{s}$ and $T_0 = \frac{n}{s} \log(\epsilon_D^{(0)}/\epsilon_P)$ satisfies the above requirement. Define $t_0 = \lceil \frac{n}{s} \log(2\epsilon_D^{(0)}/\epsilon_D) \rceil$. The proof of [Theorem \[thm:smooth\]]{} implies that for every $t$, ${\mathbb{E}}[\epsilon_D^{(t)}] \le \epsilon_D^{(0)}\,e^{-\frac{st}{n}}$. By Markov's inequality, with probability of at least $1/2$ we have $\epsilon_D^{(t)} \le 2\epsilon_D^{(0)}\,e^{-\frac{st}{n}}$. Applying it for $t=t_0$ we get that $\epsilon_D^{(t_0)} \le \epsilon_D$ with probability of at least $1/2$. Now, let us apply the same argument again, this time with the initial dual sub-optimality being $\epsilon_D^{(t_0)}$. Since the dual sub-optimality is monotonically non-increasing, we have that $\epsilon_D^{(t_0)} \le \epsilon_D^{(0)}$. Therefore, the same argument tells us that with probability of at least $1/2$ we would have that $\epsilon_D^{(2t_0)} \le \epsilon_D$. Repeating this $\lceil \log_2(1/\delta) \rceil$ times, we obtain that with probability of at least $1-\delta$, for some $k$ we have that $\epsilon_D^{(kt_0)} \le \epsilon_D$. Since the dual is monotonically non-decreasing, the claim about the dual sub-optimality follows. Next, for the duality gap, using we have that for every $t$ such that $\epsilon_D^{(t-1)} \le \epsilon_D$ we have $$P(w^{(t-1)})-D(\alpha^{(t-1)}) ~\le~ \frac{n}{s} \, {\mathbb{E}}[D(\alpha^{(t)})-D(\alpha^{(t-1)})] \le \frac{n}{s} \, \epsilon_D ~.$$ This proves the second claim of [Theorem \[thm:HighProbsmooth\]]{}. For the last claim, suppose that at round $T_0$ we have $\epsilon_D^{(T_0)} \le \epsilon_D$. Let $T = T_0 + n/s$. It follows that if we choose $t$ uniformly at random from $\{T_0,\ldots,T-1\}$, then ${\mathbb{E}}[ P(w^{(t)})-D(\alpha^{(t)})] \le \epsilon_D$. By Markov's inequality, with probability of at least $1/2$ we have $ P(w^{(t)})-D(\alpha^{(t)}) \le 2\epsilon_D$. Therefore, if we choose $\log_2(2/\delta)$ such random $t$, with probability $\ge 1-\delta/2$, at least one of them will have $ P(w^{(t)})-D(\alpha^{(t)}) \le 2\epsilon_D$. Combining with the first claim of the theorem, choosing $\epsilon_D = \epsilon_P/2$, and applying the union bound, we conclude the proof of the last claim of [Theorem \[thm:HighProbsmooth\]]{}.
[^1]: School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel
[^2]: Department of Statistics, Rutgers University, NJ, USA
[^3]: Baidu Inc., Beijing, China
[^4]: Technically speaking, it may be more accurate to use the term *randomized* dual coordinate ascent, instead of *stochastic* dual coordinate ascent. This is because our algorithm makes more than one pass over the data, and therefore cannot work directly on distributions with infinite support. However, following the convention in the prior machine learning literature, we do not make this distinction.
[^5]: If the regularizer $g(w)$ in the definition of $P(w)$ is non-differentiable, we can replace $\nabla \Psi(\tilde{w})$ with an appropriate sub-gradient of $\Psi$ at $\tilde{w}$. It is easy to verify that the proof is still valid.
[^6]: Usually, the training data comes with labels, $y_i \in \{\pm 1\}$, and the loss function becomes $\log(1+e^{-y_i x_i^\top w})$.
However, we can easily get rid of the labels by re-defining $x_i \leftarrow -y_i x_i$.
--- abstract: 'We show that in a pioneering paper by Polnarev and Zembowicz, some conclusions concerning the characteristics of the Turok-strings are generally not correct. In addition we show that the probability of string collapse given there is off by a large prefactor ($\sim 10^3$).' author: - 'R.N. Hansen[^1], M. Christensen[^2] and A.L. Larsen[^3]' title: 'Comment on “Formation of primordial black holes by cosmic strings"' --- *Physics Department, University of Odense,* \ *Campusvej 55, 5230 Odense M, Denmark* In one of the pioneering and often cited papers on the probability of cosmic string collapse [@pol], Polnarev and Zembowicz analyzed the 2-parameter Turok-strings [@turok]: $$\begin{aligned} X(\tau,\sigma) & = & \frac{A}{2}~\left[{(1-\alpha)\sin (\sigma -\tau) + \frac{\alpha}{3}\sin 3 (\sigma - \tau)+ \sin (\sigma +\tau) }\right] \nonumber\\ Y(\tau,\sigma) & = & \frac{A}{2}~\left[{(1-\alpha)\cos (\sigma -\tau) + \frac{\alpha}{3}\cos 3 (\sigma - \tau)+ (1-2 \beta ) \cos (\sigma +\tau) }\right] \nonumber \\ Z(\tau,\sigma) & = & \frac{A}{2}~\left[{2\sqrt{\alpha(1-\alpha)}\cos (\sigma - \tau) + 2\sqrt{\beta(1-\beta)}\cos (\sigma + \tau) }\right] \label{turokstring}\end{aligned}$$ (We included a dimensionful parameter $A$ to keep $\tau$ and $\sigma$ dimensionless).\ It was concluded [@pol], among other things, that: - The strings have their minimal size $R$ at $$\tau = \frac{\pi}{2} \label{polz1}$$ - For generic parameters $(\alpha, \beta)$: $$\frac {R^2}{A^2} = \left(\sqrt{\alpha(1-\alpha)} - \sqrt{\beta(1-\beta)}\;\right)^2 + \left(\frac{\alpha}{3}-\beta\right)^2 \label{polz2}$$ We now give two simple explicit examples showing that the two conclusions (\[polz1\]), (\[polz2\]) cannot generally be correct.\ [**A**]{}. Consider first the case $\alpha = 1,\; \beta = 0$. Besides $Z = 0$, this corresponds to: $$\begin{aligned} X(\tau,\sigma) = \frac{A}{2} \left[{\frac{1}{3}\sin 3 (\sigma -\tau) + \sin(\sigma + \tau)}\right] \nonumber\\ Y(\tau,\sigma) = \frac{A}{2} \left[{\frac{1}{3}\cos 3 (\sigma -\tau) + \cos(\sigma + \tau)}\right]\end{aligned}$$ This is in fact a rigidly rotating string: $$\left( \begin{array}{c} {\mbox{X}}\left( \tau, \sigma \right) \\ {\mbox{Y}}\left( \tau, \sigma \right) \\ \end{array} \right) = \left( \begin{array}{cc} \cos(3\tau)&\sin(3\tau)\\ -\sin(3\tau)&\cos(3\tau)\\ \end{array} \right) \left( \begin{array}{c} \mbox{X}\left(0,\tilde{\sigma}\right)\\ \mbox{Y}\left(0,\tilde{\sigma}\right)\\ \end{array} \right)$$ where $\tilde{\sigma} \equiv \sigma - 2\tau$. It follows that the minimal string size $R$ (the radius of the minimal sphere that can ever enclose the string completely) is independent of time. Thus it can be computed at any time, say $\tau = 0$: $$R = \begin{array}{c} \mbox{Maximum} \\ \sigma \in [0,2\pi ] \end{array} \left[ \sqrt{X^2(0,\sigma ) + Y^2(0,\sigma )} \, \right] = \frac{2A}{3}$$ Notice that the minimal sphere is found by maximization over $\sigma$. Thus the result (\[polz2\]) is not correct in this case. In fact, it gives the $\underline{minimal}$ distance from the origin to the string (namely $A/3$), but to completely enclose the string, one needs a sphere with radius corresponding to the $\underline{maximal}$ distance (namely $2A/3$).\ [**B**]{}. Now consider the case $\alpha = 1/2,\;\beta = 1$. Let us consider the distance from the origin to the string as a function of $\sigma$ at two different times, namely $\tau = 0$ and $\tau =\pi/2$.
It is straightforward to show that $$\begin{aligned} \begin{array}{c} \mbox{Maximum} \\ \sigma \in [0,2\pi ] \end{array} \: \left[ \sqrt{X^2(0,\sigma ) + Y^2(0,\sigma ) + Z^2(0,\sigma )} \: \right] \: < \nonumber \\ \begin{array}{c} \mbox{Maximum} \\ \sigma \in [0,2\pi ] \end{array} \: \left[ \sqrt{X^2(\pi /2,\sigma ) + Y^2(\pi /2,\sigma ) + Z^2(\pi /2,\sigma )} \: \right]\end{aligned}$$ Thus the string does not have its minimal size at $\tau = \pi/2$; at $\tau = 0$ it can be enclosed in a much smaller sphere. More precisely, at $\tau = 0$, the string can be enclosed in a sphere of radius $\sqrt{155/288} \, A$ while at $\tau = \pi/2$, a sphere of radius $\sqrt{17/18} \, A$ is needed. Therefore, the result (\[polz1\]) is not correct in this case.\ On the other hand, for some other particular examples, it seemed that the conclusions (2)-(3) were indeed correct. Thus to clarify the situation, we did a complete re-analysis of the problem (see [@rene] for the details) using both analytical and numerical methods. This led to a precise classification of the Turok-strings, and a subsequent subdivision into 3 different families (see Fig. 1):\ [**I**]{}. These strings have their minimal size at $\tau =\pi/2$. That is, starting from their original size at $\tau$ = 0, they generally contract to their minimal size at $\tau =\pi/2$, and then generally expand back to their original size at $\tau = \pi$.\ [**II**]{}. These strings start from their minimal size at $\tau$ = 0. Then they generally expand towards their maximal size and then recontract towards their minimal size at $\tau = \pi$.\ [**III**]{}. These strings have their minimal size at two values of $\tau$ symmetrically around $\pi/2$. That is, they first generally contract and reach the minimal size at some $\tau_0\in\left[0; \pi/2\right]$. Then they expand for a while, and then recontract and reach the minimal size again at $\tau =\pi - \tau_0$. Then they expand again towards the original size at $\tau = \pi$. In this family of strings, the value of $\tau_0$ depends on $(\alpha , \beta)$.\ \ Then by comparison, we see that the conclusion (2) is correct in the region [**I**]{} of parameter-space, but incorrect in regions [**II**]{} and [**III**]{}.\ As for the conclusion (3), let us restrict ourselves to the region [**I**]{} of parameter-space. This is the most relevant region for string collapse since it includes the circular string ($\alpha=\beta=0$), and string collapse is only to be expected for low angular momentum near-circular strings. In any case, in the region [**I**]{}, it is easy to derive the exact analytical expression for the minimal string size [@rene]: $${R}^2 = \mbox{Max} \left( R^{2}_1,\,R^{2}_2\right) \label{collapse}$$ where $$\frac{R^{2}_1}{A^2} = \frac{4{\alpha}^2}{9} \label{rone}$$ and $$\frac{R^{2}_2}{A^2} = \left(\sqrt{\alpha\left(1-\alpha\right)} - \sqrt{\beta\left(1-\beta\right)}\right)^2 + \left(\frac{\alpha}{3} - \beta\right)^2 \label{rtwo}$$ Notice that Eq. (\[rtwo\]) is precisely the result (\[polz2\]) of Polnarev and Zembowicz [@pol]. However, in Ref. [@pol], the other solution (\[rone\]) was completely missed, and this is actually the relevant solution in Eq. (\[collapse\]) in approximately half of the parameter-space $\left(\alpha , \beta\right)$.\ Finally, let us also compute the probability $f$ of string collapse in the region [**I**]{} of parameter space: $$\begin{aligned} {f} = \int_{R\leq R_S}^{}d\alpha\,d\beta\end{aligned}$$ where $R_{S}=4\pi A G\mu\;$ is the Schwarzschild radius of the string. Using Eqs. 
(8)-(11), and assuming that $G\mu \ll 1$ [@shellard], one finds [@rene]: $$\begin{aligned} {f} = \frac{12\sqrt{6}}{5}\left(4\pi G\mu\right)^{\frac{5}{2}} \int_{0}^{1} \frac{t^{2}dt}{\sqrt{1- t^4}}\,+\,\mathcal{O}\left(\left(G\mu\right)^{\frac{7}{2}}\right) \nonumber \\ = \frac{3^{\frac{3}{2}}\left(4\pi\right)^4}{5\, \Gamma^{2}\left(\frac{1}{4}\right)} \left(G\mu\right)^{\frac{5}{2}}\,+\,\mathcal{O} \left(\left(G\mu\right)^{\frac{7}{2}}\right) \label{approxprob}\end{aligned}$$ The result (\[approxprob\]) is a very good approximation for $G\mu < 10^{-2}$, thus for any “realistic" cosmic strings we conclude: $${f} \approx 2\cdot10^{3}\cdot\left(G\mu\right)^{\frac{5}{2}}$$ Our result (13) partly agrees with that of Ref. [@pol] in the sense that $f\propto(G\mu)^{5/2}$. However, we find that there is in addition a large numerical prefactor in the relation. This factor is of the order $10^3$ (a numerical cross-check of this prefactor is sketched after the references below).\ To conclude, simple explicit examples show that the conclusions of [@pol] concerning the minimal string size of the Turok-strings are generally not correct. In this comment we re-analyzed the problem and performed a classification of the Turok-strings, to clarify the situation. We also computed the probability of string collapse again, and found that the original result [@pol] is off by approximately 3 orders of magnitude.\
A. Polnarev and R. Zembowicz, Phys. Rev. [**D43**]{}, 1106 (1991).
N. Turok, Nucl. Phys. [**B242**]{}, 520 (1984).
A. Vilenkin and E.P.S. Shellard, [*“Cosmic Strings and other Topological Defects"*]{} (Cambridge University Press, 1994).
R.N. Hansen, M. Christensen and A.L. Larsen, [*“Cosmic String Loops Collapsing to Black Holes"*]{}, gr-qc/9902048 (unpublished).
\[fig1\]
[^1]: Electronic address: rnh@fysik.ou.dk
[^2]: Electronic address: mc@bose.fys.ou.dk
[^3]: Electronic address: all@fysik.ou.dk
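The numerical cross-check mentioned above: the short Python sketch below evaluates both the integral form and the closed form of the prefactor in Eq. (\[approxprob\]) directly (the script and variable names are ours).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# f = (12*sqrt(6)/5) * (4*pi*G*mu)**(5/2) * I,  with  I = int_0^1 t^2 / sqrt(1 - t^4) dt
I, _ = quad(lambda t: t**2 / np.sqrt(1.0 - t**4), 0.0, 1.0)

prefactor_integral = (12.0 * np.sqrt(6.0) / 5.0) * (4.0 * np.pi) ** 2.5 * I
prefactor_closed = 3.0 ** 1.5 * (4.0 * np.pi) ** 4 / (5.0 * gamma(0.25) ** 2)

# Both evaluate to about 1.97e3, i.e. f ~ 2*10^3 (G mu)^{5/2}.
print(prefactor_integral, prefactor_closed)
```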
--- abstract: 'We calculate the predicted $UBVRIJHK$ absolute magnitudes for models of supernova progenitors and apply the result to the case of supernova 2005cs. We agree with previous results that the initial mass of the star was low, around 6 to 8 M$_{\odot}$. However such stars are thought to go through second dredge-up to become AGB stars. We show that had this occurred to the progenitor of 2005cs it would have been observed in $JHK$ pre-explosion images. The progenitor was not detected in these bands and therefore we conclude that it was not an AGB star. Furthermore if some AGB stars do produce supernovae they will have a clear signature in pre-explosion near-infrared images. Electron-capture supernovae are thought to occur in AGB stars, hence the implication is that 2005cs was not an electron-capture supernova but was the collapse of an iron core.' author: - | J. J. Eldridge[^1], S. Mattila and S.J. Smartt.\ Astronomy Research Centre, School of Maths & Physics, Queen's University Belfast, Belfast, BT7 1NN, Northern Ireland, UK\ title: 'Ruling out a massive asymptotic giant-branch star as the progenitor of supernova 2005cs.' --- \[firstpage\] stars: evolution – supernova: general – stars: AGB – supernova: 2005cs – infrared: stars Introduction ============ Core-collapse supernovae (SNe) are the spectacular events associated with the death of massive stars. They occur once the core is no longer supported by nuclear-fusion reactions or electron degeneracy-pressure. The core collapses to form a neutron star or, if the core is massive enough, a black hole. When a neutron star forms, a large neutrino flux is produced that transfers a large fraction of the energy to the stellar envelope, which is ejected and produces the observed luminous display. There are two main evolutionary paths that lead to core-collapse. The first is the widely known iron core-collapse. This occurs in stars initially more massive than about 10 M$_{\odot}$[^2] where nuclear burning progresses all the way to production of an iron core. Iron, being the most stable element, does not provide energy through further fusion reactions, so the core collapses. The second core-collapse path is restricted to stars below about 10 M$_{\odot}$. In these stars nuclear burning progresses no further than carbon burning. After carbon burning neutrino cooling reduces the temperature of the core preventing further nuclear reactions. Core collapse occurs when the oxygen-neon (ONe) core reaches the Chandrasekhar mass (M$_{\rm Ch} \approx 1.4M_{\odot}$) and electron degeneracy-pressure can no longer support the core. The central density increases until a density of around 10$^{9.6}$ g/cm$^{3}$ is reached. Then electron-capture by magnesium-24 ($^{24}$Mg) and/or neon-20 ($^{20}$Ne) reduces the electron degeneracy-pressure further and the collapse accelerates [@ecapture1; @emore2]. Electron-capture SNe are thought to occur predominantly in massive Asymptotic Giant-Branch (AGB) stars [@WHW02]. AGB evolution occurs after helium burning in stars from 0.8 to about 8 M$_{\odot}$. After the main sequence and formation of a helium core a star will ascend the red giant branch. When this occurs the convective envelope penetrates into the helium core, mixing products from hydrogen burning to the surface; this process is known as dredge-up. It decreases the hydrogen, carbon and oxygen abundance while increasing helium and nitrogen abundance. After some time helium will ignite in the core and the star will move back to the blue.
When helium burning ends and a carbon-oxygen (CO) core is formed the star moves back to the giant branch and dredge-up occurs for the second time. This time dredge-up penetrates deeper and mixes up nearly all the helium, leaving just a thin layer covering the CO core and the hydrogen and helium burning shells in close proximity. This arrangement is unstable and nuclear burning progresses as a series of pulses. The hydrogen burning shell burns until the layer of helium is thick enough to ignite. This helium rapidly burns until it is exhausted and the cycle restarts. AGB stars are more luminous than red giants of the same mass because the hydrogen shell is at a much higher temperature in close proximity to the helium burning shell. In the most massive AGB stars core carbon-burning occurs around the time of 2nd dredge-up and so there is an ONe core (carbon burning extinguishes after the CO core is completely converted to an ONe core and never reignites). This means that if the core grows via thermal pulses it may reach M$_{\rm Ch}$ and produce an electron-capture SN. It is a race between mass-loss and core growth as to whether a SN will occur. The evolution of these stars has been well studied, e.g. @ecapture2 [@sagb3] and @sagb4, while @ETsne and @tagb have concentrated on the mass ranges over which these events occur and on when these SNe might occur. It is surprising that two quite different mechanisms for producing a neutron star could lead to similar type IIP SN events. Type IIP SNe are classified by having hydrogen lines in their spectra and a long plateau phase in their lightcurves during which the luminosity is roughly constant. To produce this type of SN a massive and extended hydrogen envelope is required. This envelope, which produces the SN display, does not differ substantially between the two paths of core-collapse. Discriminating between these two SN types can only be achieved by either observing differences in the nucleosynthesis products from the different collapse mechanisms, or by observing the progenitor star. In this letter we demonstrate the latter method for SN 2005cs. During the SN explosion nucleosynthesis occurs and one product is nickel-56 ($^{56}$Ni) from explosive burning of oxygen and silicon [@WHW02]. $^{56}$Ni provides the late time SN luminosity as it decays to cobalt-56 ($^{56}$Co) and then iron-56 ($^{56}$Fe). SN 2005cs produced a lower than average amount of nickel in the explosion, of the order of 0.01 M$_{\odot}$ [@andrea; @ni56t]. Pastorello et al. (in prep) use image subtraction to remove the background and accurately estimate the source's late-time SN luminosity. They suggest it could be as low as 0.004 M$_{\odot}$. Typical nickel masses are for example 0.016 M$_{\odot}$ for SN 2003gd [@03gd] or 0.075 M$_{\odot}$ for SN 1987A. To produce nickel, oxygen and silicon are required. A low nickel mass therefore indicates that there was very little of these two elements around the collapsing core, and such a structure occurs within massive-AGB stars. Thus the progenitor of 2005cs is a good candidate to be such a star. Furthermore simulations of collapsing ONe white-dwarfs indicate that they produce less than 0.001 $M_{\odot}$ of $^{56}$Ni [@aic]. In this letter we first discuss our synthetic photometry method to calculate $UBVRIJHK$ magnitudes for our stellar models. We then compare model predictions to the observational progenitor limits for SN 2005cs. It was detected in a pre-SN $I$ band image but not in deep $JHK$ observations.
We show that the latter can provide a strong limit on whether 2nd dredge-up had occurred. Synthetic $UBVRI$ & $JHK$ magnitudes. ===================================== Stellar evolution codes only produce a few details of the observable characteristics such as the luminosity, radius, surface temperature and composition, where the surface is defined to be where the average optical depth reaches two-thirds. Comparing models to observed stars can only be done accurately from spectroscopic observations of the stars. However spectroscopic data are generally not available for SN progenitors, apart from the rare case of SN 1987A. Recent work in searching for information on the progenitors of supernovae has focused, successfully, on finding observations through numerous broad-band filters (combinations of $UBVRIJHK$). Calculating stellar parameters from broad-band photometry obviously provides less accurate results than from spectroscopic analyses, but the method is the only viable one open to us unless a Milky Way, or possibly Local Group, SN occurs. Previous attempts to determine stellar parameters of SN progenitors assigned bolometric corrections based on stellar type (effective surface temperature) for the progenitors estimated from broad-band colours, e.g. @SMARTT. However an alternative method is to predict the photometric magnitudes of stars along theoretical stellar evolution tracks using atmosphere models. ![Hertzsprung–Russell (HR) diagrams comparing the STARS models (red lines) with the Geneva models (blue lines). From bottom to top the initial masses are 5, 7, 9, 12, 15 and 20 M$_{\odot}$. The upper panel is plotted with $\log(L/L_{\odot})$ versus $\log(T_{eff}/K)$ and the lower panel is plotted with the absolute V magnitude and the V-I colour.[]{data-label="hrO"}](figjje1.ps "fig:"){width="84mm"} ![](figjje2.ps "fig:"){width="84mm"} We used the method of @LS01 to perform synthetic photometry on the Cambridge STARS evolution models [@E1] (see `http://www.ast.cam.ac.uk/~stars` for more details). This involves using the BaSeL model atmosphere library [@basel3] to work out the flux in different broad-band filters [@ubvribands; @jhkbands] for different surface temperatures, gravities and metallicities, and then interpolating in this grid to obtain colours for each model. To calculate the absolute magnitudes we used a theoretical spectrum of Vega from the same atmosphere library, setting all colours to zero. In this we assumed a radius of 3.1 R$_{\odot}$ for Vega, which is larger than its known radius of $2.7 \, R_{\odot}$ [@vega]. However, with this radius the theoretical Vega spectrum provides the correct flux for Vega in the $V$ band. The adopted radius is the primary source of error in the absolute magnitudes but does not affect the colours. The resultant colours agree with the colours from the commonly used Geneva models [@LS01]. Both sets of models are plotted in theoretical Hertzsprung–Russell diagrams in Figure \[hrO\]. The slight disagreement is due to differences in the two evolutionary codes, such as updated opacity tables, different initial abundances and different mass-loss rates.
Also our models extend further in time as they are calculated to the beginning of neon burning, while the Geneva models end after core carbon-burning and do not have 2nd dredge-up. This means the track endpoints of our models are more luminous and slightly cooler. Our models for 8 M$_{\odot}$ and below experience 2nd dredge-up. It is difficult to estimate the uncertainty in our colours. The largest errors are due to the assumption of the radius for Vega and also assuming that all filters have an absolute magnitude of zero for Vega. We estimate the error in the absolute magnitudes due to this is around $\pm$0.3 mags. This was estimated by using different methods to calibrate the zero point of our system, such as using the colours of Sirius and the Sun, and varying the radius of Vega. ![The comparison of observed and predicted $I-H$ colours for RSGs. The thick black lines are the observed colours from @obsrsgs. The red lines are the STARS models, while the blue lines are from the Geneva models.[]{data-label="IH"}](figjje3.ps){width="84mm"} We have checked the predicted colours against those of observed RSGs by @obsrsgs. They supply the average colours expected for different spectral types of RSGs. However recently there has been a reassessment of the effective temperatures of RSGs by @cool, which we take into account in our analysis. We show examples of the comparisons in Figure \[IH\]. The agreement of the models with the data of @obsrsgs is poor, with an average discrepancy of around 0.5 to 0.7 mags for all the colours. We have also calculated the expected colours for RSGs from the MARCS atmosphere models which were used by @cool to derive their temperature calibration. We find that the colours are close to those from the atmosphere models we use, with a discrepancy of less than 0.1 mags. It is more difficult to compare the predicted colours of the AGB stars with observations. While lists of AGB colours exist [@agbcol], AGB stars come from a wide range of initial masses ($0.8 \la M \la 7 M_{\odot}$) and therefore have a wide range of core masses and luminosities. However the AGB stars with larger cores, those closest to the point of core-collapse, tend to be the most luminous ones. The predicted colours of our AGB stars broadly agree with the observed AGB colours of @agbcol and @agbfluks. Also we have compared the colours to the more detailed AGB atmosphere models of @agbgro and again there is a broad agreement, but the range of possible colours is much greater in these samples. We will use these AGB colours as well as those from our models to determine whether the progenitor of SN 2005cs was an AGB star. The disagreement between the observed and theoretical colours is in general greatest at the end points of the stellar models where the RSGs are most luminous and extended. This may be due to the way the stellar radius is calculated in theoretical stellar models. Stellar evolution codes use wavelength averaged opacity and the photosphere is defined to be where the optical depth becomes two-thirds. However in RSG atmospheres opacity is wavelength dependent, which results in longer wavelength (infra-red) emission coming from a smaller radius than the shorter wavelength (visible) emission [@bj]. If this radius is not comparable to RSG radii then a systematic error is introduced into the predicted colours. There is little we can do to correct the models, but we can calibrate our predicted colours by introducing an empirical correction.
Therefore we produced a second set of tracks where for RSGs we correct the $JHK$ magnitudes to agree with @obsrsgs. The corrections we adopt are: $J_{\rm c}=J+0.5$, $H_{\rm c}=H+0.5$ and $K_{\rm c}=K+0.7$. Implications for the progenitor of SN 2005cs. ============================================= ![The predicted progenitor magnitudes from the STARS code, with the correction applied, compared to the observed $I$ band detection and $JHK$ limits. The empty diamonds are from @Maund and the filled diamonds from @Li. The shaded region is defined by the maximum and minimum colours for AGB stars from @agbgro. The solid lines are for RSG models while the dashed lines are for AGB models.[]{data-label="starstwo"}](figjje4.ps){width="84mm"} The progenitor of SN2005cs was detected in deep Hubble Space Telescope (HST) $I$ band images of M51 taken in January 2005, a few months before the SN occurred in June 2005. In $BVR$ and $JHK$ filters the progenitor was not detected. Two studies exist that use the same HST image to calculate $I$ band magnitudes for the progenitor but use different near infra-red data. The $BVR$ upper limits indicate that the progenitor could not have been a blue supergiant and was a red supergiant no hotter than a K5Ia type [@Li; @Maund]. However the $JHK$ limits also constrain how cool the progenitor was. @Maund made use of ground-based Gemini observations taken in April 2005 to produce limits on the $JHK$ magnitudes while @Li used HST NICMOS observations taken in June 1998 to produce deeper $JH$ but shallower $K$-band limits on the progenitor. Both studies used the observations of @obsrsgs to limit the spectral type of the progenitor star. The non-detections in the $JHK$ bands can be used to place very sensitive limits on the luminosity and temperature of the progenitor. If we decrease the surface temperature of the progenitor while maintaining the I band magnitude we must increase the progenitor’s luminosity and the synthetic $JHK$ magnitudes become greater than the upper limits. This restricts the type of progenitor severely. In Figure \[starstwo\] we plot the $IJHK$ magnitudes of the STARS models with the correction. In calculating the magnitudes we have used a distance modulus of 29.62 [@m51dis]. Over these lines we then add the $I$ band detection and the $JHK$ limits from both @Maund and @Li, corrected for dust extinction using an $A_{\rm V}$ of 0.34 [@andrea] and the extinction law of @dust. Without the correction it is not possible to fit the $I$ detection and still stay below the $JHK$ limits unless we adopt the very upper ends of the error bars of the @Li as their limits. With the correction to the $JHK$ colours applied in Figure \[starstwo\] it is easier to fit the $I$ band detection and the near-infrared upper limits. This supports our method of correcting our model synthetic photometry to match the observed JHK colours of @obsrsgs. The most important feature to notice is that if the progenitor had gone through second dredge-up to become an AGB star, its colours would be as shown by the dashed lines and it would have been clearly detected in the near-infrared bands. This is made more certain by the shaded region showing the range of AGB colours from @agbgro. Most of the AGB colours are far above the $JHK$ limits but those with the lowest values for the $JHK$ magnitudes have very low mass-loss rates and there are many more models with higher values. Therefore the conclusion that the progenitor was not an AGB star is firm. 
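The comparison between the model tracks and the observational points reduces to $m = M + \mu_0 + A_{\lambda}$, with distance modulus $\mu_0 = 29.62$ and $A_{\rm V} = 0.34$ as quoted above. The Python sketch below spells this out; the band-to-$V$ extinction ratios and all magnitude values are illustrative numbers of our own (roughly Cardelli-like), not values taken from the text.

```python
# Convert model absolute magnitudes to predicted apparent magnitudes for M51
# and compare with (hypothetical) observed detection/upper limits.
DIST_MOD = 29.62          # distance modulus adopted for M51 (quoted in the text)
A_V = 0.34                # visual extinction adopted for SN 2005cs (quoted in the text)

# Illustrative band-to-V extinction ratios (our assumption, Cardelli-like):
EXT_RATIO = {"I": 0.60, "J": 0.28, "H": 0.18, "K": 0.12}

def apparent(abs_mag, band):
    """m = M + distance modulus + A_band, with A_band = (A_band / A_V) * A_V."""
    return abs_mag + DIST_MOD + EXT_RATIO[band] * A_V

# Hypothetical model endpoint magnitudes for an AGB track (not from the paper):
model_abs = {"J": -7.5, "H": -8.3, "K": -8.6}
# Hypothetical observed upper limits (not from the paper):
obs_limit = {"J": 22.5, "H": 21.9, "K": 21.3}

for band, limit in obs_limit.items():
    pred = apparent(model_abs[band], band)
    verdict = "would have been detected" if pred < limit else "consistent with non-detection"
    print(f"{band}: predicted {pred:.2f}, limit {limit:.2f} -> {verdict}")
```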
For the STARS and Geneva models the progenitor would be around 6 to 8 M$_{\odot}$. This of course agrees with the conclusions of @Maund and @Li. But now we can limit the amount of 2nd dredge-up that occurred in the progenitor. We can conclude that the progenitor did not experience the extreme dredge-up that produces an AGB star. However it is possible that the progenitor did experience a small amount of dredge-up decreasing the helium core mass slightly, by a few tenths of a solar mass. This is predicted in some of the models shown by @tagb, and this would slightly decrease the luminosity of the progenitor, meaning we would underestimate its initial mass. Therefore a more robust estimate of the final helium core mass for the progenitor is between 1.7 and 2.2 M$_{\odot}$. The important result is that the 2nd dredge-up did not lead to an AGB star. If it had, it would have been clearly detected in every $JHK$ image available while still agreeing with the $I$ band detection. Therefore this sets an upper limit of 6 to 8 M$_{\odot}$ both for the masses of AGB stars and for AGB stars as SN progenitors. This overlaps with the upper limit for AGB stars to produce a white dwarf of around 6.8 to 8.6 M$_{\odot}$ from @wd. Discussion of uncertainties. ============================ The uncertainties in our estimation of the progenitor mass come from three sources. Firstly errors in the observational photometry, secondly errors in the synthetic photometry and thirdly errors due to the lack of understanding of RSGs. The $I$-band photometric measurements of @Maund and @Li differ by 0.3 mags. This is mainly due to the different methods employed to remove the flux from the nearby bright objects. However, we can see in Figure \[starstwo\] that a difference of 0.3 mags in the $I$ band actually makes little difference to the conclusions we draw. As discussed above, it is the $JHK$ limits that constrain the progenitor more severely, particularly for cool AGB stars. There are four uncertainties in the synthetic photometry. The first two are the assumption of the Vega zero-point for all colours and our assumed Vega radius, which as mentioned above introduce an error of around 0.3 dex. Third, the use of the filter functions for the $UBVRIJHK$ bands; the observed $I$ band magnitude was in fact estimated from an HST filter, which is quite different from the Johnson filters. However we have compared the colours predicted by the different $JHK$ filter functions and find that the difference is typically less than 0.05 dex. We also assumed that the synthetic spectra are accurate; however, any error in their calculation will result in incorrect magnitudes. For example the models do not include mass-loss. Another uncertainty arises from the use of a wavelength averaged optical depth to calculate the stellar radii, as discussed above. Other uncertainties in the stellar evolution code, such as the use of mixing length theory to describe convection, could affect our model parameters. However these are accounted for in our correction factors. Uncertainty is also present due to a number of AGB star and RSG enigmas: dust, oscillations and asymmetry. In this case dust is not an important uncertainty. Any dust extinction in the $IJHK$ bands is much less than in the $V$ band [@dust]. Because our conclusions only depend on these bands they are not affected by dust extinction unless a very large reddening (of order $A_{\rm V} \sim 10$) is invoked. But this is inconsistent with all estimates of reddening towards this SN [@Maund; @Li; @andrea; @baronie].
A larger problem is whether the progenitor was oscillating or pulsating before the explosion. @pulse performed calculations to follow the oscillations a red supergiant may undergo before explosion. They show large changes in the surface details of the star. However there is no observational evidence of how a star might oscillate before a SN. We have included oscillations by hand in our stellar models, of a similar magnitude to those of @pulse. We find that while the $I$ band magnitude can vary a great deal, the $JHK$ magnitudes vary only slightly. The last problem of asymmetry is best shown by observations of Betelgeuse [@bj]. Not only does this work show that the radius of a RSG is wavelength dependent, but we also see that the near infra-red light comes from deeper in the star than the visible light. This visible light, originating from nearer the surface, may be more affected by star spots and this would make the star look more luminous in the $I$ band than the $JHK$ bands. These many different uncertainties do contribute to the error on the estimation of the initial mass but do not affect our conclusion that the progenitor did not experience 2nd dredge-up and was not an AGB star, arguing against the core-collapse mechanism being electron capture for SN 2005cs. It is worthwhile to compare the progenitor mass for this SN to the upper mass limit for an AGB star from @wd. It is striking that the masses match at around 6 M$_{\odot}$. Therefore studies such as @tagb are required to more deeply understand this important change-over in behaviour from an AGB star to a SN progenitor. In conclusion the progenitor of SN 2005cs was a low mass star of between 6 and 8 M$_{\odot}$ and may have experienced slight 2nd dredge-up but not a large amount of dredge-up that would have produced an AGB star. Furthermore if AGB stars are the progenitors of some SNe they will leave a clear signature in near-infrared pre-explosion images. An attempt to put limits on SN progenitors in the near infra-red in a systematic way is already underway by our group with deep imaging of about 50 galaxies in $JHK$ with VLT, ISAAC, Gemini and UKIRT [@Maund]. Acknowledgments {#acknowledgments .unnumbered} =============== This work, conducted as part of the award “Understanding the lives of massive stars from birth to supernovae” made under the European Heads of Research Councils and European Science Foundation EURYI Awards scheme, was supported by funds from the Participating Organisations of EURYI and the EC Sixth Framework Programme. Also JJE would like to thank Norbert Langer and Andrea Pastorello for useful discussion.
Baron E.A., 2006, astro-ph/0611545
Bessell M.S., Brett J.M., 1988, PASP, 100, 1134B
Bessell M.S., 1990, PASP, 102, 1181B
Cardelli J.A., Clayton G.C., Mathis J.S., 1989, ApJ, 345, 245C
Ciardi D.R., van Belle G.T., Akeson R.L., Thompson R.R., Lada E.A., Howell S.B., 2001, ApJ, 559, 1147C
Dessart L., Burrows A., Ott C.D., Livne E., Yoon S.-C., Langer N., 2006, ApJ, 644, 1063D
Dobbie P.D. et al., 2006, MNRAS, 369, 383D
Eldridge J.J., Tout C.A., 2004b, MNRAS, 353, 87
Eldridge J.J., Genet F., Daigne F., Mochkovitch R., 2006, MNRAS, 367, 186E
Elias J.H., Frogel J.A., Humphreys R.M., 1985, ApJS, 57, 91E
Feldmeier J.J., Ciardullo R., Jacoby G.H., 1997, ApJ, 479, 231F
Fluks M.A., Plez B., The P.S., de Winter D., Westerlund B.E., Steenman H.C., 1994, A&AS, 105, 311F
Groenewegen M.A.T., 2006, A&A, 448, 181G
Gutiérrez J., Canal R., García-Berro E., 2005, A&A, 435, 231G
Heger A., Jeannin L., Langer N., Baraffe I., 1997, A&A, 327, 224H
Hendry M.A. et al., 2005, MNRAS, 359, 906H
Kerschbaum F., Lebzelter T., Lazaro C., 2001, A&A, 375, 527K
Lejeune T., Schaerer D., 2001, A&A, 366, 538L
Levesque E.M., Massey P., Olsen K.A.G., Plez B., Josselin E., Maeder A., Meynet G., 2005, ApJ, 628, 973L
Li W., Van Dyk S.D., Filippenko A.V., Cuillandre J.-C., Jha S., Bloom J.S., Riess A.G., Livio M., 2006, ApJ, 641, 1060
Smartt S.J., Maund J.R., Hendry M.A., Tout C.A., Gilmore G.F., Mattila S., Benn C.R., 2004, Sci, 303, 499S
Maund J.R., Smartt S.J., Danziger I.J., 2005, MNRAS, 364L, 33M
Miyaji S., Nomoto K., Yokoi K., Sugimoto D., 1980, PASJ, 32, 303
Nomoto K., 1987, ApJ, 322, 206
Pastorello A. et al., 2006, MNRAS, 370, 1752P
Poelarends A.J.T., Herwig F., Langer N., Heger A., 2006, submitted to ApJ
Ritossa C., García-Berro E., Iben I. Jr., 1999, ApJ, 515, 381R
Siess L., 2006, A&A, 448, 717S
Tsvetkov D.Yu., Volnova A.A., Shulga A.P., Korotkiy S.A., Elmhamdi A., Danziger I.J., Ereshko M.V., 2006, A&A, 460, 769
Westera P., Lejeune T., Buser R., 1999, in Spectrophotometric dating of stars and galaxies, ed. I. Hubeny, S. Heap, R. Cornett, ASP Conf. Ser., 192, 203, Annapolis, Maryland, USA
Woosley S.E., Heger A., Weaver T.A., 2002, RvMP, 74, 1015
Young J.S. et al., 2000, MNRAS, 315, 635Y
\[lastpage\]
[^1]: E-mail: j.eldridge@qub.ac.uk
[^2]: The value varies with the details of convection in stellar models. Here we quote masses using convective overshooting.
--- abstract: 'In this paper, we derive and investigate approaches to dynamically load balance a distributed task parallel application. The load balancing strategy is based on task migration. Busy processes export parts of their ready task queue to idle processes. Idle–busy pairs of processes find each other through a random search process that succeeds within a few steps with high probability. We evaluate the load balancing approach for a block Cholesky factorization implementation and observe a reduction in execution time on the order of 5% in the selected test cases.' address: 'Uppsala University, Department of Information Technology, Box 337, SE-751 05 Uppsala, Sweden' author: - Afshin Zafari - Elisabeth Larsson title: Distributed dynamic load balancing for task parallel programming --- Task-based parallel programming, dynamic load balancing, distributed memory system, high performance computing, scientific computing 65Y05, 65Y10, 68Q10 Introduction ============ The objectives of improving the load balance across computational resources can be to reach the best possible utilization of the resources, to improve the performance of a particular application, or to achieve fairness with respect to throughput for a collection of applications. Load-balancing can be static, decided a priori, or dynamic, that is, changing during the execution of the application(s). An overview of various issues related to dynamic load balancing (DLB) is given in [@Alakeel10]. DLB strategies do not assume any specific pre-knowledge about the application. However, the strategies are still often based on some assumptions on the type of parallelization or the class of algorithms. DLB can be implemented through data migration [@Balasubramaniam04; @Martin13]. This is especially relevant when the algorithm consists of iterations or time steps, where similar computations are repeated. It is then likely that a redistribution of data will be effective over at least a few consecutive iterations. If the parallel implementation is instead task centric, where a task is a work unit, another possibility is to migrate computational work [@Khan10; @Rubensson14]. As work is usually associated with data, this could also include moving data, temporarily or permanently. An approach that is currently receiving attention instead migrates the computational resources between jobs or between processes [@Spiegel06; @Garcia12; @Schreiber15; @Garcia17]. The underlying assumption is that there is a hybrid parallelization, where MPI is used over the computational nodes in a cluster, but shared-memory, thread-based parallelization is used within the computational nodes. Assuming that several processes are co-scheduled within one computational node, resources can, based on the malleability of the shared memory tasks, be migrated within the node. This approach alone cannot provide global load balance, but can improve utilization within the node, and can adjust global imbalance depending on the mix of processes at the node. In this paper, we consider distributed DLB in the context of distributed task-based parallel programming [@TFGBAL11; @AAFNT12; @BBDFHD13; @Rubensson14; @Zafari17] with a hybrid MPI-thread implementation. We are not assuming co-scheduling of several processes; instead, we aim to improve the performance of a single application running on a cluster of multicore nodes.
Task stealing as a mechanism for load balancing has proven efficient in the shared-memory task-based parallel programming context, e.g., in the Cilk [@cilkplus] C++ language extension and the SuperGlue [@Tillenius15] framework. It is also used in the distributed task framework Chunks and Tasks [@Rubensson14]. This is the direction that we also investigate in this paper. We derive a prediction model to decide if stealing is likely to improve utilization, in the sense that the work can be finished by the remote process, and the result returned, earlier than if it were processed in the current location. All decisions are taken locally to avoid bottlenecks due to global information exchange or centralized scheduling decisions. The resulting load balancing approach is implemented within the DuctTeip distributed task parallel framework [@Zafari17]. The paper is organized as follows: In Section \[sec:task\] we briefly review the aspects of task parallel programming that are important for DLB. The properties of the DLB approach we are proposing are described in Section \[sec:methods\]. Then a theoretical analysis for when it is cost efficient to export tasks is performed in Section \[sec:theor\]. The Cholesky benchmark used for evaluating the method is described in Section \[sec:chol\], while the results of the performance experiments are given in Section \[sec:exp\]. Finally, conclusions are given in Section \[sec:conc\]. Definition of the task parallel programming context {#sec:task} =================================================== In our load balancing model, we do not make any assumptions about the application as such, but we target dependency-aware task parallel implementations. We further assume that the distributed application is executed by a number of (MPI) processes $p_i$, $i=0,\ldots,P-1$ and that each process has a queue of ready tasks to execute. Tasks become ready when their data dependencies are fulfilled and the data they need in order to run are available locally. In the DuctTeip framework [@Zafari17], where we will implement the DLB strategy, the default situation is that a certain task is executed by the process that owns the output data of the task. That is, the data distribution also determines the task distribution. For a data parallel algorithm, this may be sufficient to achieve a reasonable load balance by a uniform splitting of the data. However, if some of the processes are slowed down due to, e.g., external interference, there can still be imbalance in the end. For more complex algorithms, it is expected that the work load of the individual processes will vary over the execution. A run-time system handles all task management decisions, such as checking when tasks are ready to run, and sending and receiving data from remote processes. The run-time system also handles DLB. We consider the possibility that the run-time system records performance data for different task types, and for the communication, but we do not assume that this requires modification of the user code or of the operating system. The dynamic load balancing approaches {#sec:methods} ===================================== We start by defining the workload $w_i(t)$ of process $p_i$ at time $t$ as the number of ready tasks in the queue. This does not take the size of the tasks into account, but it is an easily accessible number that can be stored as one integer variable per process. What is a high (or low) workload depends on the application, the blocking of the data, and the number of processes $P$.
We let the threshold $W_T$ be a user defined parameter, and then define processes with $w_i>W_T$ as busy and processes with $w_i\leq W_T$ as idle. A more correct definition is to say that a process is idle when $w_i=0$, but in this case, we want the processes to start looking for more work before they run out of it. In this way, the migration of tasks can overlap with computational work. Obtaining global information about the workload of all processes is likely to become a bottleneck when scaling to larger numbers of processes, and we want to avoid this and let each process make local decisions. The idea that we are using is that each process periodically tries to become a partner in an idle–busy process pair. We do not consider any particular topology of the network, but let the processes randomly try other processes with a uniform selection probability. The probability $\mathcal{P}(k)$ of finding $k$ busy processes in $n$ tries drawn from a distribution where $K$ processes out of a total of $P$ are busy, is given by the hypergeometric probability distribution $$\mathcal{P}(k) = \frac{ \left(\begin{array}{c}P-K\\n-k\end{array}\right) \left(\begin{array}{c}K\\k\end{array}\right) }{ \left(\begin{array}{c}P\\n\end{array}\right) }.$$ The probability of at least one successful try out of $n$ is the complementary probability of failure, that is, $1-\mathcal{P}(0)$. This function is plotted for different combinations of $P$ and $K$ in Figure \[fig:prob\]. ![The probability of success for finding any of the $K$ busy processes out of a total of $P$ using $n$ tries for $P=10$ (left) and $P=100$ (right).[]{data-label="fig:prob"}](hypergeo_N=10-crop.pdf "fig:"){width="49.00000%"} ![The probability of success for finding any of the $K$ busy processes out of a total of $P$ using $n$ tries for $P=10$ (left) and $P=100$ (right).[]{data-label="fig:prob"}](hypergeo_N=100-crop.pdf "fig:"){width="49.00000%"} Since both idle and busy processes are looking for each other, the most difficult case is when 50% of the processes are idle/busy. By analyzing the formula, we find that for $K=P/2$, as the number of processes $P\rightarrow\infty$, the probability of success approaches $1-2^{-n}$, that is for $n=5$ tries, the probability is more than 96%. We therefore decide that a process looking for a partner will always perform 5 tries, then wait for a period $\delta$ before starting another round of tries. This waiting time is introduced to prevent flooding the network with requests when there is no work to share. The waiting time is the second user defined parameter that needs tuning. A successful request means that the pair of nodes will not accept or send any further requests until their work exchange transaction has completed. When a busy–idle process couple has been formed, the next step is to decide which tasks to export, if any. We consider three potential strategies 1. **Basic:** No extra information is exchanged. The busy process $p_i$ just sends its excess tasks such that the remaining queue is $w_i=W_T$. 2. **Equalizing:** The idle process $p_j$ appends information about its current work load $w_j$ to the request. The busy process $p_i$ computes the average $\bar{w}=(w_i+w_j)/2$, and sends $w_i-\bar{w}$ tasks to $p_j$. 3. **Smart:** The idle process provides information about the expected time to execute the currently enqueued tasks. The busy process estimates which of the tasks would return their results earlier if executed remotely, than the time they would be completed if executed locally. 
Only the tasks with an expected benefit are exported. In the latter case, performance estimates are needed. Each process records the average time for running tasks of each type as well as times for communicating tasks of each type and data of a given size. The sophistication of the models applied to the measurements can vary, but they will be used in the same way. The cost for remote execution consists of the remote queuing time, the time for exporting the task and its data, the task execution time, and the communication time for returning the result, while the time for local execution is the local queuing time and the task execution time. In the method described above, there is only one threshold parameter, and all processes are either busy or idle. An alternative would be to have a gap between the idle and busy levels. This would reduce the number of requests as some processes would be in the middle zone. Also, it could reduce the risk of overshooting, in the sense that a process that was idle but close to the threshold immediately becomes busy after receiving work from its busy partner. A theoretical analysis of the cost for task migration {#sec:theor} ===================================================== With more knowledge about the tasks and the hardware we can make better predictions for which tasks to share and how many to export when a work request arrives. However, this also makes the approach more intrusive in the sense that the application programmer needs to provide more information. Here we will look at the cost in time for executing a task remotely compared with executing it in its original location. Assume that a computational node in the considered hardware performs $S$ floating point operations per second, and can deliver $R$ doubles per second from the main memory. When exporting a task, we need to send the input data together with the task, and then we need to return the output data. Let the total number of doubles in the input and output data be $D$, and let the number of floating point operations performed by the task be $F$. Then the time for executing the task locally is $$T_L = F/S,$$ and the time for executing the task remotely and returning the data is $$T_R = F/S + D/R.$$ The fraction of extra time that is needed for remote execution is given by $$Q = \frac{S}{R}\frac{D}{F}.$$ For a modern computer system, floating point operations are faster than data transfer, and a typical ratio can be around 40. This is the case for the system used for the experiments in Section \[sec:exp\] (see [@Zafari17] for a detailed calculation). The second ratio $D/F$ is the inverse of the computational intensity of the task. If we, e.g., consider a block matrix–matrix multiplication, with blocks of size $m\times m$, then $F=2m^3$, and $D=3m^2$. This leads to a total ratio of $Q = 60/m$. That is, for such a task, the cost for remote execution is almost negligible if the block size is large enough. If we instead consider a matrix–vector multiplication task, the situation is different. Then $F=2m^2$ and $D=m^2$, leading to $Q=20$. That is, the overhead of migrating one task, executing it remotely, and returning the result corresponds to the time for executing roughly 20 such tasks locally. By looking at these numbers we can get an understanding of how the threshold parameter $W_T$ should be chosen. For computationally intensive applications, a rather small value will be sufficient to make sure that there is local work to cover the cost for exporting tasks.
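To make the partner-search analysis from Section \[sec:methods\] concrete, the following small sketch (plain C++ for illustration, not part of the DuctTeip code) evaluates the miss probability $\mathcal{P}(0)$ of the hypergeometric model and prints the success probability $1-\mathcal{P}(0)$; the values of $P$ in the loop are assumed example values.

```cpp
// Illustrative sketch: success probability of the randomized partner search.
// P processes in total, K of them busy, n uniform tries without replacement.
#include <cstdio>
#include <initializer_list>

// P(0) = C(P-K, n) / C(P, n): the probability that all n tries miss the busy set.
double miss_probability(int P, int K, int n) {
    double p = 1.0;
    for (int i = 0; i < n; ++i)
        p *= static_cast<double>(P - K - i) / static_cast<double>(P - i);
    return p;
}

int main() {
    const int n = 5;                   // number of tries per search round
    for (int P : {10, 100, 1000}) {
        int K = P / 2;                 // worst case: half of the processes busy
        std::printf("P = %4d, K = %4d, n = %d: success = %.4f\n",
                    P, K, n, 1.0 - miss_probability(P, K, n));
    }
    return 0;
}
```

For $K=P/2$ and growing $P$, the printed success probabilities decrease towards the limit $1-2^{-n}$ quoted above, which for $n=5$ is just under 97%.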
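The cost model can be tried out numerically in the same spirit. The sketch below (again illustrative, not taken from DuctTeip) computes the overhead ratio $Q = (S/R)(D/F)$ for a block matrix–matrix multiplication and a block matrix–vector multiplication; the ratio $S/R=40$ is the value quoted above for the test system, while the block dimension $m$ is an assumed example value.

```cpp
// Illustrative sketch: remote-execution overhead Q = (S/R) * (D/F) for two task types.
#include <cstdio>

struct TaskCost {
    const char* name;
    double flops;    // F: floating point operations per task
    double doubles;  // D: doubles moved when migrating the task (inputs + outputs)
};

int main() {
    const double S_over_R = 40.0;  // flop rate / memory transfer rate, as quoted for the test system
    const double m = 200.0;        // block dimension (assumed example value)

    const TaskCost tasks[] = {
        {"block gemm", 2.0 * m * m * m, 3.0 * m * m},  // F = 2m^3, D = 3m^2  ->  Q = 60/m
        {"block gemv", 2.0 * m * m,     1.0 * m * m},  // F = 2m^2, D = m^2   ->  Q = 20
    };

    for (const TaskCost& t : tasks) {
        double Q = S_over_R * t.doubles / t.flops;     // extra time relative to T_L = F/S
        std::printf("%-10s  Q = %.2f\n", t.name, Q);
    }
    return 0;
}
```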
However, if the tasks are less computationally intensive, it is not worth exporting tasks until the local work load is very high with, in this case, more than 20 tasks left in the queue for each exported task. The Cholesky benchmark {#sec:chol} ====================== We use a right-looking block Cholesky factorization as a benchmark problem to investigate the performance of the suggested DLB mechanisms. Most of the tasks in this application are computationally intensive, which makes it a good candidate for success. The algorithm is implemented with the DuctTeip framework and DLB can be turned on or off. The algorithm starts from the leftmost column, first the block on the main diagonal is factorized, and then the blocks below the diagonal are updated. Finally the blocks to the right of the column are updated. This procedure continues until all columns have been factorized. Since the input matrix is symmetric, only the lower triangular part is used in the algorithm. During the execution of the algorithm there is a data flow from the top rows and first columns to the bottom rows and last columns of the matrix. The algorithm and its corresponding task graph are illustrated in Figure \[fig:chol\]. ![The Cholesky algorithm (left) and a Cholesky task graph for a $4\times4$ block matrix (right). In the algorithm, the subroutine calls in colored boxes are implemented as tasks. $N$ is here the number of blocks. The numbers in the task graph correspond to the indices of the block that is updated, the solid lines indicate must-execute-before dependencies, while the dashed lines correspond to tasks that can be performed in any order, but not at the same time.[]{data-label="fig:chol"}](choldag.pdf){width="25.00000%"} The matrix blocks are distributed block cyclically onto the virtual process grid. The amount of communication as well as the load imbalance (see e.g., [@scalapack; @Zafari17]) is minimized when the process grid is square. This is not always possible, and here we instead consider cases where the number of processes is a prime number or a product of two different prime numbers. The non-square configurations lead to significant load imbalance, and we investigate if DLB can improve the performance in these cases. Experimental results {#sec:exp} ==================== The performance experiments have been performed at the Rackham cluster at Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX), Uppsala University. The cluster currently has 334 dual socket nodes with 128 GB/256 GB memory each. Each socket is equipped with a 10 core Intel Xeon E5 2630 v4 (Broadwell) processor running at 2.2 GHz. When running distributed applications at the cluster, a number of complete computational nodes are allocated. That is, no other application codes are running at the same nodes. The experiments are performed on applications running within the DuctTeip task parallel framework. In order to run an application with DLB, we need to find appropriate values for the work load threshold $W_T$, and the waiting time $\delta$. A suitable threshold value should depend on the application work load over the execution time. For the experiments performed here, it is determined offline by first running the application once without DLB, and then setting $W_T=\max_{i,t}w_i(t)/2$. For a production DLB version, the threshold could for example be initialized with a reasonable starting value, and updated locally by each process in relation to the local work load. 
In the basic model, selecting $W_T$ as described above corresponds to a behavior that resembles that of the equalizing model, as approximately half of the tasks will be exported for a busy process. The waiting time $\delta$ should instead depend on the network bandwidth and should be long enough to allow the waiting process to be found by a partner. We performed several experiments to find the expected time required for finding a busy–idle process pair. The experimental results are shown in Figure \[fig:delta\]. Both the average times and the maximum times are plotted. As expected, the average time grows slowly with the number of processes, and is largest for equal fractions of busy and idle processes. ![The average time for finding a busy–idle process pair.[]{data-label="fig:delta"}](T1_plot_max_avg-crop.pdf){width="50.00000%" height=".25\textheight"} In the following experiments, 10–15 processes are used, and according to the results in Figure \[fig:delta\], a waiting time of $\delta=10$ ms is a suitable value. The maximum work load over the execution for any process is $w_i=10$, and the threshold is chosen as $W_T=5$. Figure \[fig:dlbsxsful\] shows the workload and execution times for the Cholesky factorization for two different problem sizes and process grids. In both cases, the matrices are divided into $12\times12$ blocks and distributed block-cyclically over the processes. Here, the application of DLB is successful in both cases, and the total execution time is reduced by 5–6%. In some places, one can see that one process is much more loaded with DLB than without. These can be cases where an equalizing approach would be more beneficial. ![The work load for each process in the Cholesky factorization for matrix size $N=20\,000$, and $P=10$ processes arranged in a $2\times5$ process grid (left), and for matrix size $N=30\,000$, and $P=15$ processes arranged in a $3\times5$ process grid (right) without DLB (filled blue curves) and with DLB (red curves). []{data-label="fig:dlbsxsful"}](C3_dlb_figure_2_1-crop.pdf "fig:"){width="49.00000%"} ![The work load for each process in the Cholesky factorization for matrix size $N=20\,000$, and $P=10$ processes arranged in a $2\times5$ process grid (left), and for matrix size $N=30\,000$, and $P=15$ processes arranged in a $3\times5$ process grid (right) without DLB (filled blue curves) and with DLB (red curves). []{data-label="fig:dlbsxsful"}](C3_dlb_figure_2_2-crop.pdf "fig:"){width="50.00000%"} The process of randomly selecting partners for work migration and the variability of work load and type of tasks between different processes within an application make the results of applying DLB non-deterministic. In Figure \[fig:dlbfail\], two executions of the same application configuration are shown, where one is successful, while the other one fails to provide any improvement. The matrix is here divided into $11\times 11$ blocks, which matches the number of processes. ![The work load for each process in the Cholesky factorization for matrix size $N=100\,000$, and $P=11$ processes arranged in an $11\times 1$ process grid, without DLB (filled blue curves) and with DLB (red curves). Two executions are shown, one unsuccessful (left) and one successful (right).[]{data-label="fig:dlbfail"}](C3_dlb_figure_3_1-crop.pdf "fig:"){width="50.00000%"}![The work load for each process in the Cholesky factorization for matrix size $N=100\,000$, and $P=11$ processes arranged in an $11\times 1$ process grid, without DLB (filled blue curves) and with DLB (red curves). 
Two executions are shown, one unsuccessful (left) and one successful (right).[]{data-label="fig:dlbfail"}](C3_dlb_figure_3_2-crop.pdf "fig:"){width="50.00000%"} Conclusions {#sec:conc} =========== We have discussed how to create a low overhead DLB functionality in a task parallel programming framework. An important aspect of the approach is that all decisions are local and the processes act autonomously, hence avoiding bottlenecks due to global exchange of information. Processes that either have a high or low work load search for another process with the opposite load situation to share work with. This search is randomized. This could be a disadvantage if the communication is much more expensive when the computational nodes are far from each other. Then processes could be grouped and DLB be applied within the group. However, an advantage compared with for example diffusion-based DLB [@Khan10] is that load can be propagated to anywhere in the system, while diffusion needs to go via nearest neighbors. Very few assumptions are made in the model apart from the assumption that the context is a distributed task parallel run-time system. However, the threshold parameter $W_T$ is application dependent, at least in the basic model. The theoretical analysis in Section \[sec:theor\] can be used as a guideline for deciding on $W_T$. The waiting time $\delta$ can be determined once for a particular system. In the Cholesky experiments that we have performed, even though the load imbalance was not extreme, we could see that the basic DLB version could give improved performance. It is therefore of interest to perform further experiments and to develop the DLB model further. Acknowledgments {#acknowledgments .unnumbered} =============== The computations were performed on resources provided by SNIC through Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX) under Project SNIC 2017/1-448. [10]{} , [*A guide to dynamic load balancing in distributed computer systems*]{}, IJCSNS Int. J. Comput. Sci. Netw. Secur., 10 (2010), pp. 153–160. , [ *Star[PU]{}-[MPI]{}: Task programming over clusters of machines enhanced with accelerators*]{}, in Recent Advances in the Message Passing Interface - 19th European [MPI]{} Users’ Group Meeting, EuroMPI 2012, Vienna, Austria, September 23–26, 2012, pp. 298–299. , [*A novel dynamic load balancing library for cluster computing*]{}, in Third International Symposium on Parallel and Distributed Computing/Third International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Networks, 2004, pp. 346–353. , [*ScaLAPACK Users’ Guide*]{}, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997. , [*[P]{}a[RSEC]{}: Exploiting heterogeneity to enhance scalability*]{}, Comput. Sci. Eng., 15 (2013), pp. 36–45. , [*Intel^^ [C++]{} [C]{}ompiler 16.0 [U]{}ser and [R]{}eference [G]{}uide: [I]{}ntel^^ [C]{}ilk^TM^ [P]{}lus*]{}. <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-user-and-reference-guide-cilk-plus>, June 2016. , [*A dynamic load balancing approach with [SMPS]{}uperscalar and [MPI]{}*]{}, in Facing the Multicore - Challenge II, R. Keller, D. Kramer, and J.-P. Weiss, eds., vol. 7174 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2012, pp. 10–23. , [*Dynamic Load Balancing for Hybrid Applications*]{}, [P]{}h[D]{} thesis, Universitat Politècnica de Catalunya, Departament d’Arquitectura de Computadors, Barcelona, Spain, 2017. 
, [*Performance analysis of dynamic load balancing techniques for parallel and distributed systems*]{}, IJCNS Int. J. Comput. Netw. Secur., 2 (2010), pp. 123–127. , [ *[FLEX]{}-[MPI]{}: An [MPI]{} extension for supporting dynamic load balancing on heterogeneous non-dedicated systems*]{}, in Proceedings of Euro-Par 2013, F. Wolf, B. Mohr, and D. an Mey, eds., Springer, Berlin, Heidelberg, 2013, pp. 138–149. , [*Chunks and tasks: A programming model for parallelization of dynamic algorithms*]{}, Parallel Computing, 40 (2014), pp. 328–343. 7th Workshop on Parallel Matrix Algorithms and Applications. , [*Invasive compute balancing for applications with shared and hybrid parallelization*]{}, International Journal of Parallel Programming, 43 (2015), pp. 1004–1027. , [*Hybrid parallelization of [CFD]{} applications with dynamic thread balancing*]{}, in Applied Parallel Computing. State of the Art in Scientific Computing. PARA 2004, J. Dongarra, K. Madsen, and J. Waśniewski, eds., vol. 2732 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2006, pp. 433–441. , [*Cluster[S]{}s: a task-based programming model for clusters*]{}, in Proceedings of the 20th [ACM]{} International Symposium on High Performance Distributed Computing, [HPDC]{} 2011, San Jose, CA, USA, June 8–11, 2011, pp. 267–268. , [*Super[G]{}lue: [A]{} shared memory framework using data versioning for dependency-aware task-based parallelization*]{}, [SIAM]{} J. Sci. Comput., 37 (2015), pp. C617–C642. , [*Duct[T]{}eip: [A]{}n efficient programming model for distributed task based parallel computing*]{}. Submitted, 2017.
--- abstract: 'This text aims to give the reader a starting point for giving good talks on mathematical topics.' author: - | Daniel Pellicer\ Centro de Ciencias Matemáticas\ UNAM - Morelia\ bibliography: - 'refer.bib' title: Give a good mathematics talk ---

Is this text addressed to you?
==============================

This text was written with the intention of helping anyone who wants to improve the way they give mathematics talks. I recommend that you stop reading if you expect to find a faithful recipe that guarantees that your next talk will be a complete success. If, on the other hand, you are interested in discovering for yourself a path that allows you to improve your presentations, this text gladly offers you a starting point. Do not expect to find a magic potion that fixes all the shortcomings of your talks at once; instead, strive to improve them little by little until you reach a standard you feel satisfied with. This text by itself will not make you give better talks; your effort and dedication to improving will.

Why are you going to give a talk?
=================================

![image](fig1.pdf){width="45.00000%"}

There are many reasons why we give mathematics talks, and the weight we give to these reasons varies from person to person. Here are some of them. [*I give talks because it is a requirement.*]{} Frequently, the academic program we are enrolled in or our employment contract requires us to give one or more talks. [*I give talks because I want to transmit my knowledge.*]{} The fondness I have for the area of mathematics I study motivates me to share my knowledge with my fellow students and professors. [*I give talks to make a good impression.*]{} I want to show experts in my area and fellow students that I know a lot about the topic I study or that my results are of good quality. [*I give talks because I feel committed to doing so.*]{} My thesis advisor (or some other renowned person) invited me to give a talk and I do not want to turn down the invitation. [*I give talks because I like giving talks.*]{} Among the attention the speaker receives, being in front of an audience, and talking about my favorite mathematical topics, there is something that motivates me to present. Whatever your reasons for giving talks may be, you should take them into account when planning your presentations. Later sections describe common examples of ways to sabotage one's own reasons for giving talks. This text takes for granted that the objectives of your talk include getting the audience to absorb the ideas and results you are going to speak about. A large part of what follows is pointless if this is not relevant to you when you give talks.

Who is going to listen to the talk you will give?
=================================================

Imagine yourself giving a talk to elementary school children, and compare that experience with a presentation of your work before experts in the area. Compare, for example, the topics you would speak about and the vocabulary you would use before such different audiences. This exercise will make it clear that different venues require different talks and, therefore, different preparation. I trust that you would flatly refuse to give the very same talk at an elementary school and at a research conference.
Let us look at the relevant differences between the two audiences described above. In the first, the attendees would barely have a notion of the most basic mathematical concepts, while the second might include some of the people with the greatest knowledge in the world on the topic to be presented. The first has no knowledge of related areas with which to connect the topic at hand, while the second will welcome the use of related areas of mathematics or other sciences to complement the presentation. The individuals in the first audience do not have enough maturity to understand the relevance of the research carried out or in progress, while the entirety of the second is directly involved in the academic world. [r]{}[0.45]{} ![image](fig2.pdf){width="45.00000%"} The two audiences just considered are only extreme cases of a range of possible publics before which mathematics talks are presented week after week. But they need not be so radically opposed for it to be necessary to prepare the talks differently! Keep in mind that - the amount of knowledge, - the maturity, - the speed at which new ideas are absorbed, and - the understanding of the relevance of the research are very different between a student halfway through an undergraduate degree, a student finishing a master's degree, and a researcher in the area. Each of them requires a different way of approaching the topic to be presented. It is not always possible to please everyone who listens to our talks, especially when their levels of knowledge vary widely. It may happen that either those with more knowledge get bored, or those with less knowledge get lost. Sometimes this difficulty can be overcome by making a synthesis of the necessary background that satisfies the following characteristics: - It must be accessible enough that someone who does not master those topics can develop an intuition for them. This requires some ingenuity. Reciting long formal definitions generally has the effect of confusing rather than creating intuition! - It must include motivation and interesting examples, so that the experts may also remain interested in following this part of the talk. - The ideas must follow the natural flow required by someone hearing them for the first time. - It must not assume that our audience will understand in one minute something that took us five minutes or more to understand. Undoubtedly, doing the above takes time away from the topic we actually want to talk about. Nor does doing it guarantee that those who do not know the concepts will develop enough intuition to understand what the speaker wishes to convey in the talk, or that those who already know the concepts will not lose interest; however, it improves the probability that more attendees will understand and enjoy the talk. If you decide not to go out of your way to keep everyone attending your talk happy, and instead focus only on those at a certain homogeneous academic level, that level should be set according to the type of talk you have been offered. That is, if an expert in the area walks into a talk aimed at undergraduate students, you should assume that the expert will be patient and allow the other attendees to get to know the preliminary topics before reaching the part of interest to the expert, even if this means that only the last 5 minutes will be relevant to the expert.
On the other hand, in a research seminar talk it is to be expected that you can speak about your most advanced results. An undergraduate student without enough preparation to follow the talk will have to understand that they walked into the wrong academic event, or into it before being properly prepared for it. You should not feel bad that your talk did not meet the expectations of a minority of the audience, especially when the academic event was not particularly aimed at that minority. As the reader will see, preparing a mathematics talk for a homogeneous audience requires understanding the specific characteristics of the academic sector to which they belong. A talk for a heterogeneous audience, on the other hand, requires more care.

How long will your talk last?
=============================

In most cases, when we are invited to give a talk we are told how much time we will have for it, or at least given an estimate. A 50-minute talk makes it possible to lay out the historical context of the problem and motivate it properly. In this type of talk it is possible to state two or more results pertaining to the same set of preliminary definitions and results. One usually finds this type of talk in seminars, colloquia, and plenary lectures at conferences. In 20-minute talks it is not possible to cover many results. In them, priority must be given to covering everything necessary for the main result to be understood, so they should contain a relatively small number of preliminary definitions and results. Thesis progress reports at conferences and special session talks usually have approximately this duration.

What topic should you speak about?
==================================

[r]{}[0.45]{} ![image](fig3.pdf){width="45.00000%"} When we are invited to give mathematics talks at conferences or seminars, it is advisable to know the expected audience and the duration of the talk before deciding on the topic we will speak about. Deciding on the topic first and then fitting it to the available time and to the audience tends to result in failing to convey the ideas the talk was intended for. It is natural to be tempted to talk about our most recent achievements, or our most relevant results. This should not cause much difficulty at an event aimed at experts on those topics. However, difficulties naturally arise if the event is aimed at students or the audience includes researchers from other areas of mathematics. Imagine first that Andrew Wiles decided to speak about his proof of Fermat's last theorem. This topic could be viable before any audience, although the presentation should vary significantly from one audience to another. If he presents before a group of researchers familiar with elliptic curves (or with some other key element of his proof), Andrew will be able to show fine details of the technique he used. On the other hand, that same talk would not be understood if given as a half-hour keynote lecture to undergraduate students; before this latter audience he could emphasize the historical relevance of the problem, state it with full precision, and illustrate ideas used in the proof, without formalizing them or going into depth.
In fact, before second-year undergraduate students it would take him a long time (perhaps the entire time allotted to the talk) to define elliptic curves and create enough intuition for the results he mentions to make sense to the attendees. Before an audience even less experienced than one made up of undergraduate students, the main objective of his talk should be to state Fermat's last theorem in such a way that the whole audience understands the statement, without trying to sketch an idea of the proof. One can take as a goal conveying the historical importance and the inherent difficulty of a problem with such a simple statement. Fermat's last theorem is a topic suitable for varied audiences and different durations, partly because it is a famous topic that many people, and in particular the great majority of mathematicians, have heard about before. However, the main characteristic that allows it to be presented to audiences with little mathematical knowledge is that its statement requires very few prerequisites. To understand the statement it is enough to grasp concepts acquired in middle school and high school; even before an audience that does not handle them well, they can be explained in a short time in an intuitive way using examples. If, on the other hand, you proved the Riemann hypothesis and wanted to speak about such an outstanding achievement, you will run into the problem that few audiences are prepared to understand its statement. First-year undergraduate students are barely absorbing the concept of a function in a formal way, and have difficulties if the domain and codomain are not within the real numbers. It takes many of them many months to understand complex numbers well and to handle their basic operations with ease. Without a doubt, understanding the meaning of the function $$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}$$ with domain in the complex numbers will be a highly difficult challenge for them; it is likely that some will spend the remaining time of the talk trying to understand what sense there is in a sum of infinitely many numbers, or the meaning of raising an integer to an imaginary power. It would take you a long time to properly cover the necessary ingredients so that the statement is understood. The talk would probably be more successful if you devoted all of it to the history of the Riemann hypothesis and its relevance in the mathematics of the last century and a half, even if its statement is only sketched without expecting it to be fully understood. In view of the above, it is foreseeable that before an audience made up of people without university studies in mathematics, better communication between the speaker and the listeners will be achieved by choosing complex numbers as the topic rather than the Riemann hypothesis itself. Keep in mind that in a talk about complex numbers it is possible to mention the historical context and the relevance of the Riemann hypothesis without having to state it. It is time to turn our attention back to the reasons why you will give your next talk. If you are going to present a thesis before a committee in order to obtain a degree, your main objective should be to convince your examiners of your knowledge and skill in the topic. Your presentation should deal exactly with the content of your thesis, even if more than half of the attendees have no university studies in mathematics.
On the other hand, if your main objective is for the attendees to follow the ideas you present, you will inevitably have to take into account the available time and the expected audience before choosing the topic; otherwise you run the risk of being yourself the first to place obstacles in the way of the objective you set for your talk.

Which preliminaries should you include in your talk?
=====================================================

[r]{}[0.45]{} ![image](fig4.pdf){width="45.00000%"} Once the topic of the talk is established, we must keep in mind the result(s) or idea(s) we want to arrive at. The next important step is deciding how we are going to get there. Although when we write a text we must aim to be precise and correct, in a spoken text this is not always the case. In a research article the concepts used must be properly defined, and we must justify our claims with arguments, proofs, or references. In some talks a character similar to that of a research article is expected, while in others it is understood that the time and the audience's level of knowledge make rigor inappropriate. Imagine that you are invited to speak about your thesis before a group of experts on the topic. In that circumstance it is unnecessary to define some concepts that would be included in research articles, since all the attendees master them. The content of such a talk should focus on what is new for that audience, which may include recent definitions, the results of your research, and details of the proofs of those results. On the other hand, if you are going to speak about your thesis in a student seminar, attended by students from areas other than yours, you should include in the talk definitions and/or motivation for the concepts that are not studied in the common core of undergraduate and master's programs in mathematics. Assuming that the attendees know the meaning of homology, forcing, matroid, or Riemann surface (to give just a few examples) and not defining or motivating these concepts adequately usually means that the attendees cannot follow the talk from the moment these concepts become an important part of it. This encourages them to shift their attention for the remaining time to their own mathematical problems, to what they will do after the talk, or to their mobile devices. Since such a talk would start at a lower level than the talk that would be given before experts, it is to be expected that the same amount of time will not suffice to cover the same material. One alternative is to prioritize the understanding of the statements of your results, leaving aside their proofs. In doing so you will sacrifice the attempt to convince the attendees that your result is true, in favor of having them absorb what your result states, together with its relevance. Once you have decided which definitions to include in the talk, it is important to distinguish between those that it is enough to state and those on which you must elaborate. For example, if the audience consists of second-semester undergraduate students, it is possible to clearly define the orthogonal complement of a vector subspace of $\mathbb{R}^3$ in just a few seconds.
On the other hand, if you are going to talk about the Jordan canonical form of a square matrix, this will require, in addition to a definition, enough motivation and examples so that the results mentioning that concept make sense to the attendees. Sometimes a result we want to present requires many previous definitions and results, but it simplifies considerably if we explain it with a particular case in which our result does not become trivial. By devoting the talk to the particular case we give those listening to us a better chance to follow the ideas clearly, and we do not rule out the possibility of mentioning the general result at the end and explaining the differences. In contrast, attempting the general case from the beginning can be too much for an audience that knows little about the topic, and can cause them to stop following the talk before we are anywhere near mentioning the main result. Always consider the above when you are asked to speak about a specific topic before an audience poorly prepared to understand it. The things that are important or interesting to you are not necessarily so for the audience. Taking a long detour through topics unrelated to the talk in order to justify that a certain result is very important or very interesting can have the consequence that the attendees lose interest. In particular, if you give a talk on a certain area before an audience predominantly trained in a different area, put your enthusiasm into elaborating on the topics of common interest (without omitting the necessary definitions and motivation). It usually gives worse results to try to 'educate' that audience so that they see the beauty of what is attractive to you, if doing so takes a lot of time, requires many definitions, or involves discussing several technical details, distracting the audience from the real objective of the talk. Take, for example, a mathematician who recently obtained results on differential equations with many applications in fluid physics, and who thanks to this is invited to speak at a conference in that area of physics. Naturally, in those circumstances the talk is expected to revolve around the relevance of the results, their historical context, and their potential applications, and the body of the presentation should be devoted to that. It is to be expected that the audience will not look kindly on it if you devote most of the talk to the proofs of the results, or to foundations of differential equations that are not essential to understanding the statements proved. The same phenomenon, although to a lesser degree, occurs when a mathematician specialized in a certain area is invited to give a talk at an event specialized in another area. Finally, keep in mind that the people you are speaking to want to hear about the mathematics you develop, and not about the calculations that were necessary to reach your results. Showing long lists of equalities that follow one from another tends to cause disinterest in understanding them thoroughly, and in the context of a talk they can often be replaced by intuitive explanations of the reasons behind the truth of the result.

On the title and abstract of your talk
======================================

The choice of the title of our talk is not a minor detail, since it can influence how much or how little interest people have in attending.
Ideally, the title of our talk should simultaneously satisfy the following characteristics. - Be brief. - Give a first idea of what to expect from the talk (for example, by specifying the topic of the talk or suggesting the tone it will have). - Invite everyone who might be interested in the talk. It is not always easy to cover all these aspects, and you must keep in mind which ones to prioritize. There are several reasons why people you want to attend your talk may decide not to go. At large conferences with parallel sessions, attendees must choose at most one of the several talks offered at the same time. At smaller events, some attendees may agree to set aside time to work on pending projects, and to that end decide to skip a few talks. For those people who have reasons to skip some talks and would like to decide carefully which ones to attend, you make their job easy if they cannot tell from your title what your talk is about. This can happen because the title includes many technical terms, or because it assumes the attendees have prior knowledge of many concepts. If you cannot express everything you wanted in a few words, leave that part for the abstract! As an example, imagine that you will give an introductory talk on module theory, where your main objective is for other people, in particular young students, to get to know your area of study. To that end you plan to show a wide range of examples in detail. A title like 'Simple, semisimple, injective and projective modules' sends the message that the talk will not be suitable for those who do not already know those concepts (at least some of them). Instead you can call it 'Introduction to modules', 'The diversity of modules', or some other name (as formal or informal as you wish) that suggests that every undergraduate student is welcome. On the other hand, if your talk is about your thesis and you want to show your results, but due to the available time you must assume that people are familiar with modules, you should adopt a title like the first one, and not like the latter ones. It is desirable for the abstract to satisfy the following characteristics: - Describe the content of the talk. If at the time you must submit the abstract you do not yet know the content of your talk precisely, you can submit a vague abstract including phrases like '...results about... will be mentioned' without specifying which results they are. - Not include things that will not be covered. - Follow a style similar to that of the talk. If in the talk you will advance through the concepts little by little, the abstract should include phrases like 'the basic definitions and properties of ... will be covered', and it can include a few notions that help convey the kind of mathematics that will be developed. On the other hand, if prerequisites are assumed, these should be suggested in the abstract by using those terms without prior definition. - Good spelling and writing. Abstracts that do not meet this requirement tend to create a bad image of the author. This can predispose potential attendees to decide not to come to the talk, or not to take it as seriously as they should. An abstract is not a substitute for time that could be saved in the talk. You should not assume that at the time of your talk the attendees will remember what your abstract says.
If the precise definitions take up a lot of space and you insist on including them in the abstract, look for intuitive or vague descriptions that give a sufficient idea, and save the formal definition for your talk. There are no standard abstract formats that work for all talks. Each speaker must prepare their own before their talk. You can use abstracts from previous events of a similar nature as a guide to see a sample of styles and lengths.

On the amount of material in your talk
======================================

Imagine that you read a 100-page novel in which 200 characters with first and last names appear. Each of these characters is introduced with few situations around them, since after all the novel is only 100 pages long. You will undoubtedly have trouble memorizing who is who. You may read the novel straight through hoping to be able to detect from context which person each name corresponds to, occasionally having to overcome the confusion caused by mistakenly renaming some character. Two other likely outcomes in this situation are: that you lose interest in the novel and stop reading it, or that you frequently go back to earlier pages to remember who the person with that name was and in what context they appeared. Even in this last case, it is very likely that you will be left with the impression that the plot of the book could have been presented with fewer character names (perhaps even with fewer characters). The same phenomenon occurs in talks, regardless of whether they are 20, 30, or 50 minutes long. In them the concepts (the actors of the plot) are sometimes introduced by formal definitions, sometimes by intuitive definitions, and sometimes the speaker expects the audience to know them from previous talks or courses. Keep in mind that you want to avoid the analogy with the novel, and therefore you should aim either for your talk to have few definitions, or for each definition to be properly exemplified and motivated so as to cause the least possible confusion in the listeners. There is no universal recipe for solving this problem when it arises. Exemplifying and motivating definitions takes time, and one should not overdo it if the main content of the talk is to be given its due time. On the other hand, deciding to include few definitions can mean that you are not understood when you mention concepts you did not define. Of course, you should not assume either that the attendees are familiar with concepts they have heard little or nothing about up to that point. You may consider the following alternatives. Suppose that some hypotheses of a theorem, although necessary, do not fall within the general framework of the topic being presented, and arise from calculations or from previous results of a different flavor from that of the talk. The speaker should try to spend as small a portion of the available time as possible mentioning those hypotheses. One way to do this is: do not state them! and replace them with phrases like 'some technical properties'. If any of the attendees is particularly interested in them, they can ask about them at the end of the talk, at which point it is possible to expand on these hypotheses without interrupting the natural flow of the planned ideas.
Another common way to avoid giving the listener an excess of definitions to memorize is to avoid some names that at first give no information to someone who does not know them (think of the first time you heard 'isomorphic', 'injective', 'orthogonal'). Instead, one can use informal qualifiers that allow the listener to grasp the idea one wants to express. For example, suppose you want to give a talk for students just entering the undergraduate program, and one of the hypotheses you need for your main result is that a certain pair of vectors be linearly independent, since otherwise a division by zero will appear. You can illustrate what would happen if the vectors were linearly dependent (without using those terms) and conclude that the desired set of vectors will be called 'good', or alternatively that the linearly dependent ones will be called 'bad'. Try to make it clear that such terms will be used only during the talk, and that those who want to go deeper into the topic should adopt the conventional terms. In written texts it is common to group the definitions at the beginning and then use the concepts as they are needed. In a spoken text there is no opportunity to 'go back a few pages' to read the definition that is being discussed and that we have already forgotten; it is more convenient to give and motivate each definition when it is about to be used. In the same way that you were invited to reduce the number of definitions to a level that the audience can easily absorb, an analogous invitation is made for the number of results. Some preliminary results do not contribute ideas related to the topic of the talk, and for that reason they could be omitted, at most mentioning that their existence makes it possible to state the central results. Phrases like 'technical results allow us to conclude that...' are acceptable in a talk, and can be clarified later for those who show interest in them. Finally, you are invited not to fall into the temptation of speaking faster in order to cover more material. Increasing the speed of the speech tends to have negative consequences, caused by giving the attendees less time to understand the ideas being presented to them. Consider that by including more material than you should, you run the risk that fewer ideas are understood than if a smaller amount of material had been presented. If so, is there any point in including that additional material?

Preparing your talk
===================

Once you have decided on the topic of the talk and the character it will have, as well as the definitions and results you want to cover, you can proceed to work out the sequence of ideas to express in the talk. During this process you must take care that any mention of a mathematical object made before that object is properly defined should be informative about where the talk is heading, and should treat the object only in an intuitive way. Likewise, if you plan to state a certain result before having adequately described the components of the result, let your listeners know that those components will be explained later. Be careful not to send the message that they were not paying attention to definitions when (hypothetically) you mentioned them. I suggest you do the exercise of putting yourself in the audience's shoes.
Review the order of the ideas you plan to express, trying to anticipate the degree to which the attendees can understand each idea given their knowledge and what you will have said earlier in the talk. It is common to realize that it would be good to change the order of the definitions, or to include motivation. Sometimes we are tempted to include in the talk previews of how what we have just explained is going to be used. Sometimes that adequately justifies a convoluted definition or a technical result. However, such a preview does not contribute to the talk if part of what is said is not understood by your audience, either because it will be defined later or because the topic has not been properly introduced. In those cases it is better to omit the preview. An important part of preparing a talk is doing a full rehearsal, timing how long it takes you to finish your presentation. This can be before your advisor, in a seminar, or by giving the talk to yourself. It is important that you take the time you were assigned seriously, and that you are prepared to finish without talking about everything you had planned because time ran out, or to add details or examples if time allows. Do not forget to allow time for questions, whether they are asked during the talk or at the end of it. It is in poor taste to finish a talk far too early, since it leaves the impression that the available time was poorly used. Finishing a talk after the allotted time causes more concrete problems. Sometimes attendees have commitments immediately after the hour devoted to a seminar; running over in this type of talk will inevitably force those people to leave the room, causing distraction for the audience and the speaker. It is also important to respect the attendees' time at events with parallel sessions; if someone planned to listen to a talk after yours taking place in another session, that person will not be able to attend both talks if you take too long to finish yours. Finally, it is advisable not to create difficulties for the organizers, for example if the venue where the academic event takes place must be vacated punctually at the scheduled end time of the last talk (be it yours, or that of someone speaking after you). If the rehearsal of your talk ends several minutes before the available time is up, I suggest preparing a list of examples, results, or clarifications that are candidates to be added; these should be prioritized according both to their relevance to the general topic of the talk and to how useful they will be in facilitating the understanding of the material, and you should include only those highest on the priority list, so that the time of the talk is not exceeded. Avoid the temptation to include an idea that takes your listeners away from the topic you are going to speak about, especially if that idea requires adding unnecessary preliminary definitions and results. The opposite case is more frequent. It often happens that when practicing our talk we find that we exceed the time offered. On those occasions it is necessary to remove parts of the talk, and this process requires some care.
It is advisable to first identify whether we have included superfluous definitions or results that divert attention toward topics other than the main one to be treated; these should be the first candidates for removal from the content. Next, one can look within the content of the talk for definitions and results that are too technical for our explanation of them to contribute substantively to the presentation; that explanation can be replaced by a brief mention, clarifying that the details are omitted due to their technical nature. If the above is still not enough and the talk is still too long, one can reduce the number of formal definitions, exchanging some of them for intuitive ideas; one should particularly choose those concepts whose formal definition requires a lot of explanation not directly tied to the general idea being treated, and whose essence can be conveyed by an intuitive idea that takes little time to describe. If, after following the above suggestions, the presentation still takes too long, you should seriously consider reducing the content of the talk, removing the results or ideas that contribute least to the topic being treated. Alternatively, sometimes the same content can be presented, but without the originally planned generality; the definitions and results can be followed for a particular case, and only at the end of the talk can the directions in which they generalize be mentioned. This is useful when the definitions are complicated and require many examples and much motivation, but there are simple or well-known particular cases in which the results to be communicated can be illustrated. Even when you have timed your talk and its duration is optimal, you should keep in mind that unforeseen events may force you to finish your talk before covering everything planned (many questions during the presentation, a late start, etc.). Try to ensure that, should you find yourself in the situation of having 5 minutes left and still much to present, it is easy to decide which parts to omit so that you do not leave out the most important ideas. ![image](fig5.pdf){width="8cm"}

Choosing the words you will use in your talk
============================================

Each of us has our own character and our own tastes. These are a determining factor in the style we choose to present our results, as long as that choice is not subordinated to the requirements of the forum in which we will speak. If joking comes easily to you among your acquaintances, you can try to include jokes and funny remarks in your talks; this can have the consequence that both you and your audience relax from the rigor of the topic presented and the presentation becomes more pleasant. On the other hand, if joking is not your thing, you do not have to force jokes into your presentations, and you can instead give a presentation as sober and formal as you wish. Doing something you are not used to and that does not particularly excite you can burden you with one more worry, and that can lead you to show insecurity. The choice between rigorous and colloquial language must take into account the level of understanding you want from the audience. Before a highly qualified audience you can save time by using rigorous language, since everyone listening to you is expected to master it.
Colloquial language is usually useful when certain concepts and results are unknown to the audience; in those cases it may be better to give brief intuitive ideas, together with easy-to-remember names for concepts and results, avoiding the rigor that could require much more time to be fully understood. It is difficult to find appropriate language for speaking to an audience with little background in the topic being presented. Imagine yourself speaking about linear algebra to people with no university training in mathematics. The topic you are speaking about may well be very appropriate for your audience; after all, linear algebra has many real-life applications. However, the audience is not equipped to understand terms such as 'linearly independent' or 'spanning a subspace of dimension $3$', and we are not used to speaking about these concepts in any other words! In these circumstances you must resist your instinct to formally define linear independence (think of how strange the definition will seem to your listeners) and look for alternative ways out. In talks of this kind it is entirely justified to use colloquial words to refer to these concepts and to describe them intuitively. For example, you can explain that you will call a triple of vectors [*flattened*]{} when they lie in a single plane, and replace 'linearly independent' with 'not flattened'. Note that the term 'flattened triple' is preferable to 'good triple', since the former describes the concept to some degree, while the latter only expresses a subjective opinion. On the other hand, suppose that among the attendees there is someone who does not readily associate the terms 'independence' or 'linear' with the concept of 'linear independence', and that concept is defined without enough motivation or enough examples (that is, without devoting enough time to it). A frequent consequence is that this person stops understanding the statements in which linear independence is mentioned, and during those stretches of the talk devotes themselves to checking the computations being carried out, without understanding the ideas behind them. At other times the listener gives up hope of following the speaker to the end of the talk and turns their attention to personal matters. I invite you to compare these outcomes with the objective you set for your talk. Each area of mathematics has a specialized language that has been shaped over the years. People who do not work in our area do not necessarily use the same language, even when they use the same concepts. Take, for example, the term 'manifold'. Although the notion that a topologist, a differential geometer, and an algebraic geometer have of these objects may be compatible, differences may arise when specific results are discussed. It may happen that the speaker allows the manifold to have a finite set of singular points, while this is forbidden in the area of some members of the audience; or that someone else assumes by default that manifolds are orientable, contrary to the more general theory that admits non-orientable manifolds. This usually causes confusion (sometimes small and sometimes large) in the audience.
A common example of this occurs in topology talks, in which the word 'function' is used to refer exclusively to 'continuous functions', since these are the only ones relevant to the results of the talk. Doing this in front of people who work in other areas of mathematics can also cause confusion. You will avoid this kind of confusion if you either avoid terms that are not standard for your audience, or announce at the beginning of your talk the meaning you will give to those terms. It is desirable to motivate the topics we speak about by highlighting their historical or recent relevance. However, this may require some tact. One should be careful with superlatives such as 'the greatest mathematician in history' or 'the most beautiful area of mathematics', especially when there are people in the audience from areas other than ours. These opinions are subjective and can raise doubts about how well informed the speaker is, or provoke a clash of views that is unnecessary for the talk. On the other hand, well-documented facts, or opinions backed by verifiable references, should always be welcome.

General considerations
======================

When we ask different people for suggestions on preparing our next talk, we hear answers of many kinds. Some of them offer practical advice, though not always appropriate for every talk, and in particular for the upcoming one. Below I list some examples. [*(a) Every talk must contain a proof.*]{} [*(b) Start the talk with a joke.*]{} [*(c) Divide the talk into three parts; the first should be understandable by your entire audience, the second by the people in your area, and the third only by your closest colleagues.*]{} On some occasions suggestions of this kind will be appropriate, but they are definitely not universal, and you should follow them only when you believe they suit your talk. Consider that one of the most famous and successful talks in the history of mathematics (if not the most famous and successful) is D. Hilbert's talk in which he presented the 23 problems to be attacked during the twentieth century. Hilbert's exposition would not have been better had it included proofs, and for him it was desirable that the talk in its entirety be accessible to the mathematicians present. Compare that talk with the suggestions listed above. We do not always have the same tools available for giving a talk. Sometimes we are asked to give it at the blackboard, while at other times it is necessary to prepare slides to be projected. When you are free to choose, pick the option you consider best suited to the objectives you have set for your talk; each has its advantages and disadvantages. Projected presentations convey information without requiring time to write it on a blackboard, which helps the content be presented in less time. This makes computer presentations ideal for short talks. When giving them it is important to avoid moving too fast. Although the speaker can adjust the speed at which the ideas are presented, they cannot control the speed at which the listeners process them. It should come as no surprise that in one-hour seminars there are people who specifically request that the talks be given at the blackboard. Preparing a good slide deck to accompany a talk is not an easy task.
The slides must not be a cheat sheet from which we can read every word we are going to say; they should be made not to help the speaker but to ease the audience's understanding. It is very important to understand that the words that appear on the screen will not necessarily be read, especially if the speaker is talking while the slide containing those words is being shown. The text we project should complement what we say. A long text invites the audience not to read it and to wait for the speaker to explain; if the text is not going to be read, one should ask why it was written. For this reason, it is suggested not to base your presentation on slides showing pages of papers, or on slides containing a lot of text. Instead, it is recommended to include diagrams or drawings that illustrate what is being discussed. One can also include, as an outline, only the central ideas of what is being said while the slide is shown.

Keep this in mind during your talk
==================================

![image](fig6.pdf){width="45.00000%"}

It is normal to be nervous during the talk, especially in our first presentations. There are factors that contribute to this, and also ways to combat them. First of all, the tone of voice and body posture can strongly help create a favorable or an adverse atmosphere. Someone who talks to the blackboard, does not look at the audience, and speaks in a very low voice invites people to stop paying attention. Noticing a drop in the attention being paid will make the speaker even more nervous. For this reason it is advisable to seek constant eye contact with the audience and to modulate your tone of voice so that the most important points stand out (you can raise your tone a little for them), and never to speak so quietly that what we say cannot be heard well. When we are nervous we are prone to creating mistaken impressions in ourselves, in particular when trying to read the faces of those listening to us. One such impression is assuming that there are people in the audience who are getting bored, and we may make the mistake of speeding up the pace at which we cover the topics. Another is when the faces of some attendees suggest they are not understanding, and we devote more attention to explaining in more detail than planned. We must avoid taking these impressions as established facts; if possible, we should ignore impressions that have no solid basis. Consider that it is possible that, despite appearances, the audience was having a hard time following the talk, and speeding up only makes the situation worse. On the other hand, elaborating on the explanations more than planned tends to break the logical flow of the ideas. If you have no convincing evidence that something is going wrong, carry on as planned! It happens to all of us from time to time that there is an error on our slides and we only notice it during the presentation. There are several ways to handle it. Here are a few examples:

- briefly point out the error and apologize, and then continue as if nothing had happened;
- insist on the supposed correctness of the text containing the error (for no more than a few seconds), and then clarify what should have been written;
- avoid reading the erroneous text and act as if it were not there.

Each of us must choose how to proceed according to our character, keeping in mind that whichever way out we choose it should be the case that

- no false information is given,
- not much time is spent remedying the situation,
- you do not get distracted by the error and you pick up the flow of the talk again as soon as possible.

There are occasions on which some of these strategies should be avoided. Bear in mind that if an error is obvious, avoiding any mention of it may be the wrong strategy, since you may be interrupted so that the audience can verify that it is indeed an error. On the other hand, a spelling mistake hardly ever causes confusion and does not deserve any time spent pointing it out. In earlier sections I suggested avoiding formality in parts of the talk, whether for reasons of time, of the flow of ideas, or of the likelihood of being understood by the audience. This should not be interpreted as an invitation to abandon formality altogether, especially if omitting details leaves your listeners with a false impression of the topic at hand. Above all, I suggest you avoid telling your audience lies. Even if you do not know the answer to a question you are asked, it is better to say 'I don't know' than to lie! Keep in mind that some comments and questions during talks are meant to suggest to the speaker directions in which to continue, and the speaker is not expected to know everything about those directions in detail. The intention of those who ask questions at talks is almost never to expose something the speaker does not know.

After the talk there are still things you can do
================================================

Some aspects of the talk we have just given are easy to evaluate. One of them is the time: did we plan it well? did we run short? did we have time left over? Detecting other aspects requires a willingness to be self-critical. It is hard to realize whether we spoke too fast or too quietly. For that kind of thing it is good to have someone you trust among the attendees who can point out those details. The audience itself provides evidence of how well or how badly the talk went. Pertinent interruptions usually indicate that those people care about following what we are saying. Requests for examples, or to restate an earlier result, indicate that the topic and the way of presenting it captured those people's attention. A talk without interruptions, whose only question at the end seeks to relate what was presented to other topics in mathematics, is a sign that not many accompanied the speaker to the end. Repeated requests for definitions of basic concepts are a clear sign that the talk was pitched at far too high a level for the audience it was given to. When we give a disastrous talk we tend to get discouraged, and it often occurs to us to give as few talks as possible in the future. However, what we should do is precisely the opposite! First we must identify the negative aspects of our talk that we do not want repeated, and if possible also the positive aspects that we want to keep in our presentations. Then we must look for suitable venues to give talks, such as formal or informal seminars. In each talk we should try to correct some negative aspect.
It is possible that, in order to give a talk that finally leaves you satisfied, you will have to give five other talks in between; if you persevere, rest assured that day will come. If instead you decide to give as few talks as possible, you may spend an entire lifetime in which giving talks means anguish and disappointment. Take heart! We can all give a good talk!

About this text
===============

This text is a preliminary version of a paper submitted to Miscelánea Matemática. It was written in Mexico in 2017 and is the product of the concern of several colleagues about the low quality of many mathematics talks we witness in seminars, colloquia, and conferences. The symptoms of this situation are of various kinds, and some are described below. Nowadays, attendance at the national level at colloquia and seminars for mathematics fellows is not as large as it should be. When students and academics are asked about this, it becomes apparent that for many of them attending these events is a waste of time. A fundamental part of this opinion is that the material presented is incomprehensible to them. We note with concern that attendees at talks, with ever greater frequency, spend a good part of the presentation in front of the screen of their laptop, a tablet, or a phone. It is not rare to see that in this situation some screen is showing a page of a social network or a video game. It is common for there to be few questions at the end of talks, and even fewer that are relevant to the topic presented. Some are asked out of courtesy, posed by some expert in the area or by the moderator so that the speaker does not get discouraged. Besides the symptoms mentioned above, we notice a lack of awareness in the mathematical community of the seriousness of this situation, and for that reason there are few actions aimed at remedying it. As if that were not enough, there is constant pressure to give talks for the purposes of funding, obtaining degrees, hiring, and promotions; but it does not include any demand for quality in these talks. In comparison, the pressure for people to listen to talks is almost nil. Accordingly, the future that awaits us is one in which talks are understood by none of those who hear them or, worse still, in which there is no one to hear them. The agencies that provide financial support and the committees that decide on degrees, hiring, and promotions would not notice if either of these were the case. This text seeks to publicize the existence of the problem and to ask all its readers to take part in possible solutions. It will be a great step if each reader decides not to be part of the problem and takes it upon themselves to give talks of good quality. This text does not try to say that there is a recipe for giving a perfect talk. The author of these lines is convinced that every speaker should put a touch of their own personality into each of their talks, regardless of whether they are a predominantly serious person or a joker. What is central in the best talks I have heard is not the speaker's tone, but a series of factors that keep me captivated throughout the entire presentation. I do not expect the whole mathematical community to agree with me on every one of the topics I address in this text. On the contrary, I warmly invite every mathematician to attend talks and pay them as much attention as possible.
When doing so, I suggest carrying out the exercise of determining which elements you liked and which you did not, so that you try to adopt the former as much as possible and avoid falling into the latter, regardless of whether this contradicts what is written here. I also recommend that the reader look for other texts whose purpose is to give guidance on preparing talks, in order to have more points of comparison. Some of them, with different particular aims, are listed below. We begin with [@Halmos], a text written more than 40 years ago at the request of a mathematical society, focused on presentations to mathematicians not specialized in the area. The text [@Matilde] is the basis of a course aimed at expressing oneself well when giving talks, especially to difficult audiences. In [@Agelos] a concern similar to the one presented here is expressed, although other aspects are emphasized. Finally, the texts [@Continuos1] and [@Continuos2] give guidelines and requirements for giving 20-minute talks at an annual event in a specific area of topology. Lastly, I apologize in advance if any reader feels offended by the examples I mention of negative elements of a talk. None of them has been included with the intention of attacking any area of mathematics or any speaker in particular.

Acknowledgments
===============

The author thanks Coppelia Cerda for preparing the illustrations, Amanda Montejano, Patricia Pellicer, Ferrán Valdez, and José Antonio Montero, and the anonymous referee, whose suggestions helped make this a better text.
--- abstract: | Quantum Machine Learning is an exciting new area that was initiated by the breakthrough quantum algorithm of Harrow, Hassidim, Lloyd [@HHL09] for solving linear systems of equations and has since seen many interesting developments [@LMR14; @LMR13a; @LMR14a; @KP16]. In this work, we start by providing a quantum linear system solver that outperforms the current ones for large families of matrices and provides exponential savings for any low-rank (even dense) matrix. Our algorithm uses an improved procedure for Singular Value Estimation which can be used to perform efficiently linear algebra operations, including matrix inversion and multiplication. Then, we provide the first quantum method for performing gradient descent for cases where the gradient is an affine function. Performing $\tau$ steps of the quantum gradient descent requires time $O(\tau C_S)$, where $C_S$ is the cost of performing quantumly one step of the gradient descent, which can be exponentially smaller than the cost of performing the step classically. We provide two applications of our quantum gradient descent algorithm: first, for solving positive semidefinite linear systems, and, second, for performing stochastic gradient descent for the weighted least squares problem. author: - 'Iordanis Kerenidis [^1]' - 'Anupam Prakash [^2]' bibliography: - 'bibliography.bib' title: Quantum gradient descent for linear systems and least squares --- Introduction ============ Quantum Machine Learning is an area that has seen a flurry of new developments in recent years. It was initiated by the breakthrough algorithm of Harrow, Hassidim, Lloyd [@HHL09], that takes as input a system of linear equations which is sparse and well-conditioned, and in time polylogarithmic in the system’s dimension outputs the solution vector as a quantum state. In other words, given a matrix $A$ and a vector $b$, it outputs the quantum state $\ket{A^{-1} b}$ corresponding to the solution. Note that this algorithm does not explicitly output the classical solution, nevertheless, the quantum state enables one to sample from the solution vector or perform some interesting computation on it. This is a powerful algorithm and has been very influential in recent times, where several works obtained quantum algorithms for machine learning problems based on similar assumptions [@LMR14; @LMR14a; @LMR13a]. The review [@A15] further discusses these developments and the underlying assumptions. More recently, we also provided a new application to competitive recommendation systems [@KP16], where the quantum algorithm can provide a good recommendation to a user in time polylogarithmic in the dimension of the system and polynomial in the rank which is much smaller than the dimension, unlike classical recommendation systems that require time linear in the dimension. In all these examples, the power of quantum information comes from quantum routines that can implement efficiently some linear algebra operations, such as matrix multiplication, inversion or projection. A classical linear system solver is a very powerful tool in machine learning, since it can be leveraged to solve optimization problems using iterative methods. Such iterative methods are very versatile and most optimization problems can be solved using first order iterative methods like gradient descent or second order methods like Newton’s method and the interior point algorithms for linear and semidefinite programs. 
Each step of an iterative method involves a gradient computation or the inversion of a positive semidefinite Hessian matrix, but these methods allow us to solve a large number of problems that do not have a closed form solution and thus can not be solved using linear systems alone. If we look a little more closely at these iterative methods, we see that they start with a random initial state $\theta_0$ that is updated iteratively according to a rule of the form $\theta_{t+1} = \theta_t + \alpha r_t$. The first thing to notice is that in many cases, these updates can be implemented using linear algebra operations such as matrix multiplication and inversion. This raises the question of whether, in a similar manner, the quantum linear system solvers can be leveraged to solve more general optimization problems via iterative methods. However, there are some obvious obstacles towards realizing a general quantum iterative method. Most importantly, the quantum routines for matrix multiplication and inversion only output a quantum state that corresponds to the classical solution vector and not the classical vector itself. Hence, if at any step during the iterative method the quantum procedure fails, one needs to start from the very beginning of the algorithm. Another main problem, is that the current quantum linear systems solvers only work for sparse matrices. Even though one may argue that in some practical settings the data is indeed sparse, there is no reason to believe that the matrix multiplications necessary for the updates in the iterative methods will be sparse. Let us discuss some related work: in [@RSPS16] the authors develop quantum algorithms for gradient descent and Newton’s methods for polynomial optimization problems, but the proposed methods can be used only for a logarithmic number of steps. More precisely, the running time of the quantum algorithm presented depends exponentially on the necessary number of steps. While in some cases the gradient descent may converge fast, it is clear that the running time of a general quantum gradient descent method should scale at most polynomially with the number of steps. Quantum speedups for semidefinite programs were obtained in [@BS16] using Gibbs sampling and the multiplicative update method, while developing a quantum interior point method is stated as an open problem there. Our results ----------- In this work we make significant progress on both the challenges described above. We start by designing an improved quantum linear systems solver and then use it to define an efficient quantum iterative method for implementing gradient descent with affine update rules. Last, we show how to use our iterative method for solving linear systems and for performing stochastic gradient descent for the weighted least squares problem. ### An improved quantum linear systems solver First, we provide a quantum linear system solver that outperforms the current ones for large families of matrices and provides exponential savings for any low-rank (even dense) matrix, where by low-rank we mean that the rank is poly-logarithmic in the dimensions. Let us remark that the running time of the HHL algorithm is $\tilde{O}(s(A)^{2}\kappa(A)^{2}/\epsilon)$ where $s(A)$ and $\kappa(A)$ are the sparsity and the condition number of the matrix $A$, $\epsilon$ the error parameter, and we hide factors logarithmic in the dimension of the matrix. 
Subsequent works have improved the running time of the HHL algorithm to linear in both $s(A)$ and $\kappa(A)$ and the precision dependence to $\log(1/\epsilon)$ [@A12; @CKS15]. In the case of dense matrices, these algorithms run in time linear to the dimension of the matrix. Our quantum linear systems solver runs in time that, instead of the sparsity, depends on the matrix parameter $\mu(A)$ which is always smaller than the Frobenius norm of the matrix, ${\lVertA\rVert}_{F}$, and also smaller than $s_1(A)$, the maximum $\ell_1$ norm of a row of the matrix $A$. \[Theorem \[lqmat\]\] There exists a quantum algorithm that given access to a matrix $A$ and a vector $b$, outputs a quantum state $\ket{z}$, such that ${\lVert\ket{z} - \ket{A^{-1}b}\rVert} \leq \delta$, with running time $\tilde{O}(\frac{\kappa^2(A) \mu(A)}{\delta})$. Let us compare with the HHL algorithm under the same assumptions [@H14] that the eigenvalues of $A$ lie in the interval $[1/\kappa, 1]$. Note, that for the Frobenius norm we have ${\lVertA\rVert}_{F} = ( \sum_{i} \sigma_{i}^{2} )^{1/2} \leq \sqrt{rk(A)}$. Hence, our algorithm achieves an exponential speedup even for dense matrices whose rank is poly-logarithmic in the matrix dimensions. Moreover, while for general dense matrices the sparsity is $\Omega(n)$, we have $\mu(A) \leq {\lVertA\rVert}_{F} \leq \sqrt{n}$ and thus we have a worst case quadratic speedup over the HHL algorithm. Moreover, for the same normalization as in [@CKS15], we have that $s_1(A) \leq s(A)$ for all matrices $A$, hence we improve on the linear system solver in [@CKS15], whose running time depends on $s(A)$. For example, real-valued matrices with most entries close to zero and a few entries per row close to 1, e.g. small perturbations of permutation matrices, will have $s(A)=\Omega(n)$, ${\lVertA\rVert}_F=\Omega(\sqrt{n})$, while $s_1(A)$ will be $O(1)$ for small enough perturbations. Last, the parameter $\mu(A)=\Omega(\sqrt{n})$ for some matrices, meaning that our algorithm does not provide exponential savings for all matrices. We believe that one can improve the dependence on $\kappa(A)$ to linear and the dependence on the error to $\log (1/\delta)$ using the techniques in [@A12; @CKS15]. In this work, we focus on achieving a dependence on $\mu(A)$ instead of the sparsity of the matrix. The main technical tool is an improved quantum walk based algorithm for performing singular value estimation (SVE). In other words, we are given a matrix $A= \sum_{i} \sigma_{i} u_{i} v_{i}^{T}$ where $\sigma_{i}$ are the singular values and $(u_{i}, v_{i})$ are the singular vectors, and a vector $b$, which we can see as a superposition of the singular vectors of the matrix $A$, i.e. $\ket{b} = \sum_i \beta_i \ket{v_i}$, and the goal is to coherently estimate the corresponding singular values, i.e. perform the mapping $$\ket{b} = \sum_i \beta_i \ket{v_i} \ket{0} \stackrel{SVE}{\rightarrow} \sum_i \beta_i \ket{v_i}\ket{\overline{\sigma}_i},$$ such that $\overline{\sigma}_i $ is a good estimation for $\sigma_i$. The relation between quantum walks and singular values has been well known in the literature, for example see [@S04; @C10]. Here, we use an approach similar to [@KP16] and a better tailored analysis of the quantum walk in order to achieve the improvements in running time, and prove the following result \[Theorem \[sveplus1\]\] Given access to a matrix $A$, there exists a quantum algorithm that performs Singular Value Estimation for $A$ to precision $\delta$ in time $\widetilde{O}(\mu(A)/\delta)$. 
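To make the comparison of matrix parameters above more concrete, here is a small NumPy sketch (purely classical and only illustrative; the dimensions, perturbation size and helper-function names are arbitrary choices, not taken from the analysis) that computes the sparsity $s(A)$, the Frobenius norm ${\lVertA\rVert}_F$ and the maximum $\ell_1$ row norm $s_1(A)$ for a small perturbation of a permutation matrix, the example mentioned above.

```python
import numpy as np

def sparsity(A):
    """s(A): maximum number of non-zero entries in a row of A."""
    return int((np.abs(A) > 0).sum(axis=1).max())

def max_l1_row_norm(A):
    """s_1(A): maximum l1 norm of a row of A."""
    return float(np.abs(A).sum(axis=1).max())

rng = np.random.default_rng(0)
n = 500
P = np.eye(n)[rng.permutation(n)]      # a permutation matrix
E = 1e-3 * rng.random((n, n))          # small dense perturbation
A = P + E                              # dense, but close to a permutation matrix

print("s(A)    =", sparsity(A))            # Omega(n): every entry is non-zero
print("||A||_F =", np.linalg.norm(A))      # Omega(sqrt(n))
print("s_1(A)  =", max_l1_row_norm(A))     # O(1) for small enough perturbations
```

For such matrices the sparsity grows linearly with $n$ while $s_1(A)$ stays bounded, which is exactly the regime where a running time governed by $\mu(A)$ rather than by $s(A)$ is most advantageous.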
The SVE procedure can be used to perform a number of different linear algebra operations. For example, solving a linear system reduces to performing the SVE, then applying a conditional rotation by an angle proportional to the inverse of each singular value, and performing the SVE again to erase the singular value estimation. Namely, the following operation is performed $$\sum_i \beta_i \ket{v_i}\ket{\overline{\sigma}_i} \ket{0} \rightarrow \sum_i \beta_i \ket{v_i}\ket{0} {\left ( \frac{\sigma_{min}}{\overline{\sigma}_i}\ket{0} + \sqrt{1 - \frac{\sigma^2_{min}}{\overline{\sigma}^2_i} } \ket{1} \right )}$$ Note that conditioned on the last register being $\ket{0}$, one gets a good approximation to the desired output $\ket{A^{-1}b}= \sum_{i} \frac{\beta_{i}}{\sigma_{i}} \ket{v_{i}}$. To complete the analysis of the linear system solver, we take the appropriate error in the SVE estimation (which turns out to be $O(\frac{1}{\kappa(A)})$) and perform amplitude amplification to increase the probability of the desired state by repeating the procedure $O(\kappa(A))$ times, which gives the running time stated in Result 1. Note that there is nothing special about multiplying with $A^{-1}$; one can as easily multiply with the matrix $A$, by again performing the SVE procedure and then performing a conditional rotation by an angle proportional to $\overline{\sigma}_i$ instead of its inverse. This way, one gets an algorithm for matrix multiplication with the same guarantee and running time. One important remark here is that the main quantum ingredient is the possibility, given a singular vector of a matrix $A$ (or a coherent superposition thereof), to estimate the corresponding singular value (also coherently). This is basically what the well-known phase estimation procedure does for a unitary matrix. Once we know how to do this, one can perform many different linear algebra operations, including multiplying by the inverse of the matrix (as in the linear systems solvers), multiplying with the matrix or a power of the matrix (as we will need for the iterative method), or even projecting a vector onto some eigenspace of the matrix (as we used in [@KP16]).

### Quantum iterative methods

In our second main result, we provide a new framework for performing first order quantum iterative methods, or quantum gradient descent, for cases where the gradient is an affine function. This includes the case of positive semidefinite linear systems and regularized weighted least squares problems. Before explaining our quantum iterative method, we provide some details about classical iterative methods that are necessary for our description.

#### Classical iterative methods for empirical risk minimization. {#erm}

We define more precisely classical iterative methods in the framework of empirical risk minimization. In this framework we are given $m$ examples from a training set $(x_{i}, y_{i})$, where the variables $x_{i} \in {\mathbb{R}}^{n}$ and the outcomes $y_{i} \in {\mathbb{R}}$. The model is parametrized by $\theta \in {\mathbb{R}}^{n}$ and is obtained by minimizing the following objective function, [$$\begin{aligned} F(\theta) = \frac{1}{m} \sum_{i \in [m]} \ell( \theta, x_{i}, y_{i}) + R(\theta). \end{aligned}$$]{} The loss function $\ell( \theta, x_{i}, y_{i})$ assigns a penalty when the model does not predict the outcome $y_{i}$ well for the example $(x_{i}, y_{i})$, while the regularization term $R(\theta)$ penalizes models with high complexity. We refer to [@BCN16] for a classical overview of empirical risk minimization.
The first order iterative method for problems described by this framework is called gradient descent. The algorithm starts with some $\theta_0 \in {\mathbb{R}}^{n}$, and for $\tau$ steps updates this point via the following update rule: [$$\begin{aligned} \theta_{t+1} = \theta_{t} - \alpha \nabla F(\theta_t) \end{aligned}$$]{} In the end, it outputs $\theta_\tau$, which is guaranteed to be close to the solution for sufficiently large $\tau$. The running time of this method is $\tau C_S$, where $C_S$ is the cost of a single step, in other words the cost of the update. The cost can be much higher than the number of steps for high dimensional problems. A basic example of an optimization problem in the above form is that of solving the linear system $Ax=b$ for a positive semidefinite matrix $A$. The solution to the linear system is the unique minimum of the loss function $F(\theta) = \frac{1}{2}\theta^{T} A \theta - \theta^{T}b$ and can be computed using the gradient descent update. In addition, several well known classical algorithms for regression and classification problems can be expressed in the empirical loss minimization framework. Regression problems correspond to the setting where the outcome $y \in {\mathbb{R}}$ is real valued, and the predicted value for $y_{i}$ is $\theta^{T} x_{i}$. The linear regression or least squares problem corresponds to the loss function $F(\theta) = \frac{1}{m} \sum_{ i \in [m]} (\theta^{T}x_{i} - y_{i})^{2}$; a least squares model thus minimizes the average squared prediction error over the dataset. The $\ell_{2}$-regularized least squares or ridge regression problem and the $\ell_{1}$-regularized least squares or Lasso regression take the regularization term $R(\theta)$ to be $\lambda {\lVert \theta\rVert}_{2}^{2}$ and $\lambda {\lVert\theta\rVert}_{1}$ respectively, and are of considerable importance in machine learning, see for example [@M12]. Classification problems correspond to the setting where the outcomes $y_{i}$ are discrete valued. Many well known classification algorithms, including logistic regression, support vector machines and perceptron algorithms, correspond to different choices of loss functions for the empirical loss minimization framework and can thus be solved using first order methods. One important subclass of the empirical loss minimization framework is when the gradient is an affine function, as for the linear systems, least squares and ridge regression problems. In these cases, the iterative method starts with some $\theta_0$ and updates this point via an update rule of the form, for $t\geq 0$, $$\theta_{t+1} = \theta_t + \alpha r_{t}$$ where $\alpha$ is some scalar that denotes the step size and $r_t$ is an affine function $L$ (that depends on the data) of the current solution $\theta_t$, i.e. $r_{t}= L(\theta_t)=A\theta_t+c$. It is easy to see that this also implies that $r_{t+1}=S(r_{t})$ for a linear operator $S$. Indeed, $$r_{t+1}= L(\theta_{t+1}) = L(\theta_t+\alpha r_t) = A(\theta_t+\alpha r_t) + c = r_t+\alpha Ar_t = S(r_t)$$ The final state of a linear update iterative method can hence be written as [$$\begin{aligned} \label{three} \theta_\tau = \theta_0 + \alpha \sum_{t=0}^{\tau - 1} r_t = \theta_0 + \alpha L(\theta_0) + \alpha \sum_{t=1}^{\tau - 1} S^t(r_0).\end{aligned}$$]{} where $S^t$ is the operator that applies $S$ for $t$ time steps and $S^0$ is the identity operator. We are going to slightly change notation in order to make the presentation clearer. We rename $\theta_0$ as $r_0$, which means that $r_0$ is renamed as $L(r_0)$.
This way, we have $$\theta_\tau = r_0+\alpha L(r_0) + \alpha \sum_{t=1}^{\tau-1} S^t(L(r_0)) = r_0 + \alpha \sum_{t=1}^{\tau} S^{t-1}(L(r_0)).$$ Without loss of generality we assume that the initial point has unit norm, i.e. ${\lVertr_0\rVert}=1$.

#### A quantum gradient descent algorithm with affine updates

We are now ready to explain the main ideas of the quantum algorithm for performing the above iterative method. Let us make things simpler for this exposition by looking at the case where we take $r_0=0$ and $\alpha=1$, meaning we just want to output the state $\ket{\theta_\tau} = \sum_t r_t$. We only make this assumption here to convey the main ideas clearly, and not in the rest of the paper, where we address the most general case. Imagine that there were a procedure that performs the following mapping perfectly $$\ket{t} \ket{\theta_t} \rightarrow \ket{t+1} \ket{\theta_{t+1}}$$ Then, our task would be easy, since applying this unitary $\tau$ times would provide us with the desired state $\ket{\theta_\tau}$. Alas, this is not the case, or at least we do not know how to achieve it. Notice for example that the mapping from $\theta_t$ to $\theta_{t+1}$ is not a unitary transformation, and in fact the norm of $\theta_{t+1}$ could be larger than that of $\theta_t$. Even so, imagine one could in fact perform this mapping with some “probability" (meaning mapping $\theta_t$ to some state $(\beta \ket{\theta_{t+1}}\ket{0} + \sqrt{1-\beta^2}\ket{G}\ket{1})$, for some garbage state $G$). The main issue is that one cannot amplify this amplitude, since the state $\ket{\theta_{t+1}}$ is unknown, being the intermediate step of the iterative method, and in the quantum case we only have a single copy of this state. Hence, the issue with the iterative method is that one needs to perform $\tau$ sequential steps, where each one may have some constant probability of success without the possibility of amplifying this probability. In the end, the probability of getting the desired state is unfortunately proportional to the product of the success probabilities for each step, which drops exponentially with the number of steps $\tau$. This is also the reason previous attempts for a quantum gradient descent algorithm break down after a logarithmic number of steps. Here we manage to overcome this obstacle in the following way. The first idea is to deal with the vectors $r_t$ instead of the $\theta_t$’s, since in this case, we know that the norm of $r_{t+1}$ is smaller than the norm of $r_t$. Our goal would be to find a unitary mapping that, in some sense, maps $r_t$ to $r_{t+1}$. Again, there is the problem that the norms are not equal, but in this case, since the norm of $r_{t+1}$ is smaller, we can possibly make it into a unitary mapping by adding some garbage state. Indeed, we define the quantum step of the quantum iterative method via the following unitary $$\ket{t} {\lVertr_t\rVert}\ket{r_t} \stackrel{{V}}{\rightarrow}\ket{t+1} ( {\lVertr_{t+1}\rVert} \ket{r_{t+1}}\ket{0} + \ket{G}\ket{1}),$$ where the norm of the garbage state is such that the norm of the right hand side is equal to ${\lVertr_t\rVert}$. Note that the above vectors are not unit norm but $V$ is still length preserving. We write it in this way to mimic the mapping of the unnormalized vectors $r_t \rightarrow r_{t+1}$. Since we are dealing with linear updates, the above transformation is basically a matrix multiplication and we use our SVE procedure to perform it with high accuracy.
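Before moving to the second ingredient of the construction, a purely classical sketch may help fix the notation. It runs the affine-update iteration for the linear-system loss (with the conventional $1/2$ normalization) and checks numerically that the final iterate agrees with the expansion $\theta_\tau = r_0 + \alpha \sum_{t=1}^{\tau} S^{t-1}(L(r_0))$ used below; the matrix, step size and number of steps are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau, alpha = 20, 200, 0.05

# A random, well-conditioned positive definite system M theta = b.
# (We call the system matrix M to avoid clashing with the affine map's notation.)
B = rng.standard_normal((n, n))
M = B @ B.T / n + np.eye(n)
b = rng.standard_normal(n)

# Affine update rule: r_t = L(theta_t) = b - M theta_t, i.e. the negative gradient
# of F(theta) = 1/2 theta^T M theta - b^T theta, and theta_{t+1} = theta_t + alpha r_t.
theta = rng.standard_normal(n)
theta /= np.linalg.norm(theta)        # ||r_0|| = 1, as assumed after the renaming
r0 = theta.copy()

for _ in range(tau):
    theta = theta + alpha * (b - M @ theta)

# The same final point via theta_tau = r_0 + alpha * sum_t S^{t-1}(L(r_0)),
# where S(r) = (I - alpha M) r is the linear, contractive map acting on the updates.
S = np.eye(n) - alpha * M
term = b - M @ r0                      # L(r_0)
theta_expanded = r0.copy()
for _ in range(tau):
    theta_expanded += alpha * term
    term = S @ term

print(np.allclose(theta, theta_expanded))                # True: the two forms agree
print(np.linalg.norm(theta - np.linalg.solve(M, b)))     # small for large enough tau
```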
The second idea is noticing that our goal now is not to obtain the final state $r_\tau$, but the sum of all the vectors $\sum_t r_t$. Let us see how to construct this efficiently. Given a procedure for performing one step of the iterative method as above, we design another procedure $U$ that, given as input a time $t$ and the initial state $r_0$, can map $r_0$ to $r_t$. We do this by basically applying the unitary $V$ $t$ times, conditioned on the first register. In other words, we can perform the mapping $$\ket{t,r_0} \stackrel{U}{\rightarrow}\ket{t} ( {\lVertr_t\rVert} \ket{r_{t}}\ket{0} +\ket{G}\ket{1}).$$ Note that if the cost of $V$ is $C_V$, then naively, the cost of $U$ will be $\tau C_V$ by applying $V$ sequentially. We will actually see that in fact we can achieve $C_U = O(C_V+\log \tau)$. We are now ready for the last step of the algorithm, which consists in starting with a superposition of time steps from 0 to $\tau$ and applying $U$, in order to get a superposition of the form $$\frac{1}{\sqrt{\tau}} \sum_t \ket{t} \ket{r_0} \rightarrow \frac{1}{\sqrt{\tau}}\sum_t \ket{t}( {\lVertr_t\rVert} \ket{r_{t}}\ket{0} +\ket{G}\ket{1}).$$ Then, we can “erase” the time register by performing a Hadamard transform on the first register and accepting the result when the first register is 0. In other words, we obtain a state of the form $$\frac{1}{{\tau}}\sum_t {\lVertr_t\rVert} \ket{r_{t}}\ket{0} + \ket{G'}\ket{1}$$ Using Amplitude Amplification, we can get the desired state $\frac{1}{{\lVert\theta_\tau\rVert}}\sum_t {\lVertr_t\rVert} \ket{r_{t}}$, in overall time $O(\frac{\tau}{{\lVert\theta_\tau\rVert}})$ times the cost of applying the unitary $U$, and since in our applications ${\lVert\theta_{\tau}\rVert}=\Omega(1)$ we get the efficient quantum gradient descent algorithm. Given a unitary $V$ that approximately applies one step of the iterative method in time $C_V$, there exists a quantum algorithm that performs $\tau$ steps of the iterative method and outputs a state close to $\theta_\tau$, in time at most $O(\tau C_V)$. Our running time is quadratic in the number of steps the classical iterative method needs to take in order to get close to a solution, times the cost of taking one step of the iterative method, i.e. quantumly implementing the update rule. The updates are performed using SVE and the update cost can be exponentially smaller in the quantum case. Hence we can get quantum algorithms with vastly improved performance, for cases where the number of steps of the classical iterative method is not too large compared to the update cost, such as linear systems and least squares. Let us remark that our algorithm does not try to create all the intermediate states $\theta_t$ of the iterative method, which we do not know how to achieve with non-negligible probability for large $\tau$. Instead, we first see that the final state $\theta_\tau$ is equal to the sum of all the update states $r_t$ and then we try to create the sum of these states in the way we described above: we first go to a superposition of all time steps from $0$ to $\tau$ and then, conditioned on the time being $t$, we apply coherently $t$ updates to the initial state $r_0$ in order to create a sort of “history” quantum state. This is reminiscent of the “history" states in Kitaev’s proof of the QMA-completeness of the Local Hamiltonian problem [@KSV02]. Last, erasing the register that keeps the time can be done in time linear in the number of time steps, which is still efficient.
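The linear algebra behind this "erase the time register" step can be checked directly on a classical computer. The NumPy sketch below is only illustrative (small dimensions, random update vectors, and the garbage state compressed into a single extra coordinate per branch): it builds the history state and verifies that projecting the time register onto the uniform superposition leaves the good branch with amplitude ${\lVert\theta_\tau\rVert}/\tau$, pointing in the direction of $\sum_t r_t$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 8, 16                         # dimension and number of time steps (tau)

# Update vectors r_1, ..., r_T produced by some contractive map S
# (here S is just a random contraction; in the text S comes from the linear update).
S = rng.standard_normal((n, n))
S *= 0.9 / np.linalg.norm(S, 2)      # enforce spectral norm < 1
r = [rng.standard_normal(n)]
r[0] /= np.linalg.norm(r[0])         # ||r_0|| = 1, so all ||r_t|| <= 1
for _ in range(T - 1):
    r.append(S @ r[-1])
theta_T = sum(r)                      # the target vector, sum_t r_t

# History state: (1/sqrt(T)) sum_t |t> ( r_t |0> + g_t |1> ),
# with the garbage amplitude g_t chosen so that each branch has unit norm.
branches = np.zeros((T, n + 1))
for t, rt in enumerate(r):
    branches[t, :n] = rt
    branches[t, n] = np.sqrt(1.0 - np.linalg.norm(rt) ** 2)
psi = branches.flatten() / np.sqrt(T)
assert np.isclose(np.linalg.norm(psi), 1.0)

# Erasing the time register = projecting it onto the uniform superposition.
uniform = np.ones(T) / np.sqrt(T)
projected = uniform @ branches / np.sqrt(T)        # (1/T) sum_t (r_t, g_t)

good_amplitude = np.linalg.norm(projected[:n])     # amplitude of the |0> flag branch
print(np.isclose(good_amplitude, np.linalg.norm(theta_T) / T))                          # True
print(np.allclose(projected[:n] / good_amplitude, theta_T / np.linalg.norm(theta_T)))   # True
```

Since the good amplitude scales as ${\lVert\theta_\tau\rVert}/\tau$, amplitude amplification needs of the order of $\tau/{\lVert\theta_\tau\rVert}$ rounds, as stated above.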
Finally, we note that in all these quantum machine learning algorithms, one needs to use a classical data structure for the input (which can be seen as a matrix or set of vectors) so that the quantum algorithm is able to efficiently create superpositions of rows of the input matrix or of the vectors. While in many cases one just assumes this ability, here we also rigorously describe such a classical data structure, with efficient update and retrieval times, that extends the proposals in [@GR02; @KP16] and allows one to efficiently create the necessary quantum superpositions, as we want our application to be end-to-end in the sense of [@A15].

### Applications of quantum gradient descent

We provide two applications of our quantum gradient descent method.

#### Positive semidefinite linear systems

First, we use it for solving positive semidefinite linear systems, namely, given a positive semidefinite matrix $A$ and a vector $b$, output a state close to $\ket{A^{-1}b}$. Of course, linear systems can be solved directly as we have seen, but we provide the analysis and running time of our gradient descent algorithm in order to compare the two methods. Also, we will see below that in many cases, gradient descent is preferable in practice to direct methods. The error analysis shows that the number of steps we need to perform in order to get $\delta$-close to the solution of the linear system is roughly $O(\kappa(A)\log(1/\delta))$, while in order to keep the error of the final state small, we need to perform the SVE for $I - \alpha A$ (which can be performed using SVE for $A$) with precision $\frac{\delta}{\kappa(A)^2}$, which then takes time $O(\kappa(A)^2 \mu(A)/\delta)$. Let $A|b$ denote the matrix with row $b$ added to $A$. Overall, we have \[Theorem \[qlinsys\] \] Given a positive semidefinite matrix $A$ and a vector $b$ stored in memory, there is an iterative quantum algorithm that outputs a state $\ket{z}$ such that ${\lVert\ket{z} - \ket{A^{-1}b}\rVert} \leq 2\delta$ with expected running time $\tilde{O}(\frac{\kappa(A)^3 \mu(A|b)}{\delta})$. Note that the running time has an extra factor $\kappa(A)$ compared to the direct method we described, while again the algorithm depends linearly on the parameter $\mu(A)$, which is smaller than the sparsity.

#### Stochastic gradient descent for weighted least squares

Our second application is to the weighted least squares problem. For this problem, we are given a matrix $X$ of examples and a corresponding vector $y$ of labels, as well as a vector $w$ of weights, and the goal is to find $\theta$ that minimizes the weighted squared loss $\sum_{i} w_{i} (y_{i} - x_{i}^{T} \theta)^{2}$. One can provide a closed form solution, which is given by $$\theta = (X^{T} W X)^{-1} X^{T} W y$$ where $W$ is the diagonal matrix with the weights $w_{i}$ on its diagonal, and thus the problem a priori can also be solved using a direct method. Quantum algorithms for unweighted least squares problems with a polynomial dependence on sparsity using the direct method were described in [@WBL12]. There are two ways in which we extend this work: first, using our improved SVE algorithm we can perform matrix multiplication and inversion efficiently for a larger class of matrices, and second, we can also solve the weighted version of the problem. We thus extend the results on efficient quantum algorithms for this problem, which has numerous applications in science and engineering.
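The closed form can be verified directly in a few lines of NumPy on synthetic data. The sketch below only illustrates that the gradient of the weighted squared loss is affine in $\theta$ and vanishes at $(X^{T} W X)^{-1} X^{T} W y$; the data, weights and noise level are random, illustrative choices and not meant to model any particular application.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 200, 5                        # many examples, few features

X = rng.standard_normal((m, n))      # examples as rows
theta_true = rng.standard_normal(n)
y = X @ theta_true + 0.1 * rng.standard_normal(m)
w = rng.uniform(0.5, 2.0, size=m)    # positive example weights
W = np.diag(w)

# Closed-form solution theta = (X^T W X)^{-1} X^T W y.
theta_direct = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# This point is the unique minimizer of sum_i w_i (y_i - x_i^T theta)^2,
# so the (affine) gradient X^T W (X theta - y) vanishes there.
grad = X.T @ W @ (X @ theta_direct - y)
print(np.linalg.norm(grad))                          # ~ 0 up to numerical error
print(np.linalg.norm(theta_direct - theta_true))     # small: close to the generating model
```

A stochastic variant, discussed next, would estimate the same affine gradient on a randomly sampled batch of rows of $X$ instead of on all $m$ examples.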
More importantly, we are able to implement an iterative stochastic gradient method for this problem, which has many advantages in practical settings (see for example [@BCN16] for a more detailed discussion). In fact, the least squares problem is used in practice for regression or data fitting, where in many cases this data comes from internet traffic, social networks, or scientific experiments. There, the data matrix is extremely skewed in shape, since the number of data points is orders of magnitude larger than the dimension of the data points. Therefore, it is too expensive to perform any linear algebra operation using the entire data set; moreover, due to redundancy in the data, the gradient can be estimated efficiently over small batches. For these reasons, in practice the gradient is estimated over randomly sampled batches of the training set, an approach which is called stochastic gradient descent. This way, stochastic gradient descent avoids having to perform linear algebra operations on huge matrices, which would be the case if we were to solve the problem directly or use the usual gradient descent. Our quantum iterative method can also be used to perform stochastic gradient descent for the above problems, hence considerably reducing the requirements compared to an algorithm that provides a direct solution. This works in a manner similar to the classical setting: the data is split randomly into batches, and for each step of the iterative method one considers only one batch of data in order to compute the gradient and perform the linear update. Our quantum gradient descent algorithm can be readily adapted to this setting. The main technical difference between the quantum iterative method for linear systems and that for the least squares problem is that in the second case, one needs to perform a matrix inversion and multiplication by a matrix which is not a priori stored in memory. More precisely, we have in memory the matrix $X$, the diagonal matrix $W$ and a vector $y$, and we need to perform matrix multiplication with the matrix $(X^{T} W X)^{-1}$ and also to create the vector $X^TWy$. We show how to do this efficiently and get the following algorithm, where $\sqrt{W} X|y$ is the matrix obtained by adding row $y$ to $\sqrt{W} X$. \[Theorem \[qwlsq\] \] Given access to a semidefinite matrix $X$, a diagonal weight matrix $W$ and a vector $y$, there is a quantum gradient descent algorithm that outputs a state $\ket{z}$ such that ${\lVert\ket{z} - \ket{(X^{T} W X)^{-1} X^TWy}\rVert} \leq 2\delta$ with expected running time $\tilde{O}(\frac{\kappa(X^TWX)^3 \mu(\sqrt{W} X|y)}{\delta})$. Let us remark that while linear updates capture a significant class of iterative methods, a generalization to non-linear update functions would imply the ability to perform a much larger family of algorithms of interest to machine learning, as discussed in section \[erm\]. Also note that it is straightforward to generalize the weighted least squares algorithm to include $\ell_{2}$ regularization. Machine learning algorithms, on the other hand, often use $\ell_{1}$ regularization and in some cases $\ell_{p}$ regularization for $p \in [1,2]$. It would be interesting to find a quantum algorithm for $\ell_{p}$ regularization for $p\neq 2$. The paper is organised as follows: In Section 2, we provide some linear algebra definitions and some basic quantum procedures that we will be using in our algorithms. In Section 3, we define the quantum gradient descent method and analyse its correctness and running time.
In Section 4, we provide the improved SVE procedure and show how to use it to directly solve linear systems of equations and how to perform the linear update of the quantum gradient descent method. Last, in Section 5, we provide two applications of our quantum gradient descent method to linear systems and weighted least squares.

Preliminaries
=============

Linear algebra
--------------

The set $\{ 1, 2, \cdots, n\}$ is denoted by $[n]$, and the standard basis vectors in ${\mathbb{R}}^{n}$ are denoted by $e_{i}, i \in [n]$. For a vector $x \in {\mathbb{R}}^n$ we denote the $\ell_{p}$-norm as ${\lVertx\rVert}_{p} = (\sum_{i} |x_i|^p)^{1/p}$. The Euclidean norm ${\lVertx\rVert}_{2}$ is denoted as ${\lVertx\rVert}$. The rank of a matrix is denoted as $rk(A)$. A matrix is positive semidefinite (psd) if it is symmetric and has non-negative eigenvalues; the notation $A \succeq 0$ indicates that $A$ is a psd matrix. The singular value decomposition of a positive semidefinite matrix $A\in {\mathbb{R}}^{n\times n}$ coincides with its spectral decomposition and is written as $A= \sum_{i} \lambda_{i} v_{i} v_i^T$, where $\lambda_{i} \geq 0$ are the eigenvalues and $v_{i}$ are the corresponding eigenvectors. The singular value decomposition of a general $A\in {\mathbb{R}}^{m\times n}$ is written as $A= \sum_{i} \sigma_{i} u_{i} v_{i}^{T}$, where $\sigma_{i}$ are the singular values and $(u_{i}, v_{i})$ are the singular vectors. The Frobenius norm satisfies ${\lVertA\rVert}_{F}^{2}= \sum_{ij} A_{ij}^{2} = \sum_{i} \sigma_{i}^{2}$, where $\sigma_{i}$ are the singular values. The spectral norm is ${\lVertA\rVert} = \sigma_{max}$, the largest singular value. The condition number is $\kappa(A) = \sigma_{max}/\sigma_{min}$. The $i$-th row of a matrix $A \in {\mathbb{R}}^{m\times n}$ is denoted as $a_{i}$ and the $j$-th column is denoted as $a^{j}$. The $\circ$ operator denotes the Hadamard product, that is, $A = P \circ Q$ implies that $A_{ij} = P_{ij} \cdot Q_{ij}$ for $i \in [m], j \in [n]$. For a matrix $A \in {\mathbb{R}}^{m \times n}$, the maximum $\ell_{p}$ norm of the row vectors, raised to the power $p$, is denoted $s_{p}(A) := \max_{i \in [m]} {\lVerta_{i}\rVert}_{p}^{p}$, and the corresponding quantity for the column vectors is $s_{p}(A^{T})$. The sparsity $s(A)$ is the maximum number of non-zero entries in a row of $A$. The $\widetilde{O}$ notation is used to suppress factors poly-logarithmic in the vector or matrix dimensions, that is, $O(f(n) \text{polylog} (mn))$ is written as $\widetilde{O}( f(n) )$.

Quantum Algorithms
------------------

We will use phase estimation and variants of amplitude amplification that we recall below. The time required to implement a unitary operator $U$ will be denoted by $T(U)$. \[pest\] \[Phase estimation, [@K95]\] Let $U$ be a unitary operator with eigenvectors $\ket{v_j}$ and eigenvalues $e^{ \iota \theta_{j}}$ for $\theta_{j} \in [-\pi, \pi]$. There exists a quantum algorithm with running time $O( T(U) \log n /\epsilon)$ that transforms $\ket{\phi} = \sum_{j \in [n]} \alpha_{j} \ket{v_{j}} \to \sum_{j \in [n]} \alpha_{j} \ket{v_{j}}\ket{ \overline{\theta_{j}} }$ such that $|\overline{\theta_{j}} - \theta_{j} | \leq \epsilon$ for all $j\in [n]$ with probability at least $1-1/\emph{poly}(n)$. We state a version of amplitude amplification and estimation below; more precise statements can be found in [@BHMT00].
\[tampa\] \[Amplitude amplification and estimation, [@BHMT00]\] If there is unitary operator $U$ such that $U\ket{0}^{l}= \ket{\phi} = \sin(\theta) \ket{x, 0} + \cos(\theta) \ket{G, 0^\bot}$ then $\sin^{2}(\theta)$ can be estimated to additive error $\epsilon \sin^{2}(\theta)$ in time $O(\frac{T(U)}{\epsilon \sin(\theta)})$ and $\ket{x}$ can be generated in expected time $O(\frac{T(U)}{\sin (\theta)})$. Last we provide a simple claim that shows that if two unnormalized vectors are close to each other, then their normalized versions are also relatively close. \[unnorm\] Let $\theta$ be the angle between $\phi, \tilde{\phi}$ and assume that $\theta< \pi/2$. Then, ${\lVert \phi - \tilde{\phi} \rVert} \leq \epsilon$ implies ${\lVert \ket{\phi} - \ket{ \tilde{\phi} }\rVert} \leq \frac{ \sqrt{2} \epsilon }{ {\lVert\phi\rVert} }.$ We bound the $\ell_{2}$ distance ${\lVert \ket{\phi} - \ket{ \tilde{\phi}}\rVert}$ using the following argument. Let $\theta$ be the angle between $\phi, \tilde{\phi}$. For the unnormalized vectors we have ${\lVert \phi -\tilde{\phi} \rVert} \leq \epsilon$, and assuming that $\theta< \pi/2$ we have $\epsilon \geq {\lVert\phi\rVert} \sin (\theta)$. The distance between the normalized states can thus be bounded as, [$$\begin{aligned} \label{norm0} {\lVert \ket{\phi} - \ket{ \tilde{\phi} }\rVert}^{2} = (2 \sin (\theta/2))^{2} \leq 2\sin^{2} (\theta) \leq \frac{2 \epsilon^{2}} { {\lVert\phi\rVert}^{2}} \end{aligned}$$]{} The access model for quantum machine learning applications ---------------------------------------------------------- Quantum algorithms for linear algebra require quantum access to the matrices being manipulated, and most prior research in the literature works in the model of oracle access to the matrix entries, that is quantum queries of the form $\ket{i, j, 0} \to \ket{i,j,a_{ij}}$ are allowed. Such an access model can be particularly helpful in cases where the matrix is structured so that $a_{ij}$ is a simple function of $(i,j)$, for example it can be used to represent well structured matrices of even exponential size. The matrices that arise in machine learning applications do not have such structure, since they arise from empirical datasets and are represented by a list of entries $(i,j, a_{ij})$. There is no succinct way to compute $a_{ij}$ from $i,j$ and thus even to implement the quantum queries of the form $\ket{i, j, 0} \to \ket{i,j,a_{ij}}$, an implicit data structure must be stored in memory. In the machine learning setting, there is no a priori reason to restrict ourselves to the model with black box access to matrix entries, in particular we can modify the data structure storing the matrix if it provides an algorithmic speedup. We also note that prior work on lower bounds has been in the model with quantum access to the matrix entries and does not apply to the setting where one has quantum access to a different data structure instead. In our work, we make the data structure explicit and ensure that it has poly-logarithmic insertion and update times, which is the best that one could hope for. Our access model is particularly useful for data that is acquired over time and in settings where it is important that the new elements can be efficiently added to the data structure. In such settings it would be prohibitive to make $poly(m, n)$ time preprocessing after the entire matrix has been stored or each time a new element comes into the data structure. 
For these reasons, we believe that our access model is well suited for quantum machine learning applications. The Quantum Gradient Descent algorithm ====================================== In this section we provide the definition of a quantum step of the quantum gradient descent in the case of a linear update rule and then describe the full quantum procedure that performs the quantum iterative method. The quantum step ---------------- We assume that the classical iterative method has an update rule such that $$\theta_\tau = r_0 + \alpha \sum_{t=1}^{\tau} S^{t-1}(L(r_0))$$ for an affine operator $L$, a linear, contractive operator $S$, and a random initial vector $r_0$ with ${\lVertr_0\rVert}=1$. This is the case, for example, for solving linear systems or least squares. First we define the notion of an approximate quantum step of the quantum iterative method. Let us denote by $\tau$ the number of steps of the classical iterative algorithm, and let $\tau +1= 2^\ell$ (if not, just increase $\tau$ to the next value for which this holds). \[31\] The $(\epsilon, \delta)$-approximate quantum step algorithm is a unitary $V$ such that for any $1 \leq t \leq \tau-1$, $$\begin{aligned} V & : \ket{0} \ket{r_0} \ket{0}\rightarrow \ket{1} \left( \alpha {\lVert\tilde{L}(r_0)\rVert} \ket{\tilde{L}(r_0)} \ket{0} + \ket{G_1}\ket{1} \right) \\ &: \ket{t} {\lVertr_t\rVert} \ket{r_t} \ket{0} \rightarrow \ket{t+1} \left( {\lVert\tilde{S}(r_{t})\rVert} \ket{\tilde{S}(r_{t})}\ket{0} + \ket{G_{t+1}}\ket{1} \right),\end{aligned}$$ where $\ket{G_{t+1}}$ is an unnormalised garbage state, $\tilde{S}$ is an approximation to the linear, contractive operator $S:r_t \rightarrow r_{t+1}$, and $\tilde{L}$ is an approximation to the affine, contractive operator $L:r_0 \rightarrow r_{1}$, in the sense that with probability $\geq 1-\delta$, it holds for any $r_t$ that $ {\lVert S(r_{t}) - \tilde{S}(r_{t}) \rVert} \leq \epsilon $ and also ${\lVert L(r_0) - \tilde{L}(r_0)\rVert} \leq \epsilon $. Notice that ${\lVert\tilde{L}(r_0)\rVert}$ might be larger than 1, but by taking $\alpha$ a small constant we have $\alpha {\lVert\tilde{L}(r_0)\rVert} \leq 1$. As we have defined it, $V$ is norm preserving, but the vectors written above are not unit vectors; we could of course normalize them by dividing both parts by ${\lVertr_t\rVert}$. We prefer this notation because it more closely resembles the classical mapping of the unnormalised vectors $r_t \rightarrow r_{t+1}$. We can define the following procedure $U$ similarly to the ideal case. \[U\] Given access to the $(\epsilon, \delta)$-approximate quantum step algorithm $V$ (with running time $C_V$), there exists a quantum procedure $U$ with running time at most $O(\tau C_V)$, such that for any $t \in [\tau]$, $$\begin{aligned} U & : \ket{0} \ket{0} \ket{0} \ket{r_0} \ket{0} \rightarrow \ket{0} \ket{0} \ket{0} \ket{r_0} \ket{0} \\ & : \ket{t} \ket{0} \ket{0} \ket{r_0} \ket{0} \rightarrow \ket{t} \ket{0} \left( \alpha {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{t} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))}\ket{0} + \ket{G'_t}\ket{1} \right),\end{aligned}$$ where $\ket{G'_t}$ is an unnormalised garbage state, and with probability at least $(1-t\delta)$ it holds that $ {\lVert S^{t-1}(L(r_0)) - \tilde{S}^{t-1}(\tilde{L}(r_{0})) \rVert} \leq t \epsilon$. Note that out of the five registers used for the iterative method, registers 1 and 3 store the time step, register 4 stores the quantum state for the iterative method, while registers 2 and 5 are control qubits or flags.
We define the operator $W$ on four registers, such that if the control register is 0, then it applies a $V$ on the other three registers and then a CNOT to copy the last register into the control register. If the control register is 1, then it does nothing. Namely $$\begin{aligned} W & : \ket{0} \ket{0} \ket{r_0} \ket{0} \rightarrow \ket{0} \ket{1} \alpha {\lVert\tilde{L}(r_0)\rVert} \ket{\tilde{L}(r_0)} \ket{0} + \ket{1}\ket{1}\ket{G_1}\ket{1} \\ & : \ket{0} \ket{t} {\lVertr_t\rVert} \ket{r_t} \ket{0} \rightarrow \ket{0} \ket{t+1} {\lVert\tilde{S}(r_{t})\rVert} \ket{\tilde{S}(r_{t})}\ket{0} + \ket{1} \ket{t+1} \ket{G_{t+1}}\ket{1}, \;\;\; t \in [1,\tau-1]\\ & : \ket{1} \ket{t} \ket{b} \ket{1} \; \rightarrow \ket{1} \ket{t} \ket{b} \ket{1} \;\;\; t \in [0,\tau-1]\end{aligned}$$ We define the following procedure $U$ that acts as Identity for $t=0$ and for $t \in [\tau-1]$ it does the following: $$\begin{aligned} \ket{t} \ket{0} \ket{0}\ket{r_0} \ket{0} & \stackrel{W^t}{\rightarrow} & \ket{t} W^t \ket{0} \ket{0} \ket{r_0} \ket{0} \\ & = & \ket{t} \ket{0} \ket{t} \alpha {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))}\ket{0} + \ket{t} \ket{1} \sum_{i=1}^{t} \ket{i} \ket{G_i} \ket{1} \\ & \stackrel{CNOT_{5,2}}{\rightarrow} & \ket{t} \ket{0} \left( \alpha {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{t} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))} \ket{0} + \ket{G'_t}\ket{1} \right)\\\end{aligned}$$ The equality on line 2 follows from the definition of $W$. We prove the properties by induction on $t$. For $t=1$ we use the definition of the quantum step $V$ and the property holds. Assume it holds for $t-1$, i.e. with probability $1-(t-1)\delta$ we have ${\lVert S^{t-2}(L(r_{0})) - \tilde{S}^{t-2}(\tilde{L}(r_0)) \rVert} \leq (t-1)\epsilon $. Then, we have $$\begin{aligned} {\lVert S^{t-1}(L(r_0)) - \tilde{S}^{t-1}(\tilde{L}(r_{0})) \rVert} & \leq & {\lVert S(S^{t-2}(L(r_0))) - S( \tilde{S}^{t-2}(\tilde{L}(r_{0}))) \rVert} + {\lVert S( \tilde{S}^{t-2}(\tilde{L}(r_{0}))) - \tilde{S}( \tilde{S}^{t-2}(\tilde{L}(r_{0}))) \rVert}\\ & \leq & {\lVert S^{t-2}(L(r_0)) - \tilde{S}^{t-2}(\tilde{L}(r_{0})) \rVert} + {\lVert S( \tilde{S}^{t-2}(\tilde{L}(r_{0}))) - \tilde{S}( \tilde{S}^{t-2}(\tilde{L}(r_{0}))) \rVert}\end{aligned}$$ where we used the fact that ${S}$ is contractive. Also, by definition of the iterative step, with probability $(1-\delta)$ we have $ {\lVert S( \tilde{S}^{t-2}(\tilde{L}(r_{0}))) - \tilde{S}( \tilde{S}^{t-2}(\tilde{L}(r_{0}))) \rVert} \leq \epsilon $ and with probability $1-(t-1)\delta$, by induction hypothesis, we have ${\lVert S^{t-2}(L(r_{0})) - \tilde{S}^{t-2}(\tilde{L}(r_0)) \rVert} \leq (t-1)\epsilon $. Hence overall, with probability at least $1-t\delta$, we have $${\lVert S^{t-1}(L(r_0)) - \tilde{S}^{t-1}(\tilde{L}(r_{0})) \rVert} \leq t \epsilon .$$ The Quantum Iterative Method algorithm: general case ---------------------------------------------------- Again, we use Amplitude Amplification to optimize the running time of our method. The main part is the efficient construction of the necessary unitary $Q$.\ [**The Quantum Iterative Method**]{} Use Amplitude Amplification and Estimation with unitary $Q$ $$Q : \ket{0}^\ell \rightarrow \frac{1}{T} \ket{\tilde{\theta}_\tau}\ket{0} + \ket{G}\ket{0^\bot}$$ to output $\ket{\tilde{\theta}_\tau}$ and ${\lVert\tilde{\theta}_\tau\rVert}$.\ [**The unitary** ]{}$Q : \ket{0}^\ell \rightarrow \frac{1}{T} \ket{\tilde{\theta}_\tau}\ket{0} + \ket{G}\ket{0^\bot}$\ 1. 
Create the state $\frac{1}{\sqrt{\tau+1}} \sum_{t=0}^{\tau} \ket{t} \ket{0}\ket{0} \ket{r_0} \ket{0}$ 2. Apply the unitary procedure $U$ and trace out the second register to get $$\frac{1}{\sqrt{\tau+1}} \ket{0}\ket{0} \ket{r_0} \ket{0} + \frac{1}{\sqrt{\tau+1}} \sum_{t=1}^{\tau} \ket{t} \left( \alpha {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{t} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))}\ket{0} + \ket{G'_t}\ket{1} \right)$$ 3. Conditioned on the last register being $0$, perform a $CNOT_{1,2}$ to erase the second copy of $t$ and then by exchanging the place of the second and third register we get $$\frac{1}{\sqrt{\tau+1}} \ket{0} \ket{r_0} \ket{0} \ket{0} + \frac{1}{\sqrt{\tau+1}} \sum_{t=1}^{\tau} \ket{t} \left( \alpha {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))}\ket{0}\ket{0} + \ket{G'_t}\ket{1} \right)$$ 4. Conditioned on the last register being $0$ perform a Hadamard on the first register and then by exchanging the place of the first and second register we get $$\frac{1}{\tau+1} \sum_{y=0}^{\tau} \left( \ket{r_0} + \alpha \sum_{t=1}^{\tau} (-1)^{y \cdot t} {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))} \right) \ket{y}\ket{0} \ket{0} + \ket{G^{\prime\prime}}\ket{1} =$$ $$\frac{{\lVert\tilde{\theta}_\tau\rVert}}{\tau+1} \left( \frac{1}{{\lVert\tilde{\theta}_\tau\rVert}} \left( \ket{r_0} + \alpha \sum_{t=1}^{\tau} {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))} \right) \right) \ket{0}\ket{0} \ket{0} + \ket{G} ( \ket{0}\ket{0}\ket{0})^\bot =$$ $$\frac{1}{T}\ket{\tilde{\theta}_\tau}\ket{0} + \ket{G}\ket{0^\bot} $$ with $\ket{\tilde{\theta}_\tau} = \frac{1}{{\lVert\tilde{\theta}_\tau\rVert}} \left( \ket{r_0} + \alpha \sum_{t=1}^{\tau} {\lVert\tilde{S}^{t-1}(\tilde{L}(r_{0}))\rVert} \ket{\tilde{S}^{t-1}(\tilde{L}(r_{0}))} \right) $ and $T = \frac{ \tau+1}{{\lVert\tilde{\theta}_\tau\rVert}} $. Analysis -------- ### Correctness We just need to calculate how close the final state of our algorithm is to the correct state $\ket{{\theta}_\tau} = \frac{1}{{\lVert{\theta}_\tau\rVert}} \left( \ket{r_0} + \alpha \sum_{t=1}^{\tau} {\lVert{S}^{t-1}({L}(r_{0}))\rVert} \ket{{S}^{t-1}({L}(r_{0}))} \right)$. We first look at the non-normalised distance and have $${\lVert\theta_\tau - \tilde{\theta}_\tau\rVert} \leq \alpha \sum_{t=1}^{\tau} {\lVertS^{t-1}(L(r_0)) - \tilde{S}^{t-1}(\tilde{L}(r_0))\rVert} \leq \alpha \sum_{t=1}^{\tau} t \epsilon \leq \alpha \tau^2 \epsilon$$ Then by Claim \[unnorm\] we have $${\lVert\ket{\theta_\tau} - \ket{\tilde{\theta}_\tau}\rVert} \leq \frac{\sqrt{2} \alpha \tau^2 \epsilon}{{\lVert\theta_\tau\rVert}}.$$ For the norms, Amplitude Estimation will output the norm ${\lVert\tilde{\theta}_\tau\rVert}$ within any constant error (by increasing the running time by a constant factor) and note that ${\lVert\tilde{\theta}_\tau\rVert}$ is $(\alpha \tau^2 \epsilon)$-close to ${\lVert\theta_\tau\rVert}$. Hence by taking $\epsilon=O(\frac{1}{\tau^2})$ appropriately small, the approximation of the norm can be made to have some small constant error. We note that in the applications we will consider, ${\lVert\theta_\tau\rVert}$ is at least $\Omega(1)$ and at most $O(\tau)$ and $\alpha=O(1)$. Hence, again, by taking $\epsilon=O(\frac{1}{\tau^2})$ appropriately small, we can make this distance less than some small constant. 
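The error-accumulation bound above is purely classical, so it can be checked directly. Below is a minimal numpy sketch (a toy instance of our own choosing, not taken from the paper): it builds a contractive $S = I - \alpha A$ and an affine $L(x) = b - Ax$, applies perturbed versions of them whose per-step error is at most $\epsilon$, and verifies that the unrolled iterate $\tilde{\theta}_\tau$ stays within $\alpha \tau^{2} \epsilon$ of $\theta_\tau$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy psd matrix A with eigenvalues in [1/kappa, 1], so that S = I - alpha*A and
# L(x) = b - A x are contractive (all values here are illustrative choices of ours).
n, kappa, alpha, eps, tau = 6, 10.0, 0.01, 1e-6, 32
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0 / kappa, 1.0, n)) @ Q.T
b = rng.standard_normal(n); b /= np.linalg.norm(b)
r0 = rng.standard_normal(n); r0 /= np.linalg.norm(r0)

S = np.eye(n) - alpha * A

def L(x):
    return b - A @ x

def noisy(v):
    """Return v plus a perturbation of norm exactly eps (one approximate application)."""
    g = rng.standard_normal(v.shape)
    return v + eps * g / np.linalg.norm(g)

# Exact and approximate unrolled iterates theta_tau = r0 + alpha * sum_t S^{t-1}(L(r0)).
exact_term, approx_term = L(r0), noisy(L(r0))
theta, theta_tilde = r0 + alpha * exact_term, r0 + alpha * approx_term
for t in range(2, tau + 1):
    exact_term = S @ exact_term            # S^{t-1}(L(r0))
    approx_term = noisy(S @ approx_term)   # tilde-S applied to the previous approximation
    theta += alpha * exact_term
    theta_tilde += alpha * approx_term

err = np.linalg.norm(theta - theta_tilde)
print(err, alpha * tau**2 * eps, err <= alpha * tau**2 * eps)  # the bound from the text holds
```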
### Running time {#itertime} The expected running time is the expected running time of the Amplitude Amplification (which is the same as for Amplitude Estimation for constant error), which is $T$ times the cost of implementing the unitary $Q$, which is $O(C_U+\log\tau)$. Overall, the expected running time is $O(T (C_U+\log\tau))$. As we said, in our applications we will have ${\lVert\theta_\tau\rVert}=\Omega(1)$, which also implies that ${\lVert\tilde{\theta}_\tau\rVert}\geq {\lVert\theta_\tau\rVert} - \alpha \tau^2 \epsilon=\Omega(1)$ for appropriately small $\epsilon$. Hence, the running time for the applications will be $O(\tau (C_U+\log\tau))$. In the worst case, $C_U$ could be at most $\tau C_V$, but we will see that in fact in many cases we can implement $U$ with basically the same cost as $V$ (i.e. $C_U=O(C_V+\log \tau)$) and hence get an overall running time $O(\tau (C_V+\log \tau))$; the proof is given in Section \[istep\]. Improved quantum algorithms for matrix multiplication and linear systems ======================================================================== In sections \[ds\] and \[isve\] we generalize the data structure and quantum algorithm used for singular value estimation in [@KP16], obtaining an improvement in the running time for several classes of matrices. We use the improved singular value estimation algorithm for solving quantum linear systems and quantum matrix multiplication in section \[lsmul\]. Finally, in section \[istep\] we show how to implement a single step of the quantum iterative method described in Section 3. The data structure {#ds} ------------------ We first define normalized states and then describe a data structure that enables efficient preparation of the normalized states corresponding to the rows/columns of a matrix. The normalized vector state corresponding to a vector $x \in {\mathbb{R}}^{n}$ and $M \in {\mathbb{R}}$ such that ${\lVertx\rVert}_{2}^{2} \leq M$ is the quantum state $\ket{\overline{x}} = \frac{1}{\sqrt{M}} {\left ( \sum_{i \in [n]} x_{i} \ket{i} + (M-{\lVertx\rVert}_{2}^{2} )^{1/2} \ket{n+1} \right )}$. We work in a computational model where the entries of the matrix $A$ arrive in an online manner and are stored in a classical data structure to which a quantum algorithm has quantum access. This is the normal quantum query model, used for example for Grover’s algorithm. The insertion and update times for the data structure are poly-logarithmic per entry. The time required to construct the data structure is $O(w \log^{2}mn)$ where $w$ is the number of non-zero entries in $A$. We have the following theorem. \[dsplus\] Let $A \in {\mathbb{R}}^{m\times n}$ and $M= \max_{i \in [m]} {\lVerta_{i}\rVert}^{2}$. There is an algorithm that, given a list of matrix entries $(i, j, a_{ij})$ of length $w$, creates a data structure of size $O(w \log mn)$ in time $O( w \log^{2}mn)$ such that a quantum algorithm with quantum access to the data structure can implement the following unitary in time $\widetilde{O}(\log (mn) )$. [$$\begin{aligned} \label{nscreate} U \ket{ i, 0^{\lceil \log (n+1) \rceil} } = \ket{i} \frac{1}{\sqrt{M}} {\left ( \sum_{j \in [n]} a_{ij} \ket{j} + (M - {\lVerta_{i}\rVert}^{2})^{1/2}\ket{n+1} \right )} \end{aligned}$$]{} The data structure maintains an array of $m$ binary trees $B_{i}, i \in [m]$, one for each row of the matrix. The leaf node $j$ of tree $B_{i}$, if present, stores $(a_{ij}^{2}, sign(a_{ij}))$. An internal node $u$ stores the sum of the values of the leaf nodes in the subtree rooted at $u$.
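As an illustration of the classical storage layer used in this theorem, the following sketch (our own simplified implementation, not the authors' code) keeps one heap-indexed binary tree per row with squared entries at the leaves, subtree sums at internal nodes, and a running maximum squared row norm; the update procedure and the unitary $U$ built on top of these values are described in the next paragraphs.

```python
import math
from collections import defaultdict

class RowTreeStorage:
    """Classical sketch of the storage described above: leaf j of tree i holds
    (a_ij^2, sign(a_ij)), every internal node holds the sum of the squared entries
    in its subtree, and a separate register M tracks the maximum squared row norm."""

    def __init__(self, n_cols):
        self.depth = max(1, math.ceil(math.log2(n_cols)))
        self.trees = defaultdict(dict)   # row i -> {heap node index -> subtree sum}
        self.signs = defaultdict(dict)   # row i -> {column j -> sign(a_ij)}
        self.M = 0.0

    def update(self, i, j, a_ij):
        """Insert or overwrite entry (i, j, a_ij); only the leaf-to-root path is touched."""
        tree = self.trees[i]
        node = (1 << self.depth) + j            # leaf index in a heap-style layout
        delta = a_ij ** 2 - tree.get(node, 0.0)
        self.signs[i][j] = 1.0 if a_ij >= 0 else -1.0
        while node >= 1:                        # walk from the leaf up to the root
            tree[node] = tree.get(node, 0.0) + delta
            node //= 2
        self.M = max(self.M, tree[1])           # tree root = squared norm of row i

    def row_norm_sq(self, i):
        return self.trees[i].get(1, 0.0)

    def children(self, i, node):
        """Values B_{i,2k}, B_{i,2k+1} needed for one conditional rotation."""
        t = self.trees[i]
        return t.get(2 * node, 0.0), t.get(2 * node + 1, 0.0)

# Example: stream a few entries and read off the quantities used by the unitary U.
store = RowTreeStorage(n_cols=4)
for (i, j, a) in [(0, 0, 0.5), (0, 2, -1.0), (1, 1, 2.0)]:
    store.update(i, j, a)
print(store.M, store.row_norm_sq(0), store.children(0, 1))
```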
In addition, there is an extra node $M$ that, at any instant of time, stores the maximum row norm $M=\max_{i \in [m]} {\lVerta_{i}\rVert}^{2} $ for the matrix $A$ currently stored in the data structure. The data structure is initially empty and the value stored in node $M$ is $0$. We next describe the update when entry $(i, j,a_{ij})$ is added to the data structure, and then the procedure for implementing the unitary $U$ using the data structure. The algorithm, on receiving entry $(i, j, a_{ij})$, creates the leaf node $j$ in tree $B_{i}$ if not present and updates it otherwise. Then, it creates or updates the value of all nodes in the path between the leaf and the root of the tree. The update requires time $O(\log^{2} mn)$ as at most $O(\log n)$ nodes on the path from node $j$ to the root in the tree $B_{i}$ are updated and each update requires time $O(\log mn)$ to find the address of the node being updated. At the end of each update the root of $B_{i}$ stores the squared norm ${\lVerta_{i}\rVert}^{2}$ for the vector stored in $B_{i}$. The algorithm compares $M$ with ${\lVerta_{i}\rVert}^{2}$ and updates the maximum if ${\lVerta_{i}\rVert}^{2} > M$; this additional step requires time $O(\log n)$. The time needed to construct the data structure is $O(w \log^{2}mn)$ as there are $w$ updates each taking $O(\log^{2}mn)$ time, and the space required is $O(w \log mn)$. After the data structure has been created, the value stored in node $M$ is $\max_{i \in [m]} {\lVerta_{i}\rVert}^{2}$. In order to implement $U$ we first perform a controlled rotation using the values at $M$ and the root of $B_{i}$ and tag the part of the superposition with value $\ket{n+1}$, [$$\begin{aligned} \label{tag} \ket{ i, 0^{\lceil \log (n+1) \rceil } } \to \ket{i} \frac{1}{ \sqrt{M}} \left( {\lVerta_{i}\rVert} \ket{0^{\lceil \log (n+1) \rceil }} \ket{0} + (M - {\lVerta_{i}\rVert}^{2} )^{1/2} \ket{n+1} \ket{1} \right) \end{aligned}$$]{} We then proceed similarly to the construction in [@KP16]. Let $B_{i,k}$ be the value of an internal node $k$ of tree $B_{i}$ at depth $t$. We apply a series of conditional rotations to the second register, conditioned on the first register being $\ket{i}$, the first $t$ qubits of the second register being $\ket{k}$ and the tag qubit being $\ket{0}$; the rotation applied is: $$\ket{i} \ket{k} \ket{0} \to \ket{i}\ket{k} \frac{1}{\sqrt{ B_{i, k}}} \left( \sqrt{ B_{i,2k} } \ket{0} + \sqrt{ B_{i,2k+1} } \ket{1} \right)$$ We take positive square roots except for the leaf nodes, where the sign of the square root is the same as the $sign(a_{ij})$ of the entry stored at the leaf node. The tag qubit is uncomputed after all the conditional rotations have been performed by mapping $\ket{n+1}\ket{1}$ to $\ket{n+1}\ket{0}$. Correctness follows since, conditioned on the tag qubit being $\ket{0}$, the conditional rotations produce the state $\frac{1}{ \sqrt{M}} \sum_{j} a_{ij} \ket{j}$ and the amplitude for the tagged part is $\sqrt{ (M - {\lVerta_{i}\rVert}^{2})/M } $, matching the amplitudes in equation \[nscreate\]. The time for implementing $U$ is $\widetilde{O}(\log mn)$ as the number of quantum queries to the data structure is $O(\log mn)$ and each query takes poly-logarithmic time. Improved Singular Value Estimation {#isve} ---------------------------------- We first recall the notion of singular value estimation. Let $A= \sum_{i} \sigma_{i} u_{i} v_{i}^{t}$ be the singular value decomposition of the matrix $A \in {\mathbb{R}}^{m \times n}$.
A quantum algorithm estimates the singular values of $A$ with precision $\delta$ if it transforms $\sum_{i} \beta_{i} \ket{v_{i}} \to \sum_{i} \beta_{i} \ket{v_{i}} \ket{\overline{\sigma_{i}}}$ where $|\overline{\sigma_{i}} - \sigma_{i}|\leq \delta$ for all $i \in [n]$ with probability $1-1/poly(n)$. The theorem below provides a generalized quantum walk algorithm for singular value estimation. It extends the $SVE$ algorithm from [@KP16] and the quantum walk algorithms used for linear systems, for example [@CKS15]. \[sveplus\] Let $A \in {\mathbb{R}}^{m\times n}$ be a matrix and suppose there exist $P, Q \in {\mathbb{R}}^{m\times n}$ and $\mu >0$ such that ${\lVertp_{i}\rVert}_{2} \leq 1 \; \forall i \in [m], \;{\lVertq^{j}\rVert}_{2} \leq 1 \; \forall j \in [n]$ and [$$\begin{aligned} \label{hadamard} A/\mu = P \circ Q. \end{aligned}$$]{} If unitaries $U: \ket{i} \ket{0^{\lceil \log (n+1) \rceil}} \to \ket{i} \ket{\overline{p}_{i}}$ and $V: \ket{0^{\lceil \log (m+1) \rceil}} \ket{j} \to \ket{\overline{q}^{j}} \ket{j}$ can be implemented in time $\widetilde{O} (\log (mn))$ then there is a quantum algorithm that estimates the singular values of $A$ with precision $\delta$ in time $\widetilde{O}(\mu/\delta)$. Let $\overline{P}, \overline{Q} \in {\mathbb{R}}^{(m+1) \times (n+1)}$ be matrices with rows and columns respectively equal to the normalized states $\overline{p}_{i}, \overline{q}^{j}$ for $i\in [m], j\in [n]$ and an additional row or column $\overline{p}_{m+1} = e_{n+1}, \overline{q}^{n+1} = e_{m+1}$. Let $\overline{A}= \left ( \begin{matrix} A &0 \\ 0 &\mu \end{matrix} \right )$ be an extension of $A$ of size $(m+1) \times (n+1)$ so that the factorization $\overline{A}/\mu = \overline{P} \circ \overline{Q}$ holds. The singular value decomposition of $\overline{A}$ is $\sum_{i} \sigma_{i} \overline{u}_{i} \overline{v}_{i}^{t} + \mu \, e_{m+1} e_{n+1}^{t}$ where $\sigma_{i}$ are the singular values of $A$ and the singular vectors $\overline{u}_{i}, \overline{v}_{i}$ are obtained by appending an additional $0$ coordinate to the singular vectors $u_{i}, v_{i}$ of $A$. The operators $\widetilde{P} \in {\mathbb{R}}^{(m+1)(n+1) \times (m+1)}, \widetilde{Q} \in {\mathbb{R}}^{(m+1)(n+1) \times (n+1)}$ are defined as follows, [$$\begin{aligned} \label{defpq} \widetilde{P} \ket{i} &= \ket{i} \ket{ \overline{p}_{i}}, \;\;\; \widetilde{Q} \ket{j} = \ket{ \overline{q}^{j} } \ket { j}. \end{aligned}$$]{} The columns of $\widetilde{P}, \widetilde{Q}$ are orthogonal unit vectors so we have $\widetilde{P}^{t} \widetilde{P} = I_{m+1}$ and $\widetilde{Q}^{t} \widetilde{Q}=I_{n+1}$. Multiplication by $\widetilde{P}, \widetilde{Q}$ can be implemented efficiently using the unitaries $U$, $V$ in the theorem statement; we illustrate below for multiplication by $\widetilde{P}$. [$$\begin{aligned} \ket{z} \to \ket{ z, 0^{\lceil \log (n+1) \rceil} } \xrightarrow{\widetilde{U}} \sum_{i \in [m+1]} z_{i} \ket{ i, \overline{p}_{i} } = \ket{ \widetilde{P} z} \end{aligned}$$]{} The unitary $\widetilde{U}$ acts as $U$ conditioned on $0\leq i \leq m$ and maps $\ket{0^{\lceil \log (n+1) \rceil}} \to \ket{e_{n+1}}$ for $i=m+1$. Multiplication by $\widetilde{Q}$ can be implemented similarly using $\widetilde{V}$, thus the reflections $2\widetilde{P}\widetilde{P}^{t} -I$ and $2\widetilde{Q}\widetilde{Q}^{t} -I$ can be performed in time $\widetilde{O}(\log (mn))$. Finally, the factorization $\widetilde{P}^{t}\widetilde{Q} = \overline{A}/\mu$ implies that the unitary
$W= (2\widetilde{P}\widetilde{P}^{t} -I) \cdot (2\widetilde{Q}\widetilde{Q}^{t} -I)$ has eigenspaces $Span(\widetilde{P} \overline{u}_{i}, \widetilde{Q} \overline{v}_{i})$ with eigenvalues $e^{ \iota \theta_{i}}$ such that $\cos(\theta_{i}/2) = \sigma_{i}/\mu$, and hence phase estimation for $W$ on $\ket{\widetilde{Q}\overline{v}_{i}}$ recovers an estimate of $\sigma_i$ up to additive error $\delta$ in time $\widetilde{O}(\mu/\delta)$. Theorem \[sveplus\] holds for any choice of $P, Q$ such that $A/\mu = P \circ Q$, provided the unitaries $U$ and $V$ can be implemented efficiently in time $\widetilde{O}(\log mn)$, that is, if the normalized states corresponding to the rows of $P$ and the columns of $Q$ can be prepared efficiently. We show that the data structure in Theorem \[dsplus\] allows us to implement $U, V$ for the following choice of $P$ and $Q$. [$$\begin{aligned} \label{fact} p_{ij} = \frac{ a_{ij}^{p} } { \sqrt{ \max_{i} {\lVerta_{i}\rVert}_{2p}^{2p} } }, \;\;\;\;\; q_{ij} = \frac{ a_{ij}^{1-p} } { \sqrt{ \max_{j} {\lVerta^{j}\rVert}_{2(1-p)}^{2(1-p)} } }\end{aligned}$$]{} Indeed, in order to implement the unitaries $U$ and $V$ corresponding to this choice of $P, Q$, we create two copies of the data structure in Theorem \[dsplus\] that respectively store the rows and the columns of $A$. Given entry $(i, j, a_{ij})$, instead of $a_{ij}$ we store $a_{ij}^{p}$ and $a_{ij}^{1-p}$ in the two data structures. Then, Theorem \[dsplus\] implies that the unitaries $U,V$ can be implemented efficiently. We therefore obtain quantum algorithms for singular value estimation that are parametrized by $p \in [0,1]$. The normalization factors are $\mu_{p}(A) = \sqrt{ s_{2p}(A) s_{2(1-p)}(A^{t})}$, where we denote by $s_{p}(A) := \max_{i \in [m]} {\lVerta_{i}\rVert}_{p}^{p}$ the maximum $p$-th power of the row $\ell_{p}$ norms, and by $s_{p}(A^{T})$ the corresponding quantity for the column vectors. Note that the $SVE$ algorithm [@KP16] corresponds to the choice $p_{ij} = \frac{ a_{ij} } { {\lVerta_{i}\rVert} }$ and $q_{ij} = \frac{ {\lVerta_{i}\rVert} } { {\lVertA\rVert}_{F} }$ and has $\mu(A)={\lVertA\rVert}_{F}$. \[sveplus1\] Let $A \in {\mathbb{R}}^{m\times n}$ be stored in the data structure of Theorem \[dsplus\]. There is a quantum algorithm that performs singular value estimation for $A$ to precision $\delta$ in time $\widetilde{O}(\mu(A)/\delta)$ where $\mu(A) = \min_{p\in [0,1]} \left( {\lVertA\rVert}_{F}, \sqrt{s_{2p}(A) s_{2(1-p)}(A^{t})}\right)$. Note that the optimal value of $p$ depends on the matrix $A$. One important case is when we consider a symmetric matrix $A$ and $p=1/2$, where we obtain $\mu(A) \leq s_{1}(A)$, showing that $\mu(A)$ is at most the maximum $\ell_{1}$-norm of the row vectors. The generalized singular value estimation is stated next as Algorithm \[gensve\], using notation as in the proof of Theorem \[sveplus\]. $A \in {\mathbb{R}}^{m\times n}$, $x \in {\mathbb{R}}^{n}$, efficient implementation of unitaries $U, V$ in Theorem \[sveplus\]. Precision parameter $\epsilon>0$.\ 1. Create $\ket{\overline{x}} = \sum_{i \in [n]} \alpha_{i} \ket{\overline{v}_{i}}$. Append a first register $\ket{0^{\lceil \log (m+1) \rceil }}$ and apply unitary $\widetilde{V}$ to create the state $\ket{\widetilde{Q}\overline{x}} = \sum_{i} \alpha_{i} \ket{\widetilde{Q}\overline{v}_{i}}$.\ 2. Perform Phase Estimation with precision $2\epsilon >0$ on input $\ket{\widetilde{Q}\overline{x}}$ for the unitary $W$ in Theorem \[sveplus\]. 3.
Compute $\overline{\sigma_{i}} = \cos(\overline{\theta_{i}}/2) \mu(A)$ where $\overline{\theta_{i}}$ is the estimate from Phase Estimation, and uncompute the output of the Phase Estimation to obtain $\sum_{i} \alpha_{i} \ket{\widetilde{Q}\overline{v}_{i}, \overline{\sigma_{i}} } $. [\ ]{} 4. Apply the inverse of $\widetilde{V}$ to multiply the first register with the inverse of $\widetilde{Q}$ and obtain $\sum_{i} \alpha_{i} \ket{\overline{v}_{i} } \ket{\overline{\sigma_{i}}}$. [\ ]{} Quantum matrix multiplication and linear systems {#lsmul} ------------------------------------------------ We provide algorithms for quantum linear systems and quantum matrix multiplication using the improved singular value estimation algorithm. We will see that once we have performed singular value estimation for a matrix $A$, multiplication with the matrix consists of a conditional rotation by an angle proportional to each singular value. Similarly, solving the linear system corresponding to the matrix $A$ is a multiplication with the inverse of $A$, in other words a conditional rotation by an angle proportional to the inverse of each singular value of $A$. The two algorithms are therefore very similar. We will also extend our matrix multiplication algorithm, i.e. the application of a linear operator, to the case of an affine operator: namely, given matrix $A$ and vector $b$ in memory, the algorithm maps any state $\ket{x}$ to a state close to $\ket{Ax+b}$. Last, we discuss briefly the cases for which our algorithm improves upon the running time of existing quantum linear system solvers. If $A \in {\mathbb{R}}^{m\times n}$ is a rectangular matrix, then multiplication by $A$ reduces to multiplication by the square symmetric matrix $A'= \left ( \begin{matrix} 0 &A \\ A^{t} &0 \end{matrix} \right )$ as $A' (0^{m}, x) = (Ax, 0)$. Therefore, without loss of generality we restrict our attention to symmetric matrices for the quantum matrix multiplication problem. Since the matrix multiplication algorithm \[qmat\] will be used for implementing a single step of the iterative method, where we multiply by a positive semidefinite matrix $A$ that is also contractive, we state the quantum matrix multiplication algorithm \[qmat\] for such matrices. Note that linear systems for general symmetric matrices are not much harder than the case described in Algorithm \[qmat\][^3]. Vector $x=\sum_{i} \beta_{i} \ket{v_{i}}\in {\mathbb{R}}^{n}$ and matrix $A \in {\mathbb{R}}^{n\times n}$ such that $A \succeq 0$ stored in the data structure, such that the eigenvalues of $A$ lie in $[1/\kappa, 1]$. Perform singular value estimation with precision $\epsilon_{1}$ for $A$ on $\ket{x}$ to obtain $\sum_{i} \beta_{i} \ket{v_{i}} \ket{\overline{\lambda}_{i}}$. Perform a conditional rotation and uncompute the $SVE$ register to obtain the state: [\ ]{} (i) $\sum_{i} \beta_{i} \ket{v_{i}} ( \overline{\lambda_{i}} \ket{0} + \gamma \ket{1})$ for matrix multiplication. [\ ]{} (ii) $\sum_{i} \beta_{i} \ket{v_{i}} ( \frac{\lambda_{min}} {\overline{\lambda_{i}} } \ket{0} + \gamma \ket{1})$ for linear systems. [\ ]{} Perform Amplitude Amplification with the unitary $V$ implementing steps $1$ and $2$, to obtain (i) $\ket{z} = \sum_{i} \beta_{i} \overline{\lambda_{i}} \ket{v_{i}}$ or (ii) $\ket{z} = \sum_{i} \beta_{i} \frac{1}{\overline{\lambda_{i}}} \ket{v_{i}} $. [\ ]{} Note that, as in the analysis of the HHL algorithm [@HHL09], the parameter $\kappa$ does not have to be as big as the true condition number of $A$.
If $\kappa$ is smaller, then it means that we invert only the well conditioned part of the matrix. In the appendix, we also provide an algorithm to normalize the matrix $A$ such that ${\lVertA\rVert} \leq 1$. Let us analyze the correctness and running time of the above algorithm. \[lqmat\] Algorithm \[qmat\] produces as output a state $\ket{z}$ such that ${\lVert \ket{Ax} - \ket{z}\rVert} \leq \delta$ in expected time $\widetilde{O}(\frac{\kappa(A) \mu(A)}{ \delta } \kappa(A) )$ for both matrix multiplication and linear systems. We first analyze matrix multiplication. The unnormalized solution state is $Ax= \sum_{i} \beta_{i} \lambda_{i} v_{i}$, while the unnormalized output of the algorithm is $z= \sum_{i} ( \lambda_{i} \pm \widetilde{\epsilon}_{i}) \beta_{i} v_{i}$ for $|\widetilde{\epsilon}_{i}| \leq \epsilon_{1}$. As the $v_{i}$ are orthonormal, we have ${\lVert Ax - z\rVert} \leq \epsilon_{1} {\lVertx\rVert}$ and by Claim \[unnorm\], we have ${\lVert \ket{Ax} - \ket{z}\rVert} \leq \frac{\sqrt{2}\epsilon_{1}{\lVertx\rVert}}{{\lVertAx\rVert}}\leq \sqrt{2} \epsilon_{1} \kappa(A)$. We next analyze linear systems. The unnormalized solution state is $A^{-1}x= \sum_{i} \frac{\beta_{i}}{\lambda_{i}} v_{i}$. The unnormalized output is $z= \sum_{i} \frac{\beta_{i}}{\lambda_{i} \pm \tilde{\epsilon}_{i}} v_{i}$ for $|\widetilde{\epsilon}_{i}| \leq \epsilon_{1}$. We have the bound $${\lVert A^{-1} x - z\rVert}^{2} \leq \sum_{i} \beta_{i}^{2} (\frac{1}{\lambda_{i}} - \frac{1}{ \lambda_{i} \pm \epsilon_{i}} )^{2} \leq \epsilon_{1}^{2} \sum_{i} \frac{\beta_{i}^{2}}{\lambda_{i}^{2} (\lambda_{i} - \epsilon_{1})^{2} } \leq \frac{\epsilon_{1}^{2} \kappa(A)^{2} {\lVertA^{-1} x\rVert}^{2}}{ (1-\kappa(A) \epsilon_{1})^{2}} \leq 4\epsilon_{1}^{2} \kappa(A)^{2} {\lVertA^{-1} x\rVert}^{2}$$ assuming that $\kappa(A) \epsilon_{1} \leq 1/2$. Applying Claim \[unnorm\] we obtain ${\lVert \ket{A^{-1} x} -\ket{ z}\rVert} \leq 2\sqrt{2}\kappa(A) \epsilon_{1}$ for $\kappa(A) \epsilon_{1} \leq 1/2$. The running time bounds for the SVE in steps 1 and 2 can be obtained by substituting the above error bounds for $\delta$. In addition, we perform Amplitude Amplification as in Claim \[tampa\] with the unitary $V$ that represents the first two steps of the algorithm; this incurs on expectation a multiplicative overhead of ${\widetilde{O}(\kappa(A))}$ over the cost of $V$. Let us compare the new linear systems algorithm to the HHL algorithm. First, note that the quantum linear system problem is invariant under rescaling of $A$, thus we are free to choose a normalization for $A$. A quantum walk algorithm estimates the singular value $\sigma$ of a matrix by mapping the singular value to $\cos(\theta)$ for some $\theta \in [0, \pi]$ and then estimating $\cos(\theta)$. It is natural to use the normalization ${\lVertA\rVert} = 1$ as the eigenvalues being estimated have been scaled down to quantities in $[0,1]$. The HHL algorithm [@HHL09; @H14], under the assumption that $A$ is a Hermitian matrix with eigenvalues in the range $[-1, -1/\kappa] \cup [1/\kappa, 1]$, produces the state $\ket{A^{-1} b}$ and an estimate for ${\lVert A^{-1} b\rVert}$ in time $\widetilde{O}( s^{2}(A) \kappa^{2})$ where $s(A)$ is the number of non-zero entries per row. Subsequent work has improved the running time to linear in both the condition number $\kappa(A)$ and the sparsity $s(A)$ [@CKS15; @A12].
Our running time is quadratic in the condition number, like that of the HHL algorithm; we expect that the running time of our algorithm can also be improved to be linear in the condition number. More importantly, instead of the sparsity $s(A)$ of the matrix, our running time depends on the parameter $\mu(A)$. On one hand, this factor is smaller than the Frobenius norm, for which we have ${\lVertA\rVert}_{F} = ( \sum_{i} \sigma_{i}^{2} )^{1/2} \leq \sqrt{rk(A)}$ under the normalization ${\lVertA\rVert} \leq 1$. Hence, our algorithm \[qmat\] achieves an exponential speedup even for dense matrices whose rank is poly-logarithmic in the matrix dimensions. Moreover, while for general dense matrices the sparsity is $\Omega(n)$, we have $\mu(A) \leq {\lVertA\rVert}_{F} \leq \sqrt{n}$ and thus we have a worst case quadratic speedup over the HHL algorithm. In addition, the factor $\mu(A)$ is at most the maximum $\ell_{1}$-norm $s_1(A)$, which is always smaller than the maximum sparsity $s(A)$ when we take the normalization ${\lVertA\rVert}_{max}=1$, that is, the entries of $A$ have absolute value at most $1$. This is the normalization used in [@CKS15], and we therefore also improve on the linear system solver in [@CKS15], whose running time depends on $s(A)$. For example, real-valued matrices with most entries close to zero and a few entries close to 1, e.g. small perturbations of permutation matrices, will have $s(A)=\Omega(n)$, ${\lVertA\rVert}_F=\omega(\sqrt{n})$ but could have $s_1(A)=O(1)$ for small enough perturbations. Last, there are also matrices with bounded spectral norm for which Algorithm \[qmat\] requires time $\Omega(\sqrt{n})$; for example, consider a random sign matrix $A$ rescaled so that ${\lVertA\rVert}=1$. In this case, one can easily show that with high probability $\mu(A) = \Omega(\sqrt{n})$. The optimal value $\mu(A) =\min_{\mu} \{ P \circ Q = A/\mu \;|\; {\lVertp_{i}\rVert}_{2} \leq 1, {\lVertq^{j}\rVert}_{2} \leq 1 \}$ in Theorem \[sveplus\] is the spectral norm of $|A|$, where $|A|$ is the matrix obtained by replacing the entries of $A$ by their absolute values [@M90]. We recall that a matrix $A \in {\mathbb{R}}^{m\times n}_{+}$ has unique positive left and right eigenvectors; these eigenvectors are called the Perron-Frobenius eigenvectors and can be computed, for example, by iterating $x_{t+1} = Ax_{t}/{\lVertAx_{t}\rVert}$. The optimal walk can be implemented efficiently if the coordinates of the Perron-Frobenius eigenvectors for $|A|$ are stored in memory prior to constructing the data structure. This is exactly the quantum walk that is also used in [@C10]. Hence, if one can precompute the Perron-Frobenius eigenvectors (for example, if the matrix is stochastic, it is the all ones vector), one can perform this optimal walk. However, the entries of the Perron-Frobenius eigenvector cannot be computed in a single pass over a stream of the matrix entries, and hence the optimal walk cannot be implemented in our model where we have quantum access to a data structure built in linear time from a stream of matrix entries. We also note that the spectral norm of $|A|$ can be much larger than the spectral norm of $A$: for example, for a random sign matrix with $\pm 1$ entries the spectral norm of $A$ is ${\lVertA\rVert}= O(\sqrt{n})$ but the spectral norm of $|A|$ is ${\lVert|A|\rVert}=n$.
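To make these comparisons concrete, the following small numpy sketch (toy matrices of our own choosing, not from the paper) computes the sparsity $s(A)$, the Frobenius norm, $s_1(A)$ and $\mu(A)$ over a grid of values of $p$, for a slightly perturbed permutation matrix and for a rescaled random sign matrix.

```python
import numpy as np

def s_p(A, p):
    """s_p(A) = max_i ||a_i||_p^p, the maximum p-th power of the row l_p norms."""
    return np.max(np.sum(np.abs(A) ** p, axis=1))

def mu(A, grid=np.linspace(0.05, 0.95, 19)):
    """mu(A) = min( ||A||_F, min_p sqrt( s_{2p}(A) * s_{2(1-p)}(A^T) ) ), over a grid of p."""
    fro = np.linalg.norm(A, 'fro')
    best = min(np.sqrt(s_p(A, 2 * p) * s_p(A.T, 2 * (1 - p))) for p in grid)
    return min(fro, best)

rng = np.random.default_rng(1)
n = 200

# A small perturbation of a permutation matrix (dense but with tiny off-pattern entries).
P = np.eye(n)[rng.permutation(n)]
A1 = P + 1e-3 * rng.random((n, n))

# A dense random sign matrix, rescaled so that its spectral norm is 1.
A2 = rng.choice([-1.0, 1.0], size=(n, n))
A2 /= np.linalg.norm(A2, 2)

for name, A in [("perturbed permutation", A1), ("normalized sign matrix", A2)]:
    sparsity = np.max(np.count_nonzero(A, axis=1))
    print(f"{name}: s(A)={sparsity}, ||A||_F={np.linalg.norm(A, 'fro'):.1f}, "
          f"s_1(A)={s_p(A, 1):.2f}, mu(A)={mu(A):.2f}")
```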
Thus there are two interesting questions about quantum linear system solvers: the first is to find the optimal quantum linear system algorithm given access to a data structure built from a stream of matrix entries, and the second is to determine whether there can exist more general quantum walk algorithms with $\mu(A)= {\lVertA\rVert}$; the latter is also stated as an open problem in [@C10]. The iterative step {#istep} ------------------ We now show that the $(\epsilon, \delta)$-approximate quantum step for the iterative method in Definition \[31\] can be implemented using the quantum matrix multiplication algorithm presented above. The matrix $S$ for iterative methods corresponding to linear systems and least squares is of the form $S=I - \alpha A$ for a positive semidefinite matrix $A$. Further, we can assume that the matrix $A$ is stored in the data structure of Theorem \[dsplus\] and that $S$ is positive semi-definite and contractive. \[iterimp\] The $(\epsilon, \delta)$-approximate quantum step for the iterative method with $S=I - \alpha A$ and $L(x)=b-Ax$ with ${\lVertb\rVert}=1$, $\alpha\leq 1$, and for $A$ stored in the data structure of Theorem \[dsplus\], can be implemented in time $\widetilde{O}( \mu(A | b) / \epsilon)$, where $A | b$ is the matrix $A$ with an extra row equal to $b$. We show how to implement the unitary $V$ in the iterative method, [$$\begin{aligned} V &: \ket{0} \ket{r_0} \ket{0}\rightarrow \ket{1} \left( \alpha {\lVert\tilde{L}(r_0)\rVert} \ket{\tilde{L}(r_0)} \ket{0} + \ket{G_1}\ket{1} \right) {\notag \\}&:\ket{t} {\lVertr_t\rVert} \ket{r_t} \ket{0} \rightarrow \ket{t+1} \left( {\lVert\tilde{S}(r_{t})\rVert} \ket{\tilde{S}(r_{t})}\ket{0} + \ket{G_{t+1}}\ket{1} \right),\end{aligned}$$]{} where $\ket{G_{1}}, \ket{G_{t+1}}$ are unnormalised garbage states, such that with probability $\geq 1-\delta$, it holds that ${\lVert L(r_0) - \tilde{L}(r_0)\rVert} \leq \epsilon $ and ${\lVert S(r_{t}) - \tilde{S}(r_{t}) \rVert} \leq \epsilon$. We first implement the linear part of $V$ that corresponds to $1 \leq t \leq \tau-1$ and then the affine part corresponding to $t=0$. We denote $r_t=\sum_{i} \beta_{i} \ket{v_{i}}$. The linear part of $V$ is implemented by performing singular value estimation for $A$ and then using $\overline{ \lambda_{i}} =(1-\alpha \overline{\lambda_{i}} (A))$ as estimates for the singular values of $S$, [$$\begin{aligned} \label{istep1} \ket{t} {\lVertr_t\rVert}\ket{r_t} \ket{0} \equiv \ket{t} \sum_{i} \beta_{i} \ket{v_{i}} \ket{0} \to \ket{t+1} \sum_{i} \beta_{i} \ket{v_{i}} {\left ( \overline{\lambda_{i} } \ket{0} + \sqrt{1- {\overline{\lambda_{i}}^2}} \ket{1} \right )} \end{aligned}$$]{} If the precision for singular value estimation is $\epsilon$ then the algorithm runs in time $\widetilde{O}( \mu(A) /\epsilon)$. For bounding the difference of the norms we observe that ${\lVert S(r_{t}) - \tilde{S}(r_{t}) \rVert} \leq {\lVert\sum_{i} \beta_{i} \tilde{\epsilon_{i}} v_{i} \rVert} \leq \alpha \epsilon \leq \epsilon$, as all the errors $\epsilon_{i} \leq \epsilon$ if singular value estimation succeeds and ${\lVert\beta\rVert} \leq 1$. The procedure succeeds if the singular value estimation algorithm produces the correct estimates, so $1-\delta = 1 - 1/poly(n)$, that is, $\delta$ can be taken to be $1/poly(n)$. The affine part of $V$ is implemented as follows. Let $A_1= {\left ( \begin{matrix} -A & b \\ 0& 0\end{matrix} \right )}$ and $x_1= (x, 1)$ so that $A_1 x_1= (b-Ax, 0)$.
Then we symmetrize $A_1$ by defining $A'= \left ( \begin{matrix} 0 &A_1 \\ A_1^{t} &0 \end{matrix} \right )$ and $x'= (0^{n+1},x_1) $, and have $A' x' = (A_1 x_1, 0) = (b-Ax,0)$. The columns of $A'$ are stored in the memory data structure so we can perform SVE for $A'$. We take $x=r_0$ and use the SVE algorithm for symmetric matrices on $r'_0$ (where we add an extra factor $\alpha$ in the conditional rotation) to map it to $A' r_{0}'= (b - Ar_{0},0)$ as the last coordinates become $0$. Denote $r'_0 = \sum_{i} \beta_{i} \ket{v'_{i}}$, where $v'_{i}$ are the eigenvectors of $A'$. $$\ket{0} {\lVertr'_0\rVert}\ket{r_0'} \ket{0} \equiv \ket{0} \sum_{i} \beta_{i} \ket{v'_{i}} \ket{0} \to \ket{1} \sum_{i} \beta_{i} \ket{v'_{i}} {\left ( \alpha \overline{\lambda_{i} }(A') \ket{0} + \sqrt{1- {\alpha^{2} \overline{\lambda_{i}}(A')^2}} \ket{1} \right )}$$ If the precision for singular value estimation is $\epsilon$ then the algorithm runs in time $\widetilde{O}( \mu(A') /\epsilon)$ and the correctness analysis is the same as above. We next provide an upper bound for $\mu(A')$. We have ${\lVertA'\rVert}_F \leq 2{\lVertA\rVert}_F+2{\lVertb\rVert}$, while $s_1(A') \leq \max(s_1(A)+{\lVertb\rVert}_{\infty} ,s_1(b))$. Let us assume for simplicity that ${\lVertb\rVert}=1$, which is the case in our applications; then the upper bound is $O(\mu(A|b))$ where $A|b$ is the matrix obtained by adding an extra row $b$ to $A$. Finally, let us see how to implement the unitary $U$ as defined in the quantum iterative method with cost $O(C_V+\log \tau)$, which is asymptotically the same as the cost of $V$. It is easy to see that the complexity of $U$ is asymptotically upper bounded by the complexity of applying the unitary $V^\tau$ on $\ket{0}\ket{r_0}\ket{0}$. For this we first apply $V$ once to get the affine transformation (with running time $\widetilde{O}( \mu(A|b) /\epsilon)$), then we apply the SVE procedure on $A$ to obtain estimates $\overline{\lambda_i}$ of the eigenvalues of $S=(I-\alpha A)$ and then compute $\overline{\lambda_{i}}^{\tau-1}$ (in time $O(\log \tau)$) as the estimates for the singular values of $S^{\tau-1}$ for the conditional rotation. This gives us the desired unitary, and the running time of the second part is $\widetilde{O}( \mu(A) /\epsilon+\log \tau)$. Hence, the overall running time is $O(C_V+\log \tau)$. Quantum iterative algorithms ============================ Linear systems {#ls} -------------- Let $A \succeq 0$ be an $n\times n$ psd matrix and $b$ a unit vector in ${\mathbb{R}}^{n}$. We can assume $b$ to be a unit vector as it is stored in memory and we know ${\lVertb\rVert}$. The quadratic form $F(\theta)=\frac{1}{2}\theta^{T} A \theta - b^{T} \theta$ is minimized at the solution of $A \theta=b$. The classical iterative method starts with an arbitrary $\theta_{0}$ and applies the following updates, [$$\begin{aligned} \theta_{t+1} = \theta_{t} + \alpha ( b - A\theta_{t}) = \theta_{t} + \alpha r_{t} \end{aligned}$$]{} where the step size $\alpha$ will be specified later and the residuals are $r_{t}= b- A\theta_{t}$ for $t\geq 0$. The residuals satisfy the recurrence $r_{t+1} = b - A(\theta_{t} + \alpha r_{t}) = (I- \alpha A) r_{t}$ and the initial condition $r_{0} =(b - A\theta_{0})$. In order to be consistent with the normalization used in quantum linear systems, we assume that the eigenvalues of $A$ lie within the interval $[1/\kappa, 1]$.
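For reference, the classical update just described is straightforward to implement; the sketch below (a toy problem of our own choosing) runs the iteration for a psd matrix with eigenvalues in $[1/\kappa, 1]$ and also checks that the unrolled form $\theta_\tau = \theta_0 + \alpha \sum_{t} S^{t} r_0$, which is the expression manipulated by the quantum procedure when $\theta_0 = r_0$, coincides with the sequential updates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy psd system with eigenvalues in [1/kappa, 1] (illustrative sizes of our choosing).
n, kappa, alpha, target = 8, 20.0, 0.01, 1e-6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0 / kappa, 1.0, n)) @ Q.T
b = rng.standard_normal(n); b /= np.linalg.norm(b)

theta = rng.standard_normal(n); theta /= np.linalg.norm(theta)   # unit-norm theta_0
theta0 = theta.copy()
r = b - A @ theta                                                # r_0
S = np.eye(n) - alpha * A

tau = int(np.ceil(kappa * np.log(kappa / target) / alpha))
for _ in range(tau):
    theta = theta + alpha * r        # theta_{t+1} = theta_t + alpha * r_t
    r = S @ r                        # r_{t+1} = (I - alpha A) r_t

theta_star = np.linalg.solve(A, b)
print("final error:", np.linalg.norm(theta - theta_star))

# Unrolled form theta_tau = theta_0 + alpha * sum_{t=0}^{tau-1} S^t r_0 agrees with the loop.
acc, Spow, r0 = np.zeros(n), np.eye(n), b - A @ theta0
for _ in range(tau):
    acc += Spow @ r0
    Spow = Spow @ S
print("unrolled == sequential:", np.allclose(theta, theta0 + alpha * acc))
```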
The convergence analysis and the choice of the step size $\alpha$ follow from the following argument that bounds the norm of the error. Let $\theta^{*}=A^{-1}b$ be the optimal solution; the error $e_{t}:= \theta_{t} - \theta^{*}$ satisfies the recurrence $e_{t+1} = (\theta_{t+1} - \theta^{*}) = \theta_{t} - \theta^{*} + \alpha (b - A\theta_{t}) = e_{t} + \alpha A(\theta^{*} - \theta_{t})= (I- \alpha A) e_{t}$. After $t$ steps of the iterative method we have, $${\lVerte_{t}\rVert}= {\lVert (I- \alpha A)^{t} e_{0} \rVert} \leq (1- \alpha/\kappa)^{t} {\lVerte_{0}\rVert}.$$ The iterative method therefore converges to within error $\epsilon$ of the optimal solution $\theta^{*}$ in $\tau = O( \kappa \log ({\lVerte_{0}\rVert}/\epsilon) /\alpha)$ iterations. The step size $\alpha$ can be fixed to be a small constant, say $\alpha = 0.01$, and the starting point $\theta_{0}$ chosen to be a unit vector so that ${\lVerte_{0}\rVert} \leq \kappa$. With these choices the method converges to within error $\epsilon$ of the optimal solution in $O(\kappa \log (\kappa/\epsilon))$ steps. In order to bound the running time of the iterative method we need that ${\lVert\theta_{\tau}\rVert}=\Omega(1)$ for $\tau = O(\kappa \log (\kappa/\epsilon))$. The solution $A^{-1}b$ to the linear system has norm at least $1$ as $b$ is a unit vector and the eigenvalues of $A^{-1}$ are at least $1$. After $\tau$ steps we have ${\lVert \theta_{\tau} - \theta^{*} \rVert} \leq \epsilon \Rightarrow {\lVert\theta_{\tau} \rVert} \geq {\lVert\theta^{*} \rVert} -\epsilon \geq 1 - \epsilon$. The $(\epsilon, \delta)$-approximate iterative step can be implemented in time $C_{V}=\widetilde{O}( \mu(A | b) /\epsilon)$ by Proposition \[iterimp\]. Note also that in this case, the cost of implementing $V^t$ is the same as that of implementing $V$. This is because we do not have to apply $V$ sequentially $t$ times; once the SVE has estimated the eigenvalues we can directly perform the conditional rotations by an angle proportional to the $t$-th power of each eigenvalue. The analysis in section \[itertime\] and the fact that $\alpha, {\lVert\tilde{\theta}_\tau\rVert}$ are constants shows that, given an $(\epsilon, \delta)$-approximate step, the quantum iterative algorithm has error ${\lVert\ket{\theta_\tau} - \ket{\tilde{\theta}_\tau}\rVert} = O(\tau^2 \epsilon)$ and requires time $O(\tau C_{U})= O(\tau C_V) = O(\frac{ \tau \mu(A|b)}{\epsilon} )$. We take $\epsilon = O(\frac{\delta}{\tau^2})$ in order to have ${\lVert\ket{\theta_\tau} - \ket{\tilde{\theta}_\tau}\rVert} \leq \delta$ for some $\delta>0$. Then, we can take $\tau = \kappa(A) \log \frac{\kappa(A)}{\delta}$ to have that ${\lVert\ket{\theta_\tau} - \ket{A^{-1}b} \rVert} \leq \delta$. Hence, we have the following theorem. \[qlinsys\] Given a positive semidefinite $A \in {\mathbb{R}}^{n\times n}$ and $b \in {\mathbb{R}}^{n}$ stored in memory, there is an iterative quantum algorithm that outputs a state $\ket{z}$ such that ${\lVert\ket{z} - \ket{A^{-1}b}\rVert} \leq 2\delta$ with expected running time $O(\frac{\kappa(A)^3 \log^3 \frac{\kappa(A)}{\delta} \mu(A | b)}{\delta})$. Weighted Least Squares {#wls} ---------------------- For the weighted least squares problem, we are given a matrix $X \in {\mathbb{R}}^{m\times n}$ and a vector $y\in {\mathbb{R}}^{m}$, as well as a vector $w \in {\mathbb{R}}^{m}$ of weights, and the goal is to find $\theta \in {\mathbb{R}}^{n}$ that minimizes the squared loss $\sum_{i \in [m]} w_{i} (y_{i} - x_{i}^{T} \theta)^{2}$.
The closed form solution is given by $$\theta = (X^{T} W X)^{-1} X^{T} W y$$ and thus the problem can also be solved using a direct method. The iterative method for weighted least squares is a gradient descent algorithm with the update rule $\theta_{t+1}= \theta_{t} + \rho \sum_{ i \in [m]} w_{i} ( y_{i} - \theta_{t}^{T} x_{i}) x_{i}$ which in matrix form can be written as, [$$\begin{aligned} \theta_{t+1} = (I - \rho X^{T}WX) \theta_{t} + \rho X^{T} W y \end{aligned}$$]{} The update can also be written as $\theta_{t+1} = \theta_{t} + \rho r_{t}$ where $r_{t} = X^{T} W y - X^{T}WX \theta_{t}$. Note that these updates are analogous to the linear system updates in section \[ls\] as $r_{t}=b - A\theta_{t}$ for $b= X^{T} W y$ and $A= X^{T}WX$. The step size $\rho$ is analogous to the $\alpha$ for the linear system, and we assume the same scaling for $A$ and $b$ as in the case of linear systems. By the analysis of positive semidefinite linear system solvers in section \[ls\] it follows that, if we are able to implement the steps of the iterative method, the quantum iterative algorithm for weighted least squares has the following running time. \[qwlsq\] Let $X \in {\mathbb{R}}^{m\times n}$, $y \in {\mathbb{R}}^{m}$, $w \in {\mathbb{R}}^{m}$ be stored in memory and define $W=diag(w)$, $A= X^{T}WX$ and $b= X^{T} W y$. There is an iterative quantum algorithm that outputs a state $\ket{z}$ such that ${\lVert\ket{z} - \ket{A^{-1}b}\rVert} \leq 2\delta$ with expected running time $O(\frac{\kappa(A)^3 \log^3 \frac{\kappa(A)}{\delta} \mu( \sqrt{W} X | y)}{\delta})$. We now show how to implement the iterative step for the least squares problem, which is somewhat different from the case of linear systems: instead of the matrix $X^{T}WX$ and the vector $X^{T} W y$, we have the matrix $X$ and the vectors $w$ and $y$ stored in memory. Nevertheless, the iterative step can be implemented in this setting, as we show next. Note that $A = B^{T} B$ where $B=\sqrt{W} X$; thus the eigenvalues of $A$ are the squared singular values of $B$, and hence it suffices to perform the generalized SVE for $B$. We assume that the data structures for performing generalized SVE for $X$ have been created. The weights $w_{i}, i \in [m]$ are also stored in memory. We maintain the variable $M_{w}$ that stores $\max_{i} w_{i}{\lVertx_{i}\rVert}^{2} $ and is updated whenever $w_{i}$ arrives or ${\lVertx_i\rVert}$ gets updated, that is, if $M_{w} \leq w_{i} {\lVertx_{i}\rVert}^{2}$ then we set $M_{w}= w_{i} {\lVertx_{i}\rVert}^{2}$. We replace $M \to M_{w}$ and ${\lVertx_{i}\rVert} \to \sqrt{w_{i}} {\lVertx_{i}\rVert}$ in the computation of Theorem \[dsplus\] and follow the same steps to implement the unitary, $$U' \ket{ i, 0^{\lceil \log (n+1) \rceil} } = \ket{i} \frac{1}{\sqrt{M_w}} {\left ( \sum_{j \in [n]} \sqrt{w_{i}} x_{ij} \ket{j} + (M_{w} - w_{i} {\lVertx_{i}\rVert}^{2})^{1/2}\ket{n+1} \right )}$$ Using $U', V$ instead of $U, V$ in Theorem \[sveplus1\] we can perform generalized SVE for $B=\sqrt{W} X$ in time $\widetilde{O}(\mu(B)/\delta)$ for precision $\delta$. In order to multiply by $A$ for the iterative method, we perform generalized SVE for $B$ and then a conditional rotation with factor $\alpha \overline{\sigma_{i}}^{2}$. Analogously to the above procedure one can also implement matrix multiplication for $B' = X^{T}W$. Note that the state $\ket{b}$ is not in the memory, so we cannot do the first affine update used for linear systems (where we set $r_0 = b-A\theta_0$ for a random $\theta_0$).
Instead we have $y$ and $X$ in memory, so we first do the affine step to create $(y-X\theta_0)$ and then multiply with the matrix $X^TW$, $$b-A\theta_0 = X^TWy-X^TWX\theta_0 = X^TW (y-X\theta_0)$$ It is straightforward to add $\ell_{2}$ regularization to the weighted least squares problem. The loss function becomes $\sum_{i} w_{i} ( y_{i} - \theta^{t} x_{i})^{2} + \lambda {\lVert\theta\rVert}^{2}$ and the update rule changes to $r_{t} = b - A\theta_{t}$ for $b=X^{t}W y$ and $A= X^{t} WX+ \lambda I$. The algorithm therefore performs the generalized SVE for $X^{t} WX+ \lambda I$ instead of $X^{t} WX$. #### Stochastic gradient descent for Weighted Least Squares: {#stochastic-gradient-descent-for-weighted-least-squares-1 .unnumbered} It is prohibitive in practice to compute the gradient $\sum_{ i \in [m]} w_{i} ( y_{i} - \theta_{t}^{T} x_{i}) x_{i} $ by summing over the entire dataset when the dataset size is large. Moreover, due to redundancy in the dataset the gradient can be estimated by summing over randomly sampled batches. Stochastic gradient descent utilizes this fact in the classical setting to lower the cost of the updates. Stochastic gradient descent algorithms do not compute the gradient exactly, but estimate it over batches $\sum_{ i \in S_{j}} w_{i} ( y_{i} - \theta_{t}^{T} x_{i}) x_{i}$ obtained by randomly partitioning the dataset. We expect similar issues in the quantum case, where a large dataset would require more memory capacity and controlled operations over a large number of qubits. Stochastic gradient descent therefore remains relevant for quantum iterative methods as well. The stochastic gradient updates are defined for any choice of partition $\Pi= (S_{1}, S_{2}, \cdots, S_{k})$. For a given partition $\Pi$ let $X_{j}$ be the $|S_{j}| \times n$ matrix obtained by selecting the rows corresponding to $S_{j}$. Define $A_{j} = X_{j}^{t} W_{j} X_{j}$ where $W_{j}$ is the diagonal matrix of weights restricted to $S_{j}$. The residuals satisfy the recurrence $r_{t+1} = (I- \rho A_{t'}) r_{t}$ where $t' = t \mod k$ and the initial condition $r_{0} =(b - A_{1}\theta_{0})$. All these updates can be implemented efficiently as the matrices $X_{j}$ are stored in memory. The running time will be linear in $\mu = \max_{j \in [k]} \mu(X_{j})$. In the case of linear systems and least squares, the updates were of the form $r_{t+1} = (I- \rho A) r_{t}$ for a fixed matrix $A$, and hence we could apply $t$ steps of the update simultaneously, so that the running time is $O(\tau C_V)$. In the stochastic gradient descent case, we have different matrices $A_{t}$ for each time step, which have different eigenbases. Therefore, we can only perform the linear updates sequentially, and hence the running time for the stochastic gradient descent is $O(\tau^{2} C_{V})$ as opposed to $O(\tau C_{V})$ for the linear systems and weighted least squares. Here, we provided two applications of our quantum gradient descent method: psd linear systems and stochastic gradient descent for weighted least squares. It would be interesting to find generalizations and further applications of the quantum gradient descent algorithm and to develop second order quantum iterative methods. Acknowledgements: {#acknowledgements .unnumbered} ----------------- IK was partially supported by projects ANR RDAM, ERC QCC and EU QAlgo. AP was supported by the Singapore National Research Foundation under NRF RF Award No. NRF-NRFF2013-13.
We thank Robin Kothari for bringing reference [@M90] to our attention. Spectral Norm Estimation ======================== We assumed throughout the paper that the matrices $A$ are normalized such that the absolute values of the eigenvalues lie in the interval $[1/\kappa, 1]$. This is the same assumption as in [@HHL09]. We show here a simple quantum algorithm for estimating the spectral norm, which can be used to rescale matrices so that the assumption ${\lVertA\rVert} \leq 1$ is indeed valid. Note that $0 \leq \frac{\sigma_{max}(A)}{{\lVertA\rVert}_{F}} \leq 1$. $A \in {\mathbb{R}}^{m\times n}$ stored in the data structure in Theorem \[dsplus\]. Returns an estimate for $\eta:= \sigma_{max}(A)/{\lVertA\rVert}_{F}$ with additive error $\epsilon$.\ 1. Let $l=0$ and $u=1$ be lower and upper bounds for $\eta$; the estimate $\tau = (l+u)/2$ is refined using binary search in steps 2-5 over $O(\log 1/\epsilon)$ iterations. 2. Prepare $\ket{\phi} = \frac{1}{{\lVertA\rVert}_{F}} \sum_{i,j} a_{ij} \ket{i, j} = \frac{1}{{\lVertA\rVert}_{F}} \sum_{i} \sigma_{i} \ket{u_{i}, v_{i}}$ and perform SVE [@KP16] with precision $\epsilon$ to obtain $\frac{1}{{\lVertA\rVert}_{F}} \sum_{i} \sigma_{i} \ket{u_{i}, v_{i}, \overline{\sigma_{i}} }$, where $|\overline{\sigma}_{i} - \frac{\sigma_{i}}{{\lVertA\rVert}_{F}}| \leq \epsilon$. 3. Append a single qubit register $\ket{R}$ and set it to $\ket{1}$ if $\overline{\sigma}_{i} \geq \tau$ and $\ket{0}$ otherwise. Uncompute the SVE output from step 2. 4. Perform amplitude estimation on $\frac{1}{{\lVertA\rVert}_{F}} \sum_{i} \sigma_{i} \ket{u_{i}, v_{i}, R}$ conditioned on $R=1$ to estimate $\sum_{i: \overline{\sigma}_{i} \geq \tau} \sigma_{i}^{2}/{\lVertA\rVert}_{F}^{2}$ to relative error $(1\pm \delta)$. 5. If the estimate in step 4 is $0$ then set $u \to \tau$, else set $l \to \tau$. Set $\tau = (u+l)/2$. The following proposition proves correctness for Algorithm \[sne\] and bounds its running time. Algorithm \[sne\] estimates $\eta$ to additive error $\epsilon$ in time $\widetilde{O}( \log (1/\epsilon)/ \epsilon \eta ) $. The running time for step 2 is $\widetilde{O}(1/\epsilon)$ and that for the amplitude estimation in step 4 is $\widetilde{O}(1/\epsilon\delta \eta)$ as the time $T(U)= \widetilde{O}(1/\epsilon)$ and the amplitude being estimated is either $0$ or at least $\eta^{2}$. We will see that $\delta$ can be chosen to be a small constant; the running time is $\widetilde{O}( \log (1/\epsilon)/ \epsilon \eta ) $ as step 4 is repeated $\log(1/\epsilon)$ times. For correctness, it suffices to show that if $|\tau - \eta|\geq \epsilon$ then the algorithm determines $sign(\tau - \eta)$ correctly. If $|\tau - \eta|\geq \epsilon$ then the amplitude $\sum_{i: \overline{\sigma}_{i} \geq \tau} \sigma_{i}^{2}/{\lVertA\rVert}_{F}^{2}$ being estimated in step 4 is either $0$ or at least $\eta^{2}$. Amplitude estimation yields a non-zero estimate of at least $(1-\delta)\eta^{2}$ in the latter case, and thus the sign is determined correctly if $\delta$ is a small constant. [^1]: CNRS, IRIF, Université Paris Diderot, Paris, France and Centre for Quantum Technologies, National University of Singapore, Singapore. Email: [jkeren@irif.fr]{}. [^2]: Centre for Quantum Technologies and School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore. Email: [ aprakash@ntu.edu.sg]{}.
[^3]: More precisely, the Singular Value Estimation procedure estimates the absolute value of the eigenvalues $|\lambda_{i}|$ for a symmetric $A$, and hence we also need to recover the sign of the $\lambda_{i}$ to perform matrix multiplication or to solve linear systems. The sign can be recovered by performing singular value estimation for the matrices $A, A + \mu I$ and comparing the estimates $|\overline{\lambda}_{i}|, |\overline{\lambda_{i} + \mu}|$. If $\lambda_{i}$ is positive the second estimate is larger while if it is negative the first estimate is larger. Such a method for sign recovery was presented in [@WZP17] where it was used to construct a quantum linear system solver that corresponds to the case $\mu(A)= {\lVertA\rVert}_{F}$.
--- abstract: 'We have obtained long-slit observations of the circumnuclear region of M87 at three different locations, with a spatial sampling of 0.028 arcsec using the Faint Object Camera f/48 spectrograph on board HST. These data allow us to determine the rotation curve of the inner $\sim$1 arcsec of the ionized gas disk in [OII]$\lambda$3727 to a distance as close as 0.07 arcsec ($\simeq 5$ pc) from the dynamic center, thereby significantly improving on both the spatial resolution and coverage of previous FOS observations. We have modeled the kinematics of the gas under the assumption of the existence of both a central black hole and an extended central mass distribution, taking into account the effects of the instrumental PSF, the intrinsic luminosity distribution of the line, and the finite size of the slit. We find that the central mass must be concentrated within a sphere whose maximum radius is 0.05 arcsec ($\simeq$3.5 pc) and show that both the observed rotation curve and line profiles are consistent with a thin–disk in keplerian motion. We conclude that the most likely explanation for the observed motions is the presence of a supermassive black hole and derive a value of $M_{BH} = (3.2\pm 0.9){\mbox{$\times 10^{9}$}}{\mbox{$\rm M_\odot$}}$ for its mass.' author: - 'F. Macchetto, A. Marconi, D.J. Axon' - 'A. Capetti' - 'W. Sparks' - 'P. Crane' title: 'The supermassive black hole of M87 and the kinematics of its associated gaseous disk [^1]' --- Introduction ============ The presence of massive black holes at the center of galaxies is widely believed to be the common origin of the phenomena associated with so-called Active Galactic Nuclei. The black hole model is very appealing because it provides an efficient mechanism that converts gravitational energy, via accretion, into radiation within a very small volume as required by the rapid variability of the large energy output of AGN (e.g. [@blandford]). The standard model comprises a central black hole with mass in the range $\simeq{\mbox{$10^{6}$}}-{\mbox{$10^{9}$}}{\mbox{$\rm M_\odot$}}$ surrounded by an accretion disk which releases gravitational energy. The radiation is emitted thermally at the local black–body temperature and is identified with the “blue–bump”, which accounts for the majority of the bolometric luminosity in AGNs. The disk possesses an active corona, where infrared synchrotron radiation is emitted along with thermal bremsstrahlung X-rays. The host galaxy supplies this disk with gas at a rate that reflects its star formation history and, possibly, its overall mass ([@magorrian]), thereby accounting for the observed luminosity evolution. Broad emission lines originate homogeneously in small gas clouds of density $\simeq{\mbox{$10^{9}$}}{\mbox{$\rm cm^{-3}$}}$ and size $\simeq$1 AU in random virial orbits about the central continuum source. Plasma jets are emitted perpendicular to the disk. At large radii, the material forms an obscuring torus of cold molecular gas. Orientation effects of this torus with respect to the line-of-sight naturally account for the differences between some of the different classes of AGN (see [@antonucci] for a review). While this broad picture has been supported and refined by a number of observations, direct evidence for the existence of accretion disks around supermassive black holes is sparse (see, however, [@livio]) and detailed measurements of their physical characteristics are conspicuous by their absence.
The giant elliptical galaxy M87 at the center of the Virgo cluster, at a red-shift of 0.0043 ([@z]), is well known for its spectacular, apparently one–sided jet, and has been studied extensively across the electromagnetic spectrum. Ground–based observations first revealed the presence of a cusp–like region in its radial light profile accompanied by a rapid rise in the stellar velocity dispersion and led to the suggestion that it contained a massive black hole ([@young], [@sargent]). Stellar dynamical models of elliptical galaxies showed however that these velocity dispersion rises did not necessarily imply the presence of a black hole, but could instead be a consequence of an anisotropic velocity dispersion tensor in the central 100pc of a triaxial elliptical potential (e.g. [@duncan], [@mamon]). Considerable controversy has surrounded this and numerous other attempts to verify the existence of the black hole in M87 and other nearby giant ellipticals using ground based stellar dynamical studies (e.g. [@dressler], [@vandermarel]). To-date the best available data remains ambiguous largely because of the difficulty of detecting the high-velocity wings on the absorption lines which are the hallmark of the black hole. One of the major goals of HST has been to establish or refute the existence of black holes in active galaxies by probing the dynamics of AGN at much smaller radii than can be achieved from the ground. HST emission line imagery ([@crane], [@wfpc94]) of M87 has lead to the discovery of a small scale disk of ionized gas surrounding its nucleus which is oriented approximately perpendicularly to the synchrotron jet. This disk is also observed in both the optical and UV continuum ([@macchettoA] and 1996b). Similar gaseous disks have also been found in the nuclei of a number of other massive galaxies ([@ferrarese], [@jaffe]). Because of surface brightness limitations on stellar dynamical studies at HST resolutions, the kinematics of such disks are in practice likely to be the only way to determine if a central black hole exists in all but the very nearest galaxies. In the case of M87 FOS observations at two locations on opposite sides of the nucleus separated by 05 showed a velocity difference of $\simeq$1000 km , a clear indication of rapid motions close to the nucleus ([@fos94]). By [*assuming*]{} that the gas kinematics determined at these and two additional locations arise in a thin rotating keplerian disk, [@fos96] estimated the central mass of M87 is $\simeq$2 with a range of variation between 1 and 3.5. Currently this result provides the most convincing observational evidence in favour of the black hole model. Implicit in this measurement of the mass of the central object, however, is the assumption that the gas motions in the innermost regions reflect keplerian rotation and not the effects of non gravitational forces such as interactions with the jet. Establishing the detailed kinematics of the disk is therefore vital. In this paper we present new FOC,f/48 long-slit spectroscopic observations of the ionized circumnuclear disk of M87 with a pixel size of 0028. They provide us with a 3727 rotation curve in three different locations on the disk and which extend up to a distance of $\sim$ 1. We show that the observations are consistent with a thin–disk in keplerian motion, which explains the observed rotation curve and line profiles, and we derive a mass of $M_{BH} = (3.2\pm 0.9){\mbox{$\times 10^{9}$}}$ within a radius of $\simeq$5pc for the central black hole. 
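For orientation, taking the FOS numbers quoted above at face value (a velocity difference of roughly 1000 km s$^{-1}$ between apertures about 0.5 arcsec, i.e. $\simeq 36$ pc, apart) already gives the correct order of magnitude for the enclosed mass. This is only a back-of-envelope consistency check, not part of the modeling presented below: $$M \simeq \frac{v^{2}R}{G} \simeq \frac{(500\ {\rm km\,s^{-1}})^{2}\,(18\ {\rm pc})}{G} \approx 1\times10^{9}\ {\rm M_\odot},$$ before any correction for projection; dividing by $\sin^{2}i$ for inclinations in the range $\sim40^\circ$–$50^\circ$ roughly doubles this estimate, in line with the values discussed in the following sections.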
The plan of the paper is as follows: the observations and data reduction are described in sections 2 and 3. In section 4 we use the current data and previous HST images to constrain the precise location of the slit with respect to the nucleus of M87. The results of the observations are presented in section 5 and compared to HST/FOS observations in section 6. In section 7 and 8 we discuss the fitting procedures to the observed rotation curve and line profiles with increasingly sophisticated models and in section 9 we compare the observed line profiles with the predictions from the keplerian model of a thin disk. Limits on the distributed mass are discussed in section 10 and the conclusions are given in section 11. Following [@fos94] we adopt a distance to M87 of 15 Mpc throughout this paper, whence 01 correspond to 7.3 pc. Observations ============ The circumnuclear disk of M87 was observed using the Faint Object Camera f/48 long–slit spectrograph on board the Hubble Space Telescope on July 25$^{\rm th}$, 1996 at resolutions of 1.78Å  and 00287 per pixel along the dispersion and slit directions, respectively. The F305LP filter was used to isolate the first order spectrum which covers the 3650–5470 Å region and therefore includes the 3727, 4861 and 4959, 5007 Å  emission lines. An interactive acquisition (integration time 297s) 1024x512 zoomed image was obtained with the f/48 camera through the F140W filter to accurately locate the nucleus. The slit, 0063x135 in size, was positioned on the gas disk at a position angle of 47.3$^\circ$ and initially spectra with exposure times of 2169 seconds were taken in the 1024x512 non–zoomed mode at 3 locations separated by 02 centered approximately on the nucleus. These observations were used to derive the actual location of the nucleus and HST was then repositioned using a small angle maneuver to this location (which actually turned out to be virtually coincident with the position of the second of the three scans). This allowed us to position the slit to within 01 of the nucleus (see section \[sec:impact\]). A further higher signal-to-noise spectrum at this location was then obtained with a total exposure time of 7791 seconds, built from 3 shorter exposures, 2597 seconds in duration. The accuracy of the HST small angle maneuvers is known to be a few milliarcseconds and this is in agreement with the close correspondence between the four spectra taken on the nucleus during the spatial scan. The actual slit positions, as derived in section \[sec:impact\], are displayed in Fig. \[fig:slitpos\] superposed on the + image published by [@wfpc94] which we retrieved from the HST archive. Hereafter we will refer to them as POS1, NUC and POS2 from South–East to North–West, respectively. In Tab. 1 we list the datasets obtained for M87 and those which were used for the calibration. \[sec:calib\] Data reduction ============================ The raw FOC data suffer from geometric distortion, i.e., the spatial relations between objects on the sky are not preserved in the raw images produced by the FOC cameras. This geometric distortion can be viewed as originating from two distinct sources. The first of these, optical distortion, is external to the detectors and arises because of the off–axis nature of the instrument aperture. The second, and more significant source of distortion is the detector itself, which is magnetically focused. 
All frames, including those used for calibration, were geometrically corrected using the equally spaced grid of reseau marks which is etched onto the first of the bi–alkali photocatodes in the intensifier tube ([@focman]). This geometric correction takes into account both the external and internal distortions. The positions of the reseau marks were measured on each of the internal flat–field frames which were interspersed between the spectra. The transformation between these positions and an equally spaced 9x17 artificial grid was then computed by fitting bi-dimensional Chebyshev polynomials of 6$^{\rm th}$ order in x and y terms and 5$^{\rm th}$ order in the cross terms. This transformation was applied to the science images resulting in a mean uncertainty in the reseau position of 0.10–0.25 pixels, depending on the signal-to-noise (SNR) of the flat frames. In addition, geometric distortion is also induced on the slit and dispersion directions by the spectrographic mirror and the grating. The distortion in the dispersion direction was determined by tracing the spectra of two stars taken in the core of the 47 Tucanae globular cluster. These stars are $\simeq$ 130 pixels apart and almost at the opposite extremes of the slit. The distortion along the slit direction was determined by tracing the brightness distribution along the slit of the planetary nebula NGC 6543 emission lines. Ground–based observations (Perez, Cuesta, Axon and Robinson, in preparation) indicate that the distortion induced by the velocity field of the planetary nebula are negligible (less than 0.5 Å) at the f/48 resolution. After rectification the spectra of the 47 Tuc stars and that of NGC 6543 were straight to better than 0.2 pixels on both axes. The wavelength calibration was determined from the geometrically corrected NGC 6543 spectrum. The reference wavelengths were again derived from the ground–based observations. The residual uncertainty on the wavelength calibration is less than 0.2Å. As a result of these procedures we obtained images with the dispersion direction along columns and the slit direction along rows. The pixel sizes are 00287 in the spatial direction and 1.78Å along the dispersion direction. The relatively small width of the lines of NGC 6543 (FWHM$<$ 100 km ) allows us to estimate the instrumental broadening to be $\simeq$ 430$\pm$30 km . From the luminosity profile of the 47 Tuc stellar spectra the instrumental broadening along the spatial direction is 008$\pm$002. The contributions to the total error budget from the various calibration steps can be summarized as follows: - geometric correction with the reseau marks has a residual error of 0.10–0.25 pixels; - rectification of the dispersion direction has a residual error less than 0.2 pixels; - rectification along the slit direction has a residual error of 0.2 pixels; - the error due to the intrinsic distortions of the planetary nebula velocity field is less than 0.5Å, i.e. 0.3 pixels; - the residual error in the wavelength calibration is less than 0.15Å. Combining all the uncertainty terms in quadrature we estimate a maximum uncertainty of 0.45 pixels (0.8 Å which correspond to 65 km  at 3727 Å) and 0.28 pixels (8 mas) in the dispersion and spatial directions respectively. Moreover, when we restrict ourselves to a small region of the detector, corresponding to a single spectral line, both uncertainties are negligible compared to those arising from the SNR of the data. 
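The reseau-based geometric correction described above can be sketched as a two-dimensional Chebyshev least-squares fit. The snippet below is schematic: the array names are hypothetical, a full tensor-product basis is used in place of the mixed 6$^{\rm th}$/5$^{\rm th}$-order terms quoted in the text, and the final resampling of the science frames is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

DEG = (6, 6)      # full tensor-product basis (the text quotes mixed 6th/5th orders)
NPIX = 1024.0     # detector extent used to scale coordinates onto [-1, 1]

def _design(x, y):
    """Chebyshev pseudo-Vandermonde matrix on coordinates scaled to [-1, 1]."""
    xs = 2.0 * np.asarray(x, float) / NPIX - 1.0
    ys = 2.0 * np.asarray(y, float) / NPIX - 1.0
    return C.chebvander2d(xs, ys, DEG)

def fit_distortion_map(x_meas, y_meas, x_grid, y_grid):
    """Least-squares fit mapping measured reseau positions onto the nominal,
    equally spaced grid (one set of coefficients per output coordinate)."""
    V = _design(x_meas, y_meas)
    cx, *_ = np.linalg.lstsq(V, x_grid, rcond=None)
    cy, *_ = np.linalg.lstsq(V, y_grid, rcond=None)
    return cx, cy

def apply_distortion_map(x, y, cx, cy):
    """Evaluate the fitted transformation at arbitrary detector positions
    (the actual resampling of the science frames is not shown here)."""
    V = _design(np.ravel(x), np.ravel(y))
    return (V @ cx).reshape(np.shape(x)), (V @ cy).reshape(np.shape(x))

# Hypothetical usage: x_meas, y_meas are the reseau marks measured on the flat
# fields, x_grid, y_grid the 9x17 artificial grid they should map to.
# cx, cy = fit_distortion_map(x_meas, y_meas, x_grid, y_grid)
# xs, ys = apply_distortion_map(x_science, y_science, cx, cy)
```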
The distortion correction and wavelength calibration were applied to the geometrically corrected M87 spectra and, as a check on our error budget, we traced the nuclear continuum emission. We found that the continuum was straight to better than 1 pixels across the whole spectral range and 0.5 pixels if we excluded the low SNR region of the spectrum redward of 4800 Å which was not used in our analysis. The imperfect repositioning of the spectrographic mirror, which moves between flat–field and source exposures, caused shifts between successive spectra in both the spatial and spectral directions. By comparing the internal consistency of the four independent spectra of the nucleus of M87 we have determined that these shifts range from 1 to 4 pixels in the spatial direction and are less than 2 pixels in the dispersion direction. The four spectra were aligned to an accuracy of 0.02 pixels by cross-correlating the flux distribution of the  line and co–added. The relative zero points of the other slit positions cannot be determined because the continuum is too weak to be detected. In the following analysis we will therefore conservatively allow for zero–point shifts in both the spatial and velocity directions of up to 4 pixels. The background emission was subtracted in all the spectra by means of a spline fit along the slit direction after masking out the regions where line or continuum emission is detected. Similarly the continuum under the lines was subtracted with a first order polynomial fit along the dispersion direction. \[sec:impact\] Determination of the slit location ================================================= Given the 02 step of the spatial scan and the slit width of 0063, the “impact parameter”, $b$, the minimum distance between the center of the slit and the nucleus is constrained to be smaller than 01 by our observing strategy . We accurately determine $b$ by comparing the flux measured from each of our three slit positions with the brightness profile derived from a previous FOC,f/96 HST image in the F342W. This filter covers a similar spectral range and includes the dominant (Sec.\[sec:results\]) line in our spectra, . The F342W filter has a width of about 670 Å and the scale of the f/48 spectra along the dispersion direction is 1.78 Å/pixels. To correctly synthesize the spectral energy distribution transmitted by the F342W filter, for each slit we collapsed 376 spectral channels around the  line and then measured the flux by co–adding 30 pixels (09) around the peak in the slit direction. A section of the F342W image, 09 wide and parallel to the slit orientation was extracted. Since the continuum flux measured in our spectra is the spatial average over the slit width, we filtered the extracted f/96 image with a flat-topped rectangular kernel 0063 wide in the direction perpendicular to the slit. Leaving aside for the moment the different point-spread-functions (PSF) of the f/48 and f/96, the resulting brightness profile can be directly compared with the relative fluxes obtained from our spectroscopic measurements (as shown in Fig. \[fig:align\]). The value of the impact parameter which best reproduces the ratio of the fluxes measured in the three slit positions was determined from a least–squares fit to synthetic flux profile derived from the filter observation. As shown in Fig. \[fig:align\] $\chi^2$ has one well defined minimum at $b = 0\farcs07 \pm 0\farcs01$. 
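A schematic version of the impact-parameter fit just described is sketched below. The synthesized F342W brightness profile and the three measured slit fluxes are assumed to be available as arrays (all names and the error handling are illustrative), and the overall flux normalisation is treated as a free, analytically optimised scale factor.

```python
import numpy as np

def chi2_of_b(b_trial, prof_pos, prof_flux, slit_offsets, slit_flux, slit_err):
    """Chi-square of the measured slit fluxes against the synthetic profile for
    a trial impact parameter b_trial (arcsec); the normalisation is free."""
    model = np.interp(b_trial + slit_offsets, prof_pos, prof_flux)
    scale = np.sum(model * slit_flux / slit_err**2) / np.sum((model / slit_err)**2)
    return np.sum(((slit_flux - scale * model) / slit_err) ** 2)

# Hypothetical usage: scan b over the +/-0.1 arcsec range allowed by the
# 0.2 arcsec scan step and locate the minimum, as in Fig. [fig:align].
# slit_offsets = np.array([-0.2, 0.0, 0.2])       # POS1, NUC, POS2
# b_scan = np.arange(-0.10, 0.10, 0.002)
# chi2 = [chi2_of_b(b, prof_pos, prof_flux, slit_offsets, slit_flux, slit_err)
#         for b in b_scan]
# b_best = b_scan[int(np.argmin(chi2))]
```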
To take into account the possible effects of the different PSFs for f/48 and f/96 we degraded the F342W image to the f/48 resolution. Since the f/96 and f/48 PSFs can be approximated by gaussians with FWHM 004 and 008 respectively, we convolved the f/96 image using a circularly symmetric gaussian function with a 007 FWHM and repeated the analysis. As before the $\chi^2$ minimum falls at $0\farcs07 \pm 0\farcs01$ implying that the different PSFs do not significantly affect the derived impact parameter. The positions of the slits with the impact parameter derived above are displayed in Fig. \[fig:slitpos\], overlayed on the +continuum subtracted archival WFPC2 image . \[sec:results\] Results ======================= Extended 3726,3729 emission was detected in all three slit positions and the gray–scale, continuum subtracted  image at NUC is displayed in Fig. \[fig:obscontour\]. At NUC we also detected 4076,4069 Å, 4959,5007 and possibly  emission. Since the  lines fall close to a defect in the image, only the  and  were chosen as being suitable for velocity measurements. The continuum subtracted lines were fitted, row by row, along the dispersion direction with single gaussian profiles using the task LONGSLIT in the TWODSPEC FIGARO package ([@longslit]). In a few cases, at the edges of the emission region where the signal–to–noise ratio was insufficient, the fitting was improved by co-adding two or more pixels along the slit direction. All fits and respective residuals for the  and  lines can be found in Fig. \[fig:allfit\]. In all cases the measured line profiles are well represented by a single gaussian and constant continuum. The resulting central velocities, FWHM’s and line intensities along the slit are plotted in Fig. \[fig:results1\] for the NUC position, and in Fig. \[fig:results2\] for POS1 and POS2. The corresponding continuum distribution along the slit is also shown in Fig. \[fig:results1\]. Within the uncertainties, the velocities derived from  (thin crosses) agree with those obtained from  (filled squares), confirming the integrity of the wavelength calibration. The overall NUC velocity distribution indicates rotation with an amplitude of 1200 km , with a steep quasi–linear central portion between $\pm0\farcs2$ of the continuum peak and flattening at larger radii. Because of the much reduced signal–to–noise of POS1 and POS2 we primarily see only the brightest linear parts of the rotation curves but we do detect a clear turn–over to the South–West of POS1 at an amplitude of about 1000 km . The line intensity profile at NUC increases steeply toward the center but is essentially flat-topped within the central 014. The width of the lines at the three slit positions is significantly larger than the instrumental broadening (Fig. \[fig:allfit\]), even after taking into account the effects of density variations on the wavelengths of the density sensitive doublets (see the discussion below). Furthermore we note that at NUC the FWHM and the continuum peak are shifted by $\simeq$ 006 ($\simeq$ 2 pixels). The significance of both these results will be discussed in Sec. \[sec:profiles\]. The position–velocity diagram of the continuum subtracted  line observed at NUC (Fig. \[fig:obscontour\]) reveals the presence of two emission peaks $\simeq 800$ km/sec apart in velocity, spatially separated by approximately 014. 
The existence of these two peaks implies that the line–emission does not increase monotonically to zero radius but rather that, at a certain distance smaller than 007 from the nucleus ($\simeq 5$pc), the  emission is absent. The impact of density variations -------------------------------- One potential concern for the accuracy of the derived rotation curve is that the observed  line is actually a blend of $\lambda$3726.0 and $\lambda$3728.8 i.e. they are separated by 225 km s$^{-1}$. We have adopted a central wavelength of 3727.15 Å in our analysis. However the doublet is density sensitive and it is important to determine the magnitude of the shift in the central wavelength of the doublet in response to density variations. Using a code kindly provided by Dr. E. Oliva, we computed the line emissivities for the  lines using a five level atom and atomic parameters from a compilation by [@mendoza]. As can be seen in Fig. \[fig:dens\], the ratio between the two lines varies between 0.68 (low density limit $<$ 50) to 2.88 (high density limit $>$) and, consequently, any density–induced velocity shift is less than $\pm$ 45 km . Furthermore, from the archive FOS spectra described below (Sec. \[sec:fosdata\]), the density derived from 6716/6731 ranges from $\simeq$ 200 to $\simeq$ 4300  and this restricts the range of variation to $\pm$ 25 km . Similarly, the presence of an unresolved doublet will affect the measurements of the line widths. The greatest effect is when the lines are narrowest, i.e. when their FWHM is greater/equal to the instrumental broadening ($\simeq$ 430 km ). In that case the FWHM of the doublet can be $\simeq$100 km  broader than that of a single line. When the FWHM is larger than 600 km  the broadening is less than 60 km  i.e. negligible for our data (Fig. \[fig:dens\]). We applied a similar treatment to the density sensitive doublet (Fig. \[fig:dens\]) deriving a central wavelength of 4070.2 Åwith a range of variation of $\pm$ 25 km (atomic data from [@pradhan]). If, as the above density measurements imply, we are in the low density limit for the 4076/4069 doublet then the central wavelength would be shifted to 4070.5 Å, implying an uncertainty of at most 25 km  in our assumed rest wavelength. We conclude that the variations induced by density effects are always within our measurement uncertainties. \[sec:fosdata\] Comparison with archival FOS observations ========================================================= We retrieved the FOS data used by [@fos94] and [@fos96] from the HST archive. The datasets used and the corresponding target names are listed in Table 2, according to their notation, and their positions are compared with those of our slits in Fig. \[fig:focfos\]. Because the FOS data are rather noisy we smoothed them by convolution with a 1.8Å–sigma gaussian, i.e. the FOS spectral resolution, and then measured the velocities of , , ,  and  using single–gaussian fits as shown in Fig. \[fig:fosres\]. When possible  and the two  lines were fitted simultaneously under the constraint that they had the same FWHM. In some cases it was also possible to satisfactorily deblend ,  and the  doublet. The measured ratio of the  doublet implies an electron density which varies between 200  and 4300 . The measured velocities are given in Table 3 and are in acceptable agreement with the values given in Table 1 of [@fos94]. 
The similarity between the velocities obtained from different ionic species indicates that our results are not unduly biased by variations in the ionization conditions in the disk. In Fig. \[fig:comp\] we compare the velocities obtained from our slit position NUC with those obtained from the FOS at POS4 through 6, in the 026 aperture and at POS9 through 11, in the smaller 009 aperture. The plotted uncertainties of the FOS data, typically between 50 and 100 km  in a given dataset, are the internal scatter of the velocities measured in a given aperture. Within the substantially larger uncertainties, the FOS rotation curve is in reasonable accord with our results. It is also clear that our data represent a considerable improvement in both spatial resolution and accuracy in the determination of the rotation curve of the disk. \[sec:fit\] Modeling the rotation curve: a simple but constructive approximation ================================================================================ We now derive the expected velocity measured along the slit for a thin disk in keplerian motion in a gravitational potential dominated by a condensed central source. At this stage we ignore both the finite width of the slit and effects of the PSF which will be considered in the next section. Although the limitations of this approximation are clear, since the angular scale of the region of interest is similar to the size of point–spread–function, several general conclusions can be drawn from this simplified treatment which clarify the behaviour or the more realistic model fits described in Sec. 8. For simplicity we will also refer to the condensed central source as a black hole deferring the reality of this assumption to Sec. 10. Any given point $P$, located on the disk at a radius $R$, has a keplerian velocity $$V(R) = \left(\frac{GM_{BH}}{R}\right)^\frac{1}{2}$$ where $M_{BH}$ is the mass of the black hole. We choose a reference frame such that the X and Y axis, as seen on the plane of the sky, are along the major and minor axis of the disk respectively (see Fig. \[fig:disk\]). In this coordinate system a point $P(X,Y)$ is at a radius $R$ such that $$\label{eq:eq1} X^2+\frac{Y^2}{(\cos i)^2 } = R^2$$ Each point along the slit can be identified by its distance [**s**]{} to the “center” of the slit [**O**]{}, whose distance from the nucleus defines the “impact parameter” $b$. Let $\theta$ be the angle between the slit and the major axis of the disk, i.e. the line of nodes, and define $i$ to be the inclination of the disk with respect to the line of sight. Since $P(X,Y)$ is located on the slit, X and Y are given by $$\label{eq:eq2} X = -b\sin\theta+s\cos\theta \\$$ $$\label{eq:eq3} Y = b\cos\theta+s\sin\theta \\$$ The circular velocity $V(R)$ is directed tangentially to the disk as in Fig. \[fig:disk\] and its projection along the line of sight is then $-V(R)\cos\alpha \sin i$ (the - sign is to take into account the convention according to which blue–shifts result in negative velocities). 
Making the transformation between coordinates on the plane of the disk and the X,Y on the plane of the sky $$\tan\alpha = \frac{Y}{X\cos i}$$ hence, if $V_{sys}$ is the systemic velocity, the observed velocity along the slit is given by $$\label{eq:kepdisk} V = V_{sys}-(GM_{BH})^\frac{1}{2} \frac{(\sin i)\, X} {\left( X^2+\frac{Y^2}{(\cos i)^2}\right)^\frac{3}{4}}$$ A non–linear least squares fit of the model defined in the equation \[eq:kepdisk\], with $M_{BH} (\sin i)^2$, $\theta$, $i$, $b$, $V_{sys}$ and $cpix$ (which defines the origin of the $s$ axis) as free parameters to the observed rotation curve was obtained using simplex optimization code. Since the error bars on the independent variable are not negligible, we took them into account by minimizing the modified norm: $$\label{eq:chisq} \chi^2 = \sum_{i=1}^N \frac{\left(y_i-V(x_i; p_1, ..., p_6)\right)^2} {\Delta y_i^2+\left(\frac{\partial V}{\partial x}\right)_{x_i}^2 \Delta x_i^2}$$ where $y_i$ is the measured velocity at the pixel $x_i$, there are $N$ data points and $p_i$ are the free parameters of the fit. Because of the complexity of the fitting function we also carried out many trial minimizations using different initial estimates for the most critical free parameters, i.e. $i$, $\theta$ and $b$, taken from a large grid of several hundred, evenly spaced values. Many local minima of the $\chi^2$ function were found and we only accepted those solutions with a reduced $\chi^2<2.5$ and an impact parameter, $b$, consistent with that determined in Sec. \[sec:impact\] ($0\farcs06<b<0\farcs08$). Even though there is considerable non–axisymmetric structure in the data, in their original analysis [@fos94] used a value for the inclination, $i=42^\circ\pm5$, determined from isophotal fits to the surface photometry of the disk at radii ranging between 03 and 08 from the nucleus. Unfortunately, from our analysis of the imaging data it is not clear whether this result, obtained on the large scale, is valid at small radii (the inner 03). To determine the inclination on the basis of surface photometry of the emission lines, higher spatial resolution images are needed and this can only be accomplished by HST measurements at UV wavelengths. Even then the bright non–thermal nucleus may dominate the structure of the central region. Indeed from our analysis of the kinematics the inclination is the most poorly constrained parameter, with acceptable values ranging from 47$^\circ$ to 65$^\circ$. Though the other parameters are intrinsically rather well constrained, in Table 4 we therefore show the variation of their allowed values for two inclination ranges. A few points are evident from this analysis: the small angle $\theta$ to the line of nodes ($-5^\circ<\theta<4^\circ$) is a consequence of the apparent symmetry of the two branches of the rotation curve. When the impact parameter is non–null this symmetry can be achieved only if the slit direction is close to that of the line of nodes. The center of the rotation curve (between pixels 22.6 and 22.9) is close to the peak of the continuum distribution along the slit (pixels $\simeq$23–24) as one would expect if the latter indicates the location closest to the nucleus. However one would also expect the point with largest FWHM (pixel 25) to be coincident with it and not to be shifted by $\simeq$ 2 pixels ($\simeq0\farcs06$), as observed. We will return to this issue in the following sections. 
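A minimal numerical sketch of the model rotation curve of equation \[eq:kepdisk\] and of the modified norm of equation \[eq:chisq\] is given below. It assumes offsets in arcseconds (with the adopted scale of 7.3 pc per 0.1 arcsec), velocities in km s$^{-1}$ and masses in solar units; the data arrays `s_obs`, `v_obs`, `ds`, `dv` are placeholders, PSF and slit-width smearing (treated in the next section) are not included, and the choice of simplex optimiser is left open.

```python
import numpy as np

G_KMS = 4.301e-3       # G in pc Msun^-1 (km/s)^2
PC_PER_ARCSEC = 73.0   # adopted distance of 15 Mpc: 0.1 arcsec ~ 7.3 pc

def v_model(s, m_bh, theta, inc, b, v_sys):
    """Projected velocity (km/s) along the slit for a thin Keplerian disk,
    equation [eq:kepdisk].  s and b in arcsec, theta and inc in degrees,
    m_bh in solar masses."""
    th, i = np.radians(theta), np.radians(inc)
    x = (-b * np.sin(th) + s * np.cos(th)) * PC_PER_ARCSEC   # pc, major axis
    y = ( b * np.cos(th) + s * np.sin(th)) * PC_PER_ARCSEC   # pc, minor axis
    r32 = (x**2 + (y / np.cos(i))**2) ** 0.75
    return v_sys - np.sqrt(G_KMS * m_bh) * np.sin(i) * x / r32

def chi2(params, s_obs, v_obs, ds, dv, h=1e-3):
    """Modified chi-square of equation [eq:chisq]: the positional error is
    folded in through the local slope dV/ds (finite difference)."""
    v = v_model(s_obs, *params)
    slope = (v_model(s_obs + h, *params) - v_model(s_obs - h, *params)) / (2 * h)
    return np.sum((v_obs - v) ** 2 / (dv ** 2 + (slope * ds) ** 2))

# Hypothetical usage with a simplex optimiser, e.g.:
# from scipy.optimize import minimize
# res = minimize(chi2, x0=[2e9, 0.0, 50.0, 0.07, 1250.0],
#                args=(s_obs, v_obs, ds, dv), method='Nelder-Mead')
```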
The systemic velocity is in reasonable agreement with the estimate of 1277 km  determined by van der Marel (1994) when one takes into account the uncertainty on the zero point of the wavelength calibration. The two fits which have the minimum and maximum of the acceptable $\chi^2$ values are shown in Fig. \[fig:bestfit1\] as solid and dotted lines, respectively, and the corresponding values of the parameters are: $cpix=22.7$, $b$=008, $M_{BH} (\sin i)^2$=1.73, $\theta=0.7^\circ$, $i=49^\circ$ $ V_{sys}$=1204 km ($\chi^2$=1.55) and $cpix=22.7$, $b$=006, $M_{BH} (\sin i)^2$=1.68, $\theta=-5.1^\circ$, $i=60^\circ$ $ V_{sys}$=1274 km ($\chi^2$=1.73). Note that the error bars in the plot of the residuals are the square roots of the denominators in equation \[eq:chisq\]. Taking into account all possible fits which are compatible with the data, the preliminary estimated value for the projected mass is $M_{BH} (\sin i)^2=1.7^{+0.2}_{-0.1}{\mbox{$\times 10^{9}$}}$and $M_{BH} =2.7\pm0.5{\mbox{$\times 10^{9}$}}$ with the allowed range of variation in $i$. In later sections we will derive a more accurate value for this important parameter. \[sec:fitpsf\]The smearing effects of the PSF ============================================= While the results of the preceding section give a reasonably satisfactory fit to the observed rotation curve by assuming a simple keplerian model, there are two significant issues which need to be addressed. Inspection of Fig. \[fig:results1\] reveals that: i) the lines are broad in the inner region (FWHM$>1000$ km ) and ii) the broadest lines do not occur at the center of rotation but at a distance of $\simeq 0\farcs06$ (2 pixels) from it. Taken in conjunction these facts may imply that the gas disk is not in keplerian rotation which could potentially invalidate any derived mass estimate. So far we have ignored the combined effects of the PSF, the finite slit–width and the intrinsic luminosity distribution of the gas. In this section we include these effects in our analysis and this enables us to reconcile these worrying features of the gas kinematics with the keplerian disk. To take into account the effects of the f/48 PSF and the finite slit size we must average the velocities using the luminosity distribution and the PSF as weights. To compute the model curve we chose the reference frame described by $s$ and $b$ i.e. the coordinate along the slit and the impact parameter. With this choice the model rotation curve $V_{ps}$ is given by the formula: $$V_{ps}(S) = \frac{\int_{S-\Delta S}^{S+\Delta S} ds \int_{B-h}^{B+h} db \int\int_{-\infty}^{+\infty} db^\prime ds^\prime V(s^\prime,b^\prime) I(s^\prime,b^\prime) P(s^\prime-s, b^\prime-b) } { \int_{S-\Delta S}^{S+\Delta S} ds \int_{B-h}^{B+h} db \int\int_{-\infty}^{+\infty} db^\prime ds^\prime I(s^\prime,b^\prime) P(s^\prime-s, b^\prime-b) }$$ where $V(s^\prime,b^\prime)$ is the keplerian velocity derived in eq. \[eq:kepdisk\], $I(s^\prime,b^\prime)$ is the intrinsic luminosity distribution of the line, $P(s^\prime-s, b^\prime-b)$ is the spatial PSF of the f/48 relay along the slit direction. $B$ is the impact parameter (measured at the center of the slit) and $2h$ is the slit size, S is the position along the slit at which the velocity is computed and $2\Delta S$ is the pixel size of the f/48 relay. For the PSF we have assumed a gaussian with 008 FWHM i.e. 
$$P(s^\prime-s, b^\prime-b) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp \left( -\frac{1}{2}\frac{(s^\prime-s)^2}{\sigma^2} -\frac{1}{2}\frac{(b^\prime-b)^2}{\sigma^2} \right)$$ The consequences of this more realistic approach to modeling the rotation curve are illustrated in Fig. \[fig:rotpsf\] for some extreme choices of both the luminosity distribution (power law or exponential profile) and the geometric parameters of the disk. Motivated by the presence of two peaks in the position velocity of figure \[fig:obscontour\] we also included a case in which the line emission is absent in the very center of the disk. In each case we show the importance of the convolution with the spatial PSF and the weighted average with luminosity profile and slit width. In general the dominant effect on the 2D velocity field is the convolution with the spatial PSF, and since the slit width is narrower than the PSF it has little or no effect in modifying the expected rotation curve. The effects of the luminosity distribution are important only when the curve is strongly asymmetric with respect to the center of rotation i.e. when the impact parameter is not null and the angle with the line of nodes is much greater than zero. These effects are larger for steeper luminosity distributions and lead to large velocity excursions from the PSF–convolved velocity field at the turn–over radii (see Fig. \[fig:rotpsf\], right panel). Fortunately, these extreme cases can be eliminated from further discussion because they are not a good representation of the observed rotation curve for M87. In the cases of interest, the differences at the turn–over radii are always less than $\simeq$100 km  and neglecting the weighting of the luminosity distribution can result in an over–estimate of the mass of up to 0.5, still within the formal uncertainties of the fit derived below. The weak dependence of the model rotation curve on the luminosity distribution is important because the true luminosity distribution for  is unknown. The presence of a “hole” at the center of the luminosity distribution whose size is comparable with the FWHM of the PSF has little effect on the rotation curve but, as we shall see in Sec. 9, holes do have an effect on the width of the line profiles. Using this modified fitting function under the same basic assumptions described in Sec. \[sec:fit\] leads to the parameters given in Table 4. The errors quoted are conservative as they are based on the mean absolute deviation of values obtained from the histogram of the local minima. As a sanity check, we have repeated the above fitting procedure taking into account the luminosity profiles plotted in Fig. \[fig:rotpsf\] i.e. exponential and power law dependences on radius (see Sec. \[sec:profiles\]). We found that the luminosity weighting introduces no significant change in the loci of acceptable solutions. The PSF smearing has three effects on the model fits, firstly, as one would expect, the required black hole mass is increased to compensate for the lowering of the velocity amplitudes. In addition the inclination is more poorly constrained and larger angles with the line of nodes become admissible. However, taking into account the POS1 and POS2 data restricts the inclination to less than $\simeq65^\circ$. Three representative fits with acceptable values of the reduced $\chi^2$ are shown in Fig. 
\[fig:bestfit2\] and have the corresponding parameters: fit A: $cpix=23.0$, $b$=008, $M_{BH} (\sin i)^2$=1.91, $\theta=-9^\circ$, $i=51^\circ$ $ V_{sys}$=1290 km ($\chi^2$=2.08), fit B: $cpix=22.7$, $b$=008, $M_{BH} (\sin i)^2$=1.93, $\theta=1^\circ$, $i=52^\circ$ $ V_{sys}$=1203 km ($\chi^2$=1.90) and fit C: $cpix=22.5$, $b$=0085, $M_{BH} (\sin i)^2$=2.00, $\theta=7^\circ$, $i=50^\circ$ $ V_{sys}$=1146 km ($\chi^2$=1.82). The main difference between the fits is in the sign of $\theta$ since the analysis of the rotation curve alone cannot distinguish between them but, as described in the next section (9), this ambiguity can be resolved by analyzing the 2D position–velocity diagram. Regardless of which of the above effects we include in the fit, the residuals for the outermost points ($R>0\farcs2$) still show a systematic behaviour which indicates a velocity decrease steeper than the expected $R^{-0.5}$ keplerian law. Unfortunately the external points are also those with the worse SNR hence this issue cannot be investigated further with the available data, but a possible explanation might be found in a slight warping of the disk at large radii. Such warping, if present, does not affect the estimate of the central mass. The comparison between the predictions of models A,B and C and the velocities observed at POS1 and POS2 are shown in Fig. \[fig:fitout\]. Because of the uncertainty in the zero–points of both velocity and position along the slit, which we described in Sec. 3, the off–nuclear data do not provide as good a constraint on the models as one might at first expect. This comparison shows that all three models lead to velocity gradients compatible with the data, though as presented, the data have been arbitrarily shifted to match model A. If it were not for the zero–point uncertainty the POS1 and POS2 data would allow us to unambiguously choose between the three models. Taking into account all possible fits which are compatible with the data, the estimated value for the mass is $M_{BH} (\sin i)^2=2.0^{+0.5}_{-0.4}{\mbox{$\times 10^{9}$}}$and $M_{BH} = (3.2\pm 0.9){\mbox{$\times 10^{9}$}}$with $i=51^\circ$ and its allowed range of variation. \[sec:profiles\] Analysis of the line profiles ============================================== The line profiles are given by: $$\Phi(v; S) = \frac{\int_{S-\Delta S}^{S+\Delta S} ds \int_{B-h}^{B+h} db \int\int_{-\infty}^{+\infty} db^\prime ds^\prime \phi(v-V(s^\prime,b^\prime)) I(s^\prime,b^\prime) P(s^\prime-s, b^\prime-b) } { \int_{S-\Delta S}^{S+\Delta S} ds \int_{B-h}^{B+h} db \int\int_{-\infty}^{+\infty} db^\prime ds^\prime I(s^\prime,b^\prime) P(s^\prime-s, b^\prime-b) }$$ where the symbols used are the same than those in the preceding equations and $\phi(v-V)$ is the intrinsic line profile. If the motions are purely keplerian and turbulence is negligible or less than the instrumental FWHM this is simply a gaussian with a FWHM=430 km . Rather than attempting to carry out a full model fit to the line profiles, which would require us to know the true surface brightness distribution of the line within the unresolved core, we proceeded by computing the expected line profiles using both an exponential and a power law dependence on radius ($\Sigma(R)\propto\exp(-(R/0\farcs1))$ and $\Sigma(R)\propto R^{-2}$ respectively). The scaling parameters were chosen to be consistent with the observed luminosity profile along the slit. As noted at the end of Sec. 
\[sec:results\], the existence of a double peak in the observed  position–velocity diagram of might imply the presence of a central hole in the line emission. To take this into account, $\Sigma(R)$ is then multiplied by a “hole function” which forces to zero intensity all the points with R less than the radius of the hole. To reproduce the observed double peaked structure the radius of the hole must be larger than $\simeq0\farcs03$ and, moreover, models with smaller radii predict line widths broader than those observed. Models with hole radii larger than 005 were discarded since the predicted line widths at the center are much smaller than observed. Since the hole in the emissivity distribution has a smaller radius than the PSF, the central dark mass condensation might be either point-like or distributed within the hole. The model luminosity profiles of the line along the slit derived for the same three representative models are compared with the observed  light profile in Fig. \[fig:lumslit\], and are all compatible with it. As shown above, the presence of the “hole” in the emission does not significantly alter the rotation curve, as shown above, but produces changes in the line profiles. In Fig. \[fig:modcont\] we compare the observed and model  intensity contours derived using the parameters from fits A, B and C, the exponential luminosity distribution and a hole radius of 005. A is the model which best agrees with the data. Models B and C, with $\theta\simeq 0^\circ$ and $\theta> 0^\circ$ respectively , do not reproduce the observed position of the emission peaks. Thus model A is the most satisfactory of the three test cases. From the computed 2D position–velocity diagrams we can infer that i) the choice of the intensity distribution, as long as it is radially symmetric does not significantly alter the results; ii) the presence of two peaks is indeed the result of a hole in the luminosity profile; iii) the two peaks are shifted with respect to the center of rotation if the inclination angle $\theta$ is different from zero; iv) the shift is in the direction of the observations only if $b$ and $\theta$ have opposite signs and, since $b>0$ as shown earlier, $\theta$ must be negative; v) the presence of the hole is also required to prevent the line widths in the center to be broader than those observed. In Fig. \[fig:modprof\] we plot the predicted line profiles compared with those observed in the central pixels. The solid and dotted lines are the profiles derived with the exponential and power law luminosity distributions respectively. The model profiles have been scaled and re–grided to match the pixelation of the actual data. The agreement is remarkable especially since this is not a direct fit to the profiles. The keplerian model fully reproduces the observed line widths and the different choices of the luminosity distributions do not alter this result. Furthermore the model naturally accounts for the shift between the position at which the FWHM is maximum and the point of minimum distance from the nucleus, i.e. the peak of the continuum distribution. This is simply a consequence of the non–null impact parameter and angle between the slit and the line of nodes. In summary a thin–disk in keplerian motion around a central black hole explains all the observed characteristics. This fact strengthens the reliability of the derived value for the BH mass of $M_{BH} = (3.2\pm 0.9){\mbox{$\times 10^{9}$}}$. The emission of   is absent in the regions closest to the black hole ($R<3.5$pc). 
Physically this might be due to either the gas being fully ionized or to the gas having been blown away by the interaction with the jet. Can the mass be distributed? ============================ In the above sections we have demonstrated that $(3.2\pm 0.9){\mbox{$\times 10^{9}$}}$ are required to explain the observed rotation curve and, so far, we have assumed that this mass is point-like. To investigate if more extended mass distributions are consistent with the data we have fitted the rotation curves derived with a Plummer Potential (e.g. [@binney]) with increasing core radii. Fitting the NUC data with a core radius larger than 005 and keeping all the other parameters free leads to solutions which tend to make the impact parameter 0. Such fits are not consistent with the observations for two reasons: i) the impact parameter has a value of 007 as discussed in Sec. \[sec:impact\], ii) decreasing the impact parameter of the slit at the NUC position increases that of the slit at POS1 hence the models are not able to reproduce the spatial structure of the velocity field even within the scope of our limited off-nuclear data. Consequently we fit the data by fixing the impact parameter in the range 006-008. The minimum $\chi^2$ which can be obtained increases with increasing core radius. Moreover to reproduce the observed rotation curve at NUC the total mass increases. In Fig. \[fig:plummer\] we plot the minimum $\chi^2$ as a function of the core radius (solid line) for those fits which reproduce the velocity field at POS1 and POS2 and whose total mass is consistent with the limit of  implied by the large scale stellar dynamical measurements ([@vandermarel]). Acceptable fits to the rotation curve can be found provided the core radius is less than $0\farcs13$. However the observed radial variation of the line FWHM provides a more stringent constraint. The dashed line in Fig. \[fig:plummer\] represents the maximum FWHM of the lines which can be expected for a given core radius (assuming an exponential luminosity distribution) and the shaded area represents the region which matches the observations. As can be clearly seen, we must adopt mass distributions with core radii smaller than 007 to match the observed line widths. Adding a central hole to the luminosity distribution only compounds the problem of matching the FWHM. Such a small core radius of course places $\simeq$ 60% of the mass at radii smaller than that of our PSF. As we described in Sec. 9 the finite PSF in conjunction with a central hole in the line emissivity distribution, even in the pure black hole model, would allow the mass to be distributed within the PSF. If the estimated mass were uniformly distributed in a sphere with a 005 radius ($\simeq3.5$pc) the mean density would be $\simeq2{\mbox{$\times 10^{7}$}}$pc$^{-3}$ which is greater than the highest value encountered in the collapsed cores of galactic globular clusters (NGC 6256 and 6325, cf. Table II of [@gc]). The [*total*]{} flux estimated in the $5\times5$ pixel$^2$ nuclear region (028$\times$028 $\simeq$ 16$\times$16 pc$^2$) from F547M,F555W WFPC2 archival images is $\simeq$ 5.3 erg cm$^{-2}$  Å$^{-1}$ which corresponds roughly to 3.2 at 15Mpc in the V band. Consequently the mass-to-light ratio in the V band is $M/L_V\simeq$110 ${\mbox{$\rm M_\odot$}}/L_{V\odot}$ where $L_{V\odot}$ is the V luminosity of the sun ($L_{V\odot}=0.113{\mbox{$\rm L_\odot$}}$). Such mass-to-light ratio is uncomfortably high; indeed from stellar population synthesis $M/L_V<20$ (e.g. [@bruzual]). 
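For reference, the mean density quoted above follows from a one-line estimate (numbers rounded), consistent with the $\simeq2{\mbox{$\times 10^{7}$}}$pc$^{-3}$ figure: $$\bar\rho \simeq \frac{M_{BH}}{\tfrac{4}{3}\pi R^{3}} \simeq \frac{3.2\times10^{9}\ {\rm M_\odot}}{\tfrac{4}{3}\pi\,(3.5\ {\rm pc})^{3}} \approx 1.8\times10^{7}\ {\rm M_\odot\,pc^{-3}}.$$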
The above considerations suggest that the mass condensation in the central $R<5$pc of the nucleus of M87 cannot be a supermassive cluster of “normal” evolved stars. If it is not a supermassive black hole, it must nonetheless be quite an “exotic” object such as a massive cluster of neutron stars or other dark objects. A more extensive discussion of such possibilities has been given in [@vandermarel97]. We concur with their general conclusion that these alternatives are both implausible and contrived. Summary and Conclusions ======================= We have presented the results of HST FOC f/48 high spatial resolution long–slit spectroscopy of the ionized circumnuclear gas disk of M87, at three spatially separated locations 02 apart. We have analyzed these data and, in particular, the  emission lines and derived rotation curves which extend to a distance of $\sim$1 from the nucleus. Within the uncertainties, these data are insensitive to density variations over a broad range of values which are larger than the constraints on density derived from the FOS archive data. Our rotation curve is compatible with that obtained form the archival FOS data, within their substantially larger intrinsic uncertainties. Furthermore we have verified that this applies to all emission lines (, , ,  and ) measured with FOS which implies that we have not been misled by ionization conditions of the gas. To analyze our data we have first constructed a simple analytical model for a thin keplerian disk around a central mass condensation, and fitted the model function to the observed rotation curve. Since the number of free parameters is large we carried out trial minimization of the residual errors by using different estimates for the values of the key parameters. This procedure allowed us to construct a series of self–consistent solutions as well as to highlight the sensitivity of the final solutions to the different choices of initial estimates for the free parameters. Using this simple model we derived two extreme sets of self–consistent solutions which provide good fits to the observational data. There is marginal evidence for a warp of the disk in the outermost ($R>0\farcs2$) points but this has little effect on our mass estimate. We then conducted a more realistic analysis incorporating the finite slit width, the spatial PSF and the intrinsic luminosity distribution of the gas. This analysis showed that a thin keplerian disk with a central hole in the luminosity function provides a good match to our data. We presented three representative models (A, B and C) which encompass the range of variation of the line of nodes and used these to compute the line profiles and 2D position–velocity diagrams for the  lines. Model A best reproduces the observations, and the resulting parameters of the disk are $i=51^\circ$, $\theta=-9^\circ$, $V_{sys}=1290$ km and a corresponding mass of $(3.2\pm0.9){\mbox{$\times 10^{9}$}}$, where the error in the mass allows for the uncertainty of each of the parameter (Tab. 4). We showed that this mass must be concentrated within a sphere of less than 3.5 pc and concluded that the most likely explanation is a supermassive black hole. To make further progress there are a number of possibilities the easiest of which is to make a more comprehensive and higher signal-to-noise 2D velocity map of the disk to better constrain its parameters. 
We note in passing that recently there has been considerable progress in modeling warped disks ([@pringle], [@liviob]) and this treatment could be applied to such improved data to investigate the origin of the apparent steeper than keplerian fall off in rotation velocity beyond a radius of 02 that we alluded to above. The biggest limitation of the present data is that, even by observing with HST at close to its optimal resolution at visible wavelengths, some of the important features of the disk kinematics are subsumed by the central PSF. Until a larger space based telescope becomes available, the best we can do is to study the gas disk in Ly$\alpha$ and gain the Rayleigh advantage in resolution by moving to the UV. This approach may run into difficulties because of geocoronal Ly$\alpha$ emission and the effects of obscuration. Nevertheless this may be the only way to proceed because of the difficulty of detecting the high velocity wings which characterize the stellar absorption lines in the presence of a supermassive black hole. A.M. acknowledges partial support through GO grant G005.44800 from Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. A.C. acknowledges support from the STScI visitor program. We thank E. Oliva for kindly providing his compilation of atomic parameters and his code to derive line emissivities. We thank Stefano Casertano Massimo Stiavelli and Roeland van der Marel for stimulating discussions and suggestions which improved the analysis and Mario Livio for a careful reading of the manuscript. We thank Robert Jedrzejewski, Mark Voit and Dorothy Fraquelli for their assistance during the observations. We thank the anonymous referee and the scientif editor, Dr. Greg Bothun, for useful comments and suggestion which improved this paper. Antonucci R., 1993, Ann. Rev. , 31, 473-521 Binney J. and Mamon S., 1982, , 200, 361. Binney J. and Tremaine G. A., 1987, “Galactic Dynamics”, Princeton Series in Astrophysics, P.U.P., p. 42 Blandford R.D., 1991, “Physics of AGN”, Proceedings of Heidelberg Conference, Springer-Verlag, eds. W.J. Duschl and S.J. Wagner, p. 3 Bruzual G.A., 1995, “From Stars to Galaxies: the Impact of Stellar Physics on Galaxy Evolution”, Proceedings of Crete Conference, eds. C. Leitherer, U. Fritze-von Alvensleben and J. Huchra, ASP Conf. Series, vol. 98, p. 14 Cai W., Pradhan A.K., 1993, , 88, 329 Crane P., Stiavelli M., King I.R., Deharveng J.M., Albrecht R., Barbieri C., Blades J.C., Boksemberg A., Disney M.J., Jakobsen P., 1993, , 106, 1371 de Vaucouleurs G., de Vaucouleurs A., Corwin JR. H.G., Buta R.J., Paturel G., and Fouque P., 1991, “Third Reference Catalogue of Bright Galaxies”, Version 3.9 Dressler A., Richstone D.O., 1990, , 348, 120 Duncan M.J. and Wheeler J.C., 1980 , 237, L27 Ferrarese L., van den Bosch F.C., Ford H.C., Jaffe W., OConnell R.W. , 1994, , 108, 1598 Ford H.C., Harms R.J., Tsvetanov Z.I., Hartig G.F., Dressel L.L., Kriss G.A., Bohlin R.C., Davidsen A.F., Margon B., Kochhar A.K., 1994, , 435, L27 Ford H.C., Tsvetanov Z.I., Hartig G.F., Kriss G.A., Harms R.J., Dressel L.L., 1996, “Science with the HST – II”, Eds. P. Benvenuti, F. Macchetto and E. Schreier, p.192 Harms R.J., Ford H.C., Tsvetanov Z.I., Hartig G.F., Dressel L.L., Kriss G.A., Bohlin R.C., Davidsen A.F., Margon B., Kochhar A.K., 1994, , 435, L35 Jaffe W., Ford H.C., Ferrarese, L., van den Bosch, F.C., OConnell R.W. , 1993, Nature, 364, 213 Jarvis M.,Peletier R. 
F., 1997, , 247, 315 Livio B. J., Xu C., 1997, , in press Livio M., Pringle J.E., 1997, , submitted Macchetto F., 1996a, IAU Symposium 175, “Extragalactic Radio Sources”, Bologna (Italy), Eds. R. Ekers et al., pp. 195–200 Macchetto F., 1996b, “Science with the HST – II”, Paris, Eds. P. Benvenuti, F. Macchetto and E. Schreier, p. 394 Magorrian J., Tremaine S., Gebhardt K., Richstone D., Faber S., 1996, , 189, 111.06 Mendoza C., 1983, IAU Symposium 103, “Planetary Nebulae”, ed. D.R. Flower (Dordrecht; Reidel), p. 143 Nota A., Jedrzejewski R., Hack W., 1995, [*Faint Object Camera Instrument Handbook Version 6.0*]{}, Space Telescope Science Institute Pringle J.E., 1996, , 281, 357 Pryor C., Meylan G., ASP Conference Series Vol. 150, “Structure and Dynamics of Globular Clusters”, eds. S.G. Djorgovski and G. Meylan, p. 357 Sargent W.L.W., Young P.J., Boksenberg A., Shortridge K., Lynds C.R., Hartwick F.D.A., 1978, , 221, 731 Young P.J., Westphal J.A., Kristian J., Wilson C.P., Landauer F.T., 1978, , 221, 721 Wilkins T.W. and Axon D.J., 1992, in Astronomical data analysis software and systems I, Ast. Soc. Pac. Conf. Ser. 25, p. 427 van der Marel R.P., 1994, , 270, 271 van der Marel R.P., de Zeeuw P.T., Rix H.W., Quinlan G.D., 1997, Nature, 385, 610 [^1]: Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555 and by STScI grant GO-3594.01-91A
--- author: - 'Jonathan Ben-Artzi [^1]' - 'Daniel Marahrens [^2]' - 'Stefan Neukamm [^3]' title: 'Moment bounds on the corrector of stochastic homogenization of non-symmetric elliptic finite difference equations' --- #### Abstract. We consider the corrector equation from the stochastic homogenization of uniformly elliptic finite-difference equations with random, possibly non-symmetric coefficients. Under the assumption that the coefficients are stationary and ergodic in the quantitative form of a Logarithmic Sobolev inequality (LSI), we obtain optimal bounds on the corrector and its gradient in dimensions $d \geq 2$. Similar estimates have recently been obtained in the special case of diagonal coefficients making extensive use of the maximum principle and scalar techniques. Our new method only invokes arguments that are also available for elliptic systems and does not use the maximum principle. In particular, our proof relies on the LSI to quantify ergodicity and on regularity estimates on the derivative of the discrete Green’s function in weighted spaces. In the critical case $d=2$ our argument for the estimate on the gradient of the elliptic Green’s function uses a Calderón-Zygmund estimate in discrete weighted spaces, which we state and prove. Introduction ============ We study the *modified corrector equation* $$\label{eq:1} \frac{1}{T}\phi_T+\nabla^*(a\nabla\phi_T)=-\nabla^*(a\xi)\qquad\text{in }{\mathbb{Z}}^d,\ d\geq2,$$ which is a discrete elliptic finite-difference equation for the real valued function $\phi_T$, called the *modified corrector*. [[As we explain below, ]{}]{}it arises in stochastic homogenization. The symbols $\nabla$ and $\nabla^*$ denote the discrete (finite-difference) gradient and the negative divergence, see Section \[S:FW\] below for the precise definition. In the modified corrector equation $T$ denotes a positive “cut-off” parameter (which we think of to be very large), and $\xi\in{\mathbb{R}}^d$ is a vector, fixed throughout this paper. We consider with a *random, uniformly elliptic* field of coefficients $a:{\mathbb{Z}}^d\to{\mathbb{R}}^{d\times d}$. To be precise, for a fixed constant of ellipticity $\lambda>0$ we denote by $\Omega_0$ those matrices $a_0\in{\mathbb{R}}^{d\times d}$ that are uniformly elliptic in the sense that $$\label{ass:ell} \forall v\in{\mathbb{R}}^d\,:\qquad v\cdot a_0v\geq \lambda|v|^2\ \ \text{ and }\ \ |a_0v|\leq|v|,$$ and define the set of admissible coefficient fields $$\Omega:=\Omega_0^{{\mathbb{Z}}^d}=\{\,a:{\mathbb{Z}}^d\to\Omega_0\,\}.$$ In this paper we derive optimal bounds for finite moments of the modified corrector and its gradient, under the assumption that the coefficients are distributed according to a *stationary and ergodic* law on $\Omega$, where ergodicity holds in the *quantitative* form of a *Logarithmic Sobolev Inequality* (LSI), see Definition \[def:LSI\] below. Our main results are presented in Theorems \[T1\] and \[T2\] below. For easy reference, let us state them already here, somewhat informally. [[Throughout the paper,]{}]{} we write ${\left\langle \cdot \right\rangle}$ for the expected value [[associated to the law on $\Omega$]{}]{}. [**The first result**]{} concerns a bound on all moments of the gradient of the corrector. Under the assumptions of stationarity and LSI, we have for all $1\leq p<\infty$ and $T\geq 2$ that $$\langle |\nabla\phi_T(0) + \xi|^{2p} \rangle \le C |\xi|^{2p},$$ where the constant $C$ is independent of $T$. 
(Note that here and throughout the paper the constant “2” in “$T\geq 2$” has no special meaning. In fact, since we are interested in the behavior $T\uparrow\infty$, we could replace “$2$” with any number greater than $1$). [**The second result**]{} is a bound on the corrector itself. Under the same assumptions (even under a slightly weaker assumption than LSI, see Theorem \[T2\] below), we have that $$\langle |\phi_T(0)|^{2p} \rangle \le C {\left\langle |\nabla \phi_T(0)+\xi|^{2p} \right\rangle}\times \begin{cases} (\log T)^p&\text{for }d=2,\\ 1&\text{for }d>2. \end{cases}$$ These estimates are optimal, even in dimension $d=2$ where we recover the optimal logarithmic rate of divergence of the moment of $\phi_T$. While the first result is relatively easy to prove, the argument for the second result is substantially harder and is the main purpose of our paper. Let us emphasize that the coefficients $a$ are not assumed to be symmetric or even diagonal. Thus, the modified corrector equation in general *does not enjoy a maximum principle*; this constitutes a major difference from previous works where the maximum principle played a major role and exclusively the case of diagonal coefficients was studied, see e.g. [@GO1; @GO2; @GNO1]. In fact, the method presented in this paper only relies on arguments that are also available in the case of elliptic systems. The extension of our findings to discrete systems, in particular a discrete version of linear elasticity, is work in progress. Very recently, Bella and Otto considered in [@Bella-Otto-14] *systems* of elliptic equations (on ${\mathbb{R}}^d$) with periodic (but still random) coefficients. As a main result, they obtain moment bounds on the *gradient* of the corrector with the help of an argument that avoids the maximum principle and even the use of Green’s functions. Still, the derivation of moment bounds on the *corrector itself* – which is the main purpose of our paper – remains open. [**Relation to stochastic homogenization.**]{} The modified corrector equation appears in stochastic homogenization: For ${\varepsilon}>0$ and $a\in\Omega$ distributed according to ${\left\langle \cdot \right\rangle}$, we consider the equation $$\label{eq:13} \nabla^*(a\nabla u^{\varepsilon})={\varepsilon}^2f({\varepsilon}\cdot)\qquad\text{in }{\mathbb{Z}}^d.$$ For simplicity we suppose that the right-hand side $f:{\mathbb{R}}^d\to{\mathbb{R}}$ is smooth, compactly supported, deterministic and has zero spatial average, so that this equation admits a unique, decaying solution $u^{\varepsilon}(a;\cdot):{\mathbb{Z}}^d\to{\mathbb{R}}$. As shown in [@Papanicolaou-Varadhan-79; @Kozlov-79; @Kozlov-87; @Kunnemann-83], in the homogenization limit ${\varepsilon}\downarrow 0$ the rescaled solution $u^{\varepsilon}(a;\tfrac{\cdot}{{\varepsilon}})$ converges for almost every $a\in\Omega$ to the unique decaying solution $u_{\hom}:{\mathbb{R}}^d\to{\mathbb{R}}$ of the homogenized equation $$-\operatorname{div}(a_{\hom}\nabla u_{\hom})=f\qquad\text{in }{\mathbb{R}}^d.$$ Here $a_{\hom}\in\Omega_0$ is deterministic and determined by the formula $$\label{eq:hom-formula} e_i\cdot a_{\hom}e_j=\lim\limits_{T\uparrow\infty}{\left\langle (e_i+\nabla\phi_{T,i}(0))\cdot a(0)(e_j+\nabla\phi_{T,j}(0)) \right\rangle},$$ where $\phi_{T,j}$ is the solution to the modified corrector equation with $\xi=e_j$. Let us comment on the appearance of the limit as $T\uparrow\infty$ in this formula.
Formally, and in analogy to periodic homogenization, we expect that $$e_i\cdot a_{\hom}e_j= {\left\langle (e_i+\nabla\phi_{i}(0))\cdot a(0)(e_j+\nabla\phi_{j}(0)) \right\rangle},$$ where $\phi_i$ is a solution to the *corrector equation* $$\label{eq:corr} \nabla^*(a(\nabla\phi_i+e_i))=0\qquad\text{in }{\mathbb{Z}}^d,$$ that is *stationary* in the sense of $$\label{LMC:1a} \phi_i(a;x+z)=\phi_i(a(\cdot+z);x)\qquad\text{${\left\langle \cdot \right\rangle}$-almost every }a\in\Omega\text{ and all }x,z\in{\mathbb{Z}}^d.$$ Furthermore, a formal calculation suggests the two-scale expansion $$\label{eq:11} u^{\varepsilon}\approx u_{\hom}({\varepsilon}\cdot)+{\varepsilon}\sum_{j=1}^d\phi_j\partial_ju_{\hom}({\varepsilon}\cdot).$$ In the case of deterministic, periodic homogenization, it suffices to solve on the reference torus of periodicity and existence essentially follows from Poincaré’s inequality on the torus. In the stochastic case, the corrector equation  has to be solved on the infinite space ${\mathbb{Z}}^d$ subject to the stationarity condition . [[Since this is not possible in general, the corrector equation is typically regularized by adding the zeroth-order term $\frac{1}{T}\phi_i$ with parameter $T\gg 1$.]{}]{} In fact this was already done in the pioneering work of Papanicolaou and Varadhan [@Papanicolaou-Varadhan-79] and leads to the modified corrector equation , which in contrast to , admits for all $a\in\Omega$ a unique bounded solution $\phi_T(a;\cdot)\in\ell^\infty({\mathbb{Z}}^d)$ that automatically is stationary, see Lemma \[LMC\] below. While simple energy bounds, cf. , make it relatively easy to pass to the regularization-limit $T\uparrow\infty$ on the level of $\nabla\phi_T$ (and thus in the homogenization formula ), it is difficult, and in general even impossible, to do the same on the level of $\phi_T$ itself. For similar reasons (and in contrast to the periodic case), it is difficult to *quantify* errors in stochastic homogenization, such as the homogenization error $u_{\varepsilon}-u_{\hom}$ [[or the expansion .]{}]{} [**Previous quantitative results and novelty of the paper.**]{} For periodic homogenization the quantitative behavior of [[and the expansion ]{}]{} is reasonably well understood (e.g. see [@Avellaneda-Lin-87; @Allaire-Amar-99; @Gerard-Varet-12]). [[In the stochastic case, due to the lack of compactness, the quantitative understanding of is less developed and in most cases only suboptimal estimates are obtained, see [@Yurinskii-76; @Naddaf-Spencer-98; @Conlon-Naddaf-00; @Conlon-Spencer-13; @Caputo-Ioffe-03; @Bourgeat-04; @Armstrong-Smart-14].]{}]{} [[In particular, the first quantitative result is due to Yurinskii [@Yurinskii-76] who proved an algebraic rate of convergence (with an suboptimal exponent) for the homogenization error $u_{\varepsilon}-u_{\hom}$ in dimensions $d>2$ for algebraically mixing coefficients. For refinements and extensions to dimensions $d\geq 2$ we refer to the inspiring work by Naddaf and Spencer [@Naddaf-Spencer-98], and the recent works by Conlon and Naddaf [@Conlon-Naddaf-00] and Conlon and Spencer [@Conlon-Spencer-13]. Most recently, Armstrong and Smart [@Armstrong-Smart-14] obtained the first result on the homogenization error for the stochastic homogenization of convex minimization problems. Their approach, which builds up on ideas of Avellaneda and Lin [@Avellaneda-Lin-87], substantially differs from what has been done before in stochastic homogenization of divergence form equations. 
It in particular applies to the continuum version of the problem with symmetric coefficients, and potentially extends to symmetric systems (at least under sufficiently strong ellipticity assumptions). For results on non-divergence form elliptic equations see [@Cafarelli-Souganidis-10; @Armstrong-Smart-13].]{}]{} While qualitative stochastic homogenization only requires ${\left\langle \cdot \right\rangle}$ to be stationary and ergodic, the derivation of error estimates requires a quantification of ergodicity. [[Pursuing optimal error bounds]{}]{}, in a series of papers [@GO1; @GO2; @GO3; @GNO1; @GNO3; @MO1; @LNO1; @Mourrat-Otto-14] (initiated by Gloria and Otto) a quantitative theory for the corrector equation is developed based on *Spectral Gap* (SG) and LSI as tools to quantify ergodicity. In contrast to earlier results, the estimates in the papers mentioned above are optimal: E.g. [@GNO1] contains a complete and optimal analysis of the approximation of $a_{\hom}$ via periodic representative volume elements and [@GNO3] establishes optimal estimates for the homogenization error and the two-scale expansion. A fundamental step in the derivation of these results is provided by [[optimal]{}]{} moment bounds for the corrector, see [@GO1; @GO2; @GNO1]. The extension to the continuum case has been discussed in recent papers: In [@GO3] moment bounds on the corrector and its gradient have been obtained for scalar equations with elliptic coefficients. In the present contribution we continue the theme of quantitative stochastic homogenization and present a new approach that relies on methods that – we believe – extend with only a few modifications to the case of systems satisfying sufficiently strong ellipticity assumptions. In the works discussed above, arguments restricted to scalar equations are used at central places. Most significantly, *Green’s function estimates* are required and derived via De Giorgi-Nash-Moser regularity theory ([[e.g. see [@GNO1 Theorem 3]]{}]{}). This method is based on the *maximum principle*, which holds for diagonal coefficients, but not for general symmetric or possibly non-symmetric coefficients as considered here. In fact, in our case the Green’s function is not in general positive everywhere. We derive the required estimates on the gradient of the Green’s function from the corresponding estimate on the constant coefficient Green’s function by a perturbation argument that invokes a Helmholtz projection; this is inspired by [@Conlon-Spencer-11]. Secondly, previous works rely on a gain of stochastic integrability obtained by a nonlinear Caccioppoli inequality (see Lemma 2.7 in [@GO1]). In the present contribution we appeal to an alternative argument that invokes the LSI instead. While SG, which is weaker than LSI (see [@Guionnet-Zegarlinski-03]), has been introduced into the field of stochastic homogenization by Naddaf and Spencer [@Naddaf-Spencer-98 Theorem 1] (in the form of the Brascamp-Lieb inequality), the LSI has been used in [@MO1] in the context of stochastic homogenization to obtain optimal annealed estimates on the gradient of the Green’s function and bounds on the random part of the homogenization error $u_{\varepsilon}- \langle u_{\varepsilon}\rangle$. Note that in the special case of diagonal coefficients (i.e.
when the maximum principle [[and the De Giorgi-Nash-Moser regularity theory]{}]{} are available) our results are not new: The $T$-independent results on $\phi_T$ and $\nabla \phi_T$ in $d>2$ dimensions have already been established in [@GO1; @GNO1] under the slightly weaker assumption SG on the statistics (see below), [[and the estimate on the corrector in the optimal form of $\langle |\phi_T|^{2p} \rangle \le C (\log T)^p$ with a constant independent of $T$ is obtained in [@GNO1].]{}]{} [**Relation to random walks in random environments.**]{} There is a strong link between stochastic homogenization and random walks in random environments (see [@Biskup-11] and [@Kumagai-14] for recent surveys). Suppose for a moment that ${\left\langle \cdot \right\rangle}$ concentrates on diagonal matrices. Then for each diagonal-matrix-valued field $a:{\mathbb{Z}}^d\to{\mathbb{R}}^{d\times d}$, we may interpret $a$ as a conductance network, where each edge $[x,x+e_i]$ ($x\in{\mathbb{Z}}^d$, $i=1,\ldots,d$) is endowed with the conductance $a_{ii}(x)$. The elliptic operator $\nabla^*(a\nabla)$ generates a stochastic process, called the *variable speed random walk* $X=(X_a(t))_{t\geq 0}$ in a random environment with law ${\left\langle \cdot \right\rangle}$. Using arguments from stochastic homogenization, Kipnis and Varadhan [@Kipnis-Varadhan-86] (see also [@Kunnemann-83] for an earlier result) show that the law of the rescaled process $\sqrt{{\varepsilon}}X({\varepsilon}t)$ converges weakly to that of a Brownian motion with covariance $2a_{\hom}$. This *annealed* invariance principle for $X$ has been upgraded to a *quenched* result by Sidoravicius and Sznitman [@Sidoravicius-Sznitman-04]. The key ingredient in their argument is to prove that the “anchored corrector” (i.e. the function $\varphi$ introduced in Corollary \[cor:1\] (a) below) satisfies a *quenched sublinear growth* property. The quantitative analysis derived in the present paper is stronger. Indeed, our estimate on $\nabla\phi_T$ almost immediately implies that the anchored corrector grows sublinearly. On top of that, in dimensions $d>2$, the moment bound on $\phi_T$ implies that the anchored corrector is almost bounded, in the sense that it grows *slower than any rate*, see Corollary \[cor:1\] and the subsequent remark. If the coefficients are not diagonal, then the equation is no longer related to a random conductance model. As mentioned before, for non-symmetric $a$ (and even for certain symmetric coefficients) the maximum principle for $\nabla^*(a\nabla)$ generally fails to hold. In that case the semigroup generated by $\nabla^*(a\nabla)$ is not Markovian and there is no natural probabilistic interpretation for the equation. [This may also be seen in terms of Dirichlet forms. While the (non-symmetric) elliptic operator $-\operatorname{div}(a_{\hom}\nabla)$ acting on functions on ${\mathbb{R}}^d$ generates a Dirichlet form $\int_{{\mathbb{R}}^d} \nabla u \cdot a_{\hom} \nabla v dx$ in the sense of [@Ma-Rockner Definition I.4.5] and a corresponding Markov process, the discrete operator $\nabla^*(a\nabla)$ with associated bilinear form $\sum_{{\mathbb{Z}}^d}\nabla u \cdot a \nabla v$ defined on $\ell^2({\mathbb{Z}}^d)\times\ell^2({\mathbb{Z}}^d)$ does not. Indeed, the contraction property (4.4) in [@Ma-Rockner] (which encodes a maximum principle) generally fails to hold in the non-diagonal discrete case.]{} However, the limiting process can be approximated by (non-symmetric) Markov processes, see [@Deuschel-Kumagai-13] for a recent construction.
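To make the random-walk picture above concrete, the following minimal Python sketch (purely illustrative, and not part of the arguments of this paper) simulates the variable speed random walk in an i.i.d. random conductance environment on ${\mathbb{Z}}^2$ and reads off a crude Monte Carlo proxy for the effective diffusivity. All concrete choices, i.e. the dimension $d=2$, conductances drawn uniformly from $[\lambda,1]$ with $\lambda=0.2$, the time horizon and the number of walks, as well as the helper names `conductance`, `step` and `msd`, are assumptions made only for this example.

```python
import random

# Illustrative sketch (not from the paper): variable speed random walk in a
# random conductance environment on Z^2.  Each edge [x, x + e_i] carries an
# i.i.d. conductance drawn uniformly from [lam, 1], i.e. a diagonal coefficient
# field satisfying the ellipticity assumption.  The walk crosses each incident
# edge at a rate equal to that edge's conductance.

lam = 0.2                     # ellipticity constant (assumed value)
rng = random.Random(0)
cond = {}                     # lazily sampled edge conductances, keyed by (site, direction)

def conductance(x, i):
    """Conductance of the edge [x, x + e_i] (sampled on first use)."""
    if (x, i) not in cond:
        cond[(x, i)] = rng.uniform(lam, 1.0)
    return cond[(x, i)]

def step(x):
    """One jump of the variable speed random walk from site x; returns (waiting time, new site)."""
    e = [(1, 0), (0, 1)]
    moves = []
    for i in range(2):
        moves.append((conductance(x, i), (x[0] + e[i][0], x[1] + e[i][1])))  # forward edge
        xm = (x[0] - e[i][0], x[1] - e[i][1])
        moves.append((conductance(xm, i), xm))                               # backward edge
    total = sum(r for r, _ in moves)
    wait = rng.expovariate(total)
    u, acc = rng.uniform(0.0, total), 0.0
    for r, y in moves:
        acc += r
        if u <= acc:
            return wait, y
    return wait, moves[-1][1]

def msd(t_max=200.0, n_walks=200):
    """Crude estimate of E|X(t_max)|^2 / (2 d t_max); ignores the overshoot of the last jump."""
    out = 0.0
    for _ in range(n_walks):
        t, x = 0.0, (0, 0)
        while t < t_max:
            dt, x = step(x)
            t += dt
        out += (x[0] ** 2 + x[1] ** 2) / (2 * 2 * t_max)
    return out / n_walks

print("rough proxy for the effective diffusivity:", msd())
```

Since the environment is sampled once and kept fixed across all walks, the average is taken in a single (quenched) environment; by the invariance principles quoted above one expects $|X(t)|^2/(2dt)$ to settle, roughly, near $\operatorname{tr}(a_{\hom})/d$ for large time horizons and after further averaging, so the printed number should be read as a qualitative illustration rather than a numerical approximation.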
Let us finally remark that we do not use any ingredients from probability theory except for the quantification of ergodicity via SG and LSI in this paper. Furthermore, since we view our present contribution as a first step towards systems (which certainly are unrelated to probability theory), we do not further investigate the connection to random walks in the present paper. [**Outline of the paper.**]{} In Section \[S:FW\], we present the main results of our paper and give a brief sketch of our proof. The proof of the main result and auxiliary lemmas are contained in Section \[S:P\]. Let us mention that in the critical dimension $d=2$, we invoke a Calderón-Zygmund estimate on weighted $\ell^p$-spaces on ${\mathbb{Z}}^d$. We give a proof of this estimate, which may be of independent interest, in Section \[S:CZ\]. [*[**Acknowledgements.**]{} The authors gratefully acknowledge Felix Otto for suggesting the problem and for helpful discussions. J. B.-A. and S. N. thank the Max-Planck-Institute for Mathematics in the Sciences, Leipzig, for its hospitality. S. N. was partially supported by ERC-2010-AdG no.267802 AnaMultiScale.* ]{} Main results and sketch of proof {#S:FW} ================================ General framework ----------------- [**Discrete functions and derivatives.**]{} Let $\{e_i\}_{i=1}^d$ denote the canonical basis of ${\mathbb{R}}^d$. For a scalar function $u:{\mathbb{Z}}^d\to{\mathbb{R}}$ and a vector field $g:{\mathbb{Z}}^d\to{\mathbb{R}}^d$ with components $g=(g_1,\ldots,g_d)$ we define the discrete gradient $\nabla u:{\mathbb{Z}}^d\to{\mathbb{R}}^d$ and negative divergence $\nabla^*g:{\mathbb{Z}}^d\to{\mathbb{R}}$ as follows: $$\begin{aligned} &\nabla u:=(\nabla_1u,\ldots,\nabla_du),\qquad \nabla^*g:=\sum_{i=1}^d\nabla^*_ig_i,\qquad\text{where}\\ &\nabla_iu(x):=u(x+e_i)-u(x),\qquad \nabla^*_iu(x):=u(x-e_i)-u(x).\end{aligned}$$ We denote by $\ell^p({\mathbb{Z}}^d)$, $1\leq p\leq \infty$, the space of functions $u:{\mathbb{Z}}^d\to{\mathbb{R}}$ with $\|u\|_{\ell^p}<\infty$, where $\|u\|_{\ell^p}:=\left(\sum_{x\in{\mathbb{Z}}^d}|u(x)|^p\right)^{\frac{1}{p}}$ for $p<\infty$ and $\|u\|_{\ell^\infty}:=\sup_{x\in{\mathbb{Z}}^d}|u(x)|$. Note that $\nabla$ and $\nabla^*$ are adjoint: We have the discrete integration by parts formula $$\sum_{x\in{\mathbb{Z}}^d}\nabla u(x)\cdot g(x)=\sum_{x\in{\mathbb{Z}}^d}u(x)\nabla^*g(x)$$ for all exponents $1\le p,q \le \infty$ such that $1 = \frac{1}{p} + \frac{1}{q}$ and all functions $u\in\ell^p({\mathbb{Z}}^d)$ and $g\in\ell^q({\mathbb{Z}}^d,{\mathbb{R}}^d)$. [**Random coefficients and quantitative ergodicity.**]{} In order to describe random coefficients, we endow $\Omega$ with the product topology induced by ${\mathbb{R}}^{d \times d}$ and denote by $C_b(\Omega)$ the set of continuous functions $\zeta:\Omega\to{\mathbb{R}}$ that are uniformly bounded in the sense that $$\|\zeta\|_\infty:=\sup_{a\in\Omega}|\zeta(a)|<\infty.$$ Throughout this work, we consider a probability measure on $\Omega$ with respect to the Borel-$\sigma$-algebra. Following the convention in statistical mechanics, we call this probability measure an *ensemble* and write ${\left\langle \cdot \right\rangle}$ for the associated expected value, the ensemble average. We assume that ${\left\langle \cdot \right\rangle}$ is *stationary* w. r. t. translation on ${\mathbb{Z}}^d$, i.e. 
for all $x\in{\mathbb{Z}}^d$, the mapping $\tau_x : \Omega\to\Omega, a\mapsto a(\cdot+x)$ is measurable and measure preserving: $$\forall \zeta: \Omega\to{\mathbb{R}}:\quad \langle \zeta( \tau_x \cdot) \rangle = \langle \zeta(\cdot) \rangle.$$ Our key assumption is that ${\left\langle \cdot \right\rangle}$ is *quantitatively ergodic* where the ergodicity is quantified through either LSI or SG. To be precise, we make the following definitions: \[def:LSI\] We say that $\langle \cdot \rangle$ satisfies the LSI with constant $\rho>0$ if $$\label{eq:LSI} {\left\langle \zeta^2\log\frac{\zeta^2}{{\left\langle \zeta^2 \right\rangle}} \right\rangle}\le \frac{1}{2\rho}{\left\langle \sum_{x\in{\mathbb{Z}}^d} \Big( \operatorname*{osc}_{a(x)} \zeta\Big)^2 \right\rangle}.$$ for all $\zeta\in C_b(\Omega)$. Here the [*oscillation*]{} of a function $\zeta\in C_b(\Omega)$ is defined by taking the oscillation over all $\tilde a\in\Omega$ that coincide with $a$ outside of $x\in{\mathbb{Z}}^d$, i.e.$$\begin{gathered} \label{eq:osc} \operatorname*{osc}_{a(x)} \zeta(a) := \sup\{ \zeta(\tilde a) \ | \ \tilde a\in \Omega \text{ s.t.\ } \tilde a(y)=a(y)\ \forall y\neq x \}\\ - \inf\{ \zeta(\tilde a) \ | \ \tilde a\in \Omega \text{ s.t.\ } \tilde a(y)=a(y)\ \forall y\neq x \}.\end{gathered}$$ The continuity assumption on $\zeta$ ensures that the oscillation is well-defined. A weaker form of quantitative ergodicity is the SG which is defined as follows. \[def:SG\] We say that $\langle \cdot \rangle$ satisfies the SG with constant $\rho>0$ if $$\label{eq:SG} {\left\langle (\varphi-{\left\langle \varphi \right\rangle})^2 \right\rangle}\le \frac{1}{\rho}{\left\langle \sum_{x\in{\mathbb{Z}}^d} \Big( \operatorname*{osc}_{a(x)} \varphi\Big)^2 \right\rangle}$$ for all $\varphi\in C_b(\Omega)$. The SG  is automatically satisfied if LSI  holds, which may be seen by expanding $\zeta = 1+\epsilon\varphi$ in powers of $\epsilon$. Moreover, LSI and SG are satisfied in the case of independently and identically distributed coefficients, i.e. when ${\left\langle \cdot \right\rangle}$ is the ${\mathbb{Z}}^d$-fold product of a probability measure on $\Omega_0$, cf. [@MO1 Lemma 1]. We refer to [@Guionnet-Zegarlinski-03] for a recent exposition on LSI and to [@GNO1] for a systematic application of SG to stochastic homogenization. Main results ------------ Throughout this paper the modified corrector $\phi_T$ is defined as the unique bounded solution to , see Lemma \[LMC\] below for details. Our first result yields boundedness of the finite moments of $\nabla\phi_T$. \[T1\] Assume that $\langle \cdot \rangle$ is stationary and satisfies LSI  with constant $\rho>0$. Then the modified corrector defined via satisfies $$\label{eq:Dphi} \langle |\nabla\phi_T(x) + \xi|^{2p} \rangle \le C(d,\lambda,p,\rho) |\xi|^{2p}$$ for all $x\in{\mathbb{Z}}^d$, $p<\infty$ and $T\ge2$. Here and throughout this work, $C(d,\lambda,p,\rho)$ stands for a constant which may change from line to line and that only depends on the exponent $p$, the LSI-constant $\rho$, the ellipticity ratio $\lambda$ and the dimension $d$. As already mentioned earlier, the lower bound “2” for $T$ is arbitrary and may be replaced by any other constant greater than 1. The second result establishes moment bounds on the corrector itself. More precisely, we establish control of moments of $\phi_T$ by moments of $\nabla\phi_T$. As opposed to Theorem \[T1\], we just need to assume that the ensemble satisfies SG, i.e. Definition \[def:SG\]. 
\[T2\] Assume that $\langle \cdot \rangle$ is stationary and satisfies SG  with constant $\rho>0$. [There exists $p_0 = p_0(d,\lambda)$ such that]{} the modified corrector defined via  satisfies $$\label{eq:phi} \langle |\phi_T(x)|^{2p} \rangle \le C(d,\lambda,p,\rho) {\left\langle |\nabla \phi_T(x)+\xi|^{2p} \right\rangle}\times \begin{cases} (\log T)^p&\text{for }d=2,\\ 1&\text{for }d>2, \end{cases}$$ for all [$x\in{\mathbb{Z}}^d$, $p\ge p_0$ and $T\ge2$.]{} By letting $T\uparrow\infty$, we obtain the following estimate for the (unmodified) corrector. \[cor:1\] Assume that $\langle \cdot \rangle$ is stationary and satisfies LSI  with constant $\rho>0$. Then: (a) In dimensions $d\geq 2$ there exists a unique measurable function $\varphi:\Omega\times{\mathbb{Z}}^d\to{\mathbb{R}}$ that solves the corrector equation for ${\left\langle \cdot \right\rangle}$-almost every $a\in\Omega$ and - $\varphi$ satisfies the anchoring condition $\varphi(a,0)=0$ for ${\left\langle \cdot \right\rangle}$-almost every $a\in\Omega$, - $\nabla\varphi$ is stationary in the sense of  and ${\left\langle \nabla\varphi(x) \right\rangle}=0$ for all $x\in{\mathbb{Z}}^d$, - ${\left\langle |\nabla\varphi(x)|^p \right\rangle}<\infty$ for all $x\in{\mathbb{Z}}^d$ and $p<\infty$. (b) In dimensions $d>2$ there exists a unique measurable function $\phi:\Omega\times{\mathbb{Z}}^d\to{\mathbb{R}}$ that solves the corrector equation for ${\left\langle \cdot \right\rangle}$-almost every $a\in\Omega$, and - $\phi$ is stationary in the sense of , - ${\left\langle |\phi(x)|^p \right\rangle}<\infty$ for all $x\in{\mathbb{Z}}^d$ and $p<\infty$. - The “anchored corrector” $\varphi$ defined in Corollary \[cor:1\] (a) has already been considered in the seminal works by Papanicolaou and Varadhan [@Papanicolaou-Varadhan-79] and Kozlov [@Kozlov-79]. In fact, for existence and uniqueness – which can be proved by soft arguments – only (a1) and (a2) are required. The new estimate (a3) follows from Theorem \[T1\] in the limit $T\uparrow\infty$. Note that (a3) implies (by a short ergodicity argument) sublinearity of the anchored corrector in the sense that $$\lim\limits_{R\uparrow\infty}\max_{|x|\leq R}\frac{|\varphi(a,x)|}{R}=0$$ for ${\left\langle \cdot \right\rangle}$-almost every $a\in\Omega$. - Existence, uniqueness and moment bounds of the “stationary corrector” $\phi$ defined in Corollary \[cor:1\] (b) have been obtained in the case of diagonal coefficients in [@GO1], see also [@GNO1]. Note that the anchored corrector $\varphi$ can be obtained from $\phi$ via $\varphi(a,x):=\phi(a,x)-\phi(a,0)$, and, as explained in the discussion below [@LNO1 Corollary 1], the moment bound (b2) implies that $$\forall\theta\in (0,1]\,:\qquad \lim\limits_{R\uparrow\infty}\max_{|x|\leq R}\frac{|\varphi(a,x)|}{R^\theta}=0$$ for ${\left\langle \cdot \right\rangle}$-almost every $a\in\Omega$. Instead of the modified corrector, one might consider the periodic corrector which in the stochastic context is defined as follows: For $L\in{\mathbb{N}}$ let $$\Omega_L:=\{\,a\in\Omega\,:\,a(\cdot+Lz)=a\;\;\text{for all }z\in{\mathbb{Z}}^d\,\}$$ denote the set of $L$-periodic coefficient fields. In the $L$-periodic case, one considers the corrector equation together with an $L$-periodic ensemble, i. e. a stationary probability measure on $\Omega_L$. In that case, the corrector equation admits a unique solution $\phi_L$ with $\sum_{x\in([0,L)\cap{\mathbb{Z}})^d}\phi_L(x)=0$ for all $a\in\Omega_L$.
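As a purely illustrative aside (not taken from the paper), the periodic corrector just introduced can be computed directly on a small torus. The following Python sketch assembles the finite-difference system for $\phi_L$ with a diagonal i.i.d. coefficient field in $d=2$ and evaluates the spatial average of $(\xi+\nabla\phi_L)\cdot a(\xi+\nabla\phi_L)$, the standard representative-volume approximation of $\xi\cdot a_{\hom}\xi$ (cf. the discussion of [@GNO1] above); the box size $L=8$, the coefficient law (uniform on $[0.2,1]$) and the choice $\xi=e_1$ are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not part of the paper's argument): the L-periodic
# corrector phi_L for a diagonal i.i.d. coefficient field on a small 2d torus,
# obtained by solving nabla^*(a(nabla phi_L + xi)) = 0 as a linear system,
# followed by the representative-volume estimate of xi . a_hom xi.

rng = np.random.default_rng(1)
L, lam = 8, 0.2
a = rng.uniform(lam, 1.0, size=(2, L, L))   # a[i, x, y]: conductance of the edge [x, x + e_i]
xi = np.array([1.0, 0.0])                   # xi = e_1

def idx(x, y):
    """Flatten periodic coordinates to a single index."""
    return (x % L) * L + (y % L)

# Assemble the (singular) matrix of phi -> nabla^*(a nabla phi) and the
# right-hand side -nabla^*(a xi) on the torus.
A = np.zeros((L * L, L * L))
rhs = np.zeros(L * L)
for x in range(L):
    for y in range(L):
        row = idx(x, y)
        for i, (sx, sy) in enumerate([(1, 0), (0, 1)]):
            af = a[i, x, y]                          # conductance of [x, x + e_i]
            ab = a[i, (x - sx) % L, (y - sy) % L]    # conductance of [x - e_i, x]
            A[row, row] += af + ab
            A[row, idx(x + sx, y + sy)] -= af
            A[row, idx(x - sx, y - sy)] -= ab
            rhs[row] += xi[i] * (af - ab)

# The minimum-norm least-squares solution fixes the free additive constant
# (it is orthogonal to the constants, i.e. has zero mean on the torus).
phi = np.linalg.lstsq(A, rhs, rcond=None)[0].reshape(L, L)

# Discrete gradient (periodic) and representative-volume estimate of e_1 . a_hom e_1.
grad = np.stack([np.roll(phi, -1, axis=0) - phi,
                 np.roll(phi, -1, axis=1) - phi])
estimate = np.mean(np.sum(a * (grad + xi[:, None, None]) ** 2, axis=0))
print("RVE estimate of e_1 . a_hom e_1:", estimate)
```

Only the gradient of $\phi_L$ enters the estimate, so any other normalization of the additive constant would give the same number; how fast such estimates converge as $L$ grows and as one averages over samples of $a$ is exactly what the representative volume element analysis cited above quantifies.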
The $L$-periodic versions of LSI and SG are obtained by replacing the sum $\sum_{x\in{\mathbb{Z}}^d}$ in and by $\sum_{x\in([0,L)\cap{\mathbb{Z}})^d}$. With these modifications, Theorem \[T1\] and Theorem \[T2\] extend to the $L$-periodic case ([with $L=\sqrt{T}$ since the cut-off term involving $T$ effectively restricts the equation to a domain of side length $\sqrt{T}$]{}). In particular, if the $L$-periodic ensemble satisfies an $L$-periodic LSI with constant $\rho>0$, then the $L$-periodic corrector satisfies for all $p<\infty$ $${\left\langle \phi_L^{2p} \right\rangle}^{\frac{1}{2p}}\lesssim \begin{cases} (\log L)^{\frac{1}{2}}&\text{for }d=2,\\ 1&\text{otherwise.} \end{cases}$$ The proof follows along the same lines and can easily be adapted. For estimates on the periodic corrector $\phi_L$ in the case of diagonal coefficients, see [@GNO1]. Sketch of proof of Theorem \[T1\] {#SS:sketch1} --------------------------------- Theorem \[T1\] is relatively straight-forward to prove. We simply follow the approach developed in [@MO1] and use the LSI  of Definition \[def:LSI\] to upgrade a lower order $L^2_{\langle \cdot \rangle}(\Omega)$-bound to a bound in $L^{2p}_{\langle \cdot \rangle}(\Omega)$. Note that by stationarity of ${\left\langle \cdot \right\rangle}$ and $\phi_T$, see , it suffices to prove the estimates  at $x=0$. The lower order bound $$\langle |\nabla\phi_T(0) + \xi|^{2} \rangle \le C(d,\lambda) |\xi|^{2},\qquad\text{cf.~\eqref{T1.3},}$$ follows from a simple energy argument, i.e. an $L^2$-estimate obtained by testing the equation for $\phi_T$ with $\phi_T$ itself. The integral here is the ensemble average and not the sum over ${\mathbb{Z}}^d$; this is possible thanks to stationarity of $\phi_T$. For details, we refer to Step 1 in the proof of Theorem \[T1\]. This bound is then upgraded via the following consequence of LSI : $$\langle |\nabla \phi_T(0) + \xi|^{2p} \rangle \le C(d,p,\rho,\delta) \langle |\nabla \phi_T(0) + \xi|^{2} \rangle^p + \delta {\left\langle \bigg( \sum_{x\in{\mathbb{Z}}^d} \Big| \operatorname*{osc}_{a(x)} \nabla\phi(0) \Big|^2 \bigg) \right\rangle}$$ for all $\delta > 0$, where we have implicitly taken the oscillation of the vector $\nabla \phi_T$ component-wise. This reverse Jensen inequality is the content of Lemma \[L1\] below. Next, we need an expression for $\operatorname*{osc}_{a(x)} \nabla \phi_T$. In Lemma \[L:RP\] we will show that the response to a variation at $x$ in the coefficient field is given via the Green’s function $G_T$ as: $$\operatorname*{osc}_{a(x)}(\nabla_j \phi_T(a;0) + \xi_j) \leq C(d,\lambda) |\nabla \nabla G_T(a;0,x)| |\nabla \phi_T(a;x) + \xi|,$$ where $G_T$ is the Green’s function associated to , see Definition \[def:1\]. Throughout this work, $\nabla\nabla G_T(x,y) = \nabla_x\nabla_y G_T(x,y)\in{\mathbb{R}}^{d\times d}$ denotes the mixed derivative and we use the spectral norm on ${\mathbb{R}}^{d\times d}$. 
The above estimate on the oscillation then yields $$\begin{aligned} {\left\langle \bigg( \sum_{x\in{\mathbb{Z}}^d} \Big| \operatorname*{osc}_{a(x)} \nabla\phi(0) \Big|^2 \bigg) \right\rangle} &\le C(d,\lambda,p) {\left\langle \bigg( \sum_{x\in{\mathbb{Z}}^d} |\nabla \nabla G_T(a;0,x)|^2 |\nabla \phi_T(a;x) + \xi|^2 \bigg) \right\rangle}\\ &\le C(d,\lambda,p) {\left\langle |\nabla \phi_T(a;0) + \xi|^2 \right\rangle},\end{aligned}$$ where in Step 2 of the proof of Theorem \[T1\], we will obtain the last inequality from stationarity and the energy estimate , i.e.$$\sum_{x\in{\mathbb{Z}}^d} |\nabla\nabla G_T(x,y)|^2\le C(d,\lambda),$$ which holds in any dimension $d\ge 2$. Sketch of proof of Theorem \[T2\] {#SS:sketch2} --------------------------------- By stationarity of ${\left\langle \cdot \right\rangle}$ and $\phi_T$, it suffices to prove at $x=0$. In contrast to Theorem \[T1\], the proof of Theorem \[T2\] only requires the weaker ergodicity assumption SG of Definition \[def:SG\], which we will use in form of $${\left\langle |\phi_T(0)|^{2p} \right\rangle}\le C(p,\rho) {\left\langle \bigg(\sum_{x\in{\mathbb{Z}}^d} \Big( \operatorname*{osc}_{a(x)} \phi_T(0) \Big)^2 \bigg)^p \right\rangle},$$ see Lemma \[LSGp\] below. Again, we require an estimate on the oscillation, which we shall obtain in Lemma \[L:RP\] and which yields $$\operatorname*{osc}_{a(x)} \phi_T(a;0) \le C(d,\lambda) |\nabla_x G_T(a;0,x)| |\nabla \phi_T(a;x) + \xi|.$$ Again, this will be substituted in the above SG-type inequality. In contrast to the proof of Theorem \[T1\], where a simple $\ell^2$-estimate of $\nabla\nabla G_T$ sufficed, we will see that we require a bound on $\nabla G_T$ including weights: In Lemma \[L2\], we show that $$\sum_{x\in{\mathbb{Z}}^d}|\nabla_x G_T(a;0,x)|^{2q}\omega_q(x) \le C(d,\lambda,q) \begin{cases} \log T&\text{for }d=2,\\ 1&\text{for }d>2 \end{cases}$$ for all $q\ge1$ close enough to $1$, and weight $\omega_q$ given by $$\omega_q(x):= \begin{cases} (|x|+1)^{2(q-1)}+T^{1-q}(|x|+1)^{4(q-1)}&\text{for }d=2,\\ (|x|+1)^{2d(q-1)}&\text{for }d>2. \end{cases}$$ The case $d>2$ is relatively straight-forward and follows by testing the equation with weights and applying Hardy’s inequality. The case $d=2$ is critical for this estimate and we will prove it by reducing the problem via a perturbation argument to the constant-coefficient case; this approach involves a Helmholtz projection and is inspired by the work [@Conlon-Spencer-11]. To make it rigorous, we require a Calderón-Zygmund estimate in discrete weighted spaces which may be of independent interest and which is proved in Section \[S:CZ\]. With this estimate at hand, we may smuggle in the weight $\omega_q$ and apply Hölder’s inequality with $q\approx 1$ and large dual exponent $p$ to obtain $$\begin{gathered} {\left\langle \bigg( \sum_{x\in{\mathbb{Z}}^d} |\nabla_x G_T(a;0,x)|^2 |\nabla \phi_T(a;x) + \xi|^2 \bigg) \right\rangle}\\ \le C(d,\lambda,q) {\left\langle |\nabla \phi_T(a;x) + \xi|^{2p} \right\rangle} \begin{cases} \log T&\text{for }d=2,\\ 1&\text{for }d>2 \end{cases}\end{gathered}$$ as long as $p$ is large enough such that $\sum_x \omega_q^{1-p}(x) < \infty$. Auxiliary results and proofs {#S:P} ============================ In this section we first present and prove some auxiliary results and then turn to the actual proof of our main results. We start in Section \[SS:W\] with the definition of the modified corrector and prove its existence and some continuity properties. 
This invokes the elliptic Green’s function, which we introduce in the same section. Section \[SS:G\] and Section \[SS:E\] contain the two key ingredients of our approach: In Section \[SS:G\], we prove estimates on the oscillation of the corrector and estimates on the gradient of the Green’s function; in Section \[SS:E\], we revisit LSI and SG, which quantify ergodicity and are the only ingredients from probability theory in our approach. Finally in Sections \[SS:T1\] and \[SS:T2\], we present the proofs of Theorems \[T1\] and \[T2\]. Well-posedness of the modified corrector {#SS:W} ---------------------------------------- We define the modified corrector $\phi_T:\Omega\times{\mathbb{Z}}^d\to{\mathbb{R}}$ as the unique bounded solution to , i.e. for each $a\in\Omega$, we require $\phi_T(a,\cdot):{\mathbb{Z}}^d\to{\mathbb{R}}$ to solve and to be bounded, see Lemma \[LMC\] for details. Note that this definition is [ *pointwise in $a\in\Omega$*]{} and does not invoke any probability measure on $\Omega$. This is in contrast to what is typically done in stochastic homogenization (e.g. in the seminal work [@Papanicolaou-Varadhan-79], where $\phi_T$ is unambigously defined through an equation on the probability space $L^2_{{\left\langle \cdot \right\rangle}}(\Omega)$). We opt for the “non-probabilistic” definition, since later we need to estimate the oscillation in $a$ of $\phi_T$, which is most conveniently done when $\phi_T$ is defined for [*all*]{} $a\in\Omega$ and not only ${\left\langle \cdot \right\rangle}$-almost surely. However, since the right-hand side of  is only in $\ell^\infty({\mathbb{Z}}^d)$, it is not clear a-priori whether admits a bounded solution. To settle this question we consider the elliptic Green’s function $G_T:\Omega\times{\mathbb{Z}}^d\times{\mathbb{Z}}^d\to {\mathbb{R}}$ and prove integrability of $G_T$ in Lemma \[L:Gint\] below. The latter then implies existence of $\phi_T$ together with some continuity properties, see Lemma \[LMC\] below. \[def:1\] [Given $a\in\Omega$ and $y\in{\mathbb{Z}}^d$, the Green’s function $G_T(a;x,y)$ associated to equation  is the unique solution in $\ell^2({\mathbb{Z}}^d)$ to $$\label{T1.2} \frac{1}{T}G_T(a;\cdot,y)+\nabla^*(a\nabla G_T(a;\cdot,y))=\delta(\cdot-y)\qquad\text{in }{\mathbb{Z}}^d,$$ where $\delta:{\mathbb{Z}}^d\to\{0,1\}$ denotes the Dirac function centered at $0$. ]{} Equation  can also be expressed in its “weak" formulation: For all $w\in\ell^2({\mathbb{Z}}^d)$ we have that $$\label{P1.3} \frac{1}{T}\sum_{x\in{\mathbb{Z}}^d}G_T(a;x,y) w(x)+\sum_{x\in{\mathbb{Z}}^d}\nabla w(x)\cdot a(x)\nabla_x G_T(a;x,y)=w(y).$$ It immediately follows from the unique characterization of $G_T$ through (\[T1.2\]) that the Green’s function is stationary: $$\label{P1.15} \nabla\nabla G_T(a,x+z,y+z)=\nabla\nabla G_T(a(\cdot+z),x,y).$$ Furthermore it is symmetric in the sense that $$\label{P1.16} \nabla\nabla G_T(a;y',y)=\nabla\nabla G_T(a^t;y,y'),$$ where $a^t$ denotes the transpose of $a$ in ${\mathbb{R}}^{d\times d}$. 
This can be seen from applying (\[P1.3\]) to $w(x)= G_T(a^t;x,y')$, yielding the representation $$G_T(a^t;y,y')=\frac{1}{T}\sum_{x} G_T(a^t;x,y') G_T(a;x,y)+ \sum_{x}\nabla_x G_T(a^t;x,y') \cdot a(x)\nabla_x G_T(a;x,y).$$ On the other hand, choosing $w(x) = G_T(a;x,y)$ in the definition for $G_T(a^t;\cdot,\cdot)$ shows $$G_T(a;y',y)=\frac{1}{T}\sum_{x} G_T(a;x,y) G_T(a^t;x,y') + \sum_{x}\nabla_x G_T(a;x,y) \cdot a^t(x)\nabla_x G_T(a^t;x,y').$$ By definition of the transpose $a^t$, this shows $G_T(a;y,y') = G_T(a^t;y',y)$ and hence . The Green’s function is useful since by linearity it encodes all the information for the solution $u$ to the equation $$\label{eq:7} \frac{1}{T}u+\nabla^*(a\nabla u)=f\qquad\text{in }{\mathbb{Z}}^d.$$ Indeed, testing with $G_T(a;\cdot,y)$ and integrating by parts *formally* yields $$\label{eq:6} u(a;x)=\sum_{y\in{\mathbb{Z}}^d}G_T(a;x,y)f(y).$$ Of course, to make sense of this for $f=\nabla^*(a\xi)\in\ell^\infty({\mathbb{Z}}^d)$, we need $G_T$ in $\ell^1({\mathbb{Z}}^d)$. On the other hand, the definition of the Green’s function only yields $G_T(\cdot,y) \in \ell^2({\mathbb{Z}}^d)$ but this is not enough to establish well-posedness of . It is not difficult to establish that $\sum_x G_T(x,y) = T$ for all $y\in{\mathbb{Z}}^d$ and $a\in\Omega$ but without the maximum principle, $G_T$ may be negative and it does not follow that $G_T$ is in $\ell^1({\mathbb{Z}}^d)$. Therefore we need another argument to establish well-posedness of . This is provided by the following lemma, which shows exponential decay of $G_T$ and in particular that $G_T$ is in $\ell^1({\mathbb{Z}}^d)$. \[L:Gint\] [There exist a large constant $C=C(d,\lambda,T)<\infty$ and a small constant $\delta=\delta(d,\lambda,T)>0$, both only depending on $d$, $\lambda$ and $T$, such that $$\sum_{x\in{\mathbb{Z}}^d}\Big(|G_T(a;x,y)|^2+|\nabla_x G_T(a;x,y)|^2\Big)e^{\delta(d,\lambda,T)|x-y|}\leq C(d,\lambda,T)$$ for all $a\in\Omega$ and $y\in{\mathbb{Z}}^d$.]{} Since we could not find a suitable reference for this estimate in the discrete, non-symmetric case, we present a proof in the appendix. The proof uses Agmon’s positivity method [@Agmon] and in the discrete setting is inspired by [@Gloria10 Proof of Lemma 3]. With this result at hand, we can provide well-posedness of the modified corrector $\phi_T$. In addition to well-posedness, Lemma \[L:Gint\] allows us to deduce $\phi_T(0) = \phi_T(a; 0) \in C_b(\Omega)$, which is necessary for the application of LSI  and SG  to $\phi_T$. \[LMC\] For all $a\in\Omega$ the modified corrector equation admits a unique bounded solution $\phi_T(a;\cdot)\in\ell^\infty({\mathbb{Z}}^d)$. The so defined modified corrector $\phi_T:\Omega\times{\mathbb{Z}}^d\to{\mathbb{R}}$ satisfies $\phi_T(\cdot,x)\in C_b(\Omega)$ for all $x\in{\mathbb{Z}}^d$, and $$\label{LMC:2} |\phi_T(a;x)|\leq C(T,\lambda,d)|\xi|\qquad\text{for all $a\in\Omega$ and all $x\in{\mathbb{Z}}^d$}.$$ Furthermore, $\phi_T$ is stationary, i.e.  $$\label{LMC:1} \phi_T(a;x+z)=\phi_T(a(\cdot+z);x)\qquad\text{for all $a\in\Omega$ and all $x,z\in{\mathbb{Z}}^d$}.$$ [**Step 1**]{}. Existence and uniqueness of $\phi_T$: In this step, we argue that for arbitrary $f\in\ell^\infty({\mathbb{Z}}^d)$ equation admits a unique solution $u$ and $u$ can be represented as in . The existence and uniqueness of $\phi_T$ then follows by setting $f:=-\nabla^*(a\xi)$. For the argument, note that by Lemma \[L:Gint\] we have $G_T(a;\cdot,y)\in\ell^1({\mathbb{Z}}^d)$. 
Hence, for every $f\in\ell^\infty({\mathbb{Z}}^d)$, equation defines a function $u(a;\cdot)\in\ell^\infty({\mathbb{Z}}^d)$ that solves . For the uniqueness, let $\tilde u\in\ell^\infty({\mathbb{Z}}^d)$ solve . Testing with $G_T(a^t;\cdot,x)$ yields $$\begin{aligned} \sum_{y\in{\mathbb{Z}}^d}G_T(a^t;y,x)f(y)&=&\sum_{y\in{\mathbb{Z}}^d}G_T(a^t;y,x)\Big(\frac{1}{T}+\nabla^*(a\nabla))\Big)\tilde u(y)\\ &=&\sum_{y\in{\mathbb{Z}}^d}\Big(\frac{1}{T}+\nabla^*(a^t\nabla)\Big)G_T(a^t;y,x)\tilde u(y)\\ &=&\sum_{y\in{\mathbb{Z}}^d}\delta(x-y)\tilde u(y)=\tilde u(x).\end{aligned}$$ By symmetry the left-hand side is equal to $\sum_{y\in{\mathbb{Z}}^d}G_T(a;x,y)f(y)=u(a;x)$ and thus $u(a;\cdot)=\tilde u(\cdot)$ follows. [**Step 2**]{}. Argument for and : The stationarity property directly follows from uniqueness and the stationarity of the operator and the right-hand side $-\nabla^*(a\xi)$. We turn to estimate . By the Green’s representation , which is valid by Step 1, and an integration by parts (possible since $G_T(x,\cdot) \in \ell^1({\mathbb{Z}}^d)$), we have $$\phi_T(a;x)=\sum_{y\in{\mathbb{Z}}^d}\nabla_yG_T(a;x,y)\cdot a(y)\xi.$$ We smuggle in the exponential weight from Lemma \[L:Gint\], use uniform ellipticity and the Cauchy-Schwarz inequality to get $$\begin{aligned} |\phi_T(a;x)|&\leq\sum_{y\in{\mathbb{Z}}^d}\left(|\nabla_yG_T(a;x,y)|e^{\frac{\delta}{2}|y|}\right)\left(|a(y)\xi|e^{-\frac{\delta}{2}|y|}\right)\\ &\leq\left(\sum_{y\in{\mathbb{Z}}^d}|\nabla_yG_T(a;x,y)|^2e^{\delta|y|}\right)^{\frac{1}{2}}\left(\sum_{y\in{\mathbb{Z}}^d}e^{-\delta|y|}\right)^{\frac{1}{2}}|\xi|,\end{aligned}$$ where $\delta>0$ is given in Lemma \[L:Gint\]. By symmetry, cf. , and Lemma \[L:Gint\], the right-hand side is bounded by $C(d,\lambda,T)|\xi|$ and follows. [**Step 3**]{}. Argument for $\phi_T(\cdot;x)\in C_b(\Omega)$: Thanks to , we only need to show that $\phi_T(a;x)$ is continuous in $a$. Furthermore, by stationarity, cf. , it suffices to consider $\phi_T(a;0)$. Now, consider a sequence $a_n\in\Omega$ that converges to some $a\in\Omega$ in the product topology. We need to show that $\phi_T(a_n;0)\to\phi_T(a;0)$. To that end, consider the function $$\psi_n(x):=\phi_T(a_n;x)-\phi_T(a;x),$$ which can be characterized as the unique bounded solution to $$\frac{1}{T}\psi_n+\nabla^*(a_n\nabla\psi_n)=\nabla^*((a-a_n)(\nabla\phi_T(a,\cdot)+\xi))\qquad\text{in }{\mathbb{Z}}^d.$$ Hence, by Step 1 we have $$\psi_n(0)=\sum_{y\in{\mathbb{Z}}^d}\nabla_yG_T(a_n;0,y)\cdot (a(y)-a_n(y))(\nabla\phi_T(a,y)+\xi),$$ and thus Lemma \[L:Gint\] and the result of Step 2 yield $$\begin{aligned} |\psi_n(0)|&\leq\left(\sup_{y\in{\mathbb{Z}}^d}\sup_{a\in\Omega}|\nabla\phi_T(a,y)+\xi|\right)\\ & \qquad\times\,\left(\sum_{y\in{\mathbb{Z}}^d}|\nabla_yG_T(a_n;0,y)|^2e^{\delta|y|}\right)^{\frac{1}{2}}\left(\sum_{y\in{\mathbb{Z}}^d}e^{-\delta|y|}|a(y)-a_n(y)|^2\right)^{\frac{1}{2}}\\ &\le C(T,\lambda,d)\,\left(\sum_{y\in{\mathbb{Z}}^d}e^{-\delta|y|}|a(y)-a_n(y)|^2\right)^{\frac{1}{2}}.\end{aligned}$$ Since $a_n\to a$ in the product topology, i.e. $a_n(y)\to a(y)$ for all $y\in{\mathbb{Z}}^d$, the right-hand side vanishes as $n\to\infty$ by dominated convergence. Oscillations and Green’s function estimates {#SS:G} ------------------------------------------- In this section, we estimate the oscillation of the corrector and its gradient, see Lemma \[L:RP\] below, and establish estimates on the gradient of the elliptic Green’s functions, see Lemma \[L2\] below. These bounds are at the core of our analysis. 
Indeed, the proofs of Theorem \[T1\] and Theorem \[T2\] start with an application of quantitative ergodicity: In Theorem \[T1\], the LSI  in form of Lemma \[L1\] is applied to $\zeta=\nabla_j\phi_T(0)+\xi_j$, while in Theorem \[T2\], the SG  in form of Lemma \[LSGp\] is applied to $\zeta=\phi_T(0)$. Hence we require estimates for $\operatorname*{osc}_{a(x)}(\nabla_j\phi_T(a;0)+\xi_j)$ and $\operatorname*{osc}_{a(x)}\phi_T(a;0)$. Following [@GO1], these expressions are related to the elliptic Green’s function: \[L:RP\] For all $T>0$, $a\in\Omega$, $x\in{\mathbb{Z}}^d$ and $j=1,\ldots,d$ we have $$\begin{aligned} \label{eq:osc_phi} \operatorname*{osc}_{a(x)} \phi_T(a;0) &\leq C(d,\lambda) |\nabla_x G_T(a;0,x)| |\nabla \phi_T(a;x) + \xi|,\\ \label{T1.1} \operatorname*{osc}_{a(x)}(\nabla_j \phi_T(a;0) + \xi_j) &\leq C(d,\lambda) |\nabla \nabla G_T(a;0,x)| |\nabla \phi_T(a;x) + \xi|. \end{aligned}$$ Let $a\in\Omega$ and $x\in{\mathbb{Z}}^d$ be fixed. As in the definition of the oscillation, let $\tilde a\in\Omega$ denote an arbitrary coefficient field that differs from $a$ only at $x$, i.e. $\tilde a(y)=a(y)$ for all $y\neq x$. We consider the difference $\phi_T(\tilde a;x)-\phi_T(a;x)$. Equation  yields $$\frac{1}{T} ( \phi_T(\tilde a;\cdot) - \phi_T(a;\cdot) ) + \nabla^* \big(\tilde a(\cdot) ( \nabla \phi_T(\tilde a;\cdot) - \nabla \phi_T(a;\cdot) \big) = \nabla^* \big(( a - \tilde a )(\cdot) (\nabla \phi_T(a;\cdot) + \xi )\big)$$ and consequently the Green’s function representation yields $$\label{eq:phi_rep} \phi_T(\tilde a; y) - \phi_T(a;y) = \nabla_x G_T(\tilde a;y,x) \cdot ( a(x) - \tilde a(x) ) (\nabla \phi_T(a;x) + \xi )$$ for all $y\in{\mathbb{Z}}^d$. In particular, taking the gradient w. r. t. $y_j$ and then setting $y=x$ yields $$|\nabla_j \phi_T(\tilde a; x) - \nabla_j \phi_T(a;x)| \le 2 |\nabla_{j}\nabla G_T(\tilde a;x,x)| |\nabla \phi_T(a;x) + \xi|$$ since $a, \tilde a\in\Omega$ are uniformly bounded. In view of , the mixed derivative of $G_T$ is bounded by $\lambda^{-1}$ and we obtain $$\label{eq:osc_x_phi_x} |\nabla_j \phi_T(\tilde a; x) - \nabla_j \phi_T(a;x)| \le 2 \lambda^{-1} |\nabla \phi_T(a;x) + \xi|.$$ Exchanging $a$ and $\tilde a$ in yields $$\label{eq:phi_rep2} \phi_T(a; y) - \phi_T(\tilde a;y) = \nabla_x G_T(a;y,x) \cdot ( \tilde a(x) - a(x) ) (\nabla \phi_T(\tilde a;x) + \xi ).$$ We take the absolute value to obtain $$|\phi_T(a;0) - \phi_T(\tilde a;0)| \le 2 |\nabla_x G_T(a;0,x)| |\nabla \phi_T(\tilde a;x) + \xi|.$$ On the right hand side, we plug in  to obtain $$|\phi_T(a;0) - \phi_T(\tilde a;0)| \le C(d,\lambda) |\nabla_x G_T(a;0,x)| |\nabla \phi_T(a;x) + \xi|.$$ Since $\tilde a(x)$ was arbitrary, it follows that $$\operatorname*{osc}_{a(x)} \phi_T(a;0) \le C(d,\lambda) |\nabla_x G_T(a;0,x)| |\nabla \phi_T(a;x) + \xi|,$$ which is precisely the claimed identity . Taking the gradient with respect to $y_j$ in  yields $$ \nabla_j\phi_T(a; y) - \nabla_j\phi_T(\tilde a;y) = \nabla_{y,j}\nabla_x G_T(a;y,x) \cdot ( \tilde a(x) - a(x) ) (\nabla \phi_T(\tilde a;x) + \xi ).$$ We take the absolute value and insert to obtain $$|\nabla_j\phi_T(a; y) - \nabla_j\phi_T(\tilde a;y)| \le C(d,\lambda) |\nabla_{y,j}\nabla_x G_T(a;y,x)| |\nabla \phi_T(a;x) + \xi |.$$ and follows. In view of and it is natural that integrability properties of $G_T$ are required. Next to quantitative ergodicity, these Green’s function estimates are the second key ingredient in our approach. For Theorem \[T1\], which invokes , a standard $\ell^2$-energy estimate for $\nabla\nabla G_T$ suffices, see . 
For Theorem \[T2\], which invokes , some more regularity of the Green’s function is required. We need a spatially weighted estimate on the gradient $\nabla G_T$ that is uniform in $a\in\Omega$. To this end, as announced in Section \[SS:sketch2\], we define a weight $$\label{def_weight} \omega_q(x):= \begin{cases} (|x|+1)^{2(q-1)}+T^{1-q}(|x|+1)^{4(q-1)}&\text{for }d=2,\\ (|x|+1)^{2d(q-1)}&\text{for }d>2, \end{cases}$$ for every $q\ge 1$ and $T\ge 1$. \[L2\] There exists $q_0>1$ only depending on $\lambda$ and $d$ such that $$\begin{aligned} \label{P1.18} \sum_{x\in{\mathbb{Z}}^d} |\nabla_x\nabla_{y,j} G_T(x,y)|^2 &\le\lambda^{-2},\quad j=1, \ldots, d,\\ \sum_{x\in{\mathbb{Z}}^d}|\nabla_x G_T(a;x,0)|^{2q}\omega_q(x) &\le C(d,\lambda) \begin{cases} \log T&\text{for }d=2,\\ 1&\text{for }d>2 \end{cases}\label{eq:L2} \end{aligned}$$ for all $1\leq q \le q_0$. Lemma \[L2\] establishes a weighted $\ell^{2q}$-estimate on the gradient $\nabla G_T$ of the Green’s function. For the application, it is crucial that the integrability exponent $2q$ is larger than $2$. The weight is chosen in such a way that the estimate remains valid for the constant coefficient Green’s function $G_T^0(x):=G_T(\operatorname{\mathbbm{1}};x,0)$ (where we use the symbol $\operatorname{\mathbbm{1}}$ to denote the identity in $\mathbb{R}^{d\times d}$) whose gradient behaves as $$\label{eq:decay-const} |\nabla G_T^0(x)| \le C(d) (|x|+1)^{1-d}\exp\Big(-c_0\frac{|x|+1}{\sqrt T}\Big)$$ for some generic constant $c_0>0$, [which can easily be deduced from the well-known heat kernel bounds on the gradient of the parabolic Green’s function (for lack of a better reference, we refer to [@Delmotte-Deuschel Theorem 1.1] in the special case of a measure concentrating on $a(x)=\operatorname{\mathbbm{1}}$) along the lines of [@Mourrat Proposition 3.6].]{} With this bound at hand, the definition of the weight yields $$\label{eq:const} \sum_{x\in{\mathbb{Z}}^d}|\nabla G_T^0(x)|^{2q}\omega_q(x) \le C(d,q) \begin{cases} \log T&\text{for }d=2,\\ 1&\text{for }d>2 \end{cases}$$ for all $q>1$. Hence, Lemma \[L2\] says that the variable-coefficient Green’s function exhibits (on a spatially averaged level) the same decay properties as the constant-coefficient Green’s function. In the diagonal, scalar case, Lemma \[L2\] is a consequence of [@GO1 Lemma 2.9] and can also be derived from the weighted estimates on the parabolic Green’s function in [@GNO1 Theorem 3]. Although the arguments in [@GO1; @GNO1] rely on scalar techniques, Lemma \[L2\] also holds in the case of systems. Indeed, our proof relies only on techniques which are also available for systems. The proof will be split into three parts: First we will provide a simple argument for  valid in all dimensions. Then we will prove  in $d>2$ dimensions. The hardest part is the proof of  if $d=2$ since this is the critical dimension. An application of $\nabla_{y,j}$ to yields the following characterization for $\nabla_{y,j} G_T(a;\cdot,y)$ $$ \frac{1}{T}\sum_{x\in{\mathbb{Z}}^d}\nabla_{y,j} G_T(a;x,y) w(x)+\sum_{x\in{\mathbb{Z}}^d}\nabla w(x) \cdot a(x)\nabla_x\nabla_{y,j} G_T(a;x,y)=\nabla_j w(y)$$ for all $w\in\ell^2({\mathbb{Z}}^d)$. Taking $w(\cdot):=\nabla_{y,j} G_T(\cdot,y) \in\ell^2({\mathbb{Z}}^d)$ yields $$\frac{1}{T} \sum_{x\in{\mathbb{Z}}^d} |\nabla_{y,j} G_T(x,y)|^2 + \sum_{x\in{\mathbb{Z}}^d} \nabla_x \nabla_{y,j} G_T(x,y) \cdot a(x) \nabla_x \nabla_{y,j} G_T(x,y) = \nabla_{j} \nabla_{j} G_T(y,y),$$ where $\nabla_{j} \nabla_{j} G_T(y,y)=\nabla_{x,j} \nabla_{y,j} G_T(x,y)\big|_{x=y}$. 
The first term on the l. h. s. is positive and ellipticity yields $$\lambda \sum_{x\in{\mathbb{Z}}^d} |\nabla_x \nabla_{y,j} G_T(x,y)|^2 \le |\nabla_{j} \nabla_{j} G_T(y,y)| \le \bigg( \sum_{x\in{\mathbb{Z}}^d} |\nabla_x \nabla_{y,j} G_T(x,y)|^2 \bigg)^\frac{1}{2}.$$ Thus follows. [**Step 1**]{}. A priori estimate: We prove $$\label{apriori_L2} |G_T(0,0)| + \sum_x |\nabla G_T(x,0)|^2 \le C(d,\lambda).$$ The weak form of with $\zeta = G_T(\cdot,0)$ and ellipticity immediately yield $$0 \le \lambda \sum_x |\nabla G_T(x,0)|^2 \le G_T(0,0),$$ in particular $G_T(0,0) \ge 0$. Now a Sobolev embedding in $d>2$ with constant $C(d)$ yields $$\begin{aligned} |G_T(0,0)| &\le \bigg(\sum_x |G_T(x,0)|^{\frac{2d}{d-2}}\bigg)^{\frac{d-2}{2d}}\\ &\le C(d) \bigg( \sum_x |\nabla G_T(x,0)|^2 \bigg)^{\frac{1}{2}} \le C(d,\lambda) |G_T(0,0)|^{\frac{1}{2}}.\end{aligned}$$ The Sobolev embedding is readily obtained from its continuum version on ${\mathbb{R}}^d$ via a linear interpolation function on a triangulation subordinate to the lattice ${\mathbb{Z}}^d$. Hence $|G_T(0,0)|\le C(d,\lambda)$ and follows. [**Step 2**]{}. A bound involving weights: In this step we show that there exists $\alpha_0(d) > 0$ such that $$\label{weight_hardy} \sum_x ( |x| + 1 )^{2\alpha-2} |G_T(x,0)|^2 \le C(d) \sum_x (|x|+1)^{2\alpha} |\nabla G_T(x,0)|^2$$ for all $0 < \alpha \le \alpha_0$. (Note that both sides are well-defined for $G_T$.) We start by recalling Hardy’s inequality in ${\mathbb{R}}^d$ if $d>2$: $$\int_{{\mathbb{R}}^d}\frac{|f|^2}{|x|^2} \;dx \le \Big(\frac{2}{d-2}\Big)^2 \int_{{\mathbb{R}}^d} |\nabla f|^2 \;dx$$ for all $f\in H^1({\mathbb{R}}^d)$. A discrete counterpart can be derived by interpolation w. r. t. a triangulation subordinate to the lattice and yields $$\label{hardy} \sum_x ( |x| + 1 )^{2\alpha-2} |G_T(x,0)|^2 \le C(d) \sum_x \big|\nabla( ( |x| + 1 )^{\alpha} G_T(x,0) ) \big|^2.$$ The discrete Leibniz rule $\nabla_i (fg)(x) = f(x+e_i)\nabla_i g(x) + g(x)\nabla_i f(x)$ yields $$\nabla_i(( |x| + 1 )^{\alpha} G_T(x,0) ) = ( |x+e_i| + 1 )^{\alpha} \nabla_i G_T(x,0) + G_T(x,0) \nabla_i( |x| + 1 )^{\alpha}.$$ By the mean value theorem we obtain the simple inequality $|a^\alpha - b^\alpha| \le \alpha (a^{\alpha-1} + b^{\alpha-1}) |a-b|$ for all $a,b\ge 0$ [and we trivially have that $$\frac{1}{2} (|x|+1) \le |x+e|+1 \le 2(|x|+1).$$ The choice $a=|x+e|+1$ and $b=|x|+1$ thus yields $$\nabla_i( |x| + 1 )^{\alpha} \le 3 \alpha (|x|+1)^{\alpha-1}$$ ]{}for all $0\le\alpha\le1$. Summation over $i=1,\ldots,d$ and the discrete Leibniz rule above consequently yield $$\big| \nabla \big( ( |x| + 1 )^{\alpha} G_T(x,0) \big) \big|^2 \le C(d) \Big( ( |x| + 1 )^{2\alpha} |\nabla G_T(x,0)|^2 + \alpha ( |x| + 1 )^{2\alpha-2} |G_T(x,0)|^2 \Big)$$ for any $0\le\alpha\le1$. We substitute this estimate in Hardy’s inequality and take $\alpha=\alpha_0(d)$ small enough to absorb the last term into the l. h. s. to obtain , i.e.$$\sum_x (|x|+1)^{2\alpha_0-2} |G_T(x,0)|^2 \le C(d) \sum_x ( |x| + 1 )^{2\alpha_0} |\nabla G_T(x,0)|^2.$$ [**Step 3**]{}. 
Improvement of Step 1 to include weights: Now we deduce the existence of $\alpha_0 = \alpha_0(d,\lambda) > 0$ (smaller than $d$ and possibly smaller than $\alpha_0(d)$ from Step 2) such that $$\label{apriori_L2_alpha} \sum_x \big( |x| + 1 \big)^{2\alpha_0} |\nabla G_T(x,0)|^2 \le C(d,\lambda).$$ To this end, we set $w(x) = (|x|+1)^{2\alpha} G_T(x,0)$ and note that $$\nabla_iw(x)=(|x|+1)^{2\alpha}\nabla_iG_T(x,0)+\nabla_i\Big((|x+e_i|+1)^{2\alpha}\Big) G_T(x+e_i,0).$$ Hence, yields (for $y=0$): $$\begin{gathered} \label{Green_weights_eq} \frac{1}{T} \sum_x (|x|+1)^{2\alpha} |G_T(x,0)|^2 + \sum_x\sum_{i,j=1}^d G_T(x+e_i,0) \nabla_i \big((|x+e_i|+1)^{2\alpha}\big) \cdot a_{ij}(x) \nabla_j G_T(x,0)\\ + \sum_x( |x| + 1 )^{2\alpha} \nabla G_T(x,0) \cdot a(x) \nabla G_T(x,0) = G_T(0,0).\end{gathered}$$ As in Step 2, we have that $$\big|\nabla_i \big((|x|+1)^{2\alpha}\big)\big| \le 4 \alpha (|x|+1)^{\alpha-1}(|x+e_i|+1)^\alpha.$$ for all $0 \le \alpha \le 1$ and $i=1,\ldots,d$. Thus , ellipticity, and Hölder’s inequality yield $$\begin{gathered} \lambda\sum_x \big( |x| + 1 \big)^{2\alpha} |\nabla G_T(x,0)|^2 \le |G_T(0,0)|+\\C(d) \alpha \bigg(\sum_x |G_T(x,0)|^2 (|x|+1)^{2\alpha-2} \bigg)^\frac{1}{2} \bigg(\sum_x |\nabla G_T(x,0)|^2 (|x|+1)^{2\alpha} \bigg)^\frac{1}{2}.\end{gathered}$$ We apply the result of Step 2 with $\alpha \le \alpha_0(d)$ and then possibly decrease $\alpha$ further to absorb the second term on the r. h. s. This is possible for $\alpha\le\alpha_0(d,\lambda)$ for some $\alpha_0(d,\lambda)>0$. By Step 1, we conclude . By the discrete $\ell^{2q}-\ell^2$-inequality $\|f\|_{\ell^{2q}({\mathbb{Z}}^d)} \le \|f\|_{\ell^{2}({\mathbb{Z}}^d)}$, it follows that $$\sum_x \big( |x| + 1 \big)^{2q\alpha_0} |\nabla G_T(x,0)|^{2q} \le C(d,\lambda)$$ for all $q>1$. [Hence Lemma \[L2\] holds for $d>2$ with $\omega_q$ defined in  as long as $2d(q-1) \le 2q\alpha_0$, i.e. we may take $q_0 = \frac{d}{d-\alpha_0}$.]{} Let us remark that the following proof is valid in all dimensions $d\ge2$. However, if $d>2$, we have the simpler proof above. Fix $T>0$ and $a\in\Omega$. For convenience, we set $$\label{eq:5} G(x):=G_T(a;x,0)\qquad\text{and}\qquad G^0(x):=G_{\frac{T}{\lambda}}(\operatorname{\mathbbm{1}};x,0),$$ where $\operatorname{\mathbbm{1}}$ denotes the identity in ${\mathbb{R}}^{d\times d}$ and $\lambda$ denotes the constant of ellipticity from Assumption \[ass:ell\]. We first introduce some notation. For $1\leq q<\infty$ and $\gamma>0$, we denote by $\ell^{q}_\gamma$ the space of vector fields $g:{\mathbb{Z}}^d\to{\mathbb{R}}^d$ with $$\begin{aligned} \|g\|_{\ell^q_\gamma}&:=&\left(\sum_{x\in{\mathbb{Z}}^d}|g(x)|^q(|x|+1)^{\gamma}\right)^{\frac{1}{q}}<\infty.\end{aligned}$$ Likewise we denote by $\ell^{2q}_{{\omega_q}}$ the space of vector fields with $$\|g\|_{\ell^{2q}_{\omega_q}}:=\left(\sum_{x\in{\mathbb{Z}}^d}|g(x)|^{2q}{\omega_q}(x)\right)^{\frac{1}{2q}}<\infty,$$ with ${\omega_q}$ defined by . We write $\|\mathcal H\|_{B(X)}$ for the operator norm of a linear operator $\mathcal H: X\to X$ defined on a normed space $X$. [**Step 1**]{}. 
Helmholtz decomposition: We claim that the gradients of the variable coefficient Green’s function $G$ and of the constant coefficient Green’s function $G^0$ from are related by $$\label{helmholtz} (\operatorname{Id}+ \mathcal{H} \overline a) \nabla G = \lambda \nabla G^0$$ where $\overline a = \lambda a - \operatorname{\mathbbm{1}}$, $\mathcal H:=\nabla\mathcal L^{-1} \nabla^*$ denotes the modified Helmholtz projection, $\mathcal L:= \frac{\lambda}{T} + \nabla^*\nabla$, and $\operatorname{Id}$ denotes the identity operator. Here and in the following, we tacitly identify $\overline a$ with the multiplication operator that maps the vector field $g:{\mathbb{Z}}^d\to{\mathbb{R}}^{d}$ to the vector field $(\overline a g)(x):=\overline a(x)g(x)$. Moreover, since $G$ is integrable [in the sense of Lemma \[L:Gint\], the operators]{} $\mathcal L^{-1}$, and thus $\mathcal H$ and $(\operatorname{Id}+\mathcal{H}\overline a)$ are bounded linear operators on $\ell^2({\mathbb{Z}}^d)$ (resp. $\ell^2({\mathbb{Z}}^d,{\mathbb{R}}^d)$) and the weighted spaces discussed in Step 2 below. Identity may be seen by appealing to satisfied by $G$ and the equation $\mathcal L G^0=\delta$ satisfied by $G^0$: $$\begin{aligned} (\operatorname{Id}+ \mathcal{H} \overline a) \nabla G &= \nabla G+\lambda \nabla\mathcal L^{-1} \nabla^* a\nabla G- \nabla\mathcal L^{-1} \nabla^* \nabla G\\ &= \nabla G+\lambda \nabla\mathcal L^{-1} \left(\delta-\frac1TG\right)- \nabla\mathcal L^{-1}\left(\mathcal L-\frac{\lambda}{T}\right) G\\ &= \lambda \nabla\mathcal L^{-1}\delta=\lambda \nabla G^0. \end{aligned}$$ [**Step 2.**]{} Invertibility of $(\operatorname{Id}+\mathcal H\overline a)$ in a weighted space: In this step, we prove that there exists $q_0=q_0(d,\lambda)>1$ such that the operator $(\operatorname{Id}+\mathcal H\overline a):\ell^{2q}_{\omega_q}\to \ell^{2q}_{\omega_q}$ is invertible and $$\label{inv_Helmholtz} \|(\operatorname{Id}+\mathcal H\overline a)\|_{B(\ell^{2q}_{\omega_q})}\leq C(d,\lambda)$$ for all $1\leq q\leq q_0$ We split the proof into several sub-steps. [*Step 2a.*]{} Reduction to an estimate for $\mathcal H$: We claim that it suffices to prove the following statement. There exists $q_0=q_0(\lambda)>1$ such that $$\label{eq:8} \max \left\{ \| \mathcal{H} \|_{B(\ell^{2q}_{2q-2})}, \| \mathcal{H} \|_{B(\ell^{2q}_{4q-4})} \right\} \leq \frac{2-\lambda}{2(1-\lambda)}$$ for all $1\leq q\leq q_0$. Our argument is as follows: We only need to show that implies that $$\label{eq:10} \|\mathcal H\overline a\|_{B(\ell^{2q}_{\omega_q})}\leq \frac{2-\lambda}{2},$$ since then $(\operatorname{Id}+\mathcal H\overline a)$ can be inverted by a Neumann-series. Since the $\|\cdot\|_{B(\ell^{2q}_{\omega_q})}$-norm is submultiplicative, inequality follows from $$\label{eq:9} \| \mathcal{H} \|_{B(\ell^{2q}_{\omega_q})}\leq \frac{2-\lambda}{2(1-\lambda)}\qquad\text{and}\qquad \|\overline a\|_{B(\ell^{2q}_{\omega_q})}\leq 1-\lambda.$$ We start with the argument for the second inequality in . 
Thanks to , we have for all $a_0\in\Omega_0$ and $v\in{\mathbb{R}}^d$: $$\begin{aligned} |(\lambda a_0-\operatorname{\mathbbm{1}})v|^2&=&v\cdot((\lambda a_0-\operatorname{\mathbbm{1}})^t(\lambda a_0-\operatorname{\mathbbm{1}}))v\\ &=&\lambda^2|a_0v|^2-2v\cdot\frac{a_0+a_0^t}{2}v+|v|^2\ =\ \lambda^2|a_0v|^2-2v\cdot a_0v+|v|^2\\ &\stackrel{\eqref{ass:ell}}{\leq}&\lambda^2|v|^2-2\lambda|v|^2+|v|^2=(1-\lambda)^2|v|^2,\end{aligned}$$ [which shows  by definition of the (spectral) operator norm.]{} Regarding the first inequality in , we note that $\|\cdot\|_{\ell^{2q}_{\omega_q}}^{2q} = \|\cdot\|_{\ell^{2q}_{2q-2}}^{2q} + {T}^{1-q} \|\cdot\|_{\ell^{2q}_{4q-4}}^{2q}$, as can been seen by recalling definition . Hence, $$\begin{aligned} \| \mathcal{H} \|_{B(\ell^{2q}_{\omega_q})}^{2q} &= \sup_{\|g\|_{\ell^{2q}_{\omega_q}}\le1}\left( \|\mathcal Hg\|_{\ell^{2q}_{2q-2}}^{2q} + {T}^{1-q} \|\mathcal Hg\|_{\ell^{2q}_{4q-4}}^{2q}\right)\\ &\le \max \left\{ \| \mathcal{H} \|_{B(\ell^{2q}_{2q-2})}^{2q}, \| \mathcal{H} \|_{B(\ell^{2q}_{4q-4})}^{2q} \right\}\sup_{\|g\|_{\ell^{2q}_{\omega_q}}\le1}\left( \|g\|_{\ell^{2q}_{2q-2}}^{2q} + {T}^{1-q} \|g\|_{\ell^{2q}_{4q-4}}^{2q}\right)\\ &= \max \left\{ \| \mathcal{H} \|_{B(\ell^{2q}_{2q-2})}^{2q}, \| \mathcal{H} \|_{B(\ell^{2q}_{4q-4})}^{2q} \right\} \stackrel{\eqref{eq:8}}{<}\Big(\frac{2-\lambda}{2(1-\lambda)}\Big)^{2q},\end{aligned}$$ and follows. [*Step 2b.*]{} Proof of : A standard energy estimate yields $$\label{eq:12} \|\mathcal H\|_{B(\ell^2({\mathbb{R}}^d,{\mathbb{Z}}^d))}\leq 1.$$ [Indeed, given $g\in[\ell^2({\mathbb{Z}}^d)]^d$, we have that $\mathcal{H} g = \nabla u$ where $u$ solves $\frac{\lambda}{T} u + \nabla^* \nabla u = \nabla^* g$. Testing with $u$ yields $\|\nabla u\|_{\ell^2({\mathbb{Z}}^d)} \le \| g \|_{\ell^2({\mathbb{Z}}^d)}$ which is just another way of writing .]{} In the following we prove the desired inequality by complex interpolation of $B(\ell^2({\mathbb{R}}^d,{\mathbb{Z}}^d))=B(\ell^2_0)$ with $B(\ell^{p}_\gamma)$ for suitable $p$ and $\gamma$. In Proposition \[prop:weighted-dics-cz\] below (in Section \[S:CZ\]) we prove a Calderón-Zygmund-type estimate for $\mathcal H$ in weighted spaces and obtain $$\| \mathcal{H} \|_{B(\ell^{p}_\gamma)} < \infty\quad\text{for all } 2\le p\le \infty\text{ and }0 \le \gamma < \min\{2(p-1),{\textstyle\frac{1}{2}}\}.\label{eq:4}$$ Fix such $p$ and $\gamma$ and $0 < \theta < 1$. A theorem due to Stein and Weiss [@Bergh-Lofstrom-76 Theorem 5.5.1] that also holds in the discrete setting yields $$\label{stein_weiss} \| \mathcal{H} \|_{B(\ell^{p}_{\gamma'})} \le \| \mathcal{H} \|_{B(\ell^{p}_\gamma)}^{1-\theta} \| \mathcal{H} \|_{B(\ell^{p})}^\theta,\qquad\text{if $\gamma'= (1-\theta)\gamma$.}$$ Likewise the classical Riesz-Thorin theorem [@Bergh-Lofstrom-76 Theorem 1.1.1] yields $$\label{riesz_thorin} \| \mathcal{H} \|_{B(\ell^{p'}_{\gamma})} \le \| \mathcal{H} \|_{B(\ell^{p}_\gamma)}^{1-\theta} \| \mathcal{H} \|_{B(\ell^{2}_\gamma)}^\theta,\qquad\text{if $\frac{1}{p'} = \frac{1-\theta}{p} + \frac{\theta}{2}$.}$$ In particular, the map $(p,\gamma)\mapsto\|\mathcal{H}\|_{\mathcal B(\ell_{\gamma}^{p})}$ is continuous at $(2,0)$: Given $\epsilon > 0$, we use with $\gamma=0$ to find $p'>2$ such that $\| \mathcal{H} \|_{\mathcal B(\ell^{p'})} \le 1+ \frac{\epsilon}{2}$. Then we apply to find $\gamma' > 0$ such that $\max\{\| \mathcal{H} \|_{\mathcal B(\ell^{2}_{\gamma'})},\| \mathcal{H} \|_{\mathcal B(\ell^{p'}_{\gamma'})}\} \le 1+\epsilon$. 
Hence, we have $\| \mathcal{H} \|_{\mathcal B(\ell^{p}_{\gamma})} \le 1+\epsilon$ for the corner points $(p,\gamma)$ of the square $[2,p']\times[0,\gamma']$. By  resp. , we may always decrease either $p'$ resp. $\gamma'$ while achieving the same bound. Consequently we have that $\| \mathcal{H} \|_{\mathcal B(\ell^{p}_{\gamma})} \le 1+\epsilon$ for all $(p,\gamma)\in [2,p']\times[0,\gamma']$. In particular, [letting $\epsilon=\frac{2-\lambda}{2(1-\lambda)}-1>0$]{}, there exists $q_0>1$ such that $\| \mathcal{H} \|_{\mathcal B(\ell^{2q_0}_{2q_0-2})}\leq\frac{2-\lambda}{2(1-\lambda)}$ and the same bound for $\| \mathcal{H} \|_{\mathcal B(\ell^{2q_0}_{4q_0-4})}$. By monotonicity in the exponent, estimate follows for all $1\leq q\leq q_0$. This completes the argument of Step 2. [**Step 3**]{}. In this last step, we fix $d=2$ and derive the bound $$\label{nabla_G_bound} \sum_x |\nabla G(x)|^{2q} {\omega_q}(x) = \| \nabla G \|_{\ell^{2q}_{\omega_q}}^{2q} \le C(\lambda,q) \log T$$ for $q$ and ${\omega_q}$ as in Step 2. The relation and the estimate yield $$\| \nabla G \|_{\ell^{2q}_{\omega_q}} \le C(\lambda) \| \nabla G^0 \|_{\ell^{2q}_{\omega_q}}$$ so that it is enough to consider the constant coefficient Green’s function whose behaviour is well-known and is given by (cf. ) $$|\nabla G^0(x)| \le C (|x|+1)^{-1}\exp\bigg(-\frac{{\sqrt{\lambda}}|x|}{C\sqrt{T}}\bigg),$$ where $C$ is a universal constant. Hence by splitting $ \| \nabla G^0 \|_{\ell^{2q}_{\omega_q}}^{2q}$ into its contributions coming from $|x|\le \sqrt{T}$ and $|x| > \sqrt{T}$ and using the definition of the weight ${\omega_q}$, we have $$\begin{aligned} \| \nabla G^0 \|_{\ell^{2q}_{\omega_q}}^{2q} &= \sum_x |\nabla G^0(x)|^{2q}\left((|x|+1)^{2q-2}+T^{1-q}(|x|+1)^{4q-4}\right) \\ &\le C \sum_x (|x|+1)^{-2q} e^{-\frac{2q{\sqrt{\lambda}}|x|}{C\sqrt{T}}} \left((|x|+1)^{2q-2}+T^{1-q}(|x|+1)^{4q-4}\right) \\ & \le C(\lambda ,q) \sum_{|x|\le\sqrt{T}} (|x|+1)^{-2} +C(\lambda ,q)\sum_{|x|>\sqrt{T}} {T}^{1-q}(|x|+1)^{2q-4} e^{-\frac{2q{\sqrt{\lambda}}|x|}{C\sqrt{T}}}\\ &\le C(\lambda ,q) \log T+ C(\lambda ,q) \sum_{|x|>\sqrt{T}} {T}^{-1} \big({\textstyle\frac{|x|}{\sqrt{T}}}\big)^{2q-4} e^{-\frac{2{\sqrt{\lambda}}|x|}{C\sqrt{T}}}\\ &\le C(\lambda ,q) \log T+C(\lambda ,q), \end{aligned}$$ where we have used that $q > 1$. Logarithmic Sobolev inequality and spectral gap revisited {#SS:E} --------------------------------------------------------- The LSI only enters the proof of Theorem \[T1\] in form of the following lemma borrowed from [@MO1]. \[L1\] Let ${\left\langle \cdot \right\rangle}$ statisfy LSI  with constant $\rho>0$. Then we have that $$\label{T1.0} \langle|\zeta|^{2p}\rangle^\frac{1}{2p}\le C(\delta,p,\rho)\langle|\zeta|^2\rangle^{\frac{1}{2}} +\delta\Big\langle\bigg(\sum_{x\in{\mathbb{Z}}^d} \Big( \operatorname*{osc}_{a(x)} \zeta\Big)^2 \bigg)^p\Big\rangle^\frac{1}{2p}$$ for any $\delta>0$, $1\le p<\infty$ and $\zeta\in C_b(\Omega)$. This inequality expresses a reverse Jensen inequality and allows to bound high moments of $\zeta$ to the expense of some control on the oscillations of $\zeta$. [The difference to SG lies in the fact that the improved integrability properties of LSI allow us to choose $\delta>0$ arbitrarily small.]{} In the proof of Theorem \[T1\], we will apply to the random variables $\zeta=\nabla_i \phi_T(0) + \xi_i$ for $i=1,\ldots,d$. 
The second moment of $\nabla_i \phi_T(0) + \xi_i$ will be controlled below, whereas the oscillation was already estimated Lemma \[L:RP\] and involves the second mixed derivatives of $G_T$. In the proof of Theorem \[T2\], we just require the weaker statement of SG. To be precise, we will use an $L^{2p}_{\langle\cdot\rangle}$-version of SG which is the content of the following lemma. \[LSGp\] Let ${\left\langle \cdot \right\rangle}$ statisfy SG  with constant $\rho>0$. Then for arbitrary $1\le p<\infty$ and $\zeta\in C_b(\Omega)$ it holds that $$\label{eq:SGp} {\left\langle |\zeta - \langle \zeta \rangle|^{2p} \right\rangle}\le C(p,\rho) \Big\langle\bigg(\sum_{x\in{\mathbb{Z}}^d} \Big( \operatorname*{osc}_{a(x)} \zeta\Big)^2 \bigg)^p \Big\rangle.$$ The proof is a combination of the proofs of [@GNO1 Lemma 2] and [@MO1 Lemma 4]. We present it here for the convenience of the reader. Without loss of generality assume that $\zeta\in C_b(\Omega)$ satisfies ${\left\langle \zeta \right\rangle}=0$. The triangle inequality and SG  yield $$\begin{aligned} {\left\langle |\zeta|^{2p} \right\rangle} &\le 2{\left\langle \big(|\zeta|^{p} - {\left\langle |\zeta|^p \right\rangle}\big)^2 \right\rangle} + 2{\left\langle |\zeta|^p \right\rangle}^2\\ &\le \frac{2}{\rho}{\left\langle \sum_{x}\Big(\operatorname*{osc}_{a(x)}|\zeta|^p\Big)^2 \right\rangle}+2{\left\langle |\zeta|^{2p} \right\rangle}^{\frac{p-2}{p-1}}{\left\langle |\zeta|^2 \right\rangle}^{\frac{p}{p-1}}.\end{aligned}$$ By Young’s inequality, we may absorb $\langle |\zeta|^{2p} \rangle$ on the l. h. s. and we obtain that $$\label{eq:SG-p} {\left\langle |\zeta|^{2p} \right\rangle} \le \frac{4}{\rho}{\left\langle \sum_{x}\Big(\operatorname*{osc}_{a(x)}|\zeta|^p\Big)^2 \right\rangle}+C(p){\left\langle |\zeta|^2 \right\rangle}^{p}.$$ We insert SG , note $\langle\zeta\rangle=0$ and apply Jensen’s inequality to obtain that $$\label{eq:SG-1^p} {\left\langle |\zeta|^2 \right\rangle}^{p} \le \rho^{-p} {\left\langle \sum_{x}\Big(\operatorname*{osc}_{a(x)}\zeta\Big)^2 \right\rangle}^p \le \rho^{-p} {\left\langle \bigg(\sum_{x}\Big(\operatorname*{osc}_{a(x)}\zeta\Big)^2\bigg)^p \right\rangle}.$$ In order to deal with the first term in , we note that the elementary inequality $|t^p - s^p| \le C(p) (t^{p-1} |t-s| + |t-s|^p)$ for all $t,s\ge0$ yields for every two coefficient fields $a,\tilde a \in \Omega$: $$\Big||\zeta(a)|^p - |\zeta(\tilde a)|^p\Big| \le C(p) \big(|\zeta(a)|^{p-1} |\zeta(a)-\zeta(\tilde a)| + |\zeta(a)-\zeta(\tilde a)|^p\big),$$ where we have in addition used the triangle inequality in form of $\Big||\zeta(a)|-|\zeta(\tilde a)|\Big| \le |\zeta(a)-\zeta(\tilde a)|$. 
Letting $\tilde a \in \Omega$ run over the coefficient fields that coincide with $a$ outside of $x\in{\mathbb{Z}}^d$ yields $$\operatorname*{osc}_{a(x)}|\zeta|^p \le C(p) \bigg(|\zeta|^{p-1} \operatorname*{osc}_{a(x)} \zeta + \Big(\operatorname*{osc}_{a(x)} \zeta\Big)^p\bigg)$$ Consequently we obtain $$\begin{aligned} {\left\langle \sum_{x}\Big(\operatorname*{osc}_{a(x)}|\zeta|^p\Big)^2 \right\rangle} &\le C(p) \Bigg({\left\langle |\zeta|^{2(p-1)}\sum_{x}\Big(\operatorname*{osc}_{a(x)}\zeta\Big)^2 \right\rangle} + C(p) {\left\langle \sum_{x}\Big(\operatorname*{osc}_{a(x)} \zeta\Big)^{2p} \right\rangle}\Bigg)\\ &\le C(p) \Bigg({\left\langle |\zeta|^{2p} \right\rangle}^{\frac{p-1}{p}}{\left\langle \bigg(\sum_{x}\Big(\operatorname*{osc}_{a(x)}\zeta\Big)^2\bigg)^p \right\rangle}^{\frac{1}{p}} + {\left\langle \bigg(\sum_{x}\Big(\operatorname*{osc}_{a(x)}\zeta\Big)^2\bigg)^p \right\rangle}\Bigg)\end{aligned}$$ by Hölder’s inequality and the discrete $\ell^2\subset\ell^{2p}$-inequality. Inserting this estimate as well as  into  yields $${\left\langle |\zeta|^{2p} \right\rangle} \le C(p,\rho) \Bigg({\left\langle |\zeta|^{2p} \right\rangle}^{\frac{p-1}{p}}{\left\langle \bigg(\sum_{x}\Big(\operatorname*{osc}_{a(x)}\zeta\Big)^2\bigg)^p \right\rangle}^{\frac{1}{p}} + {\left\langle \bigg(\sum_{x}\Big(\operatorname*{osc}_{a(x)}\zeta\Big)^2\bigg)^p \right\rangle}\Bigg).$$ Again, we may absorb the factor $\langle |\zeta|^{2p} \rangle$ on the l. h. s. using Young’s inequality and thus conclude the proof of Lemma \[LSGp\]. Proof of Theorem \[T1\] {#SS:T1} ----------------------- [**Step 1**]{}. We claim the following energy estimate: $$\label{T1.3} {\left\langle |\nabla\phi_T(0)+\xi|^2 \right\rangle}\le C(\lambda) |\xi|^2.$$ To see this, we multiply  with $\phi_T(0)$ and take the expectation: $$\frac{1}{T}{\left\langle |\phi_T(0)|^2 \right\rangle}+{\left\langle \phi_T(0)\nabla^*(a\nabla\phi_T)(0) \right\rangle}=-{\left\langle \phi_T(0)\nabla^*(a\xi)(0) \right\rangle}.$$ Thanks to the stationarity of ${\left\langle \cdot \right\rangle}$ and the stationarity of $\phi_T$, cf. , we have that $$\begin{aligned} \langle \phi_T(0) \nabla^* w(x) \rangle &= \sum_{i=1}^d \langle \phi_T(0) \big( w_i(x-e_i) - w_i(x) \big) \rangle\\ &= \sum_{i=1}^d \langle \big( \phi_T(e_i) - \phi_T(0) \big) w_i(x) \rangle = \langle \nabla \phi_T(0) \cdot w(x) \rangle \end{aligned}$$ for all stationary vector fields $w : {\mathbb{Z}}^d \to {\mathbb{R}}^d$. This integration by parts property then yields $$\frac{1}{T}{\left\langle |\phi_T(0)|^2 \right\rangle}+\langle \nabla\phi_T(0) \cdot a(0) \nabla\phi_T(0) \rangle =- \langle \nabla\phi_T(0) \cdot a(0) \xi \rangle.$$ Since the first term on the left-hand side is non-negative, uniform ellipticity, cf. , yields $${\left\langle |\nabla \phi_T(0)|^2 \right\rangle} \le \lambda^{-2} |\xi|^2,$$ and  follows from the triangle inequality. [**Step 2**]{}. We claim that $$\label{P1.17} \bigg\langle\bigg(\sum_{x}|\nabla\nabla G_T(0,x)|^2|\nabla\phi_T(x)+\xi|^2\bigg)^p\bigg\rangle \le \lambda^{-2p}\langle|\nabla\phi_T(0)+\xi|^{2p}\rangle.$$ We start by applying Hölder’s inequality with exponent $p$ in space: $$\begin{gathered} \bigg(\sum_{x}|\nabla\nabla G_T(0,x)|^2|\nabla\phi_T(x)+\xi|^2\bigg)^p\\ \le \bigg(\sum_{x}|\nabla\nabla G_T(0,x)|^2\bigg)^{p-1} \sum_{x}|\nabla\nabla G_T(0,x)|^2|\nabla\phi_T(x)+\xi|^{2p}. 
\end{gathered}$$ We now apply $\langle\cdot\rangle$ to obtain $$\begin{aligned} &\Big\langle\bigg(\sum_{x}|\nabla\nabla G_T(0,x)|^2 |\nabla\phi_T(x)+\xi|^2 \bigg)^p\Big\rangle\\ &\le \bigg(\sup_{a\in\Omega}\sum_{x}|\nabla\nabla G_T(0,x)|^2\bigg)^{p-1} \sum_{x}\langle |\nabla\nabla G_T(0,x)|^2 |\nabla\phi_T(x)+\xi|^{2p}\rangle. \end{aligned}$$ At this stage, we appeal to the stationarity of $G_T$, cf. (\[P1.15\]), the stationarity of $\nabla\phi_T$, cf. , and the stationarity of $\langle\cdot\rangle$ in form of $$\langle|\nabla\nabla G_T(0,x)|^2|\nabla\phi_T(x)+\xi|^{2p}\rangle =\langle|\nabla\nabla G_T(-x,0)|^2|\nabla\phi_T(0)+\xi|^{2p}\rangle,$$ which yields $$\begin{aligned} &\Big\langle\bigg(\sum_{x}|\nabla\nabla G_T(0,x)|^2|\nabla\phi_T(x)+\xi|^2\bigg)^p\Big\rangle\\ &\le \bigg(\sup_{a\in\Omega}\sum_{x}|\nabla\nabla G_T(0,x)|^2\bigg)^{p-1} \Big\langle \sum_{x} |\nabla\nabla G_T(-x,0)|^2|\nabla\phi_T(0)+\xi|^{2p} \Big\rangle\\ &\le \bigg(\sup_{a\in\Omega}\sum_{x}|\nabla\nabla G_T(0,x)|^2\bigg)^{p-1} \bigg(\sup_{a\in\Omega}\sum_{x}|\nabla\nabla G_T(x,0)|^2 \bigg) \langle|\nabla\phi_T(0)+\xi|^{2p}\rangle. \end{aligned}$$ We conclude by appealing to symmetry, cf. , and . Note that the transposed coefficient field $a^t$ satisfies $a^t\in\Omega$. [**Step 3**]{}. Conclusion: The combination of and yields $$\label{P1.35} \bigg\langle \bigg(\sum_{x}\Big(\operatorname*{osc}_{a(x)}(\nabla_i\phi_T(0)+\xi_i)\Big)^2\bigg)^p\bigg\rangle^\frac{1}{p} \le C(d,\lambda) \langle|\nabla\phi_T(0)+\xi|^{2p}\rangle^\frac{1}{p}$$ for $i=1,\ldots,d$. We now appeal to Lemma \[L1\] with $\zeta=\nabla_i \phi_T(0) + \xi_i$, i.e.$$\langle|\nabla_i \phi_T(0) + \xi_i|^{2p}\rangle^\frac{1}{2p}\le C(\delta,p,\rho)\langle|\nabla_i \phi_T(0) + \xi_i|^2\rangle^{\frac{1}{2}} +\delta\Big\langle\bigg(\sum_{x\in{\mathbb{Z}}^d} \Big( \operatorname*{osc}_{a(x)} (\nabla_i \phi_T(0) + \xi_i) \Big)^2 \bigg)^p\Big\rangle^\frac{1}{2p}.$$ On the r. h. s. we insert the estimates (\[T1.3\]) and (\[P1.35\]) and sum in $i=1,\ldots,d$ to obtain (after redefining $\delta$) $$\nonumber \sum_{i=1}^d\langle|\nabla_i\phi_T(0)+\xi_i|^{2p}\rangle^\frac{1}{2p}\le C(d,\lambda,\delta,p,\rho)|\xi| +\delta \langle|\nabla\phi_T(0)+\xi|^{2p}\rangle^\frac{1}{2p}.$$ By the equivalence of finite-dimensional norms, it follows (again, after redefining $\delta$) $$\langle|\nabla\phi_T(0)+\xi|^{2p}\rangle^\frac{1}{2p}\le C(d,\lambda,\delta,p,\rho)|\xi| +\delta \langle|\nabla\phi_T(0)+\xi|^{2p}\rangle^\frac{1}{2p}.$$ By choosing $\delta=\frac{1}{2}$, we may absorb the second term on the r. h. s. into the l. h. s. which completes the proof. Proof of Theorem \[T2\] {#SS:T2} ----------------------- As a starting point, we apply SG in its $p$-version Lemma \[LSGp\]: We apply this inequality with $\zeta=\phi_T(0)$. Since ${\left\langle \phi_T(0) \right\rangle}=0$ (as can be seen by taking the expectation of and using the stationarity of ${\left\langle \cdot \right\rangle}$ and $\phi_T$), estimate yields $$\langle|\phi_T(0)|^{2p}\rangle\le \frac{1}{\rho} \Big\langle\bigg(\sum_{x}\Big( \operatorname*{osc}_{a(x)} \phi_T(0) \Big)^2\bigg)^p\Big\rangle.$$ The oscillation estimate  yields $$\langle|\phi_T(0)|^{2p}\rangle \le C(d,\lambda,\rho) \Big\langle \bigg( \sum_x |\nabla G_T(0,x)|^2 |\nabla\phi_T(x)+ \xi|^2 \bigg)^p \Big\rangle.$$ With the help of Hölder’s inequality we can introduce the weight $\omega_q$ from Lemma \[L2\] and get for the r. h. s. 
$$\begin{aligned} &\Big\langle \bigg( \sum_x |\nabla G_T(0,x)|^2 |\nabla\phi_T(x) + \xi|^2 \bigg)^p\Big\rangle\\ &\le \Big\langle \bigg(\sum_x |\nabla G_T(0,x)|^{2q} \omega_q(x)\bigg)^{p-1} \sum_x |\nabla\phi_T(x)+ \xi|^{2p} \omega_q(x)^{-\frac{1}{q-1}} \Big\rangle\\ &\le \bigg(\sup_{a\in\Omega} \sum_x |\nabla G_T(0,x)|^{2q}\omega_q(x) \bigg)^{p-1} \sum_x \langle|\nabla\phi_T(x) + \xi|^{2p}\rangle \omega_q^{-\frac{1}{q-1}}(x). \end{aligned}$$ Due to the stationarity of $\nabla\phi_T+\xi$ and Lemma \[L2\] we obtain $$\langle |\phi_T(0)|^{2p} \rangle \le C(d,\lambda,p) \begin{cases} (\log T)^{p-1} \big\langle |(\nabla\phi_T+\xi)(0)|^{2p}\big\rangle \sum_x \omega_q(x)^{-\frac{1}{q-1}}&\text{for }d=2,\\ \big\langle |(\nabla\phi_T+\xi)(0)|^{2p}\big\rangle \sum_x \omega_q(x)^{-\frac{1}{q-1}}&\text{for }d>2. \end{cases}$$ To conclude in the case of $d=2$, we simply insert to bound (for $T\geq 2$) $$\begin{aligned} \sum_x \omega_q(x)^{-\frac{1}{q-1}} &\le C(p) \Bigg(\sum_{|x|\le\sqrt{T}} (|x|+1)^{2} + \sum_{|x|>\sqrt{T}} T(|x|+1)^{-4} \Bigg)\\ & \le C(p) \Big( \log T + \frac{1}{\sqrt{T}} \Big) \le C(p) \log T.\end{aligned}$$ If $d>2$, we find that $$\sum_x \omega_q(x)^{-\frac{1}{q-1}} = \sum_x (|x|+1)^{-2d} \le C(d),$$ which finishes the proof. A weighted Calderón-Zygmund estimate {#S:CZ} ==================================== In this section we present a discrete Calderón-Zygmund estimate on $\ell^p$-spaces with Muckenhoupt weights, which we used in Step 2b of the proof of estimate  in Lemma \[L2\] in the case $d=2$, see . Although we require the estimate in this paper only in dimension $d=2$, we present it here for any dimension $d\ge 2$ since it may be of independent interest. The proof closely follows [@GNO1prep Lemma 28]; the difference lies in the inclusion of weighted spaces which requires a bit more effort. \[prop:weighted-dics-cz\] Let $T>0$, let $g:{\mathbb{Z}}^d\to{\mathbb{R}}^d$ be a compactly supported function and let $u\in\ell^2({\mathbb{Z}}^d)$ be the unique solution to $$\label{eq:cz-1} {\frac{1}{T}} u+\nabla^*\nabla u=\nabla^*g\quad\text{on }{\mathbb{Z}}^d.$$ Then for all $1<p<\infty$ and all $0\le \gamma < \min\{d(p-1),1/2\}$ we have $$\sum_{x\in{\mathbb{Z}}^d}|\nabla u(x)|^p(|x|+1)^\gamma\le C(d,p,\gamma) \sum_{x\in{\mathbb{Z}}^d}|g(x)|^p(|x|+1)^\gamma.$$ This proposition is a discrete version of the well-known *continuum Calderón-Zygmund estimate* with Muckenhoupt weight: \[prop:weighted-cont-cz\] Let $T>0$, let $g:{\mathbb{R}}^d\to{\mathbb{R}}^d$ be smooth and compactly supported, and let $u:{\mathbb{R}}^d\to{\mathbb{R}}$ be the unique smooth and decaying solution to $${\frac{1}{T}} u - \Delta u= -\nabla \cdot g\quad\text{on }{\mathbb{R}}^d.$$ Then for all $1<p<\infty$ and all $-d<\gamma<d(p-1)$ we have that $$\int_{{\mathbb{R}}^d} |\nabla u(x)|^p|x|^\gamma \;dx \le C(d,p,\gamma) \int_{{\mathbb{R}}^d} |g(x)|^p|x|^\gamma \;dx.$$ The rest of this section is devoted to the proof of Proposition \[prop:weighted-dics-cz\]. To simplify the upcoming argument, fix for the remainder of this section two indices $j,\ell\in\{1,\dots,d\}$.
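Before entering the proof, let us recall for orientation why the range $-d<\gamma<d(p-1)$ in Proposition \[prop:weighted-cont-cz\] is natural; this is a classical fact which we state without proof and do not use below in this form. The power weight $\omega(x)=|x|^{\gamma}$ belongs to the Muckenhoupt class $A_p$ on ${\mathbb{R}}^d$, i.e. $$\sup_{B}\ \bigg(\frac{1}{|B|}\int_{B}\omega\,dx\bigg)\bigg(\frac{1}{|B|}\int_{B}\omega^{-\frac{1}{p-1}}\,dx\bigg)^{p-1}<\infty,\qquad\text{the supremum running over all balls }B\subset{\mathbb{R}}^d,$$ precisely when $-d<\gamma<d(p-1)$, and $A_p$ is the class of weights for which continuum Calderón-Zygmund operators are bounded on $L^p$ with respect to the measure $|x|^\gamma\,dx$. The additional restriction $\gamma<\tfrac{1}{2}$ in Proposition \[prop:weighted-dics-cz\] is not of Muckenhoupt type; it enters through the factor $L^{2\gamma-1}$ in the perturbation argument below (Steps 3 and 4), whose absorption requires $2\gamma-1<0$.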
By linearity it suffices to consider instead of the equation $$\label{eq:cz-2} {\frac{1}{T}} u+\nabla^*\nabla u=\nabla^*_\ell g\quad\text{on }{\mathbb{Z}}^d$$ for scalar $g$, and then to prove $$\label{eq:disc-weighted-cz} \sum_{x\in{\mathbb{Z}}^d}|\nabla_j u(x)|^p(1+|x|)^\gamma\le C(d,p,\gamma) \sum_{x\in{\mathbb{Z}}^d}|g(x)|^p(1+|x|)^\gamma.$$ The discrete estimate will be obtained from Proposition \[prop:weighted-cont-cz\] by a perturbation argument. More precisely, we compare the discrete equation and its continuum version in Fourier space. We denote the Fourier transform on ${\mathbb{R}}^d$ by $$({\mathcal{F}}g)(\xi) = (2\pi)^{-d/2}\int_{{\mathbb{R}}^d} g(x)e^{-i\xi\cdot x}\ dx,\quad \xi\in{\mathbb{R}}^d,$$ and for functions defined on the discrete lattice ${\mathbb{Z}}^d$ we define the *discrete Fourier transform* as $$({\mathcal{F}}_{dis}g)(\xi) = (2\pi)^{-d/2}\sum_{x\in{\mathbb{Z}}^d}g(x)e^{-i\xi\cdot x},\quad \xi\in{\mathbb{R}}^d.$$ Note that ${\mathcal{F}}_{dis}g$ is $(-\pi,\pi)^d$-periodic and that we have the inversion formula $$\label{eq:Finv} ({\mathcal{F}}^{-1}(\chi{\mathcal{F}}_{dis}g))(x)=g(x)\qquad\text{for all }x\in{\mathbb{Z}}^d,$$ where $\chi$ denotes the indicator function of the *Brillouin zone* $(-\pi,\pi)^d$ [which is the unit cell of the Fourier transform on a lattice]{}. The Fourier multipliers corresponding to and its continuum version are given by $$\begin{aligned} {\mathfrak{M}}_{T}^{cont}(\xi) = \frac{\xi_j\xi_\ell}{{\frac{1}{T}}+|\xi|^2},\qquad\qquad {\mathfrak{M}}_{T}(\xi) = \frac{(e^{-i\xi_j}-1)(e^{i\xi_\ell}-1)}{{\frac{1}{T}}+\sum_{n=1}^d|e^{i\xi_n}-1|^2}.\end{aligned}$$ In particular,  reads in Fourier space as $$\nabla_j u=\mathcal F^{-1}(\chi{\mathfrak{M}}_{T}\mathcal F_{dis}g)$$ and is equivalent to $$\label{eq:main-ineq} \sum_{x\in{\mathbb{Z}}^d}|({\mathcal{F}}^{-1}(\chi\ {\mathfrak{M}}_{T}{\mathcal{F}}_{dis}g))(x)|^p\,(|x|+1)^\gamma\le C(d,p,\gamma)\sum_{x\in{\mathbb{Z}}^d}|g(x)|^p\,(|x|+1)^\gamma.$$ Finally, we state two auxiliary results that will be used in the subsequent argument and which we prove at the end of this section. The first result shows that the discrete and continuum norms for band-restricted functions are equivalent. For brevity, we set $$\label{eq:norms} \|g\|_{\ell^p_\gamma} = \bigg( \sum_{x\in{\mathbb{Z}}^d} |g(x)|^p(|x|+1)^\gamma \bigg)^{\frac{1}{p}} \quad\text{and}\quad \|g\|_{L^p_\gamma} = \bigg( \int_{{\mathbb{R}}^d} |g(x)|^p|x|^\gamma \;dx \bigg)^{\frac{1}{p}}.$$ Furthermore, we use the notation $\|\cdot\|_{\ell^p_\omega}$ (resp. $\|\cdot\|_{L^p_\omega}$), if $(|x|+1)^\gamma$ (resp. $|x|^\gamma$) is replaced by a general weight function $\omega$. \[lemma:equiv-norms\] For all $L$ large enough, the $\ell^p_\gamma$-norm and the $L^p_\gamma$-norm are equivalent for functions supported on $[-\frac{1}{L},\frac{1}{L}]^d$ in Fourier space, i.e. $$\frac{1}{C(d,p,\gamma)} \|g\|_{L^p_\gamma} \le \|g\|_{\ell^p_\gamma} \le C(d,p,\gamma) \|g\|_{L^p_\gamma}$$ for all functions $g:=\mathcal{F}^{-1}(F):{\mathbb{R}}^d\to{\mathbb{C}}$ with $F$ supported on $[-\frac{1}{L},\frac{1}{L}]^d$, where we assume without loss of generality that $\frac{1}{L} < \pi$. The second result is a generalization of Young's convolution estimate to weighted spaces.
\[lemma:weighted-young\] Let $\omega:{\mathbb{Z}}^d\to{\mathbb{R}}$ satisfy $$\label{eq:23} \omega(x)\geq 1\qquad\text{and}\qquad \omega(x)\le \omega(y)\omega(x-y)\qquad\text{for all }x,y\in{\mathbb{Z}}^d.$$ Then the estimate $$\label{eq:weighted-young} \|f \ast_{dis} g\|_{\ell_\omega^p} \le \|f\|_{\ell_\omega^q}\|g\|_{\ell_\omega^r}, \quad 1+\frac1p=\frac1q+\frac1r$$ holds, where $\ast_{dis}$ denotes the discrete convolution on ${\mathbb{Z}}^d$: $$(f\ast_{dis}g)(x):=\sum_{y\in{\mathbb{Z}}^d}f(x-y)g(y).$$ The same estimate holds in the continuum case (with $\ast_{dis}$ and $\|\cdot\|_{\ell^p_\omega}$ replaced by the usual convolution $\ast$ and $\|\cdot\|_{L_\omega^p}^p$, respectively) as long as $\omega$ satisfies for all $x,y\in{\mathbb{R}}^d$. Now, we are ready to start the proof of Proposition \[prop:weighted-dics-cz\] in earnest. **Step 1**. Fourier multipliers: We claim that the invoked Fourier multipliers satisfy $$\label{eq:step1} {\mathfrak{M}}_{T}-{\mathfrak{M}}_{T}^{cont}={\mathfrak{M}}_{T}{\mathfrak{M}}^*_{T},$$ where we define $$\label{eq:m*} {\mathfrak{M}}^*_{T}:= 1 - \frac{1}{h(\xi_j) h(-\xi_\ell)} + \frac{|\xi|^2}{{\frac{1}{T}} + |\xi|^2} \ \sum_{k=1}^d \frac{|\xi_k|^2 (1-|h(\xi_k)|^2)}{|\xi|^2 h(\xi_j) h(-\xi_\ell)}$$ and $$\label{eq:hz} h(z):=\begin{cases}\frac{e^{iz}-1}{iz}&0\neq z\in{\mathbb{C}},\\1&z=0.\end{cases}$$ Indeed, is true for $\xi=0$. For $\xi\neq 0$ the definition of $h(z)$ yields that $$\begin{aligned} {\mathfrak{M}}^*_{T} &= 1 - \frac{\mathfrak{M}^{cont}_{T}}{\mathfrak{M}_{T}}\\ &= 1 - \frac{\xi_j\xi_\ell}{(e^{i\xi_j}-1)(e^{-i\xi_\ell}-1)} - \frac{1}{{\frac{1}{T}}+|\xi|^2} \ \sum_{k=1}^d \frac{\xi_j\xi_\ell({\frac{1}{T}} + |\xi_k|^2 - ({\frac{1}{T}} + |e^{i\xi_k}-1|^2))}{(e^{i\xi_j}-1)(e^{-i\xi_\ell}-1)}\\ &= 1 - \frac{1}{h(\xi_j) h(-\xi_\ell)} - \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} \ \sum_{k=1}^d \frac{{\frac{1}{T}} + |\xi_k|^2 - ({\frac{1}{T}} + |\xi_k|^2|h(\xi_k)|^2)}{|\xi|^2 h(\xi_j) h(-\xi_\ell)}\\ &= 1 - \frac{1}{h(\xi_j) h(-\xi_\ell)} + \frac{|\xi|^2}{{\frac{1}{T}} + |\xi|^2} \ \sum_{k=1}^d \frac{|\xi_k|^2 (1-|h(\xi_k)|^2)}{|\xi|^2 h(\xi_j) h(-\xi_\ell)}.\end{aligned}$$ In order to prove uniformity in $T$ (recall that the assertion of Proposition \[prop:weighted-dics-cz\] does not involve $T$), we may split $\mathfrak{M}_{T}^*$ into two terms independent of $T$ and a simple prefactor involving $\frac{1}{T}$: $${\mathfrak{M}}^*_{T} = {\mathfrak{M}}^*_1+\frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}^*_2,\label{eq:19}$$ where we have set $$\begin{aligned} \label{eq:M1a} {\mathfrak{M}}^*_1 &= 1 - \frac{1}{h(\xi_j) h(-\xi_\ell)},\\ {\mathfrak{M}}^*_2 &=\label{eq:M2a} \sum_{k=1}^d \frac{|\xi_k|^2 (1-|h(\xi_k)|^2)}{|\xi|^2 h(\xi_j) h(-\xi_\ell)}. \end{aligned}$$ **Step 2**. Reduction by separating low and high frequencies: We take a smooth cutoff function $\eta_1$ that equals one in $[-1,1]^d$ with compact support in $(-\pi,\pi)^d$. We then rescale it to $$\eta_L(\xi)=\eta_1(L\xi).$$ Using the triangle inequality and $\chi\eta_L=\eta_L$, we separate the expression on the left hand side of into low and high frequencies: $$\|{\mathcal{F}}^{-1}(\chi\ {\mathfrak{M}}_{T}{\mathcal{F}}_{dis}g)\|_{\ell^p_\gamma} \le \underbrace{\|{\mathcal{F}}^{-1}(\eta_L {\mathfrak{M}}_{T}{\mathcal{F}}_{dis}g)\|_{\ell^p_\gamma}}_{I} + \underbrace{\|{\mathcal{F}}^{-1}(\chi(1-\eta_L) {\mathfrak{M}}_{T}{\mathcal{F}}_{dis}g)\|_{\ell^p_\gamma}}_{II}.$$ Term $I$ represents low frequencies (treated in Step 4) and term $II$ represents high frequencies (treated in Step 5). 
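Before making these claims precise, we record an elementary observation (included only for the reader's convenience; it is exactly what Lemma \[lemma:weighted-young\] requires of the weights used below): for every $\gamma\ge0$ the power weight $\omega(x)=(|x|+1)^{\gamma}$ satisfies $\omega\geq1$ and $\omega(x)\le\omega(y)\omega(x-y)$, since $$|x|+1\ \le\ |x-y|+|y|+1\ \le\ (|x-y|+1)(|y|+1)\qquad\text{for all }x,y\in{\mathbb{Z}}^d,$$ and raising this inequality to the power $\gamma$ gives the claim; the same computation applies verbatim on ${\mathbb{R}}^d$.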
Hence, in order to conclude, we only need to prove the following two statements: (I) For all $L\geq L_0$ (where $L_0\geq 1$ only depends on $\gamma,p$ and $d$) we have $$\label{eq:low} \|{\mathcal{F}}^{-1}({\mathfrak{M}}_{T} \eta_L{\mathcal{F}}_{dis}g)\|_{\ell^p_\gamma} \le C(d,\gamma,p)\,\|g\|_{\ell^p_\gamma}.$$ (II) For all $L\geq 1$ we have $$\label{eq:high} \|{\mathcal{F}}^{-1}(\chi(1-\eta_L) {\mathfrak{M}}_{T}{\mathcal{F}}_{dis}g)\|_{\ell^p_\gamma} \le C(d,\gamma,p,L) \|g\|_{\ell^p_\gamma}.$$ We note that while the constants a-priori depend on the cutoff functions $\eta_1$ and $\zeta_1$ (the latter will be introduced in Step 3), both may be constructed in a canonical way only depending on $d$. **Step 3**. A bound on the correction ${\mathfrak{M}}^*_{T}$ for low frequencies: This is perhaps the most important ingredient in the proof, as it is here that we truly capture the difference between the discrete and continuous settings. Recall that ${\mathfrak{M}}^*_1$ and ${\mathfrak{M}}^*_2$ are defined in  and . In this step we prove that $$\label{eq:m*-estimate} \|{\mathcal{F}}^{-1}({\mathfrak{M}}^*_j\eta_L)\|_{\ell^1_\gamma}\le C(d,\gamma) L^{2\gamma-1},\quad j=1,2,$$ for $L$ large enough. We start the argument with the observation that $h(z)$, defined in , and $h^{-1}(z)$ are both analytic in the disk $\{z\in{\mathbb{C}}:|z|<2\pi\}$ and we may write $$\frac{1}{h(z)} = 1 + z r_1(z) \quad\text{and}\quad h(z) = 1 + z r_2(z)$$ with two functions $r_1, r_2$ which are analytic on the disk $\{z\in{\mathbb{C}}:|z|<2\pi\}$. **The term ${\mathfrak{M}}_1^*$.** This term becomes $${\mathfrak{M}}^*_1= 1 - \frac{1}{h(\xi_j) h(-\xi_\ell)} = \xi_\ell r_1(-\xi_\ell) - \xi_j r_1(\xi_j) + \xi_j \xi_\ell r_1(\xi_j) r_1(-\xi_\ell),$$ which is a linear combination of terms of the form $i\xi_m\phi(\xi)$, $m=1,\ldots,d$, with a (generic) analytic function $\phi$ on the disk $\{z\in{\mathbb{C}}:|z|<2\pi\}$. **The term ${\mathfrak{M}}_2^*$.** Denoting the real part of $z\in{\mathbb{C}}$ by $\mathrm{Re}(z)$, we compute that $${\mathfrak{M}}^*_2= \sum_{k=1}^d \frac{|\xi_k|^2 (1-|h(\xi_k)|^2)}{|\xi|^2 h(\xi_j) h(-\xi_\ell)} = \sum_{k=1}^d \frac{|\xi_k|^2 \big(2\xi_k \mathrm{Re}(r_2(\xi_k)) + |\xi_k|^2 |r_2(\xi_k)|^2\big)}{|\xi|^2 h(\xi_j) h(-\xi_\ell)},$$ which is a linear combination of terms of the form $\xi_m\frac{|\xi_n|^2}{|\xi|^2}\phi(\xi)$, $m,n=1,\ldots,d$, with a (generic) analytic function $\phi$ on the disk $\{z\in{\mathbb{C}}:|z|<2\pi\}$. Hence our problem reduces to showing that $$\label{eq:m*-est1} \left\|{\mathcal{F}}^{-1} \left(i\xi_m\frac{|\xi_n|^2}{|\xi|^2}\phi(\xi)\eta_L\right) \right\|_{\ell^1_\gamma} \le C(d,\gamma,\phi) L^{2\gamma-1}$$ and $$\label{eq:m*-est2} \left\| {\mathcal{F}}^{-1} \left(i\xi_m \phi(\xi)\eta_L\right) \right\|_{\ell^1_\gamma} \le C(d,\gamma,\phi) L^{2\gamma-1}$$ for any generic analytic function $\phi$ on the complex disc of radius $2\pi$. For the argument consider the Schwartz functions $$K_L={\mathcal{F}}^{-1}(\phi\eta_L)\qquad\text{and}\qquad \hat K_L = {\mathcal{F}}^{-1}(\phi({\textstyle \frac{\cdot}{L}})\eta_1),$$ and note that both are related through the scaling: $$K_L(x) = \frac{1}{L^d}\hat K_L({\textstyle \frac xL}).$$ For what follows it is crucial to note that the family $\{\hat K_L\}_{L\geq 1}$ is equibounded in the space of Schwartz space functions, i.e. 
for all multi-indices $\alpha,\beta$ we have $$\label{eq:KL-equi} \sup_x | x^\alpha \partial_x^\beta \hat{K}_L(x)| \le C(\phi,\alpha,\beta),$$ [where $x^\alpha:=\prod_{i=1}^dx^{\alpha_i}_i$ and $\partial_x^\beta:=\prod_{i=1}^d\partial^{\beta_i}_{x_i}$.]{} We now turn to the argument for and . The latter is easily shown, in fact with a slightly better decay rate of $L^{\gamma-1}$. Since $\gamma\geq 0$ and $L\geq1$, we have that $$\label{eq:weight_scale} (L|y|+1)^\gamma = L^\gamma (|y|+L^{-1})^\gamma\le L^\gamma (|y|+1)^\gamma,$$ and the definition of $K_L$ yields $$\begin{aligned} \left\| {\mathcal{F}}^{-1} \left(i\xi_m \phi(\xi)\eta_L\right) \right\|_{\ell^1_\gamma} &= \sum_{x\in{\mathbb{Z}}^d} |\partial_m K_L(x)|\ (|x|+1)^\gamma\\ &\le L^{\gamma-1}\left(L^{-d}\sum_{x\in \frac{1}{L}{\mathbb{Z}}^d} |\partial_m \hat{K}_L(x)|\ (|x|+1)^\gamma\right).\end{aligned}$$ Thanks to the term in the brackets on the right-hand side is bounded by $C(d,\gamma,\phi)$ and follows. To show , we notice that $$\mathcal{F}^{-1}\Big({\frac{\xi_m}{|\xi|^2}}\Big) = \frac{(2\pi)^{\frac{d}{2}}}{|S^{d-1}|}\frac{x_m}{|x|^d}\qquad\text{as a tempered distribution on ${\mathbb{R}}^d$},$$ where $|S^{d-1}|$ denotes the surface area of the $d-1$-dimensional unit sphere $S^{d-1}\subset {\mathbb{R}}^d$. Therefore standard properties of the Fourier transform yield $$\label{eq:sing} {\mathcal{F}}^{-1} \left(i\xi_m\frac{\xi_n^2}{|\xi|^2}\phi(\xi)\eta_L\right) = \frac{(2\pi)^{\frac{d}{2}}}{|S^{d-1}|} \partial_n^2 \left(\frac{x_m}{|x|^d} \ast K_L\right).$$ Next we introduce a spatial cutoff $\zeta_L$ (as opposed to the frequency cutoff $\eta_L$), defined as follows: first define a smooth cutoff function $\zeta_1$ for $\{x\in{\mathbb{R}}^d:|x|\le1\}$ in $\{x\in{\mathbb{R}}^d:|x|\le2\}$ and its rescaled version $$\zeta_L(x) = \zeta_1({\textstyle \frac xL}).$$ By the triangle inequality and since the derivative in may fall on either term in the convolution, for we only need to argue that $$\label{eq:m*-est22} \sum_{x} \left| \left(\frac{\zeta_Lx_m}{|x|^d} \ast \partial_n^2K_L\right)(x) \right| (|x|+1)^\gamma \le C(d,\gamma,\phi) L^{2\gamma-1}$$ and $$\label{eq:m*-est3} \sum_{x}\left| \left( \partial_n^2 \frac{(1-\zeta_L)x_m}{|x|^d} \ast K_L \right)(x) \right| (|x|+1)^\gamma \le C(d,\gamma,\phi) L^{2\gamma-1}.$$ By definition of the (continuous) convolution, thanks to $$(|x|+1)^\gamma\leq (|x-y|+1)^\gamma(|y|+1)^\gamma\qquad\text{for all }x,y\in{\mathbb{Z}}^d,\ \gamma\geq0,$$ by a change of variables and , we obtain that $$\begin{aligned} \nonumber \text{[l.h.s. of \eqref{eq:m*-est22}]}=\,&\sum_{x\in{\mathbb{Z}}^d} \bigg| \int_{{\mathbb{R}}^d} \frac{\zeta_L(y) y_m}{|y|^d} \partial_n^2 K_L(x-y) \;dy \bigg| (|x|+1)^\gamma\\ \le\,& \sum_{x\in{\mathbb{Z}}^d} \int_{{\mathbb{R}}^d} \Big| \frac{\zeta_L(y) y_m}{|y|^d} \partial_n^2 K_L(x-y) \Big| (|x-y|+1)^\gamma(|y|+1)^\gamma\;dy\nonumber\\ =\,& \sum_{x\in\frac1L{\mathbb{Z}}^d} \int_{{\mathbb{R}}^d} \Big| L \frac{\zeta_1(y) y_m}{|y|^d} L^{-2-d} \partial_n^2 \hat{K}_L(x-y) \Big| (L|x-y|+1)^\gamma(L|y|+1)^\gamma\;dy.\label{eq:step3.1}\end{aligned}$$ Hence  yields $$\begin{aligned} &\text{[l.h.s. 
of \eqref{eq:m*-est22}]}\\ &\leq\,L^{2\gamma-1}L^{-d}\sum_{x\in\frac1L{\mathbb{Z}}^d} \int_{{\mathbb{R}}^d} \Big| \frac{\zeta_1(y) y_m (|y|+1)^\gamma}{|y|^d} \Big| (|x-y|+1)^\gamma \big| \partial_n^2 \hat{K}_L(x-y) \big| \;dy\\ &\leq\,L^{2\gamma-1}\int_{|y|\leq 2} |y|^{1-d}(|y|+1)^\gamma\Big(L^{-d}\sum_{x\in\frac1L{\mathbb{Z}}^d} (|x-y|+1)^\gamma \big| \partial_n^2 \hat{K}_L(x-y) \big|\Big) \;dy.\end{aligned}$$ The Schwartz property  yields $$\Big(L^{-d}\sum_{x\in\frac1L{\mathbb{Z}}^d} (|x-y|+1)^\gamma \big| \partial_n^2 \hat{K}_L(x-y) \big|\Big)\leq C(d,\gamma,\phi),$$ and thus $$\begin{aligned} \text{[l.h.s. of \eqref{eq:m*-est22}]}\,\leq\, C(\phi)L^{2\gamma-1} \int_{|y|\leq 2} |y|^{1-d}(|y|+1)^\gamma\;dy\leq C(d,\gamma,\phi)\,L^{2\gamma-1},\end{aligned}$$ which completes the argument for . The second term is bounded similarly: by the same triangle inequality and change of variables that allowed us to arrive at , we obtain a bound on the l. h. s. of by $$L^{-1-d}\sum_{x\in\frac1L{\mathbb{Z}}^d} \int_{{\mathbb{R}}^d} \Big| \partial_n^2 \frac{(1-\zeta_1(x-y)) (x_m-y_m)}{|x-y|^d} \Big| (L|x-y|+1)^\gamma |\hat{K}_L(y)| (L|y|+1)^\gamma \;dy.$$ We insert again to obtain a bound by $$L^{2\gamma-1} \frac{1}{L^d}\sum_{x\in\frac1L{\mathbb{Z}}^d} \int_{{\mathbb{R}}^d} \Big| \partial_n^2 \frac{(1-\zeta_1(x-y)) (x_m-y_m)}{|x-y|^d} \Big| (|x-y|+1)^\gamma |\hat{K}_L(y)| (|y|+1)^\gamma \;dy.$$ This time, we use that $\left|\partial_n^2\Big((1-\zeta_1(x-y)) (x_m-y_m)|x-y|^{-d}\Big)\right| \, (|x-y|+1)^\gamma$ is integrable for large $x-y$ and vanishes for $|x-y| \le 1$, to obtain that $$\frac{1}{L^d}\sum_{x\in\frac1L{\mathbb{Z}}^d} \Big| \partial_n^2 \frac{(1-\zeta_1(x-y)) (x_m-y_m)}{|x-y|^d} \Big| (|x-y|+1)^\gamma \le C(d,\gamma,\phi).$$ Consequently, it remains to bound $$L^{2\gamma-1} \int_{{\mathbb{R}}^d} |\hat{K}_L(y)| (|y|+1)^\gamma \;dy,$$ which, thanks to , is clearly bounded by $C(d,\gamma,\phi)L^{2\gamma-1}$. **Step 4**. Low frequencies – proof of : We assume that $L$ is large enough, so that we can apply Lemma \[lemma:equiv-norms\] to deduce the equivalence of the norm $\ell^p_\gamma$ and $L^p_\gamma$. For brevity we set $F= \eta_L{\mathcal{F}}_{dis}g$. Equation  yields $$\label{eq:disc-cont} \| {\mathcal{F}}^{-1}({\mathfrak{M}}_{T} F) \|_{\ell^p_\gamma} \le \| {\mathcal{F}}^{-1}({\mathfrak{M}}_{T}^{cont} F) \|_{\ell^p_\gamma} + \| {\mathcal{F}}^{-1}({\mathfrak{M}}_{T}{\mathfrak{M}}^*_{T} F) \|_{\ell^p_\gamma}.$$ With help of the continuum Calderòn-Zygmund estimate, cf. Proposition \[prop:weighted-cont-cz\], and the equivalence of discrete and continuous norms, see Lemma \[lemma:equiv-norms\], we get for the first term: $$\|{\mathcal{F}}^{-1}({\mathfrak{M}}_{T}^{cont}F)\|_{L^p_\gamma}\leq C \|g\|_{\ell^p_\gamma}.$$ Hence, we only need to estimate the term ${\mathcal{F}}^{-1}({\mathfrak{M}}_{T}{\mathfrak{M}}^*_{T} F)$. First we notice that by definition of $F$ and $\eta_{L}$, we have that $F=\eta_{L/2}F$. 
Since the Fourier transform turns multiplication into convolution, we have $$\begin{aligned} \label{eq:M_frak-split} &{\mathcal{F}}^{-1}({\mathfrak{M}}_{T}{\mathfrak{M}}_{T}^*F) \stackrel{\eqref{eq:19}}{=} {\mathcal{F}}^{-1} \left({\mathfrak{M}}_{T} \left({\mathfrak{M}}_1^* + \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}^*_2 \right) \eta_{\frac{L}{2}} F \right)\\\nonumber &= (2\pi)^{d/2} \left({\mathcal{F}}^{-1} \left( {\mathfrak{M}}_1^*\eta_{\frac{L}{2}} \right) \ast_{dis} {\mathcal{F}}^{-1} ({\mathfrak{M}}_{T} F) + {\mathcal{F}}^{-1} \left( {\mathfrak{M}}_2^* \eta_{\frac{L}{2}} \right) \ast_{dis} {\mathcal{F}}^{-1}\left( \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}_{T} F \right) \right).\end{aligned}$$ We estimate the right-hand side using the Young’s inequality of Lemma \[lemma:weighted-young\]. For the first term, we get $$\|{\mathcal{F}}^{-1}({\mathfrak{M}}_1^*\eta_{\frac{L}{2}})\ast_{dis}{\mathcal{F}}^{-1}({\mathfrak{M}}_{T} F)\|_{\ell^p_\gamma} \le \|{\mathcal{F}}^{-1}({\mathfrak{M}}_1^*\eta_{\frac{L}{2}})\|_{\ell_\gamma^1}\|{\mathcal{F}}^{-1}({\mathfrak{M}}_{T} F)\|_{\ell_\gamma^p},$$ and likewise for the second term: $$\left\| {\mathcal{F}}^{-1}\left( {\mathfrak{M}}_2^* \eta_{\frac{L}{2}} \right) \ast_{dis} {\mathcal{F}}^{-1}\left( \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}_{T} F\right) \right\|_{\ell^p_\gamma} \le \left\| {\mathcal{F}}^{-1}\left( {\mathfrak{M}}_2^* \eta_{\frac{L}{2}} \right) \right\|_{\ell^1_\gamma} \left\| {\mathcal{F}}^{-1}\left( \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}_{T} F\right)\right\|_{\ell^p_\gamma}.$$ In both cases, the first term is bounded by , see Step 3. Hence, we have shown $$\begin{aligned} \label{eq:M1} \left\|{\mathcal{F}}^{-1}\left({\mathfrak{M}}_1^*\eta_{\frac{L}{2}} \right) \ast_{dis} {\mathcal{F}}^{-1}\left({\mathfrak{M}}_{T} F\right)\right\|_{\ell^p_\gamma} &\le C L^{2\gamma-1} \left\|{\mathcal{F}}^{-1}\left({\mathfrak{M}}_{T} F\right)\right\|_{\ell^p_\gamma},\\ \left\| {\mathcal{F}}^{-1}\left( {\mathfrak{M}}_2^* \eta_{\frac{L}{2}} \right) \ast_{dis} {\mathcal{F}}^{-1}\left(\frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}_{T} F\right) \right\|_{\ell^p_\gamma} &\le C L^{2\gamma-1} \left\| {\mathcal{F}}^{-1}\left( \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}_{T} F\right)\right\|_{\ell^p_\gamma}.\label{eq:M2}\end{aligned}$$ We may use the equivalence of norms for band-restricted functions, cf. Lemma \[lemma:equiv-norms\], and then write the last term as another convolution to obtain that $$\begin{aligned} \left\| {\mathcal{F}}^{-1}\left( \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} {\mathfrak{M}}_{T} F\right)\right\|_{\ell^p_\gamma} &\le C \left\| {\mathcal{F}}^{-1}\left( \frac{|\xi|^2}{{\frac{1}{T}}+|\xi|^2} \right) \ast {\mathcal{F}}^{-1}\left( {\mathfrak{M}}_{T} F\right)\right\|_{L^p_\gamma}\\ &\le C \left\| {\mathcal{F}}^{-1}\left({\mathfrak{M}}_{T} F\right)\right\|_{L^p_\gamma},\end{aligned}$$ where for the second inequality we used the continuum Calderón-Zygmund estimate with Muckenhoupt weights for the Fourier-multiplier $|\xi|^2/({\frac{1}{T}}+|\xi|^2)$ which follows from Proposition \[prop:weighted-cont-cz\]. Combining , and and using the equivalence of norms yet again, we arrive at $$\begin{aligned} \|{\mathcal{F}}^{-1}({\mathfrak{M}}_{T}{\mathfrak{M}}_{T}^*F)\|_{\ell^p_\gamma} &\le C L^{2\gamma-1}\|{\mathcal{F}}^{-1}({\mathfrak{M}}_{T} F)\|_{L_\gamma^p}\\ &\le C L^{2\gamma-1}\|{\mathcal{F}}^{-1}({\mathfrak{M}}_{T} F)\|_{\ell_\gamma^p}. 
\end{aligned}$$ Hence, for $L$ sufficiently large the right-hand side may be absorbed into the left-hand side of , and follows. **Step 5**. High frequencies – proof of : By the weighted convolution estimate of Lemma \[lemma:weighted-young\], we have that $$\begin{aligned} \|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi\mathcal F_{dis} g)\|_{\ell^p_\gamma} &= \|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)\ast_{dis}\mathcal F^{-1}(\chi\mathcal F_{dis} g)\|_{\ell^p_\gamma}\\ &\le \|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)\|_{\ell^1_\gamma}\|\mathcal F^{-1}(\chi\mathcal F_{dis} g)\|_{\ell^p_\gamma},\end{aligned}$$ where we have used that $\chi^2 = \chi$ by definition. [By the Fourier inversion formula , the right-hand side equals $\|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)\|_{\ell^1_\gamma}\|g\|_{\ell^p_\gamma}$ whereof we just need to estimate the first term. We have that]{} $$\begin{aligned} \|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)\|_{\ell^1_\gamma} &= \sum_{x\in{\mathbb{Z}}^d}|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)(x)|(1+|x|)^\gamma\\ &= \sum_{x\in{\mathbb{Z}}^d}|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)(x)|(1+|x|)^{\gamma+2d}(1+|x|)^{-2d}\\ &\le C\sup_{x\in{\mathbb{Z}}^d}\big|\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)(x)(1+|x|)^{\gamma+2d}\big|.\end{aligned}$$ We rewrite this result using the definition of the Fourier transform and integration by parts. Let $x\in{\mathbb{Z}}^d$ and let $\alpha\in{\mathbb{N}}^d$ be an arbitrary multi-index such that $|\alpha| \ge \gamma + 2d$. [Then we have that:]{} $$\begin{aligned} x^{2\alpha}\mathcal F^{-1}({\mathfrak{M}}_{T}(1-\eta_L)\chi)(x)&=(2\pi)^{-d}\int_{(-\pi,\pi)^d}{\mathfrak{M}}_{T}(\xi)(1-\eta_L)(\xi)x^{2\alpha}e^{i\xi\cdot x}\,d\xi\\ &=(2\pi)^{-d}\int_{(-\pi,\pi)^d}{\mathfrak{M}}_{T}(\xi)(1-\eta_L)(\xi)i^{2|\alpha|}\partial_\xi^{2\alpha}e^{i\xi\cdot x}\,d\xi\\ &=(2\pi)^{-d}\int_{(-\pi,\pi)^d}i^{2|\alpha|}\partial_\xi^{2\alpha}\big({\mathfrak{M}}_{T}(1-\eta_L)\big)(\xi)e^{i\xi\cdot x}\,d\xi.\end{aligned}$$ For the integration by parts when passing from the second to the third line of the last identity, we used that ${\mathfrak{M}}_{T}(\xi)(1-\eta_L(\xi))$ and $\exp(i\xi\cdot x)$ are $(-\pi,\pi)^d$-periodic functions of $\xi$. It remains to argue that the latter integral is bounded by a constant $C(L,\alpha)$. The main difficulty lies in checking that the estimate is uniform in $T\geq 1$. Since the integral over the Brillouin zone is finite, it suffices to show that $$\label{eq:CZ3} \sup_{\xi\in(-\pi,\pi)^d\setminus (-\frac{1}{L},\frac{1}{L})^d}|\partial_\xi^{\alpha}{\mathfrak{M}}_{T}(\xi)|\le C(L,\alpha)$$ for all multi-indices $\alpha\in{\mathbb{N}}^d$. Note that $${\mathfrak{M}}_{T}(\xi) = \frac{\sum_{j=1}^d |\exp(i\xi_j)-1|^2}{{\frac{1}{T}} + \sum_{j=1}^d |\exp(i\xi_j)-1|^2} {\mathfrak{M}}_0(\xi)$$ and ${\mathfrak{M}}_0$ is smooth away from the origin so that $$\sup_{\xi\in(-\pi,\pi)^d\setminus (-\frac{1}{L},\frac{1}{L})^d} |\partial_\xi^{\alpha} {\mathfrak{M}}_0(\xi)| \le C(L,\alpha)$$ for all multi-indices $\alpha\in{\mathbb{N}}^d$.
Furthermore, we have that $$\sup_{\xi\in(-\pi,\pi)^d\setminus (-\frac{1}{L},\frac{1}{L})^d} \frac{1}{{\frac{1}{T}}+\sum_{j=1}^d|\exp(i\xi_j)-1|^2} \le C(d,L)$$ and $$\partial_\xi^\alpha\left(\frac{\sum_{j=1}^d|\exp(i\xi_j)-1|^2}{{\frac{1}{T}}+\sum_{j=1}^d|\exp(i\xi_j)-1|^2}\right) = \frac{\phi(\xi)}{({\frac{1}{T}}+\sum_{j=1}^d|\exp(i\xi_j)-1|^2)^k}$$ for some (generic) smooth function $\phi$ and some $k\ge 0$, both depending only on the multi-index $\alpha$ and $d$. Hence we have that $$\sup_{\xi\in(-\pi,\pi)^d\setminus (-\frac{1}{L},\frac{1}{L})^d} \bigg| \partial_\xi^{\alpha}\left(\frac{\sum_{j=1}^d|\exp(i\xi_j)-1|^2}{{\frac{1}{T}}+\sum_{j=1}^d|\exp(i\xi_j)-1|^2}\right) \bigg| \le C(L,\alpha).$$ Since $\alpha$ was arbitrary, estimate follows from the Leibniz rule. It remains to prove the two auxiliary lemmas stated earlier in this section; we begin with Lemma \[lemma:weighted-young\]. First we write $|f(x-y)g(y)|$ as $$|f(x-y)g(y)| = \underbrace{|f(x-y)|^{\frac qp}|g(y)|^{\frac rp}}_{I} \underbrace{|f(x-y)|^{1-\frac qp}}_{II}\underbrace{|g(y)|^{1-\frac rp}}_{III}$$ and apply a Hölder inequality to the terms $I,II$ and $III$ with exponents $p,\frac{pq}{p-q},\frac{pr}{p-r}$ to obtain: $$\sum_{y\in{\mathbb{Z}}^d} f(x-y)g(y) \le \bigg(\sum_{y\in{\mathbb{Z}}^d}|f(x-y)|^q|g(y)|^r\bigg)^{\frac 1p} \bigg(\sum_{y\in{\mathbb{Z}}^d}|f(x-y)|^q\bigg)^{\frac 1q-\frac 1p} \bigg(\sum_{y\in{\mathbb{Z}}^d}|g(y)|^r\bigg)^{\frac 1r-\frac 1p}.$$ Therefore $$\begin{aligned} \sum_{x\in{\mathbb{Z}}^d} \Big|\sum_{y\in{\mathbb{Z}}^d} f(x-y)g(y)\Big|^pw(x) &\le \bigg(\sum_{x,y\in{\mathbb{Z}}^d}|f(x-y)|^q|g(y)|^rw(x)\bigg)\|f\|_{\ell^q}^{p-q}\|g\|_{\ell^r}^{p-r}\\ &\le \left(\|f\|^q_{\ell_w^q}\|g\|_{\ell_w^r}^r\right)\|f\|_{\ell^q_w}^{p-q} \|g\|_{\ell^r_w}^{p-r}\\ &= \|f\|_{\ell_w^q}^p\|g\|_{\ell_w^r}^p, \end{aligned}$$ where in the second inequality we used the assumption . We now turn to the proof of Lemma \[lemma:equiv-norms\]. For convenience we set $Q:=(-\tfrac{1}{2},\tfrac{1}{2})^d$ and without loss of generality we assume that $L\geq 1$. **Step 1.** We claim that for all $z\in{\mathbb{Z}}^d$ and $1\leq p<\infty$ we have $$\begin{aligned} \label{eq:22} \sup_{x\in (z+Q)}|g(x)|&\leq C(d,p)\|g\|_{L^p(z+Q)},\\ \label{eq:24} \|g\|_{L^p(z+Q)}&\leq C(d,p)\left(|g(z)|+L^{-1}\|g\|_{L^p(z+Q)}\right).\end{aligned}$$ By translation invariance it suffices to consider $z=0$. Thanks to the Sobolev embedding of $W^{n,p}(Q)$ into $L^\infty(Q)$ for $n>d$, we get $$\label{eq:20} \sup_{x\in Q}|g(x)|\leq C(d,n,p)\|g\|_{L^p(Q)}+\|\nabla^ng\|_{L^p(Q)}.$$ We argue that the band restriction implies for all $n\geq 1$ that $$\label{eq:21} \|\nabla^ng\|_{L^p(Q)}\leq C(d,n)L^{-n}\|g\|_{L^p(Q)},$$ which combined with and $L\geq 1$ yields . Estimate can be seen as follows: Recall that $g=\mathcal F^{-1}F$ where $F$ is supported in $[-\tfrac{1}{L},\tfrac{1}{L}]^d$. Let $\eta_1$ denote a smooth cutoff function that is one in $[-1,1]^d$ and compactly supported in $(-2,2)^d$, say. Let $\phi_1:=\mathcal F^{-1}\eta_1$ and note that for all $L>0$ we have $$(\mathcal F^{-1}\eta_L)(x)=\phi_L\qquad\text{where }\eta_L(\xi):=\eta_1(L\xi)\text{ and } \phi_L(x):=L^{-d}\phi_1(\tfrac{x}{L}).$$ In view of the band restriction of $F$ and its definition we have $g={\mathcal F}^{-1}F={\mathcal F}^{-1}(\eta_L F) =(2\pi)^\frac{d}{2}{\mathcal F}^{-1}\eta_L*{\mathcal F}^{-1}F=\phi_L*g$.
We thus obtain the representation $\nabla^ng=\nabla^n(\phi_L*g)=(\nabla^n\phi_L)*g$ with $\nabla^n\phi_L(x) =L^{-n}\frac{1}{L^d}\nabla^n\phi_1(\frac{x}{L})$, which yields the inequality $$\nonumber \|\nabla^ng\|_{L^p}\le\|\nabla^n\phi_L\|_{L^1}\|g\|_{L^p}=L^{-n}\|\nabla^n\phi_1\|_{L^1}\|g\|_{L^p},$$ and thus the estimate , since $\phi_1$ is a Schwartz function that can be chosen only depending on $d$. [Estimate may be seen as follows: A simple application of the mean-value theorem yields $$\left(\int_{Q}|g(x)-g(0)|^p\,dx\right)^{\frac{1}{p}} \leq C(d,p)\sup_{x\in Q}|\nabla g(x)|.$$ Then the Sobolev embedding  with $g$ replaced by $\nabla g$ yields $$\left(\int_{Q}|g(x)-g(0)|^p\,dx\right)^{\frac{1}{p}} \leq C(d,n,p)\|\nabla g\|_{L^p(Q)}+\|\nabla^{n+1}g\|_{L^p(Q)}.$$ Finally, we insert estimate  (with $n$ replaced by $n+1$) to obtain that $$\left(\int_{Q}|g(x)-g(0)|^p\,dx\right)^{\frac{1}{p}} \leq C(d,n,p)(L^{-1}+L^{-(n+1)})\|g\|_{L^p(Q)},$$ which easily turns into the desired estimate  at $z=0$.]{} **Step 2.** We claim that there exists $L_0=L_0(d,p)$ such that for all $L\geq L_0$ and $z\in{\mathbb{Z}}^d$ we have $$\label{eq:26} \frac{1}{C(d,p,\gamma)}|g(z)|^p(|z|+1)^\gamma\leq\int_{z+Q}|g(x)|^p(|x|+1)^\gamma\,dx\leq C(d,p,\gamma)|g(z)|^p(|z|+1)^\gamma.$$ For the argument first note that for all $z\in{\mathbb{Z}}^d$ and $x\in z+Q$ we have $$\label{eq:25} (|z|+1)^\gamma\leq C(d,\gamma)(|x|+1)^\gamma\qquad\text{and}\qquad (|x|+1)^\gamma\leq C(d,\gamma)(|z|+1)^\gamma.$$ [Indeed, since $\max_{y\in Q}|y|+1 = \frac{1}{2}\sqrt{d}+1$ we have that $$(|z|+1)^\gamma\leq (|x|+|z-x|+1)^\gamma\leq (|x|+{\textstyle\frac{1}{2}\sqrt{d}+1})^\gamma\leq ({\textstyle\frac{1}{2}\sqrt{d}+1})^\gamma(|x|+1)^\gamma,$$ and $$(|x|+1)^\gamma\leq (|z|+|x-z|+1)^\gamma\leq ({\textstyle\frac{1}{2}\sqrt{d}+1})^\gamma(|z|+1)^\gamma.$$ Hence the result  of Step 1 yields $$|g(z)|^p(|z|+1)^\gamma \le\left(\sup_{x\in z+Q}|g(x)|\right)^p(|z|+1)^\gamma \le C(d,p)\int_{z+Q}|g(x)|^p(|z|+1)^\gamma\,dx $$ Estimate  thus yields the desired first inequality $$|g(z)|^p(|z|+1)^\gamma \le C(d,p,\gamma)\int_{z+Q}|g(x)|^p(|x|+1)^\gamma\,dx.$$ For the second estimate in , we note that, by absorption,  implies existence of $L_0=L_0(d,p)$ such that $$\int_{z+Q}|g(x)|^p\,dx\leq C(d,p)|g(z)|^p$$ for all $L\geq L_0$. Hence another application of  yields as desired $$\int_{z+Q}|g(x)|^p(|x|+1)^\gamma\,dx \leq C(d,\gamma)\int_{z+Q}|g(x)|^p\,dx\ (|z|+1)^\gamma \leq C(d,p,\gamma)|g(z)|^p(|z|+1)^\gamma$$ for all $L\geq L_0$.]{} **Step 3.** Conclusion: The estimate $\|g\|_{L^p_\gamma}^p\leq C(d,p,\gamma)\|g\|_{\ell^p_\gamma}^p$ follows from the second part of by summation in $z\in{\mathbb{Z}}^d$. For the opposite inequality, estimate  and Hölder’s inequality yield $$\frac{1}{C(d,q)}|g(0)|^p \le \left(\int_Q|g|^q\,dx\right)^{\frac{p}{q}} \leq\left(\int_Q|g|^p|x|^\gamma\,dx\right)\left(\int_Q|x|^{-\frac{q}{p-q}\gamma}\,dx\right)^{\frac{p-q}{p}}$$ for all $1\leq q<p$. Thanks to the assumption $0\leq \gamma<d(p-1)$, we can find $1\leq q<p$ such that the second integral on the right-hand side is finite, so that $$\label{eq:27} |g(0)|^p\leq C(d,p,\gamma)\int_Q|g(x)|^p|x|^\gamma\,dx.$$ (Note that this is the only place where the upper bound on $\gamma$ is required.) 
We conclude by  and  that $$\begin{aligned} \|g\|^p_{\ell^p_\gamma}&=\sum_{z\in{\mathbb{Z}}^d}|g(z)|^p(|z|+1)^\gamma=|g(0)|^p+\sum_{z\in{\mathbb{Z}}^d\setminus\{0\}}|g(z)|^p(|z|+1)^\gamma\\ &\leq C(d,p,\gamma)\left(\int_Q|g(x)|^p|x|^\gamma\,dx+\int_{{\mathbb{R}}^d\setminus Q}|g(x)|^p(|x|+1)^\gamma\right)\\ &\leq C(d,p,\gamma)\int_{{\mathbb{R}}^d}|g(x)|^p|x|^\gamma\,dx,\end{aligned}$$ where in the last line we have used that $|x|+1 \le 3 |x|$ for all $|x| \ge \frac{1}{2}$. Proof of Lemma \[L:Gint\] ========================= Thanks to the shift-invariance $G_T(a;x,y)=G_T(a(\cdot+y);x-y,0)$, it suffices to prove the estimate for $y=0$. We set for brevity $$G(x):=G_T(a;x,0)$$ and recall that $G$ is the unique solution in $\ell^2({\mathbb{Z}}^d)$ to $$\label{eq:2} \frac{1}{T}G+\nabla^*(a\nabla G)=\delta.$$ By discreteness and the standard energy estimate, we have $$\frac{1}{T}|G(0)|^2\leq \frac{1}{T}\sum_{x}|G(x)|^2 + \lambda \sum_x |\nabla G(x)|^2 \leq G(0).$$ Hence, $0\leq G(0)\leq T$ and we have that $$\label{eq:3} \sum_{x}\big(|G(x)|^2+|\nabla G(x)|^2\big)\leq C(T,\lambda).$$ Formally we may upgrade  to the statement of Lemma \[L:Gint\] by testing the equation with $e^{\frac{\delta }{2}|x|}G(x)$. Since that is not an admissible $\ell^2({\mathbb{Z}}^d)$ test function, we appeal to an approximation of the form $\zeta G$ where $$\label{eq:16} \zeta(x):=\eta(x)e^{\delta g(x)},$$ and $\eta,g:{\mathbb{Z}}^d\to{\mathbb{R}}$ are bounded, compactly supported and non-negative functions, and $g$ mimics the behavior of the linearly growing function $x\mapsto \frac{|x|}{2}$. The truncation via $\eta$ and the discrete Leibniz rule introduce error terms. In order to treat these terms in a convenient way, we will appeal to test functions $\eta$ and $g$ that additionally satisfy the following property: $$\label{eq:test-fun-supp} \begin{aligned} \nabla_i\eta(x)\neq0\ \Rightarrow\ g(x)=g(x+e_i)=0\text{ for all $i=1,\ldots,d$ and $x\in{\mathbb{Z}}^d$.} \end{aligned}$$ After these remarks we turn to the proof of . We first establish a chain rule inequality for test functions in the form of assuming . In Step 2 we test by $G\zeta$, and finally in Step 3 we conclude by explicitly defining a sequence of test functions approaching $e^{\frac{\delta }{2}|x|}$. [**Step 1.**]{} Choice of test functions: For an arbitrary parameter $R\ge3$, say, we first construct the appropriate test functions $\eta$ and $g$. Let (a) $\eta:{\mathbb{R}}^d\to{\mathbb{R}}$ be a smooth function satisfying $${\eta}(x)=\begin{cases} 1& \text{if }|x|\in[0,R],\\ 0& \text{if }|x|\in[R+1,\infty), \end{cases} \quad\text{such that }|\nabla\eta|\leq 2,$$ (b) and $g:{\mathbb{R}}^d\to{\mathbb{R}}$ be a smooth function satisfying $${g}(x)=\begin{cases} \frac{|x|}{2}& \text{if }|x|\in[0,\frac{R}{2}],\\ 0& \text{if }|x|\in[R-1,\infty), \end{cases} \quad\text{such that }|\nabla g|\leq 2.$$ Furthermore we define $\zeta$ through .
By construction $\eta$ and $g$ satisfy and there exists a constant $C=C(d)>0$ independent of $R$ such that $$\label{eq:test-fun-bounds} \|\nabla\eta\|_{\ell^\infty({\mathbb{Z}}^d)}+\|\nabla g\|_{\ell^\infty({\mathbb{Z}}^d)}\leq C(d).$$ Thus we have that $$\label{eq:14} |\nabla_i\zeta(x)| \le C(d)\big(\min\{|\zeta(x)|,|\zeta(x+e_i)|\} \delta + 1\big),$$ Indeed, this is seen by writing $|\nabla_i\zeta|$ in the following two equivalent forms: On the one hand, an application of the discrete Leibniz rule $$\nabla_i(fg)(x)=\nabla_if(x)g(x)+f(x+e_i)\nabla_ig(x)$$ yields $$\begin{aligned} \nabla_i\zeta(x) &= \eta(x+e_i)e^{\delta g(x+e_i)} - \eta(x)e^{\delta g(x)}\\ &= \eta(x+e_i)(e^{\delta g(x+e_i)}-e^{\delta g(x)})+(\eta(x+e_i)-\eta(x))e^{\delta g(x)}\\ &=\eta(x+e_i)e^{\delta g(x+e_i)}(1-e^{-\delta \nabla_ig(x)})+\nabla_i\eta(x),\end{aligned}$$ since $(\eta(x+e_i)-\eta(x))e^{\delta g(x)}=\nabla_i\eta(x)$ by . On the other hand, a similar calculation yields $$\begin{aligned} \nabla_i\zeta(x) &= \eta(x)\nabla_i(e^{\delta g(x)})+\nabla_i\eta(x)e^{\delta g(x+e_i)}\\ &= \zeta(x)(e^{\delta \nabla_ig(x)}-1)+ \nabla_i\eta(x).\end{aligned}$$ Therefore  follows from . [**Step 2**]{}. Testing the equation with $\zeta$: We claim that there exists $\delta=\delta(d,\lambda,T) > 0$ such that $$\label{eq:18} \sum_x |\zeta(x)|^2(|G(x)|^2+|\nabla G(x)|^2) \leq C(d,\lambda,T) \Big(G(0) + \sum_{x}(|G(x)|^2+|\nabla G(x)|^2) \Big).$$ for all $R\ge3$, say. Our argument is as follows: The discrete Leibniz rule yields $$\begin{aligned} |\zeta(x)|^2\nabla_iG(x) &= \nabla_i(\zeta^2G)(x)-G(x+e_i)\nabla_i(\zeta^2(x))\\ &= \nabla_i(\zeta^2G)(x)-G(x+e_i)\nabla_i\zeta(x)\big(\zeta(x)+\zeta(x+e_i)\big)\\ &= \nabla_i(\zeta^2G)(x)-G(x+e_i)\nabla_i\zeta(x)\big(2\zeta(x)+\nabla_i \zeta(x)\big).\end{aligned}$$ Together with ellipticity of $a$, cf. , we obtain that $$\begin{aligned} &\frac{1}{T}\sum_x |G(x)|^2|\zeta(x)|^2+\lambda\sum_x|\zeta(x)|^2|\nabla G(x)|^2\\\nonumber &\leq \frac{1}{T}\sum_x |G(x)|^2|\zeta(x)|^2+\sum_x|\zeta(x)|^2\nabla G(x)\cdot a(x)\nabla G(x)\\ &= \frac{1}{T}\sum_x |G(x)|^2|\zeta(x)|^2+\sum_x\nabla(\zeta^2 G)(x)\cdot a(x)\nabla G(x)\\ &\qquad\qquad -\sum_{x,i,j}G(x+e_i)\nabla_i\zeta(x)\big(2\zeta(x)+\nabla_i \zeta(x)\big)\,a_{ij}(x)\nabla_j G(x). \end{aligned}$$ By the defining equation  for $G$, the second-to-last line equals $G(0)|\zeta(0)|^2=G(0)$ (for the choice of test function in Step 1). Therefore Young’s inequality and $|a|\le 1$ yield $$\begin{gathered} \label{eq:15} \frac{1}{T}\sum_x |G(x)|^2|\zeta(x)|^2+\lambda\sum_x|\zeta(x)|^2|\nabla G(x)|^2\\ \le G(0) + \frac{1}{2\epsilon}\sum_{x,i} |G(x+e_i)|^2 |\nabla_i\zeta(x)|^2 + \epsilon\sum_{x,i} \big(2|\zeta(x)|^2+|\nabla_i \zeta(x)|^2\big) |\nabla_i G(x)|^2\end{gathered}$$ for all $\epsilon>0$. The gradient estimate  of the test function $\zeta$ yields $$\sum_{x,i}|G(x+e_i)|^2|\nabla_i\zeta(x)|^2 \leq C(d) \Big(\delta\sum_{x}|G(x)|^2|\zeta(x)|^2 + \sum_{x}|G(x)|^2\Big),$$ as well as $$\sum_{x}|\nabla_i\zeta(x)|^2|\nabla_i G(x)|^2 \leq C(d) \Big(\delta\sum_{x}|\nabla G(x)|^2|\zeta(x)|^2 + \sum_{x}|\nabla G(x)|^2\Big).$$ Inserting the last two estimates into  yields $$\begin{aligned} &\frac{1}{T}\sum_x |G(x)|^2 |\zeta(x)|^2+\lambda\sum_x|\zeta(x)|^2|\nabla G(x)|^2\\ &\le G(0) + \Big( \frac{C(d)\delta}{2\epsilon} + 2\epsilon \Big)\sum_{x}|\zeta(x)|^2 \big(|\nabla G(x)|^2 + |G(x)|^2 \big)\\ &\qquad\qquad\qquad\qquad+ C(d)\Big(\frac{1}{2\epsilon} + \epsilon \Big) \sum_{x} \big(|\nabla G(x)|^2 + |G(x)|^2 \big)\end{aligned}$$ for all $\epsilon, \delta>0$. 
An appropriate choice of $\epsilon$ and $\delta$, for instance $\epsilon = \sqrt{\delta}$ with $\delta = \delta(d,\lambda,T)$ small enough, allows us to absorb the sums involving $\zeta$ on the left-hand side and we obtain . **Step 3.** Conclusion: We substitute the definition  into  and recall the construction of $\eta$ and $g$ in Step 1 to obtain that $$\sum_{x\in{\mathbb{Z}}^d:|x|\le \frac{R}{2}} (|G(x)|^2+|\nabla G(x)|^2) e^{\delta(d,\lambda,T) |x|} \leq C(d,\lambda,T) \Big(G(0) + \sum_{x\in{\mathbb{Z}}^d}(|G(x)|^2+|\nabla G(x)|^2) \Big)$$ for all $R\ge3$. By , the right-hand side is bounded by $C(d,\lambda,T)$ and therefore the claim follows upon letting $R\uparrow\infty$. References ========== S. Agmon. Lectures on exponential decay of solutions of second-order elliptic equations: bounds on eigenfunctions of [$N$]{}-body [S]{}chrödinger operators. , Princeton University Press, Princeton, NJ; University of Tokyo Press, Tokyo, 1982. G. Allaire and M. Amar. Boundary layer tails in periodic homogenization. , 4:209–243, 1999. S. Armstrong and C. Smart. Quantitative stochastic homogenization of elliptic equations in nondivergence form. S. Armstrong and C. Smart. Quantitative stochastic homogenization of convex integral functionals. M. Avellaneda and F.-H. Lin. Compactness methods in the theory of homogenization. , 40(6):803–847, 1987. P. Bella and F. Otto. Corrector estimates for elliptic systems with random periodic coefficients. J. Bergh and J. L[ö]{}fstr[ö]{}m. . Springer-Verlag, Berlin, 1976. Grundlehren der Mathematischen Wissenschaften, No. 223. M. Biskup. Recent progress on the Random Conductance Model. 8:294–373, 2011. A. Bourgeat and A. Piatnitski. Approximations of effective coefficients in stochastic homogenization. , 40:153–165, 2005. L.A. Caffarelli and P.E. Souganidis. Rates of convergence for the homogenization of fully nonlinear uniformly elliptic pde in random media. , 180(2):301–360, 2010. P. Caputo and D. Ioffe. Finite volume approximation of the effective diffusion matrix: the case of independent bond disorder. , 39(3):505–525, 2003. J. G. Conlon and A. Naddaf. On homogenization of elliptic equations with random coefficients. , 5(9):1–58, 2000. J. G. Conlon and T. Spencer. Strong convergence to the homogenized limit of elliptic equations with random coefficients. , 366(3):1257–1288, 2014. J.G. Conlon and T. Spencer. A strong central limit theorem for a class of random surfaces. 2011. arXiv:1105.2814v2. T. Delmotte and J.-D. Deuschel. On estimating the derivatives of symmetric diffusions in stationary random environment, with applications to [$\nabla\phi$]{} interface model. , 133(3):358–390, 2005. J.-D. Deuschel and T. Kumagai. Markov chain approximations to non-symmetric diffusions with bounded coefficients. , 66(6):821–866, 2013. D. Gerard-Varet and N. Masmoudi. Homogenization and boundary layers. 209:133–178, 2012. A. Guionnet and B. Zegarlinski. Lecture notes on Logarithmic Sobolev Inequalities. 1801:1–134, 2003. A. Gloria. Numerical approximation of effective coefficients in stochastic homogenization of discrete elliptic equations. 46(1):1–38, 2012. A. Gloria, S. Neukamm and F. Otto. Quantification of ergodicity in stochastic homogenization: optimal bounds via spectral gap on Glauber dynamics. Online First, 2014. A. Gloria, S. Neukamm and F. Otto. Quantification of ergodicity in stochastic homogenization: optimal bounds via spectral gap on Glauber dynamics — long version. A. Gloria, S. Neukamm and F. Otto.
An optimal quantitative two-scale expansion in stochastic homogenization of discrete elliptic equations. , 48(2):325–346, 2014. A. Gloria and F. Otto. An optimal variance estimate in stochastic homogenization of discrete elliptic equations. , 39(3), 779–856, 2011. A. Gloria and F. Otto. An optimal error estimate in stochastic homogenization of discrete elliptic equations. , 22(1):1–28, 2012. A. Gloria and F. Otto. Quantitative results on the corrector equation in stochastic homogenization. C. Kipnis and S.R.S. Varadhan. Central limit theorem for additive functional of reversible [M]{}arkov processes and applications to simple exclusion. , 104:1–19, 1986. S.M. Kozlov. The averaging of random operators. , 109(151)(2):188–202, 327, 1979. S.M. Kozlov. Averaging of difference schemes. , 57(2):351–369, 1987. T. Kumagai. Random Walks on Disordered Media and their Scaling Limits. , Springer, 2014. R. Künnemann. The diffusion limit for reversible jump processes on $\mathbb{Z}^d$ with ergodic random bond conductivities. , 90:27–68, 1983. A. Lamacz, S. Neukamm and F. Otto Moment bounds for the corrector in stochastic homogenization of a percolation model. WIAS Preprint No. 1836, 2013. Z.-M. Ma and M. Röckner. Introduction to the theory of (non-symmetric) Dirichlet forms. , Springer-Verlag, Berlin, 1992. D. Marahrens and F. Otto. Annealed estimates on the [Green]{} function. MPI Preprint 69/2012. J.-C. Mourrat. First-order expansion of homogenized coefficients under Bernoulli perturbations. , to appear, 2014. J. C. Mourrat and F. Otto. Correlation structure of the corrector in stochastic homogenization. A. Naddaf and T. Spencer. Estimates on the variance of some homogenization problems. Preprint, 1998. G.C. Papanicolaou and S.R.S. Varadhan. Boundary value problems with rapidly oscillating random coefficients. In [*Random fields, [V]{}ol. [I]{}, [II]{} ([E]{}sztergom, 1979)*]{}, volume 27 of [*Colloq. Math. Soc. János Bolyai*]{}, pages 835–873. North-Holland, Amsterdam, 1981. V. Sidoravicius, A.-S. Sznitman. Quenched invariance principles for walks on clusters of percolation or among random conductances. , 129(2):219–244, 2004. E. M. Stein. , volume 30 of [*Princeton Mathematical Series*]{}. Princeton University Press, Princeton, NJ, 1970. V.V. Yurinskii. , Sept. 78, 54. [^1]: `j.ben-artzi@imperial.ac.uk`, Department of Mathematics, South Kensington Campus, Imperial College London, London SW7 2AZ, United Kingdom [^2]: `daniel.marahrens@mis.mpg.de`, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Inselstraße 22, 04103 Leipzig, Germany [^3]: `stefan.neukamm@tu-dresden.de`, Technische Universität Dresden, Fachrichtung Mathematik, 01062 Dresden, Germany
--- abstract: | Consistent interactions that can be added to a free, Abelian gauge theory comprising a finite collection of BF models and a finite set of two-form gauge fields (with the Lagrangian action written in first-order form as a sum of Abelian Freedman-Townsend models) are constructed from the deformation of the solution to the master equation based on specific cohomological techniques. Under the hypotheses of smoothness in the coupling constant, locality, Lorentz covariance, and Poincaré invariance of the interactions, supplemented with the requirement on the preservation of the number of derivatives on each field with respect to the free theory, we obtain that the deformation procedure modifies the Lagrangian action, the gauge transformations as well as the accompanying algebra. The interacting Lagrangian action contains a generalized version of non-Abelian Freedman-Townsend model. The consistency of interactions to all orders in the coupling constant unfolds certain equations, which are shown to have solutions. PACS number: 11.10.Ef author: - | C. Bizdadea[^1], E. M. Cioroianu[^2], I. Negru,\ S. O. Saliu[^3], S. C. Săraru[^4]\ Faculty of Physics, University of Craiova,\ 13 A. I. Cuza Str., Craiova 200585, Romania title: 'On the generalized Freedman-Townsend model' --- Introduction ============ The power of the BRST formalism was strongly increased by its cohomological development, which allowed, among others, a useful investigation of many interesting aspects related to the perturbative renormalization problem [4a,4b,4c,4d,5]{}, anomaly-tracking mechanism [@5; @6a; @6b; @6c; @6d; @6e], simultaneous study of local and rigid invariances of a given theory [@7] as well as to the reformulation of the construction of consistent interactions in gauge theories [@7a1; @7a2; @7a3; @7a4; @7a5] in terms of the deformation theory [@8b1; @8b2; @8b3] or, actually, in terms of the deformation of the solution to the master equation [@def; @contempmath]. The scope of this paper is to investigate the consistent interactions that can be added to a free, Abelian gauge theory consisting of a finite collection of BF models and a finite set of two-form gauge fields (described by a sum of Abelian Freedman-Townsend actions). Each BF model from the collection comprises a scalar field, a two-form and two sorts of one-forms. We work under the hypotheses that the interactions are smooth in the coupling constant, local, Lorentz covariant, and Poincaré invariant, supplemented with the requirement on the preservation of the number of derivatives on each field with respect to the free theory. Under these hypotheses, we obtain the most general form of the theory that describes the cross-couplings between a collection of BF models and a set of two-form gauge fields. The resulting interacting model is accurately formulated in terms of a gauge theory with gauge transformations that close according to an open algebra (the commutators among the deformed gauge transformations only close on the stationary surface of deformed field equations). 
Topological BF models [@birmingham91] are important in view of the fact that certain interacting, non-Abelian versions are related to a Poisson structure algebra [@stroblspec] present in various versions of Poisson sigma models [@psmikeda94; @psmstrobl95; @psmstroblCQG961; @psmstroblCQG962; @psmstrobl97; @psmcattaneo2000; @psmcattaneo2001], which are known to be useful in the study of two-dimensional gravity [@grav2teit83; @grav2jackiw85; @grav2katanaev86; @grav2brown88; @grav2katanaev90; @grav2schmidt; @grav2solod; @grav2ikedaizawa90; @grav2strobl94; @grav2grumvassil02] (for a detailed approach, see [@grav2strobl00]). It is well known that pure three-dimensional gravity is just a BF theory. Moreover, in higher dimensions general relativity and supergravity in the Ashtekar formalism may also be formulated as topological BF theories with some extra constraints [@ezawa; @freidel; @smolin; @ling]. Due to these results, it is important to know the self-interactions in BF theories as well as the couplings between BF models and other theories. This problem has been considered in the literature in relation to self-interactions in various classes of BF models [@defBFizawa2000; @defBFmpla; @defBFikeda00; @defBFikeda01; @defBFijmpa; @defBFjhep; @defBFijmpajuvi06] and to couplings to matter fields [@defBFepjc] and vector fields [@defBFikeda03; @defBFijmpajuvi04], by using the powerful BRST cohomological reformulation of the problem of constructing consistent interactions. Other aspects concerning interacting, topological BF models can be found in [@otherBFikeda02; @otherBFikedaizawa04; @otherBFikeda06]. On the other hand, models with $p$-form gauge fields play an important role in string and superstring theory as well as in supergravity. Based on these considerations, the study of interactions between BF models and two-forms appears as a topic that might shed light on certain aspects of both gravity and supergravity theories. Our strategy goes as follows. Initially, we determine in Section \[free\] the antifield-BRST symmetry of the free model, which splits as the sum of the Koszul-Tate differential and the exterior derivative along the gauge orbits, $s=\delta +\gamma $. Then, in Section \[defrev\] we briefly present the reformulation of the problem of constructing consistent interactions in gauge field theories in terms of the deformation of the solution to the master equation. Next, in Section \[int\] we determine the consistent deformations of the solution to the master equation for the model under consideration. The first-order deformation belongs to the local cohomology $H^{0}(s\vert d)$, where $d$ is the exterior spacetime derivative. The computation of the cohomological space $H^{0}(s\vert d)$ proceeds by expanding the cocycles according to the antighost number and further using the cohomological groups $H(\gamma )$ and $H(\delta \vert d)$. We find that the first-order deformation is parameterized by $11$ types of smooth functions of the undifferentiated scalar fields, which become restricted to fulfill $19$ kinds of equations in order to produce a deformation that is consistent to all orders in the coupling constant. With the help of these equations we show that the remaining deformations, of orders $2$ and higher, can be taken to vanish. The identification of the interacting model is developed in Section \[lagint\]. All the interaction vertices are derivative-free. Among the cross-couplings between the collection of BF models and the set of two-form gauge fields we find a generalized version of the non-Abelian Freedman-Townsend vertex.
(By ‘generalized’ we mean that its form is identical to that of the standard non-Abelian Freedman-Townsend vertex, except that the structure constants of a Lie algebra are replaced here by some functions depending on the undifferentiated scalar fields from the BF sector.) Meanwhile, both the gauge transformations corresponding to the coupled model and their algebra are deformed with respect to the initial Abelian theory in such a way that the new gauge algebra becomes open and the reducibility relations close only on-shell (on the stationary surface of the deformed field equations). It is interesting to mention that, in contrast to the standard non-Abelian Freedman-Townsend model, where the auxiliary vector fields are gauge-invariant, here these fields gain nonvanishing gauge transformations, proportional to some of the BF gauge parameters. At the end of Section \[lagint\] we comment on several classes of solutions to the equations satisfied by the various functions of the scalar fields that parameterize the deformed solution to the master equation. Section \[concl\] closes the paper with the main conclusions. The present paper also contains four appendices, in which various notations used in the main body of the paper as well as some formulas concerning the gauge structure of the interacting model are listed. Free model: Lagrangian formulation and BRST symmetry\[free\] ============================================================ The starting point is given by a free theory in four spacetime dimensions that describes a finite collection of BF models and a finite set of two-form gauge fields, with the Lagrangian action $$\begin{aligned} S_{0}[A_{\mu }^{a},H_{\mu }^{a},\varphi _{a},B_{a}^{\mu \nu },V_{\mu \nu }^{A},V_{\mu }^{A}] &=&\int d^{4}x\left( H_{\mu }^{a}\partial ^{\mu }\varphi _{a}+\tfrac{1}{2}B_{a}^{\mu \nu }\partial _{\lbrack \mu }^{\left. {}\right. }A_{\nu ]}^{a}\right. \notag \\ &&\left. +\tfrac{1}{2}V_{A}^{\mu \nu }F_{\mu \nu }^{A}+\tfrac{1}{2}V_{\mu }^{A}V_{A}^{\mu }\right) . \label{bfa1}\end{aligned}$$Each of the BF models from the collection (to be indexed by lower case letters $a$, $b$, etc.) comprises a scalar field $\varphi _{a}$, two kinds of one-forms $A_{\mu }^{a}$ and $H_{\mu }^{a}$, and a two-form $B_{a}^{\mu \nu }$. The action for the set of Abelian two-forms decomposes as a sum of individual two-form actions, indexed via capital Latin letters ($A$, $B$, etc.). Each two-form action is written in first-order form as an Abelian Freedman-Townsend action, in terms of a two-form $V_{A}^{\mu \nu }$ and an auxiliary vector $V_{\mu }^{A}$, with the Abelian field strength $F_{\mu \nu }^{A}=\partial _{\lbrack \mu }^{\left. {}\right. }V_{\nu ]}^{A}$. The collection indices from the two-form sector are lowered with the (non-degenerate) metric $k_{AB}$ induced by the Lagrangian density $\tfrac{1}{2}\left( V_{A}^{\mu \nu }F_{\mu \nu }^{A}+V_{\mu }^{A}V_{A}^{\mu }\right) $ from (\[bfa1\]) (i.e. $F_{A}^{\mu \nu }=k_{AB}F^{B\mu \nu }$) and are raised with its inverse, of elements $k^{AB}$. Of course, we consider the general situation, where the two types of collection indices run independently of each other. Everywhere in this paper the notation $\left[ \mu \ldots \nu \right] $ signifies complete antisymmetry with respect to the (Lorentz) indices between brackets, with the conventions that the minimum number of terms is always used and the result is never divided by the number of terms.
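For concreteness (an illustrative spelling-out of the stated convention, added here for clarity), one has, e.g., $$\partial _{\lbrack \mu }^{\left. {}\right. }A_{\nu ]}^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a},\qquad \partial _{\left. {}\right. }^{[\mu }B_{a}^{\nu \rho ]}=\partial ^{\mu }B_{a}^{\nu \rho }+\partial ^{\nu }B_{a}^{\rho \mu }+\partial ^{\rho }B_{a}^{\mu \nu },$$with no overall numerical factor in front of either expression (in the second example the antisymmetry of $B_{a}^{\mu \nu }$ reduces the antisymmetrization to three terms).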
Action (\[bfa1\]) is found invariant under the gauge transformations $$\begin{gathered} \delta _{\epsilon }A_{\mu }^{a}=\partial _{\mu }\epsilon ^{a},\quad \delta _{\epsilon }H_{\mu }^{a}=-2\partial ^{\nu }\epsilon _{\nu \mu }^{a},\quad \delta _{\epsilon }\varphi _{a}=0, \label{bfa2} \\ \delta _{\epsilon }B_{a}^{\mu \nu }=-3\partial _{\rho }\epsilon _{a}^{\rho \mu \nu },\quad \delta _{\epsilon }V_{\mu \nu }^{A}=\varepsilon _{\mu \nu \rho \lambda }\partial ^{\rho }\epsilon ^{A\lambda },\quad \delta _{\epsilon }V_{\mu }^{A}=0, \label{bfa2i}\end{gathered}$$where all the gauge parameters are bosonic, with $\epsilon _{\mu \nu }^{a}$ and $\epsilon _{a}^{\mu \nu \rho }$ completely antisymmetric. It is easy to see that the above gauge transformations are Abelian and off-shell (everywhere in the space of field histories, not only on the stationary surface of field equations for (\[bfa1\])), second-order reducible. Indeed, related to the first-order reducibility, we observe that if we make the transformations $\epsilon _{\mu \nu }^{a}(\theta )=-3\partial ^{\lambda }\theta _{\lambda \mu \nu }^{a}$, $\epsilon _{a}^{\mu \nu \rho }(\theta )=-4\partial _{\lambda }\theta _{a}^{\lambda \mu \nu \rho }$, $\epsilon ^{A\lambda }(\theta )=\partial ^{\lambda }\theta ^{A}$, with $\theta $s arbitrary, bosonic functions, completely antisymmetric (where applicable) in their Lorentz indices, then the corresponding gauge transformations identically vanish, $\delta _{\epsilon (\theta )}H_{\mu }^{a}=0$, $\delta _{\epsilon (\theta )}B_{a}^{\mu \nu }=0$, $\delta _{\epsilon (\theta )}V_{\mu \nu }^{A}=0$. The last two transformation laws of the gauge parameters can be further annihilated by trivial transformations only: $\epsilon _{a}^{\mu \nu \rho }(\theta )=0$ if and only if $\theta _{a}^{\lambda \mu \nu \rho }=0$ and $\epsilon ^{A\lambda }(\theta )=0$ if and only if $\theta ^{A}=0$, so there is no higher-order reducibility associated with them. By contrast, the first one can be made to vanish strongly via the transformation $\theta _{\lambda \mu \nu }^{a}(\omega )=-4\partial ^{\alpha }\omega _{\alpha \lambda \mu \nu }^{a}$, with $\omega _{\alpha \lambda \mu \nu }^{a}$ an arbitrary, completely antisymmetric, bosonic function (which indeed produces $\epsilon _{\mu \nu }^{a}(\theta \left( \omega \right) )=0$), but there is no nontrivial transformation of $\omega _{\alpha \lambda \mu \nu }^{a}$ such that $\theta _{\lambda \mu \nu }^{a}$ becomes zero. Thus, the reducibility of (\[bfa2\])–(\[bfa2i\]) stops at order $2$ and holds off-shell. 
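As a quick check of the first-order reducibility (spelled out here for clarity), inserting $\epsilon _{\mu \nu }^{a}(\theta )=-3\partial ^{\lambda }\theta _{\lambda \mu \nu }^{a}$ into the gauge transformation of $H_{\mu }^{a}$ from (\[bfa2\]) gives $$\delta _{\epsilon (\theta )}H_{\mu }^{a}=-2\partial ^{\nu }\epsilon _{\nu \mu }^{a}(\theta )=6\partial ^{\nu }\partial ^{\lambda }\theta _{\lambda \nu \mu }^{a}=0,$$since the symmetric pair of derivatives $\partial ^{\nu }\partial ^{\lambda }$ is contracted with the completely antisymmetric $\theta _{\lambda \nu \mu }^{a}$; the identities $\delta _{\epsilon (\theta )}B_{a}^{\mu \nu }=0$ and $\delta _{\epsilon (\theta )}V_{\mu \nu }^{A}=0$ follow in exactly the same manner.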
In order to construct the BRST symmetry of this free theory, we introduce the field/ghost and antifield spectra $$\begin{gathered} \Phi ^{\alpha _{0}}=\left( A_{\mu }^{a},H_{\mu }^{a},\varphi _{a},B_{a}^{\mu \nu },V_{\mu \nu }^{A},V_{\mu }^{A}\right) , \label{bfa6} \\ \Phi _{\alpha _{0}}^{\ast }=\left( A_{a}^{\ast \mu },H_{a}^{\ast \mu },\varphi ^{\ast a},B_{\mu \nu }^{\ast a},V_{A}^{\ast \mu \nu },V_{A}^{\ast \mu }\right) , \label{bfa6a} \\ \eta ^{\alpha _{1}}=\left( \eta ^{a},C_{\mu \nu }^{a},\eta _{a}^{\mu \nu \rho },C_{\mu }^{A}\right) , \label{bfa7} \\ \eta _{\alpha _{1}}^{\ast }=\left( \eta _{a}^{\ast },C_{a}^{\ast \mu \nu },\eta _{\mu \nu \rho }^{\ast a},C_{A}^{\ast \mu }\right) , \label{bfa7a} \\ \eta ^{\alpha _{2}}=\left( C_{\mu \nu \rho }^{a},\eta _{a}^{\mu \nu \rho \lambda },C^{A}\right) ,\quad \eta _{\alpha _{2}}^{\ast }=\left( C_{a}^{\ast \mu \nu \rho },\eta _{\mu \nu \rho \lambda }^{\ast a},C_{A}^{\ast }\right) , \label{bfa8} \\ \eta ^{\alpha _{3}}=\left( C_{\mu \nu \rho \lambda }^{a}\right) ,\quad \eta _{\alpha _{3}}^{\ast }=\left( C_{a}^{\ast \mu \nu \rho \lambda }\right) . \label{bfa9}\end{gathered}$$The fermionic ghosts $\eta ^{\alpha _{1}}$ respectively correspond to the bosonic gauge parameters $\epsilon ^{\alpha _{1}}=\left( \epsilon ^{a},\epsilon _{\mu \nu }^{a},\epsilon _{a}^{\mu \nu \rho },\epsilon _{\mu }^{A}\right) $, the bosonic ghosts for ghosts $\eta ^{\alpha _{2}}$ are due to the first-order reducibility relations (the $\theta $-parameters from the previous transformations), while the fermionic ghosts for ghosts for ghosts $\eta ^{\alpha _{3}}$ are required by the second-order reducibility relations (the $\omega $-function from the above). The star variables represent the antifields of the corresponding fields/ghosts. (Their Grassmann parities are respectively opposite to those of the associated fields/ghosts, in agreement with the general rules of the antifield-BRST method.) Since both the gauge generators and the reducibility functions are field-independent, it follows that the BRST differential reduces to $$s=\delta +\gamma , \label{desc}$$where $\delta $ is the Koszul-Tate differential and $\gamma $ denotes the exterior longitudinal derivative. The Koszul-Tate differential is graded in terms of the antighost number ($\mathrm{agh}$, $\mathrm{agh}\left( \delta \right) =-1$) and enforces a resolution of the algebra of smooth functions defined on the stationary surface of field equations for action (\[bfa1\]), $C^{\infty }\left( \Sigma \right) $, $\Sigma :\delta S_{0}/\delta \Phi ^{\alpha _{0}}=0$. The exterior longitudinal derivative is graded in terms of the pure ghost number ($\mathrm{pgh}$, $\mathrm{pgh}\left( \gamma \right) =1$) and is correlated with the original gauge symmetry via its cohomology at pure ghost number $0$ computed in $C^{\infty }\left( \Sigma \right) $, which is isomorphic to the algebra of physical observables for the free theory. These two degrees do not interfere ($\mathrm{agh}\left( \gamma \right) =0$, $\mathrm{pgh}\left( \delta \right) =0$). 
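For orientation (this is just the standard Koszul-Tate construction spelled out on an example), on the antifields of the original fields $\delta $ acts by (minus) the Euler-Lagrange derivatives of (\[bfa1\]); for instance $$\delta H_{a}^{\ast \mu }=-\frac{\delta S_{0}}{\delta H_{\mu }^{a}}=-\partial ^{\mu }\varphi _{a},\qquad \delta \varphi ^{\ast a}=-\frac{\delta S_{0}}{\delta \varphi _{a}}=\partial ^{\mu }H_{\mu }^{a},$$which may be checked against the definitions (\[bfa16\])–(\[bfa17\]) listed below and makes manifest how $\delta $ implements the field equations in the resolution of $C^{\infty }\left( \Sigma \right) $.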
The pure ghost number and antighost number of BRST generators (\[bfa6\])–(\[bfa9\]) are valued as follows: $$\begin{gathered} \mathrm{pgh}\left( \Phi ^{\alpha _{0}}\right) =0,\quad \mathrm{pgh}\left( \eta ^{\alpha _{1}}\right) =1,\quad \mathrm{pgh}\left( \eta ^{\alpha _{2}}\right) =2,\quad \mathrm{pgh}\left( \eta ^{\alpha _{3}}\right) =3, \label{bfa10} \\ \mathrm{pgh}\left( \Phi _{\alpha _{0}}^{\ast }\right) =\mathrm{pgh}\left( \eta _{\alpha _{1}}^{\ast }\right) =\mathrm{pgh}\left( \eta _{\alpha _{2}}^{\ast }\right) =\mathrm{pgh}\left( \eta _{\alpha _{3}}^{\ast }\right) =0, \label{bfa11} \\ \mathrm{agh}\left( \Phi ^{\alpha _{0}}\right) =\mathrm{agh}\left( \eta ^{\alpha _{1}}\right) =\mathrm{agh}\left( \eta ^{\alpha _{2}}\right) =\mathrm{agh}\left( \eta ^{\alpha _{3}}\right) =0, \label{bfa12} \\ \mathrm{agh}\left( \Phi _{\alpha _{0}}^{\ast }\right) =1,\quad \mathrm{agh}\left( \eta _{\alpha _{1}}^{\ast }\right) =2,\quad \mathrm{agh}\left( \eta _{\alpha _{2}}^{\ast }\right) =3,\quad \mathrm{agh}\left( \eta _{\alpha _{3}}^{\ast }\right) =4, \label{bfa13}\end{gathered}$$where the actions of $\delta $ and $\gamma $ on them read as $$\begin{gathered} \delta \Phi ^{\alpha _{0}}=\delta \eta ^{\alpha _{1}}=\delta \eta ^{\alpha _{2}}=\delta \eta ^{\alpha _{3}}=0, \label{bfa15} \\ \delta A_{a}^{\ast \mu }=-\partial _{\nu }B_{a}^{\mu \nu },\quad \delta H_{a}^{\ast \mu }=-\partial ^{\mu }\varphi _{a},\quad \delta \varphi ^{\ast a}=\partial ^{\mu }H_{\mu }^{a}, \label{bfa16} \\ \delta B_{\mu \nu }^{\ast a}=-\tfrac{1}{2}\partial _{[\mu }^{\left. {}\right. }A_{\nu ]}^{a},\quad \delta V_{A}^{\ast \mu \nu }=-\tfrac{1}{2}F_{A}^{\mu \nu },\quad \delta V_{A}^{\ast \mu }=-\left( V_{A}^{\mu }+\partial _{\nu }V_{A}^{\mu \nu }\right) , \label{bfa17} \\ \delta \eta _{a}^{\ast }=-\partial _{\mu }A_{a}^{\ast \mu },\quad \delta C_{a}^{\ast \mu \nu }=\partial _{\left. {}\right. }^{[\mu }H_{a}^{\ast \nu ]},\quad \delta \eta _{\mu \nu \rho }^{\ast a}=\partial _{[\mu }^{\left. {}\right. }B_{\nu \rho ]}^{\ast a}, \label{bfa18} \\ \delta C_{A}^{\ast \mu }=\varepsilon ^{\mu \nu \rho \lambda }\partial _{\nu }V_{A\rho \lambda }^{\ast },\quad \delta C_{a}^{\ast \mu \nu \rho }=-\partial _{\left. {}\right. }^{\left[ \mu \right. }C_{a}^{\ast \nu \rho ]}, \label{bfa19} \\ \delta \eta _{\mu \nu \rho \lambda }^{\ast a}=-\partial _{[\mu }^{\left. {}\right. }\eta _{\nu \rho \lambda ]}^{\ast a},\quad \delta C_{A}^{\ast }=\partial _{\mu }C_{A}^{\ast \mu },\quad \delta C_{a}^{\ast \mu \nu \rho \lambda }=\partial _{\left. {}\right. }^{[\mu }C_{a}^{\ast \nu \rho \lambda ]}, \label{bfa19a} \\ \gamma \Phi _{\alpha _{0}}^{\ast }=\gamma \eta _{\alpha _{1}}^{\ast }=\gamma \eta _{\alpha _{2}}^{\ast }=\gamma \eta _{\alpha _{3}}^{\ast }=0, \label{bfa20} \\ \gamma A_{\mu }^{a}=\partial _{\mu }\eta ^{a},\quad \gamma H_{\mu }^{a}=2\partial ^{\nu }C_{\mu \nu }^{a},\quad \gamma B_{a}^{\mu \nu }=-3\partial _{\rho }\eta _{a}^{\mu \nu \rho }, \label{bfa21} \\ \gamma \varphi _{a}=0=\gamma V_{\mu }^{A},\quad \gamma V_{\mu \nu }^{A}=\varepsilon _{\mu \nu \rho \lambda }\partial ^{\rho }C^{A\lambda },\quad \gamma \eta ^{a}=0, \label{bfa22} \\ \gamma C_{\mu \nu }^{a}=-3\partial ^{\rho }C_{\mu \nu \rho }^{a},\quad \gamma \eta _{a}^{\mu \nu \rho }=4\partial _{\lambda }\eta _{a}^{\mu \nu \rho \lambda },\quad \gamma C_{\mu }^{A}=\partial _{\mu }C^{A}, \label{bfa23} \\ \gamma C_{\mu \nu \rho }^{a}=4\partial ^{\lambda }C_{\mu \nu \rho \lambda }^{a},\quad \gamma \eta _{a}^{\mu \nu \rho \lambda }=\gamma C^{A}=0,\quad \gamma C_{\mu \nu \rho \lambda }^{a}=0. 
\label{bfa24}\end{gathered}$$ The overall degree of the BRST complex is called the ghost number ($\mathrm{gh}$) and is defined as the difference between the pure ghost number and the antighost number, such that $\mathrm{gh}\left( \delta \right) =\mathrm{gh}\left( \gamma \right) =\mathrm{gh}\left( s\right) =1$. The BRST symmetry admits a canonical action $s\cdot =\left( \cdot ,\bar{S}\right) $ in an antibracket structure $\left( ,\right) $, where its canonical generator is a bosonic functional of ghost number $0$ ($\varepsilon \left( \bar{S}\right) =0$, $\mathrm{gh}\left( \bar{S}\right) =0$) that satisfies the classical master equation $\left( \bar{S},\bar{S}\right) =0$. In the case of the free theory under discussion, the solution to the master equation takes the form $$\begin{aligned} \bar{S}= S_{0}&+&\int d^{4}x\left( A_{a}^{\ast \mu }\partial _{\mu }\eta ^{a}+2H_{a}^{\ast \mu }\partial ^{\nu }C_{\mu \nu }^{a}-3B_{\mu \nu }^{\ast a}\partial _{\rho }\eta _{a}^{\mu \nu \rho }\right. \notag \\ &&+\varepsilon _{\mu \nu \rho \lambda }V^{\ast A\mu \nu }\partial ^{\rho }C_{A}^{\lambda }-3C_{a}^{\ast \mu \nu }\partial ^{\rho }C_{\mu \nu \rho }^{a}+4\eta _{\mu \nu \rho }^{\ast a}\partial _{\lambda }\eta _{a}^{\mu \nu \rho \lambda } \notag \\ &&\left. +C_{\mu }^{\ast A}\partial ^{\mu }C_{A}+4C_{a}^{\ast \mu \nu \rho }\partial ^{\lambda }C_{\mu \nu \rho \lambda }^{a}\right) \label{solfree}\end{aligned}$$and contains pieces of antighost number ranging from $0$ to $3$. Deformation of the solution to the master equation: a brief review \[defrev\] ============================================================================= We begin with a “free” gauge theory, described by a Lagrangian action $S_{0}^{\mathrm{L}}\left[ \Phi ^{\alpha _{0}}\right] $, invariant under some gauge transformations $\delta _{\epsilon }\Phi ^{\alpha _{0}}=Z_{\;\;\alpha _{1}}^{\alpha _{0}}\epsilon ^{\alpha _{1}}$, i.e. $\frac{\delta S_{0}^{\mathrm{L}}}{\delta \Phi ^{\alpha _{0}}}Z_{\;\;\alpha _{1}}^{\alpha _{0}}=0$, and consider the problem of constructing consistent interactions among the fields $\Phi ^{\alpha _{0}}$ such that the couplings preserve both the field spectrum and the original number of gauge symmetries. This matter is addressed by means of reformulating the problem of constructing consistent interactions as a deformation problem of the solution to the master equation corresponding to the “free” theory [@def; @contempmath]. Such a reformulation is possible due to the fact that the solution to the master equation contains all the information on the gauge structure of the theory. If an interacting gauge theory can be consistently constructed, then the solution $\bar{S}$ to the master equation $\left( \bar{S},\bar{S}\right) =0$ associated with the “free” theory can be deformed into a solution $S$ $$\bar{S}\rightarrow S=\bar{S}+\lambda S_{1}+\lambda ^{2}S_{2}+\cdots =\bar{S}+\lambda \int d^{D}x\,a+\lambda ^{2}\int d^{D}x\,b+\cdots \label{bff3.1}$$of the master equation for the deformed theory $$\left( S,S\right) =0, \label{bff3.2}$$such that both the ghost and antifield spectra of the initial theory are preserved.
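Explicitly (a short intermediate step, using the symmetry of the antibracket on bosonic functionals of ghost number $0$), substituting the expansion (\[bff3.1\]) into (\[bff3.2\]) gives $$\left( S,S\right) =\left( \bar{S},\bar{S}\right) +2\lambda \left( S_{1},\bar{S}\right) +\lambda ^{2}\left[ \left( S_{1},S_{1}\right) +2\left( S_{2},\bar{S}\right) \right] +\mathcal{O}\left( \lambda ^{3}\right) ,$$whose order-by-order vanishing is spelled out next.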
Equation (\[bff3.2\]) splits, according to the various orders in the coupling constant (deformation parameter) $\lambda $, into a tower of equations: $$\begin{aligned} \left( \bar{S},\bar{S}\right) &=&0, \label{bff3.3} \\ 2\left( S_{1},\bar{S}\right) &=&0, \label{bff3.4} \\ 2\left( S_{2},\bar{S}\right) +\left( S_{1},S_{1}\right) &=&0, \label{bff3.5} \\ \left( S_{3},\bar{S}\right) +\left( S_{1},S_{2}\right) &=&0, \label{bff3.6} \\ &&\vdots \notag\end{aligned}$$ Equation (\[bff3.3\]) is fulfilled by hypothesis. The next equation requires that the first-order deformation of the solution to the master equation, $S_{1}$, is a cocycle of the “free” BRST differential, $sS_{1}=0$. However, only cohomologically nontrivial solutions to (\[bff3.4\]) should be taken into account, as the BRST-exact ones can be eliminated by some (in general nonlinear) field redefinitions. This means that $S_{1}$ pertains to the ghost number $0$ cohomological space of $s$, $H^{0}\left( s\right) $, which is generically nonempty because it is isomorphic to the space of physical observables of the “free” theory. It has been shown (by the triviality of the antibracket map in the cohomology of the BRST differential) that there are no obstructions to finding solutions to the remaining equations, namely (\[bff3.5\]), (\[bff3.6\]), etc. However, the resulting interactions may be nonlocal, and obstructions might even appear if one insists on their locality. The analysis of these obstructions can be carried out by means of standard cohomological techniques. Consistent interactions between a collection of topological BF models and a set of Abelian two-forms\[int\] =========================================================================================================== This section is devoted to the investigation of consistent interactions that can be introduced between a collection of topological BF models and a set of Abelian two-forms in four spacetime dimensions. This matter is addressed in the context of the antifield-BRST deformation procedure briefly reviewed above and relies on computing the solutions to equations (\[bff3.4\])–(\[bff3.6\]), etc., with the help of the free BRST cohomology. Standard material: basic cohomologies ------------------------------------- For obvious reasons, we consider only smooth, local, Lorentz covariant, and Poincaré invariant deformations (i.e., we do not allow explicit dependence on the spacetime coordinates). Moreover, we require the preservation of the number of derivatives on each field with respect to the free theory (derivative-order assumption). The smoothness of the deformations refers to the fact that the deformed solution to the master equation, (\[bff3.1\]), is smooth in the coupling constant $\lambda $ and reduces to the original solution, (\[solfree\]), in the free limit ($\lambda =0$). The preservation of the number of derivatives on each field with respect to the free theory means here that the following two requirements must be simultaneously satisfied: (i) the derivative order of the equations of motion on each field is the same for the free and for the interacting theory, respectively; (ii) the maximum number of derivatives allowed within the interaction vertices is equal to $2$, i.e. the maximum number of derivatives from the free Lagrangian.
If we make the notation $S_{1}=\int d^{4}x\,a$, with $a$ a local function, then equation (\[bff3.4\]), which, as we have seen, controls the first-order deformation, takes the local form $$sa=\partial _{\mu }m^{\mu },\quad \text{\textrm{gh}}\left( a\right) =0,\quad \varepsilon \left( a\right) =0, \label{3.1}$$for some local $m^{\mu }$. This shows that the nonintegrated density of the first-order deformation pertains to the local cohomology of $s$ in ghost number $0$, $a\in H^{0}\left( s\vert d\right) $, where $d$ denotes the exterior spacetime differential. The solution to (\[3.1\]) is unique up to $s$-exact pieces plus divergences $$a\rightarrow a+sb+\partial _{\mu }n^{\mu },\, \text{\textrm{gh}}\left( b\right) =-1,\, \varepsilon \left( b\right) =1,\, \text{\textrm{gh}}\left( n^{\mu }\right) =0,\, \varepsilon \left( n^{\mu }\right) =0. \label{3.1a}$$At the same time, if the general solution to (\[3.1\]) is found to be completely trivial, $a=sb+\partial _{\mu }n^{\mu }$, then it can be made to vanish, $a=0$. In order to analyze equation (\[3.1\]) we develop $a$ according to the antighost number $$a=\sum\limits_{i=0}^{I}a_{i},\quad \text{\textrm{agh}}\left( a_{i}\right) =i,\quad \text{\textrm{gh}}\left( a_{i}\right) =0,\quad \varepsilon \left( a_{i}\right) =0, \label{3.2}$$and assume, without loss of generality, that the above decomposition stops at some finite value of $I$. This can be shown, for instance, as in [@gen2] (Section 3), under the sole assumption that the interacting Lagrangian at the first order in the coupling constant, $a_{0}$, has a finite, but otherwise arbitrary derivative order. Inserting decomposition (\[3.2\]) into equation (\[3.1\]) and projecting it on the various values of the antighost number, we obtain the tower of equations $$\begin{aligned} \gamma a_{I} &=&\partial _{\mu }\overset{\left( I\right) }{m}^{\mu }, \label{3.3} \\ \delta a_{I}+\gamma a_{I-1} &=&\partial _{\mu }\overset{\left( I-1\right) }{m}^{\mu }, \label{3.4} \\ \delta a_{i}+\gamma a_{i-1} &=&\partial _{\mu }\overset{\left( i-1\right) }{m}^{\mu },\quad 1\leq i\leq I-1, \label{3.5}\end{aligned}$$where $\left( \overset{\left( i\right) }{m}^{\mu }\right) _{i=\overline{0,I}} $ are some local currents with $\text{agh}\left( \overset{\left( i\right) }{m}^{\mu }\right) =i$. Equation (\[3.3\]) can be replaced, for strictly positive values of the antighost number, by $$\gamma a_{I}=0,\quad I>0. \label{3.6}$$Due to the second-order nilpotency of $\gamma $ ($\gamma ^{2}=0$), the solution to (\[3.6\]) is clearly unique up to $\gamma $-exact contributions $$a_{I}\rightarrow a_{I}+\gamma b_{I},\quad \text{\textrm{agh}}\left( b_{I}\right) =I,\quad \text{\textrm{pgh}}\left( b_{I}\right) =I-1,\quad \varepsilon \left( b_{I}\right) =1. \label{r68}$$Meanwhile, if it turns out that $a_{I}$ exclusively reduces to $\gamma $-exact terms, $a_{I}=\gamma b_{I}$, then it can be made to vanish, $a_{I}=0$. In other words, the nontriviality of the first-order deformation $a$ is translated at its highest antighost number component into the requirement that $a_{I}\in H^{I}\left( \gamma \right) $, where $H^{I}\left( \gamma \right) $ denotes the cohomology of the exterior longitudinal derivative $\gamma $ in pure ghost number equal to $I$. So, in order to solve equation (\[3.1\]) (equivalent to (\[3.6\]) and (\[3.4\])–(\[3.5\])), we need to compute the cohomology of $\gamma $, $H\left( \gamma \right) $, and, as it will be made clear below, also the local homology of $\delta $, $H\left( \delta \vert d\right) $.
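Note, as a small bookkeeping remark, that since $\mathrm{gh}\left( a\right) =0$ and the ghost number is the difference between the pure ghost number and the antighost number, each term in the expansion (\[3.2\]) carries equal pure ghost and antighost numbers, $$\mathrm{pgh}\left( a_{i}\right) =\mathrm{agh}\left( a_{i}\right) =i,\quad i=\overline{0,I},$$so the piece of highest antighost number is automatically of pure ghost number $I$, in agreement with the requirement $a_{I}\in H^{I}\left( \gamma \right) $ stated above.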
Based on definitions (\[bfa20\])–(\[bfa24\]), it is simple to see that $H\left( \gamma \right) $ is spanned by $$F_{\bar{A}}=\left( \varphi _{a},\partial _{\lbrack \mu }^{\left. {}\right. }A_{\nu ]}^{a},\partial ^{\mu }H_{\mu }^{a},\partial _{\mu }B_{a}^{\mu \nu },V_{\mu }^{A},\tilde{F}_{\mu \nu \rho }^{A}\right) , \label{3.7}$$the antifields $$\chi _{\Delta }^{\ast }=\left( \Phi _{\alpha _{0}}^{\ast },\eta _{\alpha _{1}}^{\ast },\eta _{\alpha _{2}}^{\ast },\eta _{\alpha _{3}}^{\ast }\right) , \label{notat}$$all of their spacetime derivatives as well as by the undifferentiated ghosts $$\eta ^{\bar{\Upsilon}}=\left( \eta ^{a},C^{A},\eta _{a}^{\mu \nu \rho \lambda },C_{\mu \nu \rho \lambda }^{a}\right) . \label{notat1}$$In formula (\[3.7\]) we used the notation $$\tilde{F}_{\mu \nu \rho }^{A}=\partial _{\lbrack \mu }^{\left. {}\right. }\tilde{V}_{\nu \rho ]}^{A},\quad \tilde{V}_{\mu \nu }^{A}\equiv \tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }V^{A\rho \lambda }. \label{notat2}$$(The derivatives of the ghosts $\eta ^{\bar{\Upsilon}}$ are removed from $H\left( \gamma \right) $ since they are $\gamma $-exact, in agreement with the first relation from (\[bfa21\]), the last formula in (\[bfa23\]), the second equation in (\[bfa23\]), and the first definition from (\[bfa24\]).) If we denote by $e^{M}\left( \eta ^{\bar{\Upsilon}}\right) $ the elements with pure ghost number $M$ of a basis in the space of the polynomials in the ghosts (\[notat1\]), then it follows that the general solution to equation (\[3.6\]) takes the form $$a_{I}=\alpha _{I}\left( \left[ F_{\bar{A}}\right] ,\left[ \chi _{\Delta }^{\ast }\right] \right) e^{I}\left( \eta ^{\bar{\Upsilon}}\right) , \label{3.8}$$where $\text{agh}\left( \alpha _{I}\right) =I$ and $\text{pgh}\left( e^{I}\right) =I$. The notation $f([q])$ means that $f$ depends on $q$ and its spacetime derivatives up to a finite order. The objects $\alpha _{I}$ (obviously nontrivial in $H^{0}\left( \gamma \right) $) will be called “invariant polynomials”. The result that we can replace equation (\[3.3\]) with the less obvious one (\[3.6\]) is a nice consequence of the fact that the cohomology of the exterior spacetime differential is trivial in the space of invariant polynomials in strictly positive antighost numbers. Inserting (\[3.8\]) in (\[3.4\]) we obtain that a necessary (but not sufficient) condition for the existence of (nontrivial) solutions $a_{I-1}$ is that the invariant polynomials $\alpha _{I}$ are (nontrivial) objects from the local cohomology of the Koszul-Tate differential $H\left( \delta \vert d\right) $ in antighost number $I>0$ and in pure ghost number $0$, $$\delta \alpha _{I}=\partial _{\mu }\overset{\left( I-1\right) }{j}^{\mu },\quad \text{\textrm{agh}}\left( \overset{\left( I-1\right) }{j}^{\mu }\right) =I-1,\quad \text{\textrm{pgh}}\left( \overset{\left( I-1\right) }{j}^{\mu }\right) =0. \label{3.10a}$$We recall that the local cohomology $H\left( \delta \vert d\right) $ is completely trivial in both strictly positive antighost *and* pure ghost numbers (for instance, see [@gen1a], Theorem 5.4, and [@gen1b]), so from now on it is understood that by $H\left( \delta \vert d\right) $ we mean the local cohomology of $\delta $ at pure ghost number $0$.
Using the fact that the free BF model under study is a linear gauge theory of Cauchy order equal to $4$ and the general result from [@gen1a; @gen1b], according to which the local cohomology of the Koszul-Tate differential is trivial in antighost numbers strictly greater than its Cauchy order, we can state that $$H_{J}\left( \delta \vert d\right) =0\quad \text{\textrm{for\ all}}\quad J>4, \label{3.11}$$where $H_{J}\left( \delta \vert d\right) $ represents the local cohomology of the Koszul-Tate differential in antighost number $J$. Moreover, if the invariant polynomial $\alpha _{J}$, with $\mathrm{agh}\left( \alpha _{J}\right) =J\geq 4$, is trivial in $H_{J}\left( \delta \vert d\right) $, then it can be taken to be trivial also in $H_{J}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $$$\left( \alpha _{J}=\delta b_{J+1}+\partial _{\mu }\overset{(J)}{c}^{\mu },\text{\textrm{agh}}\left( \alpha _{J}\right) =J\geq 4\right) \Rightarrow \alpha _{J}=\delta \beta _{J+1}+\partial _{\mu }\overset{(J)}{\gamma }^{\mu }, \label{3.12ax}$$with both $\beta _{J+1}$ and $\overset{(J)}{\gamma }^{\mu }$ invariant polynomials. Here, $H_{J}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $ denotes the invariant characteristic cohomology in antighost number $J$ (the local cohomology of the Koszul-Tate differential in the space of invariant polynomials). (An element of $H_{I}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $ is defined via an equation like (\[3.10a\]), but with the corresponding current an invariant polynomial.) This result together with (\[3.11\]) ensures that the entire invariant characteristic cohomology in antighost numbers strictly greater than $4$ is trivial $$H_{J}^{\text{\textrm{inv}}}\left( \delta \vert d\right) =0\quad \text{\textrm{for all}}\quad J>4. \label{3.12x}$$ The nontrivial representatives of $H_{J}(\delta \vert d)$ and of $H_{J}^{\mathrm{inv}}(\delta \vert d)$ for $J\geq 2$ depend neither on $\left( \partial _{\lbrack \mu }^{\left. {}\right. }A_{\nu ]}^{a},\partial ^{\mu }H_{\mu }^{a},\partial _{\mu }B_{a}^{\mu \nu },\tilde{F}_{\mu \nu \rho }^{A}\right) $ nor on the spacetime derivatives of $F_{\bar{A}}$ defined in (\[3.7\]), but only on the undifferentiated scalar fields and auxiliary vector fields from the two-form sector, $\left( \varphi _{a},V_{\mu }^{A}\right) $. With the help of relations (\[bfa15\])–(\[bfa19a\]), it can be shown that $H_{4}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $ is generated by the elements $$\begin{aligned} \left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu \rho \lambda } &=&\frac{\partial W_{\Lambda }}{\partial \varphi _{a}}C_{a}^{\ast \mu \nu \rho \lambda }+\frac{\partial ^{2}W_{\Lambda }}{\partial \varphi _{a}\partial \varphi _{b}}\left( H_{a}^{\ast \lbrack \mu }C_{b}^{\ast \nu \rho \lambda ]}+C_{a}^{\ast \lbrack \mu \nu }C_{b}^{\ast \rho \lambda ]}\right) \notag \\ &&+\frac{\partial ^{3}W_{\Lambda }}{\partial \varphi _{a}\partial \varphi _{b}\partial \varphi _{c}}H_{a}^{\ast \lbrack \mu }H_{b}^{\ast \nu }C_{c}^{\ast \rho \lambda ]} \notag \\ &&+\frac{\partial ^{4}W_{\Lambda }}{\partial \varphi _{a}\partial \varphi _{b}\partial \varphi _{c}\partial \varphi _{d}}H_{a}^{\ast \mu }H_{b}^{\ast \nu }H_{c}^{\ast \rho }H_{d}^{\ast \lambda }, \label{3.13}\end{aligned}$$where $W_{\Lambda }=W_{\Lambda }\left( \varphi _{a}\right) $ are arbitrary, smooth functions depending only on the undifferentiated scalar fields $\varphi _{a}$ and $\Lambda $ is some multi-index (composed of internal and/or Lorentz indices).
Indeed, direct computation yields $$\delta \left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu \rho \lambda }=\partial _{\left. {}\right. }^{[\mu }\left( P_{\Lambda }\left( W\right) \right) ^{\nu \rho \lambda ]},\quad \mathrm{agh}\left( \left( P_{\Lambda }\left( W\right) \right) ^{\nu \rho \lambda }\right) =3, \label{3.13a}$$where we made the notation $$\begin{aligned} \left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu \rho } &=&\frac{\partial W_{\Lambda }}{\partial \varphi _{a}}C_{a}^{\ast \mu \nu \rho }+\frac{\partial ^{2}W_{\Lambda }}{\partial \varphi _{a}\partial \varphi _{b}}H_{a}^{\ast \lbrack \mu }C_{b}^{\ast \nu \rho ]} \notag \\ &&+\frac{\partial ^{3}W_{\Lambda }}{\partial \varphi _{a}\partial \varphi _{b}\partial \varphi _{c}}H_{a}^{\ast \mu }H_{b}^{\ast \nu }H_{c}^{\ast \rho }. \label{3.14}\end{aligned}$$It is clear that $\left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu \rho } $ is an invariant polynomial. By applying the operator $\delta $ on it, we have that $$\delta \left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu \rho }=-\partial _{\left. {}\right. }^{[\mu }\left( P_{\Lambda }\left( W\right) \right) ^{\nu \rho ]},\quad \mathrm{agh}\left( \left( P_{\Lambda }\left( W\right) \right) ^{\nu \rho }\right) =2, \label{3.14a}$$where we employed the convention $$\left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu }=\frac{\partial W_{\Lambda }}{\partial \varphi _{a}}C_{a}^{\ast \mu \nu }+\frac{\partial ^{2}W_{\Lambda }}{\partial \varphi _{a}\partial \varphi _{b}}H_{a}^{\ast \mu }H_{b}^{\ast \nu }. \label{3.15}$$Since $\left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu }$ is also an invariant polynomial, from (\[3.14a\]) it follows that $\left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu \rho }$ belongs to $H_{3}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $. Moreover, further calculations produce$$\delta \left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu }=\partial _{\left. {}\right. }^{[\mu }\left( P_{\Lambda }\left( W\right) \right) ^{\nu ]},\quad \mathrm{agh}\left( \left( P_{\Lambda }\left( W\right) \right) ^{\nu }\right) =1, \label{3.15a}$$with $$\left( P_{\Lambda }\left( W\right) \right) ^{\mu }=\frac{\partial W_{\Lambda }}{\partial \varphi _{a}}H_{a}^{\ast \mu }. \label{3.16}$$Due to the fact that $\left( P_{\Lambda }\left( W\right) \right) ^{\mu }$ is an invariant polynomial, we deduce that $\left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu }$ pertains to $H_{2}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $. Using again the actions of $\delta $ on the BRST generators, it can be proved that $H_{3}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $ is spanned, beside the elements $\left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu \rho }$ given in (\[3.14\]), also by the objects $$\begin{aligned} Q_{\Lambda }\left( f\right) &=&f_{\Lambda }^{A}C_{A}^{\ast }-\left( P_{\Lambda }^{A}\left( f\right) \right) ^{\mu }C_{A\mu }^{\ast }-\tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }\left( \tfrac{1}{3}\left( P_{\Lambda }^{A}\left( f\right) \right) ^{\mu \nu \rho }V_{A}^{\lambda }\right. \notag \\ &&\left. +\left( P_{\Lambda }^{A}\left( f\right) \right) ^{\mu \nu }V_{A}^{\ast \rho \lambda }\right) \label{p}\end{aligned}$$and by the undifferentiated antifields $\eta _{\mu \nu \rho \lambda }^{\ast a}$ (according to the first definition from (\[bfa19a\])). 
In formula (\[p\]) $f_{\Lambda }^{A}=f_{\Lambda }^{A}\left( \varphi _{a}\right) $ are some arbitrary, smooth functions of the undifferentiated scalar fields $\varphi _{a}$ carrying at least one internal index $A$ from the two-form sector and possibly a supplementary multi-index $\Lambda $. The factors $\left( P_{\Lambda }^{A}\left( f\right) \right) ^{\mu }$, $\left( P_{\Lambda }^{A}\left( f\right) \right) ^{\mu \nu }$, and $\left( P_{\Lambda }^{A}\left( f\right) \right) ^{\mu \nu \rho }$ read as in (\[3.16\]), (\[3.15\]), and (\[3.14\]), respectively, with $W_{\Lambda }\left( \varphi _{a}\right) \rightarrow f_{\Lambda }^{A}\left( \varphi _{a}\right) $. Concerning $Q_{\Lambda }\left( f\right) $, we have that $$\delta Q_{\Lambda }\left( f\right) =\partial _{\mu }\left( Q_{\Lambda }\left( f\right) \right) ^{\mu },\quad \mathrm{agh}\left( \left( Q_{\Lambda }\left( f\right) \right) ^{\mu }\right) =2, \label{pa}$$where we employed the notation $$\left( Q_{\Lambda }\left( f\right) \right) ^{\mu }=f_{\Lambda }^{A}C_{A}^{\ast \mu }+\varepsilon ^{\mu \nu \rho \lambda }\left( \left( P_{\Lambda }^{A}\left( f\right) \right) _{\nu }V_{A\rho \lambda }^{\ast }+\tfrac{1}{2}\left( P_{\Lambda }^{A}\left( f\right) \right) _{\nu \rho }V_{A\lambda }\right) . \label{pm}$$With the help of definitions (\[bfa15\])–(\[bfa19a\]) it can be checked that $$\delta \left( Q_{\Lambda }\left( f\right) \right) ^{\mu }=\partial _{\nu }\left( Q_{\Lambda }\left( f\right) \right) ^{\mu \nu },\quad \mathrm{agh}\left( \left( Q_{\Lambda }\left( f\right) \right) ^{\mu \nu }\right) =1, \label{3.17}$$where we made the notation $$\left( Q_{\Lambda }\left( f\right) \right) ^{\mu \nu }=\varepsilon ^{\mu \nu \rho \lambda }\left( f_{\Lambda }^{A}V_{A\rho \lambda }^{\ast }+\left( P_{\Lambda }^{A}\left( f\right) \right) _{\rho }V_{A\lambda }\right) . \label{3.17a}$$Direct computation shows that the objects$$\begin{aligned} R_{\Lambda }\left( g\right) &=&g_{\Lambda }^{AB}\left( C_{A}^{\ast \mu }V_{B\mu }+\tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }V_{A}^{\ast \mu \nu }V_{B}^{\ast \rho \lambda }\right) \notag \\ &&-\varepsilon _{\mu \nu \rho \lambda }\left( \left( P_{\Lambda }^{AB}\left( g\right) \right) ^{\mu }V_{A}^{\ast \nu \rho }+\tfrac{1}{4}\left( P_{\Lambda }^{AB}\left( g\right) \right) ^{\mu \nu }V_{A}^{\rho }\right) V_{B}^{\lambda } \label{q}\end{aligned}$$satisfy $$\delta R_{\Lambda }\left( g\right) =\partial ^{\mu }\left( R_{\Lambda }\left( g\right) \right) _{\mu },\quad \mathrm{agh}\left( \left( R_{\Lambda }\left( g\right) \right) _{\mu }\right) =1, \label{qa}$$with $$\left( R_{\Lambda }\left( g\right) \right) _{\mu }=-\varepsilon _{\mu \nu \rho \lambda }\left( g_{\Lambda }^{AB}V_{A}^{\ast \nu \rho }+\tfrac{1}{2}\left( P_{\Lambda }^{AB}\left( g\right) \right) ^{\nu }V_{A}^{\rho }\right) V_{B}^{\lambda }. \label{qm}$$In formulas (\[q\]) and (\[qm\]) $g_{\Lambda }^{AB}=g_{\Lambda }^{AB}\left( \varphi _{a}\right) $ stand for some smooth functions of the undifferentiated scalar fields that in addition are antisymmetric with respect to $A$ and $B$$$g_{\Lambda }^{AB}=-g_{\Lambda }^{BA}. \label{gABlambda}$$Looking at their expressions, it is easy to see that all the quantities denoted by $Q$s or $R$s are invariant polynomials.
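It is perhaps worth noting (a quick check, spelled out for completeness) that the descent (\[3.13a\]), (\[3.14a\]), and (\[3.15a\]) associated with the $P$-type elements terminates at antighost number $1$: using $\delta H_{a}^{\ast \mu }=-\partial ^{\mu }\varphi _{a}$ from (\[bfa16\]), one finds $$\delta \left( P_{\Lambda }\left( W\right) \right) ^{\mu }=-\frac{\partial W_{\Lambda }}{\partial \varphi _{a}}\partial ^{\mu }\varphi _{a}=-\partial ^{\mu }W_{\Lambda },$$so the last current in the chain is simply (minus) the undifferentiated function $W_{\Lambda }$, which is $\delta $-closed.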
Putting together the above results we can state that $H_{2}^{\text{\textrm{inv}}}\left( \delta \vert d\right) $ is spanned by $\left( P_{\Lambda }\left( W\right) \right) ^{\mu \nu }$ listed in (\[3.15\]), $\left( Q_{\Lambda }\left( f\right) \right) ^{\mu }$ expressed by (\[pm\]), $R_{\Lambda }\left( g\right) $ given in (\[q\]), and the undifferentiated antifields $\eta _{\mu \nu \rho }^{\ast a}$ and $\eta _{a}^{\ast }$ (in agreement with the last formula from (\[bfa18\]) and the first definition in (\[bfa18\])). In contrast to the spaces $\left( H_{J}(\delta \vert d)\right) _{J\geq 2}$ and $\left( H_{J}^{\mathrm{inv}}(\delta \vert d)\right) _{J\geq 2}$, which are finite-dimensional, the cohomology $H_{1}(\delta \vert d)$ (known to be related to global symmetries and ordinary conservation laws) is infinite-dimensional since the theory is free. Fortunately, it will not be needed in the sequel. The previous results on $H(\delta \vert d)$ and $H^{\mathrm{inv}}(\delta \vert d)$ in strictly positive antighost numbers are important because they control the obstructions to removing the antifields from the first-order deformation. More precisely, we can successively eliminate all the pieces of antighost number strictly greater than $4$ from the nonintegrated density of the first-order deformation by adding solely trivial terms, so we can take, without loss of nontrivial objects, the condition $I\leq 4$ into (\[3.2\]). In addition, the last representative is of the form (\[3.8\]), where the invariant polynomial is necessarily a nontrivial object from $H_{4}^{\mathrm{inv}}(\delta \vert d)$. First-order deformation\[firstord\] ----------------------------------- In the case $I=4$ the nonintegrated density of the first-order deformation (see (\[3.2\])) becomes $$a=a_{0}+a_{1}+a_{2}+a_{3}+a_{4}. \label{fo1}$$We can further decompose $a$ in a natural manner as the sum of two kinds of deformations $$a=a^{\left( \mathrm{BF}\right) }+a^{\left( \mathrm{int}\right) }, \label{fo2}$$where $a^{\left( \mathrm{BF}\right) }$ contains only fields/ghosts/antifields from the BF sector and $a^{\left( \mathrm{int}\right) }$ describes the cross-interactions between the two theories. Strictly speaking, we should have added to (\[fo2\]) also a component $a^{\left( \mathrm{V}\right) }$ that involves only the two-form field sector. As will be seen at the end of this subsection, $a^{\left( \mathrm{V}\right) }$ will be automatically included into $a^{\left( \mathrm{int}\right) }$. The piece $a^{\left( \mathrm{BF}\right) }$ is completely known (see [@defBFijmpa; @defBFepjc; @defBFijmpajuvi06]) and (separately) satisfies an equation of the type (\[3.1\]). It admits a decomposition similar to (\[fo1\]) $$a^{\left( \mathrm{BF}\right) }=a_{0}^{\left( \mathrm{BF}\right) }+a_{1}^{\left( \mathrm{BF}\right) }+a_{2}^{\left( \mathrm{BF}\right) }+a_{3}^{\left( \mathrm{BF}\right) }+a_{4}^{\left( \mathrm{BF}\right) }, \label{descBF}$$where $$\begin{aligned} a_{4}^{\left( \mathrm{BF}\right) } &=&\left( P_{ab}\left( W\right) \right) ^{\mu \nu \rho \lambda }\eta ^{a}C_{\mu \nu \rho \lambda }^{b}-\tfrac{1}{4}\left( P_{ab}^{c}\left( M\right) \right) _{\mu \nu \rho \lambda }\eta ^{a}\eta ^{b}\eta _{c}^{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }\left( \left( P^{ab}\left( M\right) \right) ^{\mu \nu \rho \lambda }\eta _{a\alpha \beta \gamma \delta }\eta _{b}^{\alpha \beta \gamma \delta }\right. \notag \\ &&\left.
-\tfrac{1}{2\cdot \left( 4!\right) ^{2}}\left( P_{abcd}\left( M\right) \right) ^{\mu \nu \rho \lambda }\eta ^{a}\eta ^{b}\eta ^{c}\eta ^{d}\right) , \label{a4}\end{aligned}$$$$\begin{aligned} a_{3}^{\left( \mathrm{BF}\right) } &=&\left( P_{ab}\left( W\right) \right) ^{\mu \nu \rho }\left( -\eta ^{a}C_{\mu \nu \rho }^{b}+4A^{a\lambda }C_{\mu \nu \rho \lambda }^{b}\right) \notag \\ &&+2\left( 6\left( P_{ab}\left( W\right) \right) ^{\mu \nu }B^{\ast a\rho \lambda }+4\left( P_{ab}\left( W\right) \right) ^{\mu }\eta ^{\ast a\nu \rho \lambda }+W_{ab}\eta ^{\ast a\mu \nu \rho \lambda }\right) C_{\mu \nu \rho \lambda }^{b} \notag \\ &&+\tfrac{1}{2}\left( P_{ab}^{c}\left( M\right) \right) _{\mu \nu \rho }\left( \tfrac{1}{2}\eta ^{a}\eta ^{b}\eta _{c}^{\mu \nu \rho }-4A_{\lambda }^{a}\eta ^{b}\eta _{c}^{\mu \nu \rho \lambda }\right) \notag \\ &&-\left( 6\left( P_{ab}^{c}\left( M\right) \right) _{\mu \nu }B_{\rho \lambda }^{\ast a}+4\left( P_{ab}^{c}\left( M\right) \right) _{\mu }\eta _{\nu \rho \lambda }^{\ast a}+M_{ab}^{c}\eta _{\mu \nu \rho \lambda }^{\ast a}\right) \eta ^{b}\eta _{c}^{\mu \nu \rho \lambda } \notag \\ &&-\varepsilon _{\mu \nu \rho \lambda }\left( P^{ab}\left( M\right) \right) _{\alpha \beta \gamma }\eta _{a}^{\alpha \beta \gamma }\eta _{b}^{\mu \nu \rho \lambda }-\tfrac{1}{3!\cdot 4!}\varepsilon ^{\mu \nu \rho \lambda }\left( \left( P_{abcd}\left( M\right) \right) _{\mu \nu \rho }A_{\lambda }^{a}\right. \notag \\ &&+3\left( P_{abcd}\left( M\right) \right) _{\mu \nu }B_{\rho \lambda }^{\ast a}+2\left( P_{abcd}\left( M\right) \right) _{\mu }\eta _{\nu \rho \lambda }^{\ast a} \notag \\ &&\left. +M_{abcd}\eta _{\mu \nu \rho \lambda }^{\ast a}\right) \eta ^{b}\eta ^{c}\eta ^{d}, \label{a3}\end{aligned}$$$$\begin{aligned} a_{2}^{\left( \mathrm{BF}\right) } &=&\left( P_{ab}\left( W\right) \right) ^{\mu \nu }\left( \eta ^{a}C_{\mu \nu }^{b}-3A^{a\rho }C_{\mu \nu \rho }^{b}\right) -2\left( 3\left( P_{ab}\left( W\right) \right) ^{\mu }B^{\ast a\nu \rho }\right. \notag \\ &&\left. +W_{ab}\eta ^{\ast a\mu \nu \rho }\right) C_{\mu \nu \rho }^{b}-\tfrac{1}{2}\left( P_{ab}^{c}\left( M\right) \right) ^{\mu \nu }\left( \tfrac{1}{2}\eta ^{a}\eta ^{b}B_{c\mu \nu }-3A^{a\rho }\eta ^{b}\eta _{c\mu \nu \rho }\right) \notag \\ &&+\left( 3\left( P_{ab}^{c}\left( M\right) \right) _{\mu }B_{\nu \rho }^{\ast a}+M_{ab}^{c}\eta _{\mu \nu \rho }^{\ast a}\right) \eta ^{b}\eta _{c}^{\mu \nu \rho }+\tfrac{1}{2}\left( -\left( P_{ab}^{c}\left( M\right) \right) _{\mu }A_{c}^{\ast \mu }\right. \notag \\ &&\left. +M_{ab}^{c}\eta _{c}^{\ast }\right) \eta ^{a}\eta ^{b}+\left( 3\left( P_{ab}^{c}\left( M\right) \right) _{\mu \nu }A_{\rho }^{a}+12\left( P_{ab}^{c}\left( M\right) \right) _{\mu }B_{\nu \rho }^{\ast a}\right. \notag \\ &&\left. +4M_{ab}^{c}\eta _{\mu \nu \rho }^{\ast a}\right) A_{\lambda }^{b}\eta _{c}^{\mu \nu \rho \lambda }+\tfrac{9}{2}\varepsilon ^{\mu \nu \rho \lambda }\left( P^{ab}\left( M\right) \right) _{\mu \nu }\eta _{a\rho \alpha \beta }\eta _{b\lambda }^{\;\;\;\alpha \beta } \notag \\ &&-6M_{ab}^{c}B_{\mu \nu }^{\ast a}B_{\rho \lambda }^{\ast b}\eta _{c}^{\mu \nu \rho \lambda }+\tfrac{1}{4\cdot 4!}\varepsilon ^{\mu \nu \rho \lambda }\left( 3\left( P_{abcd}\left( M\right) \right) _{\mu \nu }A_{\rho }^{a}A_{\lambda }^{b}\right. \notag \\ &&\left. 
+12\left( P_{abcd}\left( M\right) \right) _{\mu }B_{\nu \rho }^{\ast a}A_{\lambda }^{b}+4M_{abcd}\eta _{\mu \nu \rho }^{\ast a}A_{\lambda }^{b}-6M_{abcd}B_{\mu \nu }^{\ast a}B_{\rho \lambda }^{\ast b}\right) \eta ^{c}\eta ^{d} \notag \\ &&+\varepsilon _{\mu \nu \rho \lambda }\left( 2\left( P^{ab}\left( M\right) \right) _{\alpha }A_{a}^{\ast \alpha }-2M^{ab}\eta _{a}^{\ast }\right. \notag \\ &&\left. +\left( P^{ab}\left( M\right) \right) _{\alpha \beta }B_{a}^{\alpha \beta }\right) \eta _{b}^{\mu \nu \rho \lambda }, \label{a2}\end{aligned}$$$$\begin{aligned} a_{1}^{\left( \mathrm{BF}\right) } &=&\left( P_{ab}\left( W\right) \right) ^{\mu }\left( -\eta ^{a}H_{\mu }^{b}+2A^{a\nu }C_{\mu \nu }^{b}\right) +W_{ab}\left( 2B_{\mu \nu }^{\ast a}C^{b\mu \nu }-\varphi ^{\ast a}\eta ^{b}\right) \notag \\ &&-\left( P_{ab}^{c}\left( M\right) \right) _{\mu }A_{\nu }^{a}\left( \eta ^{b}B_{c}^{\mu \nu }+\tfrac{3}{2}A_{\rho }^{b}\eta _{c}^{\mu \nu \rho }\right) -M_{ab}^{c}\left( B_{\mu \nu }^{\ast a}\eta ^{b}B_{c}^{\mu \nu }\right. \notag \\ &&\left. +A_{\mu }^{a}\eta ^{b}A_{c}^{\ast \mu }+3B_{\mu \nu }^{\ast a}A_{\rho }^{b}\eta _{c}^{\mu \nu \rho }\right) \notag \\ &&+2\varepsilon _{\nu \rho \sigma \lambda }\left( \left( P^{ab}\left( M\right) \right) _{\mu }B_{a}^{\mu \nu }-M^{ab}A_{a}^{\ast \nu }\right) \eta _{b}^{\rho \sigma \lambda } \notag \\ &&+\tfrac{1}{4!}\varepsilon ^{\mu \nu \rho \lambda }\left( \left( P_{abcd}\left( M\right) \right) _{\mu }A_{\nu }^{a}+3M_{abcd}B_{\mu \nu }^{\ast a}\right) A_{\rho }^{b}A_{\lambda }^{c}\eta ^{d}, \label{a1}\end{aligned}$$$$\begin{aligned} a_{0}^{\left( \mathrm{BF}\right) } &=&-W_{ab}A^{a\mu }H_{\mu }^{b}+\tfrac{1}{2}M_{ab}^{c}A_{\mu }^{a}A_{\nu }^{b}B_{c}^{\mu \nu } \notag \\ &&+\tfrac{1}{2}\varepsilon ^{\mu \nu \rho \lambda }\left( M^{ab}B_{a\mu \nu }B_{b\rho \lambda }-\tfrac{1}{2\cdot 4!}M_{abcd}A_{\mu }^{a}A_{\nu }^{b}A_{\rho }^{c}A_{\lambda }^{d}\right) . \label{a0}\end{aligned}$$In (\[a4\])–(\[a0\]) the quantities denoted by $\left( P_{ab}\left( W\right) \right) ^{\mu _{1}\ldots \mu _{k}}$, $\left( P_{ab}^{c}\left( M\right) \right) ^{\mu _{1}\ldots \mu _{k}}$, $\left( P^{ab}\left( M\right) \right) ^{\mu _{1}\ldots \mu _{k}}$, and $\left( P_{abcd}\left( M\right) \right) ^{\mu _{1}\ldots \mu _{k}}$ read as in (\[3.13\]), (\[3.14\]), (\[3.15\]), and (\[3.16\]) for $k=4$, $k=3$, $k=2$, and $k=1$, respectively, modulo the successive replacement of $W_{\Lambda }\left( \varphi _{a}\right) $ with the functions $W_{ab}$, $M_{ab}^{c}$, $M^{ab}$, and $M_{abcd}$, respectively. The last four kinds of functions depend only on the undifferentiated scalar fields and satisfy various symmetry/antisymmetry properties: $M_{ab}^{c}$ are antisymmetric in their lower indices, $M^{ab}$ are symmetric, and $M_{abcd}$ are completely antisymmetric. Due to the fact that $a^{\left( \mathrm{BF}\right) }$ and $a^{\left( \mathrm{int}\right) }$ involve different types of fields and $a^{\left( \mathrm{BF}\right) }$ separately satisfies an equation of the type (\[3.1\]), it follows that $a^{\left( \mathrm{int}\right) }$ is subject to the equation $$sa^{\left( \mathrm{int}\right) }=\partial _{\mu }m^{\left( \mathrm{int}\right) \mu }, \label{fo3}$$for some local current $m^{\left( \mathrm{int}\right) \mu }$. In the sequel we determine the general solution to (\[fo3\]) that complies with all the hypotheses mentioned in the beginning of the previous subsection. 
In agreement with (\[fo1\]), the solution to the equation $sa^{\left( \mathrm{int}\right) }=\partial _{\mu }m^{\left( \mathrm{int}\right) \mu }$ can be decomposed as $$a^{\left( \mathrm{int}\right) }=a_{0}^{\left( \mathrm{int}\right) }+a_{1}^{\left( \mathrm{int}\right) }+a_{2}^{\left( \mathrm{int}\right) }+a_{3}^{\left( \mathrm{int}\right) }+a_{4}^{\left( \mathrm{int}\right) }, \label{fo5}$$where the components on the right-hand side of (\[fo5\]) are subject to the equations $$\begin{aligned} \gamma a_{4}^{\left( \mathrm{int}\right) } &=&0, \label{fo6a} \\ \delta a_{k}^{\left( \mathrm{int}\right) }+\gamma a_{k-1}^{\left( \mathrm{int}\right) } &=&\partial _{\mu }\overset{(k-1)}{m}^{\left( \mathrm{int}\right) \mu },\quad k=\overline{1,4}. \label{fo6b}\end{aligned}$$The piece $a_{4}^{\left( \mathrm{int}\right) }$ as solution to equation ([fo6a]{}) has the general form expressed by (\[3.8\]) for $I=4$, with $\alpha _{4}$ from $H_{4}^{\mathrm{inv}}(\delta \vert d)$ and $e^{4}$ spanned by $$\left( \eta ^{a}\eta ^{b}\eta ^{c}\eta ^{d},\eta ^{a}\eta ^{b}\eta _{c}^{\mu \nu \rho \lambda },\eta ^{a}C_{\mu \nu \rho \lambda }^{b},\eta _{a}^{\mu \nu \rho \lambda }\eta _{b}^{\alpha \beta \gamma \delta },\eta ^{a}\eta ^{b}C^{A},C^{A}C^{B},C^{A}\eta _{a}^{\mu \nu \rho \lambda }\right) . \label{fo7}$$Taking into account the result that the general representative of $H_{4}^{\mathrm{inv}}(\delta \vert d)$ is given by (\[3.13\]) and recalling that $a_{4}^{\left( \mathrm{int}\right) }$ should mix the BF and the two-form sectors (in order to provide cross-couplings), it follows that the eligible representatives of $e^{4}$ from (\[fo7\]) allowed to enter $a_{4}^{\left( \mathrm{int}\right) }$ are those elements containing at least one ghost of the type $C^{A}$. Therefore, up to trivial, $\gamma $-exact terms, we can write $$\begin{aligned} a_{4}^{\left( \mathrm{int}\right) }&=&\tfrac{1}{2\cdot 4!}\varepsilon _{\mu \nu \rho \lambda }\left( \left( P_{abA}\left( N\right) \right) ^{\mu \nu \rho \lambda }\eta ^{a}\eta ^{b}C^{A}+\left( P_{AB}\left( N\right) \right) ^{\mu \nu \rho \lambda }C^{A}C^{B}\right) \notag \\ &&+\left( P_{A}^{a}\left( N\right) \right) _{\mu \nu \rho \lambda }C^{A}\eta _{a}^{\mu \nu \rho \lambda }, \label{fo8}\end{aligned}$$where the objects denoted by $\left( P_{abA}\left( N\right) \right) ^{\mu \nu \rho \lambda }$, $\left( P_{AB}\left( N\right) \right) ^{\mu \nu \rho \lambda }$, and respectively $\left( P_{A}^{a}\left( N\right) \right) _{\mu \nu \rho \lambda }$ are expressed as in (\[3.13\]), being generated by the arbitrary, smooth functions of the undifferentiated scalar fields $N_{abA}\left( \varphi _{m}\right) $, $N_{AB}\left( \varphi _{m}\right) $, and $N_{A}^{a}\left( \varphi _{m}\right) $, respectively. In addition, the functions $N_{abA}\left( \varphi _{m}\right) $ and $N_{AB}\left( \varphi _{m}\right) $ satisfy the symmetry/antisymmetry properties $$N_{abA}\left( \varphi _{m}\right) =-N_{baA}\left( \varphi _{m}\right) ,\;N_{AB}\left( \varphi _{m}\right) =N_{BA}\left( \varphi _{m}\right) . 
\label{fo9}$$ Inserting (\[fo8\]) into equation (\[fo6b\]) for $k=4$ and using definitions (\[bfa15\])–(\[bfa24\]), after some computation we obtain the interacting piece of antighost number $3$ from the first-order deformation in the form $$\begin{aligned} a_{3}^{\left( \mathrm{int}\right) }&=&-\left( P_{A}^{a}\left( N\right) \right) _{\mu \nu \rho }\left( C^{A}\eta _{a}^{\mu \nu \rho }+4C_{\lambda }^{A}\eta _{a}^{\mu \nu \rho \lambda }\right) \notag \\ &&-\tfrac{1}{3!}\varepsilon ^{\mu \nu \rho \lambda }\left[ \left( P_{abA}\left( N\right) \right) _{\mu \nu \rho }\eta ^{a}\left( A_{\lambda }^{b}C^{A}+\tfrac{1}{2}\eta ^{b}C_{\lambda }^{A}\right) \right. \notag \\ &&+\left( P_{AB}\left( N\right) \right) _{\mu \nu \rho }C^{A}C_{\lambda }^{B}-\left( 3\left( P_{abA}\left( N\right) \right) _{\mu \nu }B_{\rho \lambda }^{\ast a}\right. \notag \\ &&\left. \left. +2\left( P_{abA}\left( N\right) \right) _{\mu }\eta _{\nu \rho \lambda }^{\ast a}+\tfrac{1}{2}N_{abA}\eta _{\mu \nu \rho \lambda }^{\ast a}\right) \eta ^{b}C^{A}\right] \notag \\ &&+Q_{aA}\left( f\right) \eta ^{a}C^{A} +\tfrac{1}{3!}Q_{abc}\left( f\right) \eta ^{a}\eta ^{b}\eta ^{c} \notag \\ &&+\tfrac{1}{4!}\varepsilon _{\alpha \beta \gamma \delta }\left( Q_{\;\;b}^{a}\left( f\right) \eta ^{b}\eta _{a}^{\alpha \beta \gamma \delta }+Q_{a}\left( f\right) C^{a\alpha \beta \gamma \delta }\right) . \label{fo10}\end{aligned}$$(Solution (\[fo10\]) embeds also the general solution to the homogeneous equation $\gamma \bar{a}_{3}^{\left( \mathrm{int}\right) }=0$.) The elements denoted by $Q_{aA}\left( f\right) $, $Q_{abc}\left( f\right) $, $Q_{\;\;b}^{a}\left( f\right) $, and $Q_{a}\left( f\right) $ are generated via formula (\[p\]) by the smooth functions (of the undifferentiated scalar fields) $f_{aB}^{A}$, $f_{abc}^{A}$, $f_{\;\;b}^{Aa}$, and $f_{a}^{A}$, respectively. In addition, the functions $f_{abc}^{A}$ are completely antisymmetric in their BF collection indices. 
The interacting component of antighost number $2$ results as solution to equation (\[fo6b\]) for $k=3$ by relying on formula (\[fo10\]) and definitions (\[bfa15\])–(\[bfa24\]), and takes the form $$\begin{aligned} a_{2}^{\prime \left( \mathrm{int}\right) } &=&-\tfrac{1}{2}\left( P_{AB}\left( N\right) \right) ^{\mu \nu }\left( C^{A}V_{\mu \nu }^{B}-\tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }C^{A\rho }C^{B\lambda }\right) \notag \\ &&-\tfrac{1}{4}\left( P_{abA}\left( N\right) \right) ^{\mu \nu }\left[ \eta ^{a}\eta ^{b}V_{\mu \nu }^{A}+\varepsilon _{\mu \nu \rho \lambda }\left( 2A^{a\rho }\eta ^{b}C^{A\lambda }+A^{a\rho }A^{b\lambda }C^{A}\right) \right] \notag \\ &&+\left( P_{A}^{a}\left( N\right) \right) _{\mu \nu }\left( C^{A}B_{a}^{\mu \nu }+3C_{\rho }^{A}\eta _{a}^{\mu \nu \rho }+\tfrac{1}{2}\varepsilon _{\alpha \beta \gamma \delta }V^{A\mu \nu }\eta _{a}^{\alpha \beta \gamma \delta }\right) \notag \\ &&-\varepsilon ^{\mu \nu \rho \lambda }\left( \left( P_{abA}\left( N\right) \right) _{\mu }B_{\nu \rho }^{\ast a}+\tfrac{1}{3}N_{abA}\eta _{\mu \nu \rho }^{\ast a}\right) \left( A_{\lambda }^{b}C^{A}+\eta ^{b}C_{\lambda }^{A}\right) \notag \\ &&+\tfrac{1}{4!}\varepsilon ^{\mu \nu \rho \lambda }\left( Q_{a}\left( f\right) \right) _{\mu }C_{\nu \rho \lambda }^{a}-\left( Q_{aA}\left( f\right) \right) _{\mu }\left( A^{a\mu }C^{A}+\eta ^{a}C^{A\mu }\right) \notag \\ &&-\tfrac{1}{4!}\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu }\left( \varepsilon _{\alpha \beta \gamma \delta }A_{\mu }^{b}\eta _{a}^{\alpha \beta \gamma \delta }-\varepsilon _{\mu \alpha \beta \gamma }\eta ^{b}\eta _{a}^{\alpha \beta \gamma }\right) \notag \\ &&-\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu }A_{\mu }^{a}\eta ^{b}\eta ^{c}. \label{ai2}\end{aligned}$$Using definitions (\[bfa15\])–(\[bfa24\]), we obtain $$\delta a_{2}^{\prime \left( \mathrm{int}\right) }=\delta c_{2}+\gamma e_{1}+\partial _{\mu }j_{1}^{\mu }+h_{1}, \label{eca2}$$where$$\begin{aligned} c_{2} &=&\left( \left( P_{AB}\left( N\right) \right) ^{\mu }C^{A}+\tfrac{1}{2}\left( P_{abB}\left( N\right) \right) ^{\mu }\eta ^{a}\eta ^{b}\right. \notag \\ &&\left. 
-\varepsilon _{\alpha \beta \gamma \delta }\left( P_{B}^{a}\left( N\right) \right) ^{\mu }\eta _{a}^{\alpha \beta \gamma \delta }\right) V_{\mu }^{\ast B}+2\left( N_{A}^{a}\eta _{a}^{\ast }-\left( P_{A}^{a}\left( N\right) \right) _{\mu }A_{a}^{\ast \mu }\right) C^{A} \notag \\ &&+\left( \left( Q_{aA}\left( f\right) \right) ^{\mu \nu }C^{A}+\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu \nu }\eta ^{b}\eta ^{c}\right) B_{\mu \nu }^{\ast a} \notag \\ &&+\tfrac{1}{3}\varepsilon ^{\mu \nu \rho \lambda }\eta _{\mu \nu \rho }^{\ast a}V_{B\lambda }\left( f_{aA}^{B}C^{A}+\tfrac{1}{2}f_{abc}^{B}\eta ^{b}\eta ^{c}\right) \notag \\ &&-\tfrac{1}{2}\varepsilon ^{\mu \nu \rho \lambda }N_{abA}B_{\mu \nu }^{\ast a}B_{\rho \lambda }^{\ast b}C^{A}+\tfrac{1}{4!}\varepsilon _{\alpha \beta \gamma \delta }\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu \nu }B_{\mu \nu }^{\ast b}\eta _{a}^{\alpha \beta \gamma \delta } \notag \\ &&-\tfrac{1}{3}f_{\;\;b}^{Ba}\eta _{\mu \nu \rho }^{\ast b}V_{B\lambda }\eta _{a}^{\mu \nu \rho \lambda }, \label{c2}\end{aligned}$$$$\begin{aligned} e_{1}&=&A_{\mu }^{a}\eta ^{b}\left( \left( P_{abB}\left( N\right) \right) _{\nu }V^{B\mu \nu }+N_{abB}V^{\ast B\mu }\right) +2\left( P_{A}^{a}\left( N\right) \right) _{\mu }C_{\nu }^{A}B_{a}^{\mu \nu } \notag \\ &&-\varepsilon _{\mu \alpha \beta \gamma }\eta _{a}^{\alpha \beta \gamma }\left( \left( P_{A}^{a}\left( N\right) \right) _{\nu }V^{A\mu \nu }+N_{B}^{a}V^{\ast B\mu }\right) -2N_{A}^{a}A_{a}^{\ast \mu }C_{\mu }^{A} \notag \\ &&+N_{abA}B_{\mu \nu }^{\ast a}\eta ^{b}V^{A\mu \nu }-\varepsilon ^{\mu \nu \rho \lambda }\left( \tfrac{1}{2}\left( P_{abA}\left( N\right) \right) _{\mu }A_{\nu }^{a}+N_{abA}B_{\mu \nu }^{\ast a}\right) A_{\rho }^{b}C_{\lambda }^{A} \notag \\ &&-C_{\mu }^{A}\left( \left( P_{AB}\left( N\right) \right) _{\nu }V^{B\mu \nu }+N_{AB}V^{\ast B\mu }\right) -\varepsilon ^{\mu \nu \rho \lambda }f_{aA}^{B}B_{\mu \nu }^{\ast a}V_{B\rho }C_{\lambda }^{A} \notag \\ &&+\left( Q_{aA}\left( f\right) \right) ^{\mu \nu }\left( A_{\mu }^{a}C_{\nu }^{A}+\tfrac{1}{4}\varepsilon _{\mu \nu \rho \lambda }\eta ^{a}V^{A\rho \lambda }\right) -\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu \nu }A_{\mu }^{a}A_{\nu }^{b}\eta ^{c} \notag \\ &&+\varepsilon ^{\mu \nu \rho \lambda }f_{abc}^{B}B_{\mu \nu }^{\ast a}V_{B\rho }A_{\lambda }^{b}\eta ^{c}+\tfrac{1}{2\cdot 4!}\varepsilon ^{\mu \nu \rho \lambda }\left( Q_{a}\left( f\right) \right) _{\mu \nu }C_{\rho \lambda }^{a} \notag \\ &&+\tfrac{1}{4!}\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu \nu }\left( \tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }\eta ^{b}B_{a}^{\rho \lambda }-\varepsilon _{\nu \alpha \beta \gamma }A_{\mu }^{b}\eta _{a}^{\alpha \beta \gamma }\right) \notag \\ &&+\tfrac{1}{4}f_{\;\;b}^{Ba}B_{\mu \nu }^{\ast b}V_{B\rho }\eta _{a}^{\mu \nu \rho }, \label{e1}\end{aligned}$$$$\begin{aligned} j_{1}^{\mu } &=&-\left( N_{AB}C^{A}+\tfrac{1}{2}N_{abB}\eta ^{a}\eta ^{b}-\varepsilon _{\alpha \beta \gamma \delta }N_{B}^{a}\eta _{a}^{\alpha \beta \gamma \delta }\right) V^{\ast B\mu }+2\left( N_{A}^{a}A_{a}^{\ast \mu }\right. \notag \\ &&\left. 
+\left( P_{A}^{a}\left( N\right) \right) _{\nu }B_{a}^{\mu \nu }\right) C^{A}+\left( P_{A}^{a}\left( N\right) \right) _{\nu }\left( 6C_{\rho }^{A}\eta _{a}^{\mu \nu \rho }+\varepsilon _{\alpha \beta \gamma \delta }V^{A\mu \nu }\eta _{a}^{\alpha \beta \gamma \delta }\right) \notag \\ &&-\left( P_{AB}\left( N\right) \right) _{\nu }\left( C^{A}V^{B\mu \nu }-\tfrac{1}{2}\varepsilon ^{\mu \nu \rho \lambda }C_{\rho }^{A}C_{\lambda }^{B}\right) \notag \\ &&-\varepsilon ^{\mu \nu \rho \lambda }N_{abA}B_{\nu \rho }^{\ast a}\left( \eta ^{b}C_{\lambda }^{A}+A_{\lambda }^{b}C^{A}\right) -\tfrac{1}{2}\left( P_{abA}\left( N\right) \right) _{\nu }\eta ^{a}\eta ^{b}V^{A\mu \nu } \notag \\ &&-\varepsilon ^{\mu \nu \rho \lambda }\left( P_{abA}\left( N\right) \right) _{\nu }A_{\rho }^{a}\left( \eta ^{b}C_{\lambda }^{A}+\tfrac{1}{2}A_{\lambda }^{b}C^{A}\right) +f_{\;\;b}^{Ba}B_{\nu \rho }^{\ast b}V_{B\lambda }\eta _{a}^{\mu \nu \rho \lambda } \notag \\ &&+\left( Q_{aA}\left( f\right) \right) ^{\mu \nu }\left( A_{\nu }^{a}C^{A}+\eta ^{a}C_{\nu }^{A}\right) +\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu \nu }A_{\nu }^{a}\eta ^{b}\eta ^{c} \notag \\ &&-\varepsilon ^{\mu \nu \rho \lambda }B_{\nu \rho }^{\ast a}V_{B\lambda }\left( f_{aA}^{B}C^{A}+\tfrac{1}{2}f_{abc}^{B}\eta ^{b}\eta ^{c}\right) -\tfrac{1}{4!}\varepsilon _{\nu \alpha \beta \gamma }\left( Q_{a}\left( f\right) \right) ^{\mu \nu }C^{a\alpha \beta \gamma } \notag \\ &&+\tfrac{1}{4!}\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu \nu }\left( \varepsilon _{\alpha \beta \gamma \delta }A_{\nu }^{b}\eta _{a}^{\alpha \beta \gamma \delta }-\varepsilon _{\nu \alpha \beta \gamma }\eta ^{b}\eta _{a}^{\alpha \beta \gamma }\right) , \label{j1m}\end{aligned}$$$$\begin{aligned} h_{1} &=&\left( \left( P_{AB}\left( N\right) \right) ^{\mu }C^{A}+\tfrac{1}{2}\left( P_{abB}\left( N\right) \right) ^{\mu }\eta ^{a}\eta ^{b}-\varepsilon _{\alpha \beta \gamma \delta }\left( P_{B}^{a}\left( N\right) \right) ^{\mu }\eta _{a}^{\alpha \beta \gamma \delta }\right) V_{\mu }^{B} \notag \\ &&+\left( N_{AB}C^{A}+\tfrac{1}{2}N_{abB}\eta ^{a}\eta ^{b}-\varepsilon _{\alpha \beta \gamma \delta }N_{B}^{a}\eta _{a}^{\alpha \beta \gamma \delta }\right) \partial ^{\mu }V_{\mu }^{\ast B}. \label{h1}\end{aligned}$$If we make the notation $$a_{2}^{\left( \mathrm{int}\right) }\equiv a_{2}^{\prime \left( \mathrm{int}\right) }-c_{2}, \label{notata2}$$then (\[eca2\]) is equivalent with the equation$$\delta a_{2}^{\left( \mathrm{int}\right) }=\gamma e_{1}+\partial _{\mu }j_{1}^{\mu }+h_{1}. \label{eca3}$$Comparing (\[eca3\]) with equation (\[fo6b\]) for $k=2$, we obtain that a necessary condition for the existence of a local $a_{1}^{\left( \mathrm{int}\right) }$ is $$h_{1}=\delta g_{2}+\gamma f_{1}+\partial _{\mu }l_{1}^{\mu }, \label{eca3a}$$with $g_{2}$, $f_{1}$, and $l_{1}^{\mu }$ local functions. We show that equation (\[eca3a\]) cannot hold (locally) unless $h_{1}=0$. Indeed, assuming ([eca3a]{}) is satisfied, we act with $\delta $ on it and use its nilpotency and anticommutation with $\gamma $, which yields the necessary condition$$\delta h_{1}=\gamma (-\delta f_{1})+\partial _{\mu }\left( \delta l_{1}^{\mu }\right) . 
\label{eca3b}$$On the other hand, direct computation provides $$\begin{aligned} \delta h_{1} &=&\gamma \left[ \left( N_{AB}C_{\mu }^{A}-N_{abB}A_{\mu }^{a}\eta ^{b}+\varepsilon _{\mu \alpha \beta \gamma }N_{B}^{a}\eta _{a}^{\alpha \beta \gamma }\right) V^{B\mu }\right] \notag \\ &&+\partial _{\mu }\left[ -\left( N_{AB}C^{A}+\tfrac{1}{2}N_{abB}\eta ^{a}\eta ^{b}-\varepsilon _{\alpha \beta \gamma \delta }N_{B}^{a}\eta _{a}^{\alpha \beta \gamma \delta }\right) V^{B\mu }\right] . \label{eca3c}\end{aligned}$$Juxtaposing (\[eca3b\]) and (\[eca3c\]) and looking at definitions ([bfa15]{})–(\[bfa24\]), it follows that $V^{B\mu }$ must necessarily be $\delta $-exact modulo $d$ in the space of local functions. Since this is obviously not true, we find that (\[eca3b\]) cannot be satisfied and consequently neither does equation (\[eca3a\]). Thus, the consistency of $a_{2}^{\left( \mathrm{int}\right) }$ leads to the equation $$h_{1}=0, \label{ecca2}$$which further implies that the functions $N_{abA}$, $N_{AB}$, and $N_{A}^{a}$ must vanish $$N_{abA}=N_{AB}=N_{A}^{a}=0. \label{negal0}$$Based on (\[negal0\]), from (\[fo8\]), (\[fo10\]), (\[ai2\]), ([c2]{}), (\[e1\]), (\[notata2\]), and (\[eca3\]) we get the components of antighost number $4$, $3$, and $2$ from the nonintegrated density of the first-order deformation as $$a_{4}^{\left( \mathrm{int}\right) }=0, \label{fo12}$$$$\begin{aligned} a_{3}^{\left( \mathrm{int}\right) } &=&Q_{aA}\left( f\right) \eta ^{a}C^{A}+\tfrac{1}{3!}Q_{abc}\left( f\right) \eta ^{a}\eta ^{b}\eta ^{c} \notag \\ &&+\tfrac{1}{4!}\varepsilon _{\alpha \beta \gamma \delta }\left( Q_{\;\;b}^{a}\left( f\right) \eta ^{b}\eta _{a}^{\alpha \beta \gamma \delta }+Q_{a}\left( f\right) C^{a\alpha \beta \gamma \delta }\right) , \label{fo13}\end{aligned}$$$$\begin{aligned} a_{2}^{\left( \mathrm{int}\right) } &=&\tfrac{1}{4!}\varepsilon ^{\mu \nu \rho \lambda }\left( Q_{a}\left( f\right) \right) _{\mu }C_{\nu \rho \lambda }^{a}-\left( Q_{aA}\left( f\right) \right) ^{\mu }\left( A_{\mu }^{a}C^{A}+\eta ^{a}C_{\mu }^{A}\right) \notag \\ &&-\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu }A_{\mu }^{a}\eta ^{b}\eta ^{c}-\tfrac{1}{4!}\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu }\left( \varepsilon _{\alpha \beta \gamma \delta }A_{\mu }^{b}\eta _{a}^{\alpha \beta \gamma \delta }\right. \notag \\ &&\left. -\varepsilon _{\mu \alpha \beta \gamma }\eta ^{b}\eta _{a}^{\alpha \beta \gamma }\right) -\left( \left( Q_{aA}\left( f\right) \right) ^{\mu \nu }C^{A}+\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu \nu }\eta ^{b}\eta ^{c}\right) B_{\mu \nu }^{\ast a} \notag \\ &&-\tfrac{1}{3}\varepsilon ^{\mu \nu \rho \lambda }\eta _{\mu \nu \rho }^{\ast a}V_{B\lambda }\left( f_{aA}^{B}C^{A}+\tfrac{1}{2}f_{abc}^{B}\eta ^{b}\eta ^{c}\right) +\tfrac{1}{3}f_{\;\;b}^{Ba}\eta _{\mu \nu \rho }^{\ast b}V_{B\lambda }\eta _{a}^{\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{4!}\varepsilon _{\alpha \beta \gamma \delta }\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu \nu }B_{\mu \nu }^{\ast b}\eta _{a}^{\alpha \beta \gamma \delta }+\tfrac{1}{2}R_{ab}\left( g\right) \eta ^{a}\eta ^{b} \notag \\ &&+R_{A}\left( g\right) C^{A}+\tfrac{1}{4!}\varepsilon _{\mu \nu \rho \lambda }R^{a}\left( g\right) \eta _{a}^{\mu \nu \rho \lambda }. \label{fo14}\end{aligned}$$The objects $R_{ab}\left( g\right) $, $R_{A}\left( g\right) $, and $R^{a}\left( g\right) $ are generated by formula (\[q\]) via the smooth functions of the undifferentiated scalar fields $g_{ab}^{AB}$, $g_{\quad C}^{AB}$, and $g^{aAB}$, respectively. 
All these functions are antisymmetric in $A$ and $B$ and in addition $g_{ab}^{AB}$ are antisymmetric also in their (lower) BF collection indices. Replacing now expression (\[fo14\]) into equation (\[fo6b\]) for $k=2$, we obtain that the interacting piece of antighost number $1$ from the first-order deformation is written as $$\begin{aligned} a_{1}^{\prime \left( \mathrm{int}\right) } &=&-\tfrac{1}{2\cdot 4!}\varepsilon ^{\mu \nu \rho \lambda }\left( Q_{a}\left( f\right) \right) _{\mu \nu }C_{\rho \lambda }^{a}-\left( Q_{aA}\left( f\right) \right) ^{\mu \nu }\left( A_{\mu }^{a}C_{\nu }^{A}\right. \notag \\ &&\left. +\tfrac{1}{4}\varepsilon _{\mu \nu \rho \lambda }\eta ^{a}V^{A\rho \lambda }\right) +\tfrac{1}{4!}\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu \nu }\left( \varepsilon _{\nu \alpha \beta \gamma }A_{\mu }^{b}\eta _{a}^{\alpha \beta \gamma }-\tfrac{1}{2}\varepsilon _{\mu \nu \alpha \beta }\eta ^{b}B_{a}^{\alpha \beta }\right) \notag \\ &&+\left( R_{A}\left( g\right) \right) ^{\mu }C_{\mu }^{A}-\left( R_{ab}\left( g\right) \right) ^{\mu }A_{\mu }^{a}\eta ^{b}-\tfrac{1}{4!}\varepsilon _{\mu \nu \rho \lambda }\left( R^{a}\left( g\right) \right) ^{\mu }\eta _{a}^{\nu \rho \lambda } \notag \\ &&+\varepsilon ^{\mu \nu \rho \lambda }B_{\mu \nu }^{\ast a}V_{B\rho }\left( f_{aA}^{B}C_{\lambda }^{A}-f_{abc}^{B}A_{\lambda }^{b}\eta ^{c}-\tfrac{1}{4!}\varepsilon _{\lambda \alpha \beta \gamma }f_{\;\;a}^{Bb}\eta _{b}^{\alpha \beta \gamma }\right) \notag \\ &&+\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu \nu }A_{\mu }^{a}A_{\nu }^{b}\eta ^{c}. \label{fo15}\end{aligned}$$Using definitions (\[bfa15\])–(\[bfa24\]), by direct computation we obtain that $$\delta a_{1}^{\prime \left( \mathrm{int}\right) }=\delta c_{1}+\gamma e_{0}+\partial _{\mu }j_{0}^{\mu }+h_{0}, \label{cond00}$$with $$c_{1}=-\eta ^{a}V_{B\mu }\left( f_{aA}^{B}V^{\ast A\mu }+\tfrac{1}{12}f_{\;\;a}^{Bb}A_{b}^{\ast \mu }+\tfrac{1}{2}\varepsilon ^{\mu \nu \rho \lambda }g_{ab}^{AB}V_{A\nu }B_{\rho \lambda }^{\ast b}\right) , \label{c1}$$$$\begin{aligned} e_{0} &=&-\tfrac{1}{2}\varepsilon ^{\mu \nu \rho \lambda }V_{A\mu }\left( -\tfrac{1}{3}f_{abc}^{A}A_{\nu }^{c}+\tfrac{1}{2}g_{ab}^{AB}V_{B\nu }\right) A_{\rho }^{a}A_{\lambda }^{b} \notag \\ &&+\tfrac{1}{4!}f_{a}^{A}V_{A}^{\mu }H_{\mu }^{a}-A_{\mu }^{a}V_{A\nu }\left( f_{aB}^{A}V^{B\mu \nu }+\tfrac{1}{12}f_{\;\;a}^{Ab}B_{b}^{\mu \nu }\right) \notag \\ &&-\tfrac{1}{2}\left( g_{\quad C}^{AB}V_{\mu \nu }^{C}+\tfrac{1}{12}g^{aAB}B_{a\mu \nu }\right) V_{A}^{\mu }V_{B}^{\nu }, \label{e0}\end{aligned}$$$$\begin{aligned} j_{0}^{\mu } &=&V_{A\nu }\left( \tfrac{1}{12}f_{a}^{A}C^{a\mu \nu }+f_{aB}^{A}\eta ^{a}V^{B\mu \nu }\right) +\tfrac{1}{4}f_{\;\;b}^{Aa}V_{A\nu }\left( A_{\rho }^{b}\eta _{a}^{\mu \nu \rho }\right. \notag \\ &&\left. +\tfrac{1}{3}\eta ^{b}B_{a}^{\mu \nu }\right) -\tfrac{1}{8}g^{aAB}V_{A\nu }V_{B\rho }\eta _{a}^{\mu \nu \rho }-\varepsilon ^{\mu \nu \rho \lambda }\left[ f_{aB}^{A}A_{\nu }^{a}V_{A\lambda }C_{\rho }^{B}\right. \notag \\ &&\left. -\tfrac{1}{2}f_{abc}^{A}A_{\nu }^{a}A_{\rho }^{b}\eta ^{c}V_{A\lambda }-\tfrac{1}{2}V_{A\nu }V_{B\rho }\left( g_{\quad C}^{AB}C_{\lambda }^{C}-g_{ab}^{AB}A_{\lambda }^{a}\eta ^{b}\right) \right] , \label{j0}\end{aligned}$$$$h_{0}=-f_{aB}^{A}\eta ^{a}V_{A}^{\mu }V_{\mu }^{B}. \label{h0}$$At this stage we act like between formulas (\[notata2\]) and (\[negal0\]). 
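Before proceeding, let us record an observation of ours on the structure of (\[h0\]) (it is merely a rewriting of that formula, not an additional input): lowering the two-form collection index with the metric, $f_{aAB}\equiv k_{AM}f_{aB}^{M}$, expression (\[h0\]) reads $$h_{0}=-f_{aAB}\eta ^{a}V^{A\mu }V_{\mu }^{B},$$and, since the contraction $V^{A\mu }V_{\mu }^{B}$ is symmetric under $A\leftrightarrow B$, only the part of $f_{aAB}$ that is symmetric in its capital indices actually contributes to $h_{0}$. This observation makes transparent the antisymmetry condition derived below.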
If we make the notation $$a_{1}^{\left( \mathrm{int}\right) }=a_{1}^{\prime \left( \mathrm{int}\right) }-c_{1}, \label{notata1}$$then (\[cond00\]) becomes$$\delta a_{1}^{\left( \mathrm{int}\right) }=\gamma e_{0}+\partial _{\mu }j_{0}^{\mu }+h_{0}, \label{eca4}$$which, compared with equation (\[fo6b\]) for $k=1$, reveals that the existence of $a_{0}^{(\mathrm{int})}$ demands$$h_{0}=\delta g_{1}+\gamma f_{0}+\partial _{\mu }l_{0}^{\mu }, \label{eca4a}$$with $g_{1}$, $f_{0}$, and $l_{0}^{\mu }$ some local elements. Using ([h0]{}) and definitions (\[bfa15\])–(\[bfa24\]), straightforward calculation shows that (\[eca4a\]) cannot be valid, and hence the consistency of $a_{1}^{\left( \mathrm{int}\right) }$ leads to the equation $$h_{0}=0, \label{ph14}$$which requires the antisymmetry of the functions $f_{aAB}$ ($\equiv k_{AM}f_{aB}^{M}$) with respect to their collection indices from the two-form sector $$f_{aAB}=-f_{aBA}. \label{ant}$$With the help of (\[fo15\]), (\[c1\]), (\[e0\]), (\[notata1\]), ([eca4]{}), and (\[ant\]) we completely determine $a_{1}^{\left( \mathrm{int}\right) }$ and then $a_{0}^{\left( \mathrm{int}\right) }$ as solution to (\[fo6b\]) for $k=1$ in the form $$\begin{aligned} a_{1}^{\left( \mathrm{int}\right) } &=&-\tfrac{1}{2\cdot 4!}\varepsilon ^{\mu \nu \rho \lambda }\left( Q_{a}\left( f\right) \right) _{\mu \nu }C_{\rho \lambda }^{a}-\left( Q_{aA}\left( f\right) \right) ^{\mu \nu }\left( A_{\mu }^{a}C_{\nu }^{A}\right. \notag \\ &&\left. +\tfrac{1}{4}\varepsilon _{\mu \nu \rho \lambda }\eta ^{a}V^{A\rho \lambda }\right) +\tfrac{1}{4!}\left( Q_{\;\;b}^{a}\left( f\right) \right) ^{\mu \nu }\left( \varepsilon _{\nu \alpha \beta \gamma }A_{\mu }^{b}\eta _{a}^{\alpha \beta \gamma }-\tfrac{1}{2}\varepsilon _{\mu \nu \alpha \beta }\eta ^{b}B_{a}^{\alpha \beta }\right) \notag \\ &&+\left( R_{A}\left( g\right) \right) ^{\mu }C_{\mu }^{A}-\left( R_{ab}\left( g\right) \right) ^{\mu }A_{\mu }^{a}\eta ^{b}-\tfrac{1}{4!}\varepsilon _{\mu \nu \rho \lambda }\left( R^{a}\left( g\right) \right) ^{\mu }\eta _{a}^{\nu \rho \lambda } \notag \\ &&+\varepsilon ^{\mu \nu \rho \lambda }B_{\mu \nu }^{\ast a}V_{B\rho }\left( f_{aA}^{B}C_{\lambda }^{A}-f_{abc}^{B}A_{\lambda }^{b}\eta ^{c}-\tfrac{1}{4!}\varepsilon _{\lambda \alpha \beta \gamma }f_{\;\;a}^{Bb}\eta _{b}^{\alpha \beta \gamma }\right) \notag \\ &&+\tfrac{1}{2}\left( Q_{abc}\left( f\right) \right) ^{\mu \nu }A_{\mu }^{a}A_{\nu }^{b}\eta ^{c}+\eta ^{a}V_{B\mu }\left( f_{aA}^{B}V^{\ast A\mu }+\tfrac{1}{12}f_{\;\;a}^{Bb}A_{b}^{\ast \mu }\right. \notag \\ &&\left. +\tfrac{1}{2}\varepsilon ^{\mu \nu \rho \lambda }g_{ab}^{AB}V_{A\nu }B_{\rho \lambda }^{\ast b}\right) , \label{fo16}\end{aligned}$$$$\begin{aligned} a_{0}^{\left( \mathrm{int}\right) } &=&\tfrac{1}{2}\varepsilon ^{\mu \nu \rho \lambda }V_{A\mu }\left( -\tfrac{1}{3}f_{abc}^{A}A_{\nu }^{c}+\tfrac{1}{2}g_{ab}^{AB}V_{B\nu }\right) A_{\rho }^{a}A_{\lambda }^{b} \notag \\ &&-\tfrac{1}{4!}f_{a}^{A}V_{A}^{\mu }H_{\mu }^{a}+f_{aB}^{A}A_{\mu }^{a}V_{A\nu }V^{B\mu \nu }+\tfrac{1}{12}f_{\;\;a}^{Ab}A_{\mu }^{a}V_{A\nu }B_{b}^{\mu \nu } \notag \\ &&+\tfrac{1}{2}\left( g_{\quad C}^{AB}V_{\mu \nu }^{C}+\tfrac{1}{12}g^{aAB}B_{a\mu \nu }\right) V_{A}^{\mu }V_{B}^{\nu }. 
\label{fo17}\end{aligned}$$ Thus, we can write the final form of the interacting part from the first-order deformation of the solution to the master equation for a collection of BF models and a set of two-form gauge fields as$$S_{1}^{\left( \mathrm{int}\right) }\equiv \int d^{4}x\,a^{(\mathrm{int})}=\int d^{4}x\left( a_{3}^{\left( \mathrm{int}\right) }+a_{2}^{\left( \mathrm{int}\right) }+a_{1}^{\left( \mathrm{int}\right) }+a_{0}^{\left( \mathrm{int}\right) }\right) , \label{s1int}$$where the $4$ components from (\[s1int\]) read as in formulas (\[fo13\])–(\[fo14\]) and (\[fo16\])–(\[fo17\]), respectively. The previous first-order deformation is parameterized by $7$ functions, $f_{abc}^{A}$, $g_{ab}^{AB}$, $f_{a}^{A}$, $f_{aB}^{A}$, $f_{\;\;a}^{Ab}$, $g_{\quad C}^{AB}$, and $g^{aAB}$, which depend smoothly on the undifferentiated scalar fields $\varphi _{d}$ and are antisymmetric as follows: $f_{abc}^{A}$ in the indices $\left\{ a,b,c\right\} $, $g_{ab}^{AB}$ with respect to $\left\{ A,B\right\} $ and $\left\{ a,b\right\} $, and $f_{aAB}\equiv k_{AM}f_{aB}^{M} $ together with $g_{\quad C}^{AB}$ and $g^{aAB}$ in $\left\{ A,B\right\} $. It is easy to see that (\[s1int\]) also includes the general solution that describes the self-interactions among the two-form gauge fields. Indeed, if we isolate from $S_{1}^{\left( \mathrm{int}\right) } $ the part containing the functions $g_{\quad C}^{AB}$, represent these functions as some series in the undifferentiated scalar fields, $g_{\quad C}^{AB}\left( \varphi _{a}\right) =k_{\quad C}^{AB}+k_{\quad C}^{ABa}\varphi _{a}+\cdots $, where $k_{\quad C}^{AB}$ and $k_{\quad C}^{ABa}$ are some real constants, antisymmetric in their upper, capital indices, and retain only the terms including $k_{\quad C}^{AB}$, then we obtain $$\begin{aligned} S_{1}^{\left( \mathrm{int}\right) }(k) &\equiv &\int d^{4}x\,a^{(\mathrm{V})}=\int d^{4}x\left( a_{2}^{\left( \mathrm{V}\right) }+a_{1}^{\left( \mathrm{V}\right) }+a_{0}^{\left( \mathrm{V}\right) }\right) \notag \\ &=&k_{\quad C}^{AB}\int d^{4}x\left[ \left( C_{A}^{\ast \mu }V_{B\mu }+\tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }V_{A}^{\ast \mu \nu }V_{B}^{\ast \rho \lambda }\right) C^{C}\right. \notag \\ &&\left. +\varepsilon _{\mu \nu \rho \lambda }V_{A}^{\ast \mu \nu }V_{B}^{\rho }C^{C\lambda }+\tfrac{1}{2}V_{\mu \nu }^{C}V_{A}^{\mu }V_{B}^{\nu }\right] , \label{s1V}\end{aligned}$$which has been shown in [@henfreedman] to be the most general form of the first-order deformation for a set of two-form gauge fields in four spacetime dimensions with the Lagrangian action written in first-order form. In conclusion, the overall first-order deformation of the solution to the master equation for the model under study is expressed like the sum between (\[s1int\]) and the piece responsible for the interactions from the BF sector$$S_{1}=S_{1}^{\left( \mathrm{BF}\right) }+S_{1}^{\left( \mathrm{int}\right) }, \label{s1final}$$where$$S_{1}^{\left( \mathrm{BF}\right) }=\int d^{4}x\,a^{(\mathrm{BF})}, \label{s1BF}$$with $a^{(\mathrm{BF})}$ provided by (\[descBF\]) and (\[a4\])–(\[a0\]). We recall that $S_{1}^{\left( \mathrm{BF}\right) }$ is parameterized by $4 $ kinds of smooth functions of the undifferentiated scalar fields: $W_{ab}$, $M_{ab}^{c}$, $M^{ab}$, and $M_{abcd}$, where $M_{ab}^{c}$ are antisymmetric in their lower indices, $M^{ab}$ are symmetric, and $M_{abcd}$ are completely antisymmetric. 
Second-order deformation\[highord\] ----------------------------------- Next, we investigate the equations responsible for higher-order deformations. The second-order deformation is governed by equation (\[bff3.5\]). Making use of the first-order deformation derived in the previous subsection, after some computation we can organize the second term on the left-hand side of (\[bff3.5\]) as $$\left( S_{1},S_{1}\right) =\int d^{4}x\left( \Delta +\bar{\Delta}\right) , \label{so1}$$where $$\begin{aligned} \Delta &=&\sum\limits_{p=0}^{4}\left( K_{,m_{1}\ldots m_{p}}^{abc}\frac{\partial ^{p}t_{abc}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+K_{d,m_{1}\ldots m_{p}}^{abc}\frac{\partial ^{p}t_{abc}^{d}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right. \notag \\ &&+K_{m_{1}\ldots m_{p}}^{abcdf}\frac{\partial ^{p}t_{abcdf}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+K_{b,m_{1}\ldots m_{p}}^{a}\frac{\partial ^{p}t_{a}^{b}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}} \notag \\ &&\left. +K_{ab,m_{1}\ldots m_{p}}^{c}\frac{\partial ^{p}t_{c}^{ab}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right) \label{so2}\end{aligned}$$and $$\begin{aligned} \bar{\Delta} &=&\sum\limits_{p=0}^{3}\left( X_{A,m_{1}\ldots m_{p}}^{abB}\frac{\partial ^{p}T_{abB}^{A}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+X_{A,m_{1}\ldots m_{p}}^{abcd}\frac{\partial ^{p}T_{abcd}^{A}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right. \notag \\ &&+X_{A,m_{1}\ldots m_{p}}^{ab}\frac{\partial ^{p}T_{ab}^{A}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+X_{Ac,m_{1}\ldots m_{p}}^{ab}\frac{\partial ^{p}T_{ab}^{Ac}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}} \notag \\ &&\left. +X_{Aab,m_{1}\ldots m_{p}}\frac{\partial ^{p}T^{Aab}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+X_{a,m_{1}\ldots m_{p}}^{AB}\frac{\partial ^{p}T_{AB}^{a}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right) \notag \\ &&+\sum\limits_{p=0}^{2}\left( X_{m_{1}\ldots m_{p}}^{aABC}\frac{\partial ^{p}T_{aABC}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+X_{AB,m_{1}\ldots m_{p}}^{abc}\frac{\partial ^{p}T_{abc}^{AB}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right. \notag \\ &&\left. +X_{AB,m_{1}\ldots m_{p}}^{a}\frac{\partial ^{p}T_{a}^{AB}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+X_{ABa,m_{1}\ldots m_{p}}^{b}\frac{\partial ^{p}T_{b}^{ABa}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right) \notag \\ &&+\sum\limits_{p=0}^{1}\left( X_{ABCD,m_{1}\ldots m_{p}}\frac{\partial ^{p}T^{ABCD}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}+X_{ABC,m_{1}\ldots m_{p}}^{ab}\frac{\partial ^{p}T_{ab}^{ABC}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right. \notag \\ &&\left. +X_{a,m_{1}\ldots m_{p}}^{ABC}\frac{\partial ^{p}T_{ABC}^{a}}{\partial \varphi _{m_{1}}\ldots \partial \varphi _{m_{p}}}\right) +X_{ABCD}^{a}T_{a}^{ABCD}.
\label{so3}\end{aligned}$$In formulas (\[so2\]) and (\[so3\]) we used the notations $$\begin{aligned} t_{abc} &=&W_{ec}M_{ab}^{e}+W_{ea}\frac{\partial W_{bc}}{\partial \varphi _{e}}+W_{eb}\frac{\partial W_{ca}}{\partial \varphi _{e}}, \label{so4a} \\ t_{abc}^{d} &=&W_{e[a}\frac{\partial M_{bc]}^{d}}{\partial \varphi _{e}}+M_{e[a}^{d}M_{bc]}^{e}+M^{de}M_{eabc}, \label{so4b} \\ t_{abcdf} &=&W_{e[a}\frac{\partial M_{bcdf]}}{\partial \varphi _{e}}+M_{e[abc}M_{df]}^{e}, \label{so4c} \\ t_{a}^{b} &=&M^{be}W_{ea}, \label{so4d} \\ t_{a}^{bc} &=&W_{ea}\frac{\partial M^{bc}}{\partial \varphi _{e}}+M_{ea}^{(b}M_{\left. {}\right. }^{c)e}, \label{so4f}\end{aligned}$$$$T_{ab}^{A}=f_{aM}^{A}f_{b}^{M}+f_{e}^{A}\frac{\partial W_{ab}}{\partial \varphi _{e}}+W_{ea}\frac{\partial f_{b}^{A}}{\partial \varphi _{e}}+2W_{eb}f_{\;\;a}^{Ae}, \label{so5}$$$$T_{a}^{AB}=f_{e}^{A}\frac{\partial f_{a}^{B}}{\partial \varphi _{e}}-f_{e}^{B}\frac{\partial f_{a}^{A}}{\partial \varphi _{e}}-4!\left( g_{\quad M}^{AB}f_{a}^{M}+2W_{ea}g^{eAB}\right) , \label{so6}$$$$\begin{aligned} T_{ab}^{Ac} &=&f_{aM}^{A}f_{\;\;b}^{Mc}-f_{bM}^{A}f_{\;\;a}^{Mc}-\tfrac{1}{2}f_{e}^{A}\frac{\partial M_{ab}^{c}}{\partial \varphi _{e}}+f_{\;\;e}^{Ac}M_{ab}^{e} \notag \\ &&+f_{\;\;[a}^{Ae}M_{b]e}^{c}-2\cdot 4!f_{eab}^{A}M^{ec}+W_{e[a}\frac{\partial f_{\;\;b]}^{Ac}}{\partial \varphi _{e}}, \label{so7}\end{aligned}$$$$\begin{aligned} T_{abcd}^{A} &=&W_{e[a}\frac{\partial f_{bcd]}^{A}}{\partial \varphi _{e}}+f_{e[ab}^{A}M_{cd]}^{e}+f_{M[a}^{A}f_{bcd]}^{M} \notag \\ &&+\tfrac{1}{2\cdot 4!}\left( \tfrac{1}{2}f_{e}^{A}\frac{\partial M_{abcd}}{\partial \varphi _{e}}-f_{\;\;[a}^{Ae}M_{bcd]e}^{\left. {}\right. }\right) , \label{so8}\end{aligned}$$$$T^{Aab}=f_{e}^{A}\frac{\partial M^{ab}}{\partial \varphi _{e}}-2f_{\;\;e}^{Aa}M^{be}-2f_{\;\;e}^{Ab}M^{ae}, \label{so9}$$$$T_{abB}^{A}=f_{M[a}^{A}f_{b]B}^{M}+f_{eB}^{A}M_{ab}^{e}+W_{e[a}\frac{\partial f_{b]B}^{A}}{\partial \varphi _{e}}, \label{so10}$$$$\begin{aligned} T_{aABC} &=&f_{Ae}\frac{\partial f_{aBC}}{\partial \varphi _{e}}-f_{Be}\frac{\partial f_{aAC}}{\partial \varphi _{e}}+2f_{\;\;Aa}^{e}f_{eBC}-2f_{\;\;Ba}^{e}f_{eAC} \notag \\ &&+4!\left( -g_{ABM}f_{aC}^{M}+W_{ea}\frac{\partial g_{ABC}}{\partial \varphi _{e}}+f_{a[A}^{M}g_{B]MC}^{\left. {}\right. }\right) , \label{so11}\end{aligned}$$$$T_{AB}^{a}=f_{eAB}M^{ea}, \label{so12}$$$$\begin{aligned} T_{abc}^{AB} &=&f_{e}^{A}\frac{\partial f_{abc}^{B}}{\partial \varphi _{e}}-f_{e}^{B}\frac{\partial f_{abc}^{A}}{\partial \varphi _{e}}+2f_{\;\;[a}^{Ae}f_{bc]e}^{B}-2f_{\;\;[a}^{Be}f_{bc]e}^{A} \notag \\ &&+\tfrac{1}{2}g^{eAB}M_{abce}+4!\left( g_{e[a}^{AB}M_{bc]}^{e}+W_{e[a}\frac{\partial g_{bc]}^{AB}}{\partial \varphi _{e}}\right) \notag \\ &&-4!\left( g_{\quad M}^{AB}f_{abc}^{M}+f_{M[a}^{[A}g_{bc]}^{B]M}\right) , \label{so13}\end{aligned}$$$$\begin{aligned} T_{b}^{ABa} &=&f_{e}^{A}\frac{\partial f_{\;\;b}^{Ba}}{\partial \varphi _{e}}-f_{e}^{B}\frac{\partial f_{\;\;b}^{Aa}}{\partial \varphi _{e}}-2f_{\;\;e}^{Aa}f_{\;\;b}^{Be}+2f_{\;\;e}^{Ba}f_{\;\;b}^{Ae} \notag \\ &&+4!\left( g^{eAB}M_{eb}^{a}+W_{eb}\frac{\partial g^{aAB}}{\partial \varphi _{e}}\right) -4!\left( g_{\quad M}^{AB}f_{\;\;b}^{Ma}\right. \notag \\ &&\left. +2\cdot 4!g_{eb}^{AB}M^{ea}+f_{bM}^{A}g^{aBM}-f_{bM}^{B}g^{aAM}\right) , \label{so14}\end{aligned}$$$$T^{ABCD}=g_{\left. {}\right. }^{e[AB}f_{e}^{C]D}-\tfrac{1}{2}f_{e}^{[A}\frac{\partial g^{BC]D}}{\partial \varphi _{e}}-12g_{\quad M}^{[AB}g_{\left. {}\right. }^{C]MD}, \label{so15}$$$$T_{ab}^{ABC}=g_{\left. {}\right. 
}^{e[AB}f_{eab}^{C]}-\tfrac{1}{2}f_{e}^{[A}\frac{\partial g_{ab}^{BC]}}{\partial \varphi _{e}}-12g_{\quad M}^{[AB}g_{ab}^{C]M}+g_{e[a}^{[AB}f_{\;\;b]}^{C]e}, \label{so16}$$$$T_{ABC}^{a}=g_{[AB}^{e}f_{\;\;C]e}^{a}-\tfrac{1}{2}f_{e[A}\frac{\partial g_{BC]}^{a}}{\partial \varphi _{e}}-12g_{[AB}^{\quad M}g_{C]M}^{a}, \label{so17}$$$$T_{a}^{ABCD}=g_{\left. {}\right. }^{e[AB}g_{ea}^{CD]}, \label{so18}$$where the functions $g_{ABC}$, $g^{CMD}$, and $g_{AB}^{\quad M}$ result from $g_{\quad M}^{AB}$ by appropriately lowering or raising the two-form collection indices with the help of the metric $k_{AB}$ or its inverse $k^{AB}$: $g_{ABC}=k_{AM}k_{BN}g_{\quad C}^{MN}$, $g^{CMD}=g_{\quad E}^{CM}k^{ED}$, $g_{AB}^{\quad M}=k_{AE}k_{BF}g_{\quad N}^{EF}k^{NM}$. The remaining objects, of the type $K$ or $X$, are listed in Appendix \[appendixA\]. Each of them is a polynomial of ghost number $1$ involving only the *undifferentiated* fields/ghosts and antifields. Comparing equation (\[bff3.5\]) with (\[so1\]), we find that the existence of $S_{2}$ requires that $\int d^{4}x\left( \Delta +\bar{\Delta}\right) $ be $s$-exact. This is not possible (unless the integrand vanishes) since all the objects denoted by $K$ or $X$ are polynomials comprising only undifferentiated fields/ghosts/antifields, so (\[bff3.5\]) holds if and only if the following equations are simultaneously obeyed $$\begin{gathered} t_{abc}=0,\quad t_{abc}^{d}=0,\quad t_{abcdf}=0,\quad t_{a}^{b}=0,\quad t_{a}^{bc}=0, \label{eqs1} \\ T_{ab}^{A}=0,\quad T_{a}^{AB}=0,\quad T_{ab}^{Ac}=0,\quad T_{abcd}^{A}=0,\quad T^{Aab}=0, \label{eqs2} \\ T_{abB}^{A}=0,\quad T_{aABC}=0,\quad T_{AB}^{a}=0,\quad T_{abc}^{AB}=0,\quad T_{b}^{ABa}=0, \label{eqs3} \\ T^{ABCD}=0,\quad T_{ab}^{ABC}=0,\quad T_{ABC}^{a}=0,\quad T_{a}^{ABCD}=0. \label{eqs4}\end{gathered}$$Based on the last equations, which enforce $\Delta =0=\bar{\Delta}$, from (\[so1\]) compared with (\[bff3.5\]) it follows that we can take $$S_{2}=0. \label{s2int}$$Based on (\[s2int\]), it is easy to show that one can safely set to zero the solutions to the higher-order deformation equations, (\[bff3.6\]), etc. $$S_{k}=0,\quad k>2. \label{skint}$$ Collecting formulas (\[s2int\]) and (\[skint\]), we can state that the complete deformed solution to the master equation for the model under study, which is consistent to all orders in the coupling constant, reads as $$S=\bar{S}+\lambda S_{1}, \label{defsolmast}$$where $\bar{S}$ is given in (\[solfree\]) and $S_{1}$ is expressed by (\[s1final\]). The full deformed solution to the master equation comprises $11$ types of smooth functions of the undifferentiated scalar fields: $W_{ab}$, $M_{bc}^{a}$, $M_{abcd}$, $M^{ab}$, $f_{abc}^{A}$, $g_{ab}^{AB}$, $f_{a}^{A}$, $f_{aB}^{A}$, $f_{\;\;a}^{Ab}$, $g_{\quad C}^{AB}$, and $g^{aAB}$. They are subject to equations (\[eqs1\])–(\[eqs4\]), imposed by the consistency of the first-order deformation. Lagrangian formulation of the interacting model\[lagint\] ========================================================= The piece of antighost number $0$ from the full deformed solution to the master equation, of the form (\[defsolmast\]), furnishes us with the Lagrangian action of the interacting theory $$\begin{aligned} S^{\mathrm{L}}[A_{\mu }^{a},H_{\mu }^{a},\varphi _{a},B_{a}^{\mu \nu },V_{\mu \nu }^{A},V_{\mu }^{A}] &=&\int d^{4}x\left[ H_{\mu }^{a}D^{\mu }\varphi _{a}+\tfrac{1}{2}B_{a}^{\mu \nu }\bar{F}_{\mu \nu }^{a}\right.
\notag \\ &&+\tfrac{1}{2}\left( V_{A}^{\mu \nu }\bar{F}_{\mu \nu }^{A}+V_{\mu }^{A}V_{A}^{\mu }\right) \notag \\ &&-\tfrac{\lambda }{4}\varepsilon ^{\mu \nu \rho \lambda }\left( \tfrac{1}{4!}M_{abcd}A_{\mu }^{a}A_{\nu }^{b}+\tfrac{2}{3}f_{Aacd}V_{\mu }^{A}A_{\nu }^{a}\right. \notag \\ &&\left. \left. -g_{ABcd}V_{\mu }^{A}V_{\nu }^{B}\right) A_{\rho }^{c}A_{\lambda }^{d}\right] , \label{ldef}\end{aligned}$$where we used the notations $$D^{\mu }\varphi _{a}=\partial ^{\mu }\varphi _{a}+\lambda W_{ab}A^{b\mu }-\tfrac{\lambda }{4!}f_{Aa}V^{A\mu }, \label{n1}$$$$\begin{aligned} \bar{F}_{\mu \nu }^{a} &=&\partial _{\lbrack \mu }^{\left. {}\right. }A_{\nu ]}^{a}+\lambda M_{bc}^{a}A_{\mu }^{b}A_{\nu }^{c}+\lambda \varepsilon _{\mu \nu \rho \lambda }M^{ab}B_{b}^{\rho \lambda } \notag \\ &&+\tfrac{\lambda }{12}\left( f_{Ab}^{a}A_{[\mu }^{b}V_{\nu ]}^{A}+g_{AB}^{a}V_{\mu }^{A}V_{\nu }^{B}\right) , \label{n2}\end{aligned}$$$$\bar{F}_{\mu \nu }^{A}=\partial _{\lbrack \mu }^{\left. {}\right. }V_{\nu ]}^{A}-\lambda f_{aB}^{A}A_{[\mu }^{a}V_{\nu ]}^{B}+\lambda g_{BC}^{\quad A}V_{\mu }^{B}V_{\nu }^{C}. \label{n3}$$Formula (\[ldef\]) expresses the most general form of the Lagrangian action describing the interactions between a finite collection of BF models and a finite set of two-form gauge fields that complies with our working hypotheses and whose free limit is precisely action (\[bfa1\]). We note that the deformed Lagrangian action is at most of order $1$ in the coupling constant and includes two main types of vertices: one generates self-interactions among the BF fields and the other couples the two-form field spectrum to the BF field spectrum. The first type is already known from the literature and we will not comment on it. The second is given by the expression$$\begin{aligned} &&-\tfrac{\lambda }{4!}f_{Aa}V^{A\mu }H_{\mu }^{a}+\tfrac{\lambda }{24}B_{a}^{\mu \nu }\left( f_{Ab}^{a}A_{[\mu }^{b}V_{\nu ]}^{A}+g_{AB}^{a}V_{\mu }^{A}V_{\nu }^{B}\right) \notag \\ &&-\tfrac{\lambda }{2}V_{A}^{\mu \nu }\left( f_{aB}^{A}A_{[\mu }^{a}V_{\nu ]}^{B}-g_{BC}^{\quad A}V_{\mu }^{B}V_{\nu }^{C}\right) \notag \\ &&-\tfrac{\lambda }{4}\varepsilon ^{\mu \nu \rho \lambda }\left( \tfrac{2}{3}f_{Aacd}V_{\mu }^{A}A_{\nu }^{a}-g_{ABcd}V_{\mu }^{A}V_{\nu }^{B}\right) A_{\rho }^{c}A_{\lambda }^{d}. \label{intvert}\end{aligned}$$We observe that the vector fields $V^{A\mu }$ couple to all the BF fields from the collection, while the two-form gauge fields $V_{A}^{\mu \nu }$ interact only with the one-forms $A_{\mu }^{a}$ from the BF sector. Also, all the interaction vertices are derivative-free (we recall that the various functions that parameterize (\[ldef\]) depend only on the *undifferentiated* scalar fields). One of these couplings, $\tfrac{\lambda }{2}g_{BC}^{\quad A}V_{A}^{\mu \nu }V_{\mu }^{B}V_{\nu }^{C}$, is nothing but the generalized version of the non-Abelian Freedman-Townsend vertex. (By ‘generalized’ we mean that its form is identical to that of the standard non-Abelian Freedman-Townsend vertex, except that $g_{BC}^{\quad A}$ are *not* the structure constants of a Lie algebra but depend on the undifferentiated scalar fields.) Thus, action (\[ldef\]) contains the generalized version of the non-Abelian Freedman-Townsend action $$S^{\mathrm{FT}}_{\mathrm{gen}}[V_{\mu \nu }^{A},V_{\mu }^{A},\varphi _{a}]=\tfrac{1}{2}\int d^{4}x\left[ V_{A}^{\mu \nu }\left( \partial _{\lbrack \mu }^{\left. {}\right. }V_{\nu ]}^{A}+\lambda g_{BC}^{\quad A}V_{\mu }^{B}V_{\nu }^{C}\right) +V_{\mu }^{A}V_{A}^{\mu }\right] .
\label{lFT}$$ From the terms of antighost number $1$ present in (\[defsolmast\]) we read the deformed gauge transformations (which leave invariant action (\[ldef\])), namely$$\bar{\delta}_{\epsilon }A_{\mu }^{a}=\left( D_{\mu }\right) _{\;\;b}^{a}\epsilon ^{b}-2\lambda M^{ab}\varepsilon _{\mu \nu \rho \lambda }\epsilon _{b}^{\nu \rho \lambda }, \label{gaugeA}$$$$\begin{aligned} \bar{\delta}_{\epsilon }H_{\mu }^{a} &=&2\left( \bar{D}^{\nu }\right) _{\;\;b}^{a}\epsilon _{\mu \nu }^{b}+\tfrac{\lambda }{2}\varepsilon _{\mu \nu \rho \lambda }\left[ \left( -\tfrac{1}{12}\frac{\partial M_{bcde}}{\partial \varphi _{a}}A^{c\nu }+\frac{\partial f_{bde}^{A}}{\partial \varphi _{a}}V_{A}^{\nu }\right) A^{d\rho }\right. \notag \\ &&\left. +\frac{\partial g_{be}^{AB}}{\partial \varphi _{a}}V_{A}^{\nu }V_{B}^{\rho }\right] A^{e\lambda }\epsilon ^{b}+\lambda \left( -\frac{\partial W_{bc}}{\partial \varphi _{a}}H_{\mu }^{c}+\frac{\partial f_{bB}^{A}}{\partial \varphi _{a}}V_{A}^{\nu }V_{\mu \nu }^{B}\right) \epsilon ^{b} \notag \\ &&-\frac{\partial \left( D^{\nu }\right) _{\;\;b}^{d}}{\partial \varphi _{a}}B_{d\mu \nu }\epsilon ^{b}-\tfrac{3\lambda }{2}\frac{\partial M_{cd}^{b}}{\partial \varphi _{a}}A^{c\nu }A^{d\rho }\epsilon _{b\mu \nu \rho }+2\lambda \frac{\partial M^{bc}}{\partial \varphi _{a}}B_{c\mu \nu }\varepsilon ^{\nu \alpha \beta \gamma }\epsilon _{b\alpha \beta \gamma } \notag \\ &&+\tfrac{\lambda }{4}\left( \frac{\partial f_{Ac}^{b}}{\partial \varphi _{a}}V^{A\nu }A^{c\rho }-\tfrac{1}{2}\frac{\partial g_{AB}^{b}}{\partial \varphi _{a}}V^{A\nu }V^{B\rho }\right) \epsilon _{b\mu \nu \rho } \notag \\ &&+\lambda \varepsilon _{\mu \nu \rho \lambda }\left( \frac{\partial f_{bAB}}{\partial \varphi _{a}}V^{B\nu }A^{b\rho }+\tfrac{1}{2}\frac{\partial g_{\quad A}^{BC}}{\partial \varphi _{a}}V_{B}^{\nu }V_{C}^{\rho }\right) \epsilon ^{A\lambda }, \label{gaugeH}\end{aligned}$$$$\bar{\delta}_{\epsilon }\varphi _{a}=-\lambda W_{ab}\epsilon ^{b}, \label{gaugefi}$$$$\begin{aligned} \bar{\delta}_{\epsilon }B_{a}^{\mu \nu } &=&-3\left( D_{\rho }\right) _{a}^{\;\;b}\epsilon _{b}^{\mu \nu \rho }+2\lambda W_{ab}\epsilon ^{b\mu \nu }-\lambda \varepsilon ^{\mu \nu \rho \lambda }f_{aAB}V_{\rho }^{B}\epsilon _{\lambda }^{A}-\lambda M_{ab}^{c}B_{c}^{\mu \nu }\epsilon ^{b} \notag \\ &&+\lambda \varepsilon ^{\mu \nu \rho \lambda }\left( \tfrac{1}{8}M_{abcd}A_{\rho }^{c}A_{\lambda }^{d}+f_{Aabc}V_{\rho }^{A}A_{\lambda }^{c}-\tfrac{1}{2}g_{ABab}V_{\rho }^{A}V_{\lambda }^{B}\right) \epsilon ^{b}, \label{gaugeB}\end{aligned}$$$$\begin{aligned} \bar{\delta}_{\epsilon }V_{\mu \nu }^{A} &=&\varepsilon _{\mu \nu \rho \lambda }\left( D^{\rho }\right) _{\;\;B}^{A}\epsilon ^{B\lambda }+\tfrac{\lambda }{12}f_{a}^{A}\epsilon _{\mu \nu }^{a}+\tfrac{\lambda }{4}\left( f_{\;\;b}^{Aa}A^{b\rho }-g^{aAB}V_{B}^{\rho }\right) \epsilon _{a\mu \nu \rho } \notag \\ &&+\lambda \left[ \varepsilon _{\mu \nu \rho \lambda }\left( \tfrac{1}{2}f_{abc}^{A}A^{b\rho }+g_{ac}^{AB}V_{B}^{\rho }\right) A^{c\lambda }\right. \notag \\ &&\left. +f_{aB}^{A}V_{\mu \nu }^{B}+\tfrac{1}{12}f_{\;\;a}^{Ab}B_{b\mu \nu }\right] \epsilon ^{a}, \label{gaugeV2}\end{aligned}$$$$\bar{\delta}_{\epsilon }V_{\mu }^{A}=\lambda f_{aB}^{A}V_{\mu }^{B}\epsilon ^{a}. 
\label{gaugeV1}$$In (\[gaugeA\])–(\[gaugeV1\]) we employed the following notations for the various types of (generalized) covariant derivatives: $$\begin{aligned} \left( \bar{D}^{\mu }\right) _{\;\;b}^{a} &=&\delta _{b}^{a}\partial ^{\mu }-\lambda \left( \frac{\partial W_{bc}}{\partial \varphi _{a}}A^{c\mu }-\tfrac{1}{12}\frac{\partial f_{Ab}}{\partial \varphi _{a}}V^{A\mu }\right) , \label{dbarab} \\ \left( D_{\mu }\right) _{\;\;b}^{a} &=&\delta _{b}^{a}\partial _{\mu }-\lambda M_{bc}^{a}A_{\mu }^{c}-\tfrac{\lambda }{12}f_{Ab}^{a}V_{\mu }^{A}, \label{dabdir} \\ \left( D_{\mu }\right) _{a}^{\;\;b} &=&\delta _{a}^{b}\partial _{\mu }+\lambda \left( M_{ac}^{b}A_{\mu }^{c}+\tfrac{1}{12}f_{Aa}^{b}V_{\mu }^{A}\right) , \label{dabinv} \\ \left( D^{\mu }\right) _{\;\;B}^{A} &=&\delta _{B}^{A}\partial ^{\mu }-\lambda f_{aB}^{A}A^{a\mu }+\lambda g_{\quad B}^{AC}V_{C}^{\mu }. \label{dAB}\end{aligned}$$It is interesting to note that the gauge transformations of all the fields are modified by the deformation procedure. Also, the gauge transformations of the BF fields $H_{\mu }^{a}$ and $B_{a}^{\mu \nu }$ involve the gauge parameters $\epsilon ^{A\lambda }$, which are specific to the two-form sector. Similarly, the gauge transformations of $V_{\mu \nu }^{A}$ and $V_{\mu }^{A}$ include pure BF gauge parameters. In contrast to the standard non-Abelian Freedman-Townsend model, where the vector fields $V_{\mu }^{A}$ are gauge-invariant, here these fields gain nonvanishing gauge transformations, proportional to the BF gauge parameters $\epsilon ^{a}$. The nonvanishing commutators among the deformed gauge transformations result from the terms quadratic in the ghosts with pure ghost number $1$ present in (\[defsolmast\]). The concrete forms of the gauge generators and of the corresponding nonvanishing commutators are given in Appendices \[appendixB\] and \[appendixD\], respectively (see relations (\[g1\])–(\[g6d\]) and (\[co1\])–(\[co19\]), respectively). With the help of these relations we observe that the original Abelian gauge algebra is deformed into an open one, meaning that the commutators among the gauge transformations close only on-shell, i.e. on the field equations resulting from the deformed Lagrangian action (\[ldef\]). The deformed gauge generators remain reducible of order two, just like the original ones, but the reducibility relations of order one and two now hold only on the field equations resulting from the deformed Lagrangian action (on-shell reducibility). The expressions of the reducibility functions and relations are given in detail in Appendix \[appendixC\] (see formulas (\[r1\])–(\[r25\])). They are deduced from certain elements in (\[defsolmast\]) that are linear in the ghosts with pure ghost number greater than or equal to $2$. We recall that the entire gauge structure of the interacting model is controlled by the functions $W_{ab}$, $M_{bc}^{a}$, $M_{abcd}$, $M^{ab}$, $f_{abc}^{A}$, $g_{ab}^{AB}$, $f_{a}^{A}$, $f_{aB}^{A}$, $f_{\;\;a}^{Ab}$, $g_{\quad C}^{AB}$, and $g^{aAB}$, which are restricted to satisfy equations (\[eqs1\])–(\[eqs4\]). Thus, our procedure is consistent provided these equations are shown to possess solutions. We give below some classes of solutions to (\[eqs1\])–(\[eqs4\]), without pretending to exhaust all possibilities.
- **Type I solutions** A first class of solutions to equations (\[eqs1\]) is given by $$M_{ab}^{c}=\frac{\partial W_{ab}}{\partial \varphi _{c}},\quad M_{abcd}=f_{e[ab}\frac{\partial W_{cd]}}{\partial \varphi _{e}},\quad M^{ab}=0, \label{z3}$$where $f_{eab}$ are arbitrary, antisymmetric constants and the functions $W_{ab}$ are required to fulfill the equations $$W_{e[a}\frac{\partial W_{bc]}}{\partial \varphi _{e}}=0. \label{z5}$$We remark that all the nonvanishing solutions are parameterized by the antisymmetric functions $W_{ab}$. As in the pure BF case [@defBFjhep], we can interpret the functions $W_{ab}$ as the components of a two-tensor on a Poisson manifold with the target space locally parameterized by the scalar fields $\varphi _{e}$. Consequently, the first and third equations among (\[eqs2\]) are verified if we take $$f_{aB}^{A}=\lambda _{\;\;B}^{A}f_{a},\quad f_{a}^{A}=\tau ^{A}k^{c}W_{ac},\quad f_{\;\;b}^{Aa}=-\tfrac{1}{2}\tau ^{A}k^{c}\frac{\partial W_{bc}}{\partial \varphi _{a}}, \label{qq1}$$where $f_{a}$ are arbitrary functions of $\varphi _{b}$, $k^{c}$ stand for some arbitrary constants, and $\tau ^{A}$ and $\lambda _{\;\;B}^{A}$ ($\lambda ^{AB}=-\lambda ^{BA}$, $\lambda ^{AB}=k^{AC}\lambda _{\;\;C}^{B}$) represent some constants subject to the conditions $$\lambda _{\;\;B}^{A}\tau ^{B}=0. \label{qq2}$$Inserting (\[qq1\]) into the second equation from (\[eqs2\]), we obtain $$g_{AB}^{a}=\tfrac{1}{2}g_{ABC}\tau ^{C}k^{a}+\mu _{AB}\nu ^{a}, \label{qq3}$$where $\mu _{AB}$ are some arbitrary, antisymmetric constants and $\nu ^{a}\left( \varphi \right) $ are null vectors of $W_{ab}$ (if the matrix of elements $W_{ab}$ is degenerate), i.e. $$W_{ab}\nu ^{a}=0. \label{qq4}$$In the presence of the previous solutions, the fourth equation from (\[eqs2\]) is solved by $$f_{abc}^{A}=\tfrac{1}{4\cdot 4!}\tau ^{A}k^{d}f_{e[ab}\frac{\partial W_{cd]}}{\partial \varphi _{e}}. \label{qq5}$$Due to the last relation in (\[z3\]), it is easy to see that the fifth equation from (\[eqs2\]) is now automatically satisfied. Next, we investigate equations (\[eqs3\]). The first of these equations is satisfied if we make the choice $$f_{a}=\bar{k}^{b}W_{ab}, \label{qq6}$$with $\bar{k}^{b}$ some arbitrary constants. The next equation from (\[eqs3\]) is fulfilled by $$g_{ABC}=C_{ABC}(1+\chi ),\quad \lambda _{\;\;B}^{A}=C_{CB}^{\quad A}\tau ^{C},\quad k^{a}=\bar{k}^{a}, \label{qq7}$$where $\chi \left( \varphi \right) $ has the property $$W_{ab}\frac{\partial \chi }{\partial \varphi _{b}}=0 \label{qq8}$$(if $W_{ab}$ allows for nontrivial null vectors) and the completely antisymmetric constants $C_{ABC}$ are imposed to satisfy the Jacobi identity $$C_{EA[B}C_{DC]}^{\quad E}=0. \label{qq9}$$Now, the third equation from (\[eqs3\]) is automatically verified by the last relation in (\[z3\]). The solution to the fourth equation reads as $$g_{ab}^{AB}=C^{ABC}\tau _{C}W_{ab},\quad \mu _{AB}=0. \label{qq10}$$So far we have determined all the unknown functions. The above solutions also fulfill the remaining equations from (\[eqs3\]) and the first three in (\[eqs4\]). However, the last equation present in (\[eqs4\]) produces the restriction $$C^{E[AB}C^{CD]F}\tau _{E}\tau _{F}=0.
\label{qq11}$$The last equation possesses at least two different types of solutions, namely $$C^{ABC}=\varepsilon ^{ijk}e_{i}^{A}e_{j}^{B}e_{k}^{C},\quad i,j,k=1,2,3 \label{qq12}$$and $$C^{ABC}=\varepsilon ^{\bar{A}\bar{B}\bar{C}}l_{\bar{A}}^{A}l_{\bar{B}}^{B}l_{\bar{C}}^{C},\quad \bar{A},\bar{B},\bar{C}=1,2,3,4, \label{qq13}$$respectively, where $e_{i}^{A}$ and $l_{\bar{A}}^{A}$ are all constants and $\varepsilon ^{ijk}$ together with $\varepsilon ^{\bar{A}\bar{B}\bar{C}}$ are completely antisymmetric symbols. These symbols are defined via the conventions $\varepsilon ^{123}=+1$ and $\varepsilon ^{124}=\varepsilon ^{134}=\varepsilon ^{234}=+1$, respectively. It is straightforward to see that the quantities $C^{ABC}$ given by either of the relations (\[qq12\]) or (\[qq13\]) indeed satisfy (\[qq9\]). By assembling the previous results, we find that the type I solutions to equations (\[eqs1\])–(\[eqs4\]) are expressed via relations (\[z3\]), (\[qq5\]), and $$\begin{gathered} f_{aB}^{A}=C_{DB}^{\quad A}\tau ^{D}k^{b}W_{ab},\quad f_{a}^{A}=\tau ^{A}k^{c}W_{ac}, \label{qq14} \\ \,f_{\;\;b}^{Aa}=-\tfrac{1}{2}\tau ^{A}k^{c}\frac{\partial W_{bc}}{\partial \varphi _{a}},\quad g_{ABC}=C_{ABC}(1+\chi ), \label{qq14a} \\ g_{AB}^{a}=\tfrac{1}{2}C_{ABC}(1+\chi )\tau ^{C}k^{a},\quad g_{ab}^{AB}=C^{ABC}\tau _{C}W_{ab}, \label{qq15}\end{gathered}$$where $\tau ^{A}$ and $k^{a}$ represent some arbitrary constants, $W_{ab}$ are assumed to satisfy equations (\[z5\]), and $\chi $ is subject to (\[qq8\]) (if the matrix of elements $W_{ab}$ is degenerate). The antisymmetric constants $C^{ABC}$ are required to verify relations (\[qq11\]) (which ensure that (\[qq9\]) are automatically satisfied). Two sets of solutions to (\[qq11\]) (and hence also to (\[qq9\])) are provided by formulas (\[qq12\]) and (\[qq13\]). - **Type II solutions** Another set of solutions to equations (\[eqs1\]) can be written as $$W_{ab}=0,\quad M_{ab}^{c}=C_{\;\;ab}^{c}\hat{M},\quad M_{abcd}=0,\quad M^{ab}=\mu ^{ab}M, \label{z7}$$with $\hat{M}$ and $M$ arbitrary functions of the undifferentiated scalar fields. The coefficients $\mu ^{ab}$ represent the elements of the inverse of the Killing metric $\bar{\mu}_{ad}$ of a semi-simple Lie algebra with the structure constants $C_{\;\;ab}^{c}$ ($\bar{\mu}_{ad}\mu ^{de}=\delta _{a}^{e}$), where, in addition, $C_{abc}=\bar{\mu}_{ad}C_{\;\;bc}^{d}$ must be completely antisymmetric. Under these circumstances, the first equation from (\[eqs2\]) is solved if we take $$f_{aB}^{A}=\tilde{\lambda}_{\;\;B}^{A}\hat{f}_{a},\quad f_{a}^{A}=\sigma ^{A}\bar{f}_{a}, \label{qq16}$$where $\hat{f}_{a}$ and $\bar{f}_{a}$ are arbitrary functions of the undifferentiated scalar fields, and $\tilde{\lambda}_{\;\;B}^{A}$ as well as $\sigma ^{A}$ are some constants that must satisfy the relations $$\tilde{\lambda}_{\;\;B}^{A}\sigma ^{B}=0. \label{qq17}$$Then, the second equation from (\[eqs2\]) implies that $g_{AB}^{\quad C}$ is restricted to fulfill the condition $$g_{AB}^{\quad C}\sigma _{C}=0. \label{qq18}$$Replacing the above solutions into the third equation from (\[eqs2\]), we get the relation $$f_{\;\;b}^{Aa}=\sigma ^{A}C_{\;\;bc}^{a}\frac{\partial P}{\partial \varphi _{c}},\quad f_{abc}^{A}=\sigma ^{A}C_{abc}N, \label{qq19}$$where $P$ and $N$ are functions of the undifferentiated scalar fields, with $N$ restricted to verify the equation $$\bar{f}_{a}\frac{\partial \hat{M}}{\partial \varphi _{a}}+4\cdot 4!NM=0.
\label{qq20}$$Having in mind the solutions deduced until now, we find that the fourth equation from (\[eqs2\]) is automatically checked and the last equation in (\[eqs2\]) constrains the function $M$ to be constant (for the sake of simplicity, we take this constant to be equal to unity) $$M=1. \label{qq21}$$The first and the third equations from (\[eqs3\]) immediately yield $\hat{f}_{a}=0$, which further leads to $f_{aB}^{A}=0$. Under these circumstances, the second equation entering (\[eqs3\]) is identically satisfied and the fourth equation from the same formula possesses the solution $$g_{ab}^{AB}=C_{abc}\bar{\lambda}^{AB}\frac{\partial Q}{\partial \varphi _{c}}, \label{qq22}$$where $Q$ is an arbitrary function of the undifferentiated scalar fields and $\bar{\lambda}^{AB}$ denote some arbitrary, completely antisymmetric constants. Substituting the solutions deduced so far into the last equation from (\[eqs3\]), we get $$g_{AB}^{a}=\bar{\lambda}_{AB}\frac{\partial g}{\partial \varphi _{a}}, \label{qq23}$$where $g$ is a function of the undifferentiated scalar fields that is restricted to fulfill the equation $$\frac{\partial Q}{\partial \varphi _{a}}=\tfrac{1}{2\cdot 4!}\hat{M}\frac{\partial g}{\partial \varphi _{a}}. \label{qq24}$$The first equation from (\[eqs4\]) exhibits the solution $$g_{ABC}=\sigma _{\lbrack A}\hat{\lambda}_{B]C}\hat{\Phi}, \label{qq25}$$with $\hat{\Phi}$ an arbitrary function of the undifferentiated scalar fields and $\hat{\lambda}_{BC}$ some arbitrary, completely antisymmetric constants, which check the relations $$\hat{\lambda}_{BC}\sigma ^{C}=0. \label{qq26}$$Relations (\[qq26\]) ensure that equation (\[qq18\]) is verified. The second equation from (\[eqs4\]) displays a solution of the form $$\bar{\lambda}^{AB}=\sigma ^{\lbrack A}\hat{\lambda}^{B]C}\beta _{C}, \label{qq27}$$with $\beta _{C}$ some constants. The remaining equations entering ([eqs4]{}) are now identically verified. Putting together the results obtained until now, it follows that the type II solutions to equations (\[eqs1\])–(\[eqs4\]) can be written as $$\begin{gathered} W_{ab}=0,\quad M_{ab}^{c}=C_{\;\;ab}^{c}\hat{M},\quad M_{abcd}=0,\quad M^{ab}=\mu ^{ab}, \label{qq28} \\ f_{aB}^{A}=0,\quad f_{a}^{A}=\sigma ^{A}\bar{f}_{a},\quad f_{\;\;b}^{Aa}=\sigma ^{A}C_{\;\;bc}^{a}\frac{\partial P}{\partial \varphi _{c}}, \label{qq29} \\ f_{abc}^{A}=-\tfrac{1}{4\cdot 4!}\sigma ^{A}C_{abc}\bar{f}_{d}\frac{\partial \hat{M}}{\partial \varphi _{d}},\quad g_{ab}^{AB}=\tfrac{1}{2\cdot 4!}C_{abc}\sigma ^{\lbrack A}\hat{\lambda}^{B]C}\beta _{C}\hat{M}\frac{\partial g}{\partial \varphi _{c}}, \label{qq30} \\ g_{AB}^{a}=\sigma _{\lbrack A}\hat{\lambda}_{B]C}\beta ^{C}\frac{\partial g}{\partial \varphi _{a}},\quad g_{ABC}=\sigma _{\lbrack A}\hat{\lambda}_{B]C}\hat{\Phi}. \label{qq31}\end{gathered}$$We recall that $\hat{M}$, $\bar{f}_{a}$, $P$, $g$, and $\hat{\Phi}$ are arbitrary functions of the undifferentiated scalar fields and $\beta _{C}$, $\hat{\lambda}_{BC}$, and $\sigma ^{C}$ are some constants. In addition, the last two sets of constants are imposed to fulfill equation (\[qq26\]). The quantities $\mu ^{ab}$ are the elements of the inverse of the Killing metric of a semi-simple Lie algebra with the structure constants $C_{\;\;ab}^{c}$, where $C_{abc}$ must be completely antisymmetric. 
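As a concrete instance of the type II class (this is merely an illustration of ours and is not part of the original analysis), one may take a BF collection with three members, $a,b,\ldots =1,2,3$, and choose the su(2) structure constants $C_{\;\;ab}^{c}=\varepsilon _{abc}$. With the standard trace-over-the-adjoint definition of the Killing metric one finds $$\bar{\mu}_{ab}=C_{\;\;ad}^{c}C_{\;\;bc}^{d}=-2\delta _{ab},\qquad \mu ^{ab}=-\tfrac{1}{2}\delta ^{ab},$$so $\bar{\mu}_{ab}$ is indeed invertible, and $C_{abc}=\bar{\mu}_{ad}C_{\;\;bc}^{d}=-2\varepsilon _{abc}$ is completely antisymmetric, as required for the type II class.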
- **Type III solutions** The third type of solutions to (\[eqs1\]) is given by $$W_{ab}=0,\quad M_{ab}^{c}=\bar{C}_{\;\;ab}^{c}w,\quad M_{abcd}=\hat{f}_{e[ab}\bar{C}_{\;\;cd]}^{e}q,\quad M^{ab}=0, \label{qq32}$$with $w$ and $q$ arbitrary functions of the undifferentiated scalar fields, $\hat{f}_{eab}$ some arbitrary, antisymmetric constants, and $\bar{C}_{\;\;ab}^{c}$ the structure constants of a Lie algebra. Let us particularize the last solutions to the case where $$\bar{C}_{\;\;ab}^{c}=\hat{k}^{c}\bar{W}_{ab},\quad w\left( \varphi \right) =q\left( \varphi \right) =\frac{d\hat{w}\left( \hat{k}^{m}\varphi _{m}\right) }{d\left( \hat{k}^{n}\varphi _{n}\right) }, \label{z11}$$with $\hat{k}^{c}$ some arbitrary constants, $\hat{w}$ an arbitrary, smooth function depending on $\hat{k}^{m}\varphi _{m}$, and $\bar{W}_{ab}$some antisymmetric constants satisfying the relations $$\bar{W}_{a[b}\bar{W}_{cd]}=0. \label{z12}$$Obviously, equations (\[z12\]) ensure the Jacobi identity for the structure constants $\bar{C}_{\;\;ab}^{c}$. Replacing (\[z11\]) back in (\[qq32\]), we find $$W_{ab}=0,\quad M_{ab}^{c}=\frac{\partial \hat{W}_{ab}}{\partial \varphi _{c}},\quad M_{abcd}=\hat{f}_{e[ab}\frac{\partial \hat{W}_{cd]}}{\partial \varphi _{e}},\quad M^{ab}=0, \label{z13}$$where $$\hat{W}_{ab}=\bar{W}_{ab}\frac{d\hat{w}\left( \hat{k}^{m}\varphi _{m}\right) }{d\left( \hat{k}^{n}\varphi _{n}\right) }. \label{z16}$$Due to (\[z12\]), it is easy to see that $\hat{W}_{ab}$ satisfy the Jacobi identity for a Poisson manifold $$\hat{W}_{e[a}\frac{\partial \hat{W}_{bc]}}{\partial \varphi _{e}}=0. \label{z17}$$Relations (\[z13\]) and (\[z17\]) emphasize that we can generate solutions correlated with a Poisson manifold even if $W_{ab}=0$. In this situation the Poisson two-tensor results from a Lie algebra (see the first formula in (\[z11\]) and (\[z16\])). It is interesting to remark that the same equations, namely (\[z12\]), ensure the Jacobi identities for both the Lie algebra and the corresponding Poisson manifold. These equations possess at least two types of solutions, namely $$\bar{W}_{ab}=\varepsilon _{ijk}e_{a}^{i}e_{b}^{j}e_{c}^{k}\rho ^{c},\quad i,j,k=1,2,3 \label{qq33}$$and $$\bar{W}_{ab}=\varepsilon _{\bar{a}\bar{b}\bar{c}}l_{a}^{\bar{a}}l_{b}^{\bar{b}}l_{c}^{\bar{c}}\bar{\rho}^{c},\quad \bar{a},\bar{b},\bar{c}=1,2,3,4, \label{qq34}$$where $e_{a}^{i}$, $\rho ^{c}$, $l_{a}^{\bar{a}}$, and $\bar{\rho}^{c}$ are all constants and $\varepsilon _{ijk}$ together with $\varepsilon _{\bar{a}\bar{b}\bar{c}}$ are completely antisymmetric symbols, defined via the conventions $\varepsilon _{123}=+1$ and $\varepsilon _{124}=\varepsilon _{134}=\varepsilon _{234}=+1$, respectively. If we tackle the remaining equations in a manner similar to that employed at the previous cases, we infer that the third type of solutions to (\[eqs1\])–(\[eqs4\]) is expressed by (\[z13\]) and $$\begin{gathered} f_{aB}^{A}=m_{\;\;B}^{A}\hat{k}^{b}\bar{W}_{ab}\Omega ,\quad f_{a}^{A}=0,\quad f_{\;\;b}^{Aa}=-\bar{\lambda}^{A}\tilde{k}^{c}\frac{\partial \hat{W}_{bc}}{\partial \varphi _{a}}, \label{qq35} \\ f_{abc}^{A}=\bar{\lambda}^{A}\left( \hat{u}_{[a}\hat{W}_{bc]}+\tfrac{1}{2\cdot 4!}\tilde{k}^{d}\hat{f}_{e[ab}\frac{\partial \hat{W}_{cd]}}{\partial \varphi _{e}}\right) , \label{qq36} \\ g_{ab}^{AB}=\bar{\lambda}^{[A}m^{B]C}\bar{\beta}_{C}\bar{W}_{ab}\hat{Q},\quad g_{AB}^{a}=0,\quad g_{ABC}=\bar{\lambda}_{[A}m_{B]C}\hat{P}. 
\label{qq37}\end{gathered}$$In the above $\hat{k}^{b}$, $\tilde{k}^{a}$, $\bar{\beta}_{C}$, $\hat{f}_{eab}$, $\bar{\lambda}^{A}$, $\bar{W}_{ab}$ ($\bar{W}_{ab}=-\bar{W}_{ba}$), and $m^{AB}$ ($m^{AB}=-m^{BA}$) are some constants, the first four sets being arbitrary (apart from the requirement that $\hat{f}_{eab}$ be completely antisymmetric) and the last three sets being subject to the relations (\[z12\]) and$$m^{AB}\bar{\lambda}_{B}=0. \label{qq38}$$The quantities denoted by $\Omega $, $\hat{u}_{a}$, $\hat{Q}$, and $\hat{P}$ are arbitrary functions of the undifferentiated scalar fields. The functions $\hat{W}_{ab}$ read as in (\[z16\]), with $\hat{w}$ an arbitrary, smooth function depending on $\hat{k}^{m}\varphi _{m}$. If, in particular, we take $\Omega $ and $\hat{Q}$ to be respectively of the form of $w$ and $q$ from (\[z11\]), then we obtain that the functions $f_{aB}^{A}$ and $g_{ab}^{AB}$ will be parameterized by $\hat{W}_{ab}$. Conclusion\[concl\] =================== To conclude, in this paper we have investigated the consistent interactions that can be introduced between a finite collection of BF theories and a finite set of two-form gauge fields (described by a sum of Abelian Freedman-Townsend actions). Starting with the BRST differential for the free theory, we compute the consistent first-order deformation of the solution to the master equation with the help of standard cohomological techniques, and obtain that it is parameterized by $11$ kinds of functions depending on the undifferentiated scalar fields. Next, we investigate the second-order deformation, whose existence imposes certain restrictions on these functions. Based on these restrictions, we show that we can take all the remaining higher-order deformations to vanish. As a consequence of our procedure, we are led to an interacting gauge theory with deformed gauge transformations, a non-Abelian gauge algebra that closes only on-shell, and accompanying on-shell reducibility relations. The deformed action contains, among others, the generalized version of the non-Abelian Freedman-Townsend action. It is interesting to mention that, in contrast to the standard non-Abelian Freedman-Townsend model, where the auxiliary vector fields are gauge-invariant, here these fields gain nonvanishing gauge transformations, proportional to some BF gauge parameters. Finally, we investigate the equations that restrict the functions parameterizing the deformed solution to the master equation and give some particular classes of solutions, which can be suggestively interpreted in terms of Poisson manifolds and/or Lie algebras. Acknowledgment {#acknowledgment .unnumbered} ============== This work has been supported in part by grant CEX-05-D11-49/07.10.2005 with the Romanian Ministry of Education and Research (M.Ed.C.) and by EU contract MRTN-CT-2004-005104. Various notations used in subsection \[highord\] \[appendixA\] ============================================================== The various notations used within formula (\[so2\]) are listed below.
The objects denoted by $\left( K_{,m_{1}\ldots m_{p}}^{abc}\right) _{p=\overline{0,4}}$ are expressed by $$\begin{aligned} K^{abc} &=&\eta ^{a}\eta ^{b}\varphi ^{\ast c}+2\eta ^{a}A^{b\mu }H_{\mu }^{c}+2\left( A^{a\mu }A^{b\nu }-2B^{\ast a\mu \nu }\eta ^{b}\right) C_{\mu \nu }^{c} \notag \\ &&+4\left( \eta ^{a}\eta ^{\ast b\mu \nu \rho }+3B^{\ast a\mu \nu }A^{b\rho }\right) C_{\mu \nu \rho }^{c} \notag \\ &&-4\left( \eta ^{a}\eta ^{\ast b\mu \nu \rho \lambda }+6B^{\ast a\mu \nu }B^{\ast b\rho \lambda }-4\eta ^{\ast a\mu \nu \rho }A^{b\lambda }\right) C_{\mu \nu \rho \lambda }^{c}, \label{xbfn39}\end{aligned}$$$$\begin{aligned} K_{,d}^{abc} &=&\left( 4H_{d}^{\ast \nu }A^{a\mu }\eta ^{b}-C_{d}^{\ast \mu \nu }\eta ^{a}\eta ^{b}\right) C_{\mu \nu }^{c}-H_{d}^{\ast \mu }\eta ^{a}\eta ^{b}H_{\mu }^{c} \notag \\ &&+\left( 6H_{d}^{\ast \rho }A^{a\mu }A^{b\nu }-12H_{d}^{\ast \rho }B^{\ast a\mu \nu }\eta ^{b}+6C_{d}^{\ast \mu \nu }\eta ^{a}A^{b\rho }\right. \notag \\ &&\left. -C_{d}^{\ast \mu \nu \rho }\eta ^{a}\eta ^{b}\right) C_{\mu \nu \rho }^{c}+\left( -48H_{d}^{\ast \lambda }B^{\ast a\mu \nu }A^{b\rho }\right. \notag \\ &&+12C_{d}^{\ast \mu \nu }A^{a\rho }A^{b\lambda }+16H_{d}^{\ast \lambda }\eta ^{\ast a\mu \nu \rho }\eta ^{b}-24C_{d}^{\ast \mu \nu }B^{\ast a\rho \lambda }\eta ^{b} \notag \\ &&\left. -8C_{d}^{\ast \mu \nu \rho }A^{a\lambda }\eta ^{b}-C_{d}^{\ast \mu \nu \rho \lambda }\eta ^{a}\eta ^{b}\right) C_{\mu \nu \rho \lambda }^{c}, \label{xbfn40}\end{aligned}$$$$\begin{aligned} K_{,de}^{abc} &=&-3\left( C_{d}^{\ast \mu \nu }H_{e}^{\ast \rho }\eta ^{a}+2H_{d}^{\ast \mu }H_{e}^{\ast \nu }A^{a\rho }\right) \eta ^{b}C_{\mu \nu \rho }^{c} \notag \\ &&-H_{d}^{\ast \mu }H_{e}^{\ast \nu }\eta ^{a}\eta ^{b}C_{\mu \nu }^{c}+\left( -24H_{d}^{\ast \mu }H_{e}^{\ast \nu }B^{\ast a\rho \lambda }\eta ^{b}\right. \notag \\ &&+12H_{d}^{\ast \mu }H_{e}^{\ast \nu }A^{a\rho }A^{b\lambda }-24C_{d}^{\ast \mu \nu }H_{e}^{\ast \rho }A^{a\lambda }\eta ^{b} \notag \\ &&\left. -3C_{d}^{\ast \mu \nu }C_{e}^{\ast \rho \lambda }\eta ^{a}\eta ^{b}+4C_{d}^{\ast \mu \nu \rho }H_{e}^{\ast \lambda }\eta ^{a}\eta ^{b}\right) C_{\mu \nu \rho \lambda }^{c}, \label{xbfn41}\end{aligned}$$$$\begin{aligned} K_{,def}^{abc} &=&-2\left( 4H_{d}^{\ast \mu }H_{e}^{\ast \nu }H_{f}^{\ast \rho }A^{a\lambda }+3C_{d}^{\ast \mu \nu }H_{e}^{\ast \rho }H_{f}^{\ast \lambda }\eta ^{a}\right) \eta ^{b}C_{\mu \nu \rho \lambda }^{c} \notag \\ &&-H_{d}^{\ast \mu }H_{e}^{\ast \nu }H_{f}^{\ast \rho }\eta ^{a}\eta ^{b}C_{\mu \nu \rho }^{c}, \label{xbfn42}\end{aligned}$$$$K_{,defg}^{abc}=-H_{d}^{\ast \mu }H_{e}^{\ast \nu }H_{f}^{\ast \rho }H_{g}^{\ast \lambda }\eta ^{a}\eta ^{b}C_{\mu \nu \rho \lambda }^{c}. \label{xbfn43}$$The elements $\left( K_{d,m_{1}\ldots m_{p}}^{abc}\right) _{p=\overline{0,4}} $ read as$$\begin{aligned} K_{d}^{abc} &=&\left( -2\eta ^{a}A_{\mu }^{b}A_{\nu }^{c}+B_{\mu \nu }^{\ast a}\eta ^{b}\eta ^{c}\right) B_{d}^{\mu \nu }-A_{\mu }^{a}\eta ^{b}\eta ^{c}A_{d}^{\ast \mu } \notag \\ &&+\left( -A_{\mu }^{a}A_{\nu }^{b}A_{\rho }^{c}+6\eta ^{a}B_{\mu \nu }^{\ast b}A_{\rho }^{c}+\eta ^{b}\eta ^{c}\eta _{\mu \nu \rho }^{\ast a}\right) \eta _{d}^{\mu \nu \rho } \notag \\ &&-\tfrac{1}{3}\eta ^{a}\eta ^{b}\eta ^{c}\eta _{d}^{\ast }+\left( -12A_{\mu }^{a}A_{\nu }^{b}B_{\rho \lambda }^{\ast c}+12\eta ^{a}B_{\mu \nu }^{\ast b}B_{\rho \lambda }^{\ast c}\right. \notag \\ &&\left. 
-8\eta ^{a}\eta _{\mu \nu \rho }^{\ast b}A_{\lambda }^{c}+\eta _{\mu \nu \rho \lambda }^{\ast c}\eta ^{a}\eta ^{b}\right) \eta _{d}^{\mu \nu \rho \lambda }, \label{xbfn44}\end{aligned}$$$$\begin{aligned} K_{d,e}^{abc} &=&\left( H_{e}^{\ast \mu }A^{a\nu }\eta ^{b}\eta ^{c}+\tfrac{1}{6}C_{e}^{\ast \mu \nu }\eta ^{a}\eta ^{b}\eta ^{c}\right) B_{d\mu \nu } \notag \\ &&-\tfrac{1}{3}H_{e}^{\ast \mu }\eta ^{a}\eta ^{b}\eta ^{c}A_{d\mu }^{\ast }+\left( -3H_{e}^{\ast \rho }\eta ^{a}A^{b\mu }A^{c\nu }\right. \notag \\ &&-3H_{e}^{\ast \rho }\eta ^{a}\eta ^{b}B^{\ast c\mu \nu }+\tfrac{3}{2}C_{e}^{\ast \mu \nu }\eta ^{a}\eta ^{b}A^{c\rho } \notag \\ &&\left. +\tfrac{1}{6}C_{e}^{\ast \mu \nu \rho }\eta ^{a}\eta ^{b}\eta ^{c}\right) \eta _{d\mu \nu \rho }+\left( 24A^{a\mu }H_{e}^{\ast \nu }\eta ^{b}B^{\ast c\rho \lambda }\right. \notag \\ &&+4H_{e}^{\ast \lambda }A^{a\mu }A^{b\nu }A^{c\rho }-4H_{e}^{\ast \lambda }\eta ^{a}\eta ^{b}\eta ^{\ast c\mu \nu \rho } \notag \\ &&+6C_{e}^{\ast \mu \nu }\eta ^{a}\eta ^{b}B^{\ast c\rho \lambda }-6C_{e}^{\ast \mu \nu }\eta ^{a}A^{b\rho }A^{c\lambda } \notag \\ &&\left. +8C_{e}^{\ast \mu \nu \rho }\eta ^{a}\eta ^{b}A^{c\lambda }+\tfrac{1}{6}C_{e}^{\ast \mu \nu \rho \lambda }\eta ^{a}\eta ^{b}\eta ^{c}\right) \eta _{d\mu \nu \rho \lambda }, \label{xbfn45}\end{aligned}$$$$\begin{aligned} K_{d,ef}^{abc} &=&\tfrac{1}{6}H_{e}^{\ast \mu }H_{f}^{\ast \nu }\eta ^{a}\eta ^{b}\eta ^{c}B_{d\mu \nu }+\tfrac{3}{2}H_{e}^{\ast \mu }H_{f}^{\ast \nu }\eta ^{a}\eta ^{b}A^{c\rho }\eta _{d\mu \nu \rho } \notag \\ &&+\tfrac{1}{2}C_{e}^{\ast \mu \nu }H_{f}^{\ast \rho }\eta ^{a}\eta ^{b}\eta ^{c}\eta _{d\mu \nu \rho }+\left( 6H_{e}^{\ast \mu }H_{f}^{\ast \nu }\eta ^{a}\eta ^{b}B^{\ast c\rho \lambda }\right. \notag \\ &&-6H_{e}^{\ast \mu }H_{f}^{\ast \nu }\eta ^{a}A^{b\rho }A^{c\lambda }+6C_{e}^{\ast \mu \nu }H_{f}^{\ast \rho }\eta ^{a}\eta ^{b}A^{c\lambda } \notag \\ &&\left. +\tfrac{2}{3}C_{e}^{\ast \mu \nu \rho }H_{f}^{\ast \lambda }\eta ^{a}\eta ^{b}\eta ^{c}+\tfrac{1}{2}C_{e}^{\ast \mu \nu }C_{f}^{\ast \rho \lambda }\eta ^{a}\eta ^{b}\eta ^{c}\right) \eta _{d\mu \nu \rho \lambda }, \label{xbfn46}\end{aligned}$$$$\begin{aligned} K_{d,efg}^{abc} &=&\left( 2H_{e}^{\ast \mu }H_{f}^{\ast \nu }H_{g}^{\ast \rho }\eta ^{a}\eta ^{b}A^{c\lambda }+C_{e}^{\ast \mu \nu }H_{f}^{\ast \rho }H_{g}^{\ast \lambda }\eta ^{a}\eta ^{b}\eta ^{c}\right) \eta _{d\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{6}H_{e}^{\ast \mu }H_{f}^{\ast \nu }H_{g}^{\ast \rho }\eta ^{a}\eta ^{b}\eta ^{c}\eta _{d\mu \nu \rho }, \label{xbfn47}\end{aligned}$$$$K_{d,efgh}^{abc}=\tfrac{1}{6}H_{e}^{\ast \mu }H_{f}^{\ast \nu }H_{g}^{\ast \rho }H_{h}^{\ast \lambda }\eta ^{a}\eta ^{b}\eta ^{c}\eta _{d\mu \nu \rho \lambda }. \label{xbfn48}$$The quantities $\left( K_{m_{1}\ldots m_{p}}^{abcdf}\right) _{p=\overline{0,4}}$, $\left( K_{b,m_{1}\ldots m_{p}}^{a}\right) _{p=\overline{0,4}}$, and $\left( K_{ab,m_{1}\ldots m_{p}}^{c}\right) _{p=\overline{0,4}}$ are given by $$\begin{aligned} K^{abcdf} &=&\tfrac{1}{8}\varepsilon ^{\mu \nu \rho \lambda }\left[ \left( \tfrac{1}{3!}A_{\mu }^{a}A_{\nu }^{b}-B_{\mu \nu }^{\ast a}\eta ^{b}\right) A_{\rho }^{c}A_{\lambda }^{d}+\tfrac{1}{3}\left( B_{\mu \nu }^{\ast a}B_{\rho \lambda }^{\ast b}\right. \right. \notag \\ &&\left. \left. 
-\tfrac{2}{3}\eta _{\mu \nu \rho }^{\ast a}A_{\lambda }^{b}+\tfrac{1}{4!}\eta _{\mu \nu \rho \lambda }^{\ast a}\eta ^{b}\right) \eta ^{c}\eta ^{d}\right] \eta ^{f}, \label{bfk1}\end{aligned}$$$$\begin{aligned} K_{e}^{abcdf} &=&\tfrac{1}{4!}\varepsilon ^{\mu \nu \rho \lambda }\left[ \tfrac{1}{2}\left( \tfrac{1}{5!}C_{e\mu \nu \rho \lambda }^{\ast }\eta ^{a}+\tfrac{1}{3!}C_{e\mu \nu \rho }^{\ast }A_{\lambda }^{a}+\tfrac{1}{2}C_{e\mu \nu }^{\ast }B_{\rho \lambda }^{\ast a}\right. \right. \notag \\ &&\left. +\tfrac{1}{3}H_{e\mu }^{\ast }\eta _{\nu \rho \lambda }^{\ast a}\right) \eta ^{b}\eta ^{c}-H_{e\mu }^{\ast }\left( A_{\nu }^{a}A_{\rho }^{b}-2B_{\nu \rho }^{\ast a}\eta ^{b}\right) A_{\lambda }^{c} \notag \\ &&\left. -\tfrac{1}{2}C_{e\mu \nu }^{\ast }A_{\rho }^{a}A_{\lambda }^{b}\eta ^{c}\right] \eta ^{d}\eta ^{f}, \label{bfk2}\end{aligned}$$$$\begin{aligned} K_{eg}^{abcdf} &=&\tfrac{1}{2\cdot 4!}\varepsilon ^{\mu \nu \rho \lambda }\left[ \tfrac{1}{2}\left( \tfrac{1}{15}H_{e\mu }^{\ast }C_{g\nu \rho \lambda }^{\ast }\eta ^{a}+\tfrac{1}{20}C_{e\mu \nu }^{\ast }C_{g\rho \lambda }^{\ast }\eta ^{a}+H_{e\mu }^{\ast }C_{g\nu \rho }^{\ast }A_{\lambda }^{a}\right) \eta ^{b}\right. \notag \\ &&\left. -H_{e\mu }^{\ast }H_{g\nu }^{\ast }\left( A_{\rho }^{a}A_{\lambda }^{b}-2B_{\rho \lambda }^{\ast a}\eta ^{b}\right) \right] \eta ^{c}\eta ^{d}\eta ^{f}, \label{bfk3}\end{aligned}$$$$K_{egh}^{abcdf}=\tfrac{1}{4\cdot 4!}\varepsilon ^{\mu \nu \rho \lambda }H_{e\mu }^{\ast }H_{g\nu }^{\ast }\left( \tfrac{1}{10}C_{h\rho \lambda }^{\ast }\eta ^{a}+\tfrac{1}{3}H_{h\rho }^{\ast }A_{\lambda }^{a}\right) \eta ^{b}\eta ^{c}\eta ^{d}\eta ^{f}, \label{bfk4}$$$$K_{eghl}^{abcdf}=\tfrac{1}{2\cdot 4!\cdot 5!}\varepsilon ^{\mu \nu \rho \lambda }H_{e\mu }^{\ast }H_{g\nu }^{\ast }H_{h\rho }^{\ast }H_{l\lambda }^{\ast }\eta ^{a}\eta ^{b}\eta ^{c}\eta ^{d}\eta ^{f}, \label{bfk5}$$$$\begin{aligned} K_{b}^{a} &=&4\varepsilon ^{\mu \nu \rho \lambda }\left[ 2\left( -C_{\mu \nu \rho \lambda }^{a}\eta _{b}^{\ast }+C_{\mu \nu \rho }^{a}A_{b\lambda }^{\ast }\right) +C_{\mu \nu }^{a}B_{b\rho \lambda }\right. \notag \\ &&\left. -\left( \varphi ^{\ast a}\eta _{b\mu \nu \rho \lambda }-H_{\mu }^{a}\eta _{b\nu \rho \lambda }\right) \right] , \label{bfk7}\end{aligned}$$$$\begin{aligned} K_{b,c}^{a} &=&4\varepsilon ^{\mu \nu \rho \lambda }\left[ \eta _{b\mu \nu \rho \lambda }\left( C_{\sigma \tau \kappa \varsigma }^{a}C_{c}^{\ast \sigma \tau \kappa \varsigma }+C_{\sigma \tau \kappa }^{a}C_{c}^{\ast \sigma \tau \kappa }+C_{\sigma \tau }^{a}C_{c}^{\ast \sigma \tau }\right. \right. \notag \\ &&\left. +H_{\sigma }^{a}H_{c}^{\ast \sigma }\right) +C_{\mu \nu \rho \lambda }^{a}\left( \eta _{b\sigma \tau \kappa }C_{c}^{\ast \sigma \tau \kappa }+B_{b\sigma \tau }C_{c}^{\ast \sigma \tau }-2A_{b\sigma }^{\ast }H_{c}^{\ast \sigma }\right) \notag \\ &&\left. +\eta _{b\nu \rho \lambda }\left( 3C_{\mu \sigma \tau }^{a}C_{c}^{\ast \sigma \tau }-2C_{\mu \sigma }^{a}H_{c}^{\ast \sigma }\right) +3B_{b\rho \lambda }C_{\mu \nu \sigma }^{a}H_{c}^{\ast \sigma } \right] , \label{bfk8}\end{aligned}$$$$\begin{aligned} K_{b,cd}^{a} &=&4\varepsilon ^{\mu \nu \rho \lambda }\left[ \eta _{b\mu \nu \rho \lambda }\left( C_{\sigma \tau \kappa \varsigma }^{a}\left( 4H_{c}^{\ast \sigma }C_{d}^{\ast \tau \kappa \varsigma }+3C_{c}^{\ast \sigma \tau }C_{d}^{\ast \kappa \varsigma }\right) \right. \right. \notag \\ &&\left. +3C_{\sigma \tau \kappa }^{a}H_{c}^{\ast \sigma }C_{d}^{\ast \tau \kappa }+C_{\sigma \tau }^{a}H_{c}^{\ast \sigma }H_{d}^{\ast \tau }\right) \notag \\ &&\left. 
+C_{\mu \nu \rho \lambda }^{a}\left( 3\eta _{b\sigma \tau \kappa }H_{c}^{\ast \sigma }C_{d}^{\ast \tau \kappa }+B_{b\sigma \tau }H_{c}^{\ast \sigma }H_{d}^{\ast \tau }\right) \right] , \label{bfk9}\end{aligned}$$$$\begin{aligned} K_{b,cde}^{a} &=&4\varepsilon ^{\mu \nu \rho \lambda }\left[ \eta _{b\mu \nu \rho \lambda }\left( 6C_{\sigma \tau \kappa \varsigma }^{a}H_{c}^{\ast \sigma }H_{d}^{\ast \tau }C_{e}^{\ast \kappa \varsigma }+C_{\sigma \tau \kappa }^{a}H_{c}^{\ast \sigma }H_{d}^{\ast \tau }H_{e}^{\ast \kappa }\right) \right. \notag \\ &&\left. +C_{\mu \nu \rho \lambda }^{a}\eta _{b\sigma \tau \kappa }H_{c}^{\ast \sigma }H_{d}^{\ast \tau }H_{e}^{\ast \kappa }\right] , \label{bfk10}\end{aligned}$$$$K_{b,cdef}^{a}=4\varepsilon _{\mu \nu \rho \lambda }\eta _{b}^{\mu \nu \rho \lambda }C_{\sigma \tau \kappa \varsigma }^{a}H_{c}^{\ast \sigma }H_{d}^{\ast \tau }H_{e}^{\ast \kappa }H_{f}^{\ast \varsigma }, \label{bfk11}$$$$\begin{aligned} K_{ab}^{c} &=&\varepsilon _{\mu \nu \rho \lambda }\left[ -6\left( \eta _{a}^{\mu \nu \sigma }B_{b}^{\rho \lambda }A_{\sigma }^{c}+3\eta _{a}^{\mu \sigma \tau }\eta _{b\sigma \tau }^{\nu }B^{\ast c\rho \lambda }\right) \right. \notag \\ &&-2\eta _{a}^{\mu \nu \rho \lambda }\left( \eta _{b}^{\sigma \tau \kappa \varsigma }\eta _{\sigma \tau \kappa \varsigma }^{\ast c}+2\eta _{b}^{\sigma \tau \kappa }\eta _{\sigma \tau \kappa }^{\ast c}+2B_{b}^{\sigma \tau }B_{\sigma \tau }^{\ast c}\right. \notag \\ &&\left. \left. -2A_{b}^{\ast \sigma }A_{\sigma }^{c}-2\eta _{b}^{\ast }\eta ^{c}\right) +4\eta _{a}^{\mu \nu \rho }A_{b}^{\ast \lambda }\eta ^{c}-B_{a}^{\mu \nu }B_{b}^{\rho \lambda }\eta ^{c}\right] , \label{bfk12}\end{aligned}$$$$\begin{aligned} K_{ab,d}^{c} &=&\varepsilon _{\mu \nu \rho \lambda }\left[ -9\eta _{a}^{\mu \sigma \tau }\eta _{b\sigma \tau }^{\nu }\left( \eta ^{c}C_{d}^{\ast \rho \lambda }-2A^{c\rho }H_{d}^{\ast \lambda }\right) \right. \notag \\ &&-\eta _{a}^{\sigma \tau \kappa \varsigma }\eta _{b\sigma \tau \kappa \varsigma }\left( \eta ^{c}C_{d}^{\ast \mu \nu \rho \lambda }+4C_{d}^{\ast \mu \nu \rho }A^{c\lambda }\right. \notag \\ &&\left. +12C_{d}^{\ast \mu \nu }B^{\ast c\rho \lambda }+8H_{d}^{\ast \mu }\eta ^{\ast c\nu \rho \lambda }\right) +6\eta _{a}^{\mu \nu \sigma }B_{b}^{\rho \lambda }\eta ^{c}H_{d\sigma }^{\ast } \notag \\ &&-2\eta _{a}^{\mu \nu \rho \lambda }\left( \eta _{b}^{\sigma \tau \kappa }\left( \eta ^{c}C_{d\sigma \tau \kappa }^{\ast }+3A_{\kappa }^{c}C_{d\sigma \tau }^{\ast }-6B_{\tau \kappa }^{\ast c}H_{d\sigma }^{\ast }\right) \right. \notag \\ &&\left. \left. +2A_{b}^{\ast \sigma }\eta ^{c}H_{d\sigma }^{\ast }+B_{b}^{\sigma \tau }\left( \eta ^{c}C_{d\sigma \tau }^{\ast }+2A_{\tau }^{c}H_{d\sigma }^{\ast }\right) \right) \right] , \label{bfk13}\end{aligned}$$$$\begin{aligned} K_{ab,de}^{c} &=&-\varepsilon _{\mu \nu \rho \lambda }\left[ 2\eta _{a}^{\mu \nu \rho \lambda }\left( 3\eta _{b}^{\sigma \tau \kappa }\left( H_{d\sigma }^{\ast }C_{e\tau \kappa }^{\ast }\eta ^{c}+H_{d\sigma }^{\ast }H_{e\tau }^{\ast }A_{\kappa }^{c}\right) \right. \right. \notag \\ &&\left. +B_{b}^{\sigma \tau }H_{d\sigma }^{\ast }H_{e\tau }^{\ast }\eta ^{c}\right) +\eta _{a}^{\sigma \tau \kappa \varsigma }\eta _{b\sigma \tau \kappa \varsigma }\left( \left( 4H_{d}^{\ast \mu }C_{e}^{\ast \nu \rho \lambda }\right. \right. \notag \\ &&\left. \left. +3C_{d}^{\ast \mu \nu }C_{e}^{\ast \rho \lambda }\right) \eta ^{c}+12H_{d}^{\ast \mu }C_{e}^{\ast \nu \rho }A^{c\lambda }+12H_{d}^{\ast \mu }H_{e}^{\ast \nu }B^{\ast c\rho \lambda }\right) \notag \\ &&\left. 
+9\eta _{a}^{\mu \sigma \tau }\eta _{b\sigma \tau }^{\nu }H_{d}^{\ast \rho }H_{e}^{\ast \lambda }\eta ^{c}\right] , \label{bfk14}\end{aligned}$$$$\begin{aligned} K_{ab,def}^{c} &=&-2\varepsilon _{\mu \nu \rho \lambda }\left[ \eta _{a}^{\sigma \tau \kappa \varsigma }\eta _{b\sigma \tau \kappa \varsigma }\left( 3H_{d}^{\ast \mu }H_{e}^{\ast \nu }C_{f}^{\ast \rho \lambda }\eta ^{c}+2H_{d}^{\ast \mu }H_{e}^{\ast \nu }H_{f}^{\ast \rho }A^{c\lambda }\right) \right. \notag \\ &&\left. +\eta _{a}^{\mu \nu \rho \lambda }\eta _{b}^{\sigma \tau \kappa }H_{d\sigma }^{\ast }H_{e\tau }^{\ast }H_{f\kappa }^{\ast }\eta ^{c}\right] , \label{bfk15}\end{aligned}$$$$K_{ab,defg}^{c}=-\varepsilon _{\mu \nu \rho \lambda }\eta _{a}^{\sigma \tau \kappa \varsigma }\eta _{b\sigma \tau \kappa \varsigma }H_{d}^{\ast \mu }H_{e}^{\ast \nu }H_{f}^{\ast \rho }H_{g}^{\ast \lambda }\eta ^{c}. \label{bfk16}$$ Next, we identify the various notations employed in formula (\[so3\]). The polynomials $X_{A,m_{1}\ldots m_{p}}^{abB}$, $X_{A,m_{1}\ldots m_{p}}^{abcd}$, $X_{A,m_{1}\ldots m_{p}}^{ab}$, $X_{Ac,m_{1}\ldots m_{p}}^{ab}$, $X_{Aab,m_{1}\ldots m_{p}}$, and $X_{a,m_{1}\ldots m_{p}}^{AB}$, with $p=\overline{0,3}$, can be written as $$\begin{aligned} X_{A}^{abB} &=&\left( C_{A}^{\ast }\eta ^{a}-2C_{A}^{\ast \mu }A_{\mu }^{a}\right) \eta ^{b}C^{B}+C_{A}^{\ast \mu }\eta ^{a}\eta ^{b}C_{\mu }^{B} \notag \\ &&+\tfrac{2}{3}\left( V_{A}^{\mu }\eta ^{\ast a\nu \rho \lambda }-3V_{A}^{\ast \mu \nu }B^{\ast a\rho \lambda }\right) \eta ^{b}C^{B}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&-2\left( V_{A}^{\mu }B^{\ast a\nu \rho }-V_{A}^{\ast \mu \nu }A^{a\rho }\right) A^{b\lambda }C^{B}\varepsilon _{\mu \nu \rho \lambda }+2V_{A}^{\ast \mu \nu }A^{a\rho }\eta ^{b}C^{B\lambda }\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+\left( V_{A}^{\ast \mu \nu }V_{\mu \nu }^{B}+V_{A}^{\ast \mu }V_{\mu }^{B}\right) \eta ^{a}\eta ^{b}-2V_{A}^{\mu }B^{\ast a\nu \rho }\eta ^{b}C^{B\lambda }\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+V_{A}^{\mu }A^{a\nu }\left( A^{b\rho }C^{B\lambda }\varepsilon _{\mu \nu \rho \lambda }-2\eta ^{b}V_{\mu \nu }^{B}\right) , \label{f26}\end{aligned}$$$$\begin{aligned} X_{A,m_{1}}^{abB} &=&-\tfrac{1}{2}\left( 2H_{m_{1}}^{\ast \mu }C_{A\mu }^{\ast }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\ast \rho \lambda }\varepsilon _{\mu \nu \rho \lambda }+\tfrac{1}{3}C_{m_{1}}^{\ast \mu \nu \rho }V_{A}^{\lambda }\varepsilon _{\mu \nu \rho \lambda }\right) \eta ^{a}\eta ^{b}C^{B} \notag \\ &&+\left[ \left( C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }+H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }\right) A^{a\lambda }-2H_{m_{1}}^{\ast \mu }V_{A}^{\nu }B^{\ast a\rho \lambda }\right] \eta ^{b}C^{B}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{2}\left( C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }+2H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }\right) \eta ^{a}\eta ^{b}C^{B\lambda }\varepsilon _{\mu \nu \rho \lambda }+H_{m_{1}}^{\ast \mu }V_{A}^{\nu }\eta ^{a}\eta ^{b}V_{\mu \nu }^{B} \notag \\ &&+H_{m_{1}}^{\ast \mu }V_{A}^{\nu }A^{a\rho }\left( A^{b\lambda }C^{B}+2\eta ^{b}C^{B\lambda }\right) \varepsilon _{\mu \nu \rho \lambda } , \label{f27}\end{aligned}$$$$\begin{aligned} X_{A,m_{1}m_{2}}^{abB} &=&-\tfrac{1}{6}\left( 3H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\ast \rho \lambda }+C_{m_{1}}^{\ast \lbrack \mu \nu }H_{m_{2}}^{\ast \rho ]}V_{A}^{\lambda }\right) \eta ^{a}\eta ^{b}C^{B}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{2}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\rho }\left( 2A^{a\lambda }\eta 
^{b}C^{B}-\eta ^{a}\eta ^{b}C^{B\lambda }\right) \varepsilon _{\mu \nu \rho \lambda }, \label{f28}\end{aligned}$$$$X_{A,m_{1}m_{2}m_{3}}^{abB}=-\tfrac{1}{6}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }H_{m_{3}}^{\ast \rho }V_{A}^{\lambda }\eta ^{a}\eta ^{b}C^{B}\varepsilon _{\mu \nu \rho \lambda }, \label{f29}$$$$\begin{aligned} X_{A}^{abcd} &=&\tfrac{1}{12}C_{A}^{\ast }\eta ^{a}\eta ^{b}\eta ^{c}\eta ^{d}-\tfrac{1}{3}V_{A}^{\mu }A^{a\nu }A^{b\rho }A^{c\lambda }\eta ^{d}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{3}\left[ C_{A}^{\ast \mu }A_{\mu }^{a}+\left( V_{A}^{\ast \mu \nu }B^{\ast a\rho \lambda }-\tfrac{1}{3}V_{A}^{\mu }\eta ^{\ast a\nu \rho \lambda }\right) \varepsilon _{\mu \nu \rho \lambda }\right] \eta ^{b}\eta ^{c}\eta ^{d} \notag \\ &&+\tfrac{1}{2}\left( V_{A}^{\ast \mu \nu }A^{a\rho }-2V_{A}^{\mu }B^{\ast a\nu \rho }\right) A^{b\lambda }\eta ^{c}\eta ^{d}\varepsilon _{\mu \nu \rho \lambda }, \label{f19}\end{aligned}$$$$\begin{aligned} X_{A,m_{1}}^{abcd} &=&-\tfrac{1}{4!}\left[ 2H_{m_{1}}^{\ast \mu }C_{A\mu }^{\ast }+\left( C_{m_{1}}^{\ast \mu \nu }V_{A}^{\ast \rho \lambda }+\tfrac{1}{3}C_{m_{1}}^{\ast \mu \nu \rho }V_{A}^{\lambda }\right) \varepsilon _{\mu \nu \rho \lambda }\right] \eta ^{a}\eta ^{b}\eta ^{c}\eta ^{d} \notag \\ &&+\tfrac{1}{6}\left[ \left( C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }+2H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }\right) A^{a\lambda }-2H_{m_{1}}^{\ast \mu }V_{A}^{\nu }B^{\ast a\rho \lambda }\right] \eta ^{b}\eta ^{c}\eta ^{d}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{2}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }A^{a\rho }A^{b\lambda }\eta ^{c}\eta ^{d}\varepsilon _{\mu \nu \rho \lambda }, \label{f20}\end{aligned}$$$$\begin{aligned} X_{A,m_{1}m_{2}}^{abcd} &=&-\tfrac{1}{4!}\left( H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\ast \rho \lambda }+\tfrac{1}{3}C_{m_{1}}^{\ast \lbrack \mu \nu }H_{m_{2}}^{\ast \rho ]}V_{A}^{\lambda }\right) \eta ^{a}\eta ^{b}\eta ^{c}\eta ^{d}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{6}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\rho }A^{a\lambda }\eta ^{b}\eta ^{c}\eta ^{d}\varepsilon _{\mu \nu \rho \lambda }, \label{f21}\end{aligned}$$$$X_{A,m_{1}m_{2}m_{3}}^{abcd}=-\tfrac{1}{3\cdot 4!}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }H_{m_{3}}^{\ast \rho }V_{A}^{\lambda }\eta ^{a}\eta ^{b}\eta ^{c}\eta ^{d}\varepsilon _{\mu \nu \rho \lambda }, \label{f22}$$$$\begin{aligned} X_{A}^{ab} &=&\tfrac{1}{12}\left( C_{A}^{\ast }\eta ^{a}-C_{A\mu }^{\ast }A^{a\mu }\right) C_{\alpha \beta \gamma \delta }^{b}\varepsilon ^{\alpha \beta \gamma \delta }-\tfrac{1}{12}C_{A\mu }^{\ast }\eta ^{a}C_{\nu \rho \lambda }^{b}\varepsilon ^{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{6}V_{A}^{\ast \mu \nu }\left( 12B^{\ast a\rho \lambda }C_{\mu \nu \rho \lambda }^{b}+\eta ^{a}C_{\mu \nu }^{b}-3A^{a\rho }C_{\mu \nu \rho }^{b}\right) -\tfrac{1}{12}V_{A}^{\mu }\left( 2A^{a\nu }C_{\mu \nu }^{b}\right. \notag \\ &&\left. 
+8\eta ^{\ast a\nu \rho \lambda }C_{\mu \nu \rho \lambda }^{b}-6B^{\ast a\nu \rho }C_{\mu \nu \rho }^{b}-\eta ^{a}H_{\mu }^{b}\right) , \label{f1}\end{aligned}$$$$\begin{aligned} X_{A,m_{1}}^{ab} &=&\left( -\tfrac{1}{12}H_{m_{1}}^{\ast \alpha }C_{A\alpha }^{\ast }\varepsilon ^{\mu \nu \rho \lambda }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\ast \rho \lambda }+\tfrac{1}{3}C_{m_{1}}^{\ast \mu \nu \rho }V_{A}^{\lambda }\right) \eta ^{a}C_{\mu \nu \rho \lambda }^{b} \notag \\ &&+\tfrac{1}{4}\left( 2H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }\right) \eta ^{a}C_{\mu \nu \rho }^{b} \notag \\ &&-\tfrac{1}{2}\left( 2H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }\right) A^{a\lambda }C_{\mu \nu \rho \lambda }^{b} \notag \\ &&+\tfrac{1}{2}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }\left( 4B^{\ast a\rho \lambda }C_{\mu \nu \rho \lambda }^{b}+\tfrac{1}{3}\eta ^{a}C_{\mu \nu }^{b}-A^{a\rho }C_{\mu \nu \rho }^{b}\right) , \label{f2}\end{aligned}$$$$\begin{aligned} X_{A,m_{1}m_{2}}^{ab} &=&\tfrac{1}{3}\left( 3H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\ast \rho \lambda }+C_{m_{1}}^{\ast \lbrack \mu \nu }H_{m_{2}}^{\rho ]}V_{A}^{\lambda }\right) \eta ^{a}C_{\mu \nu \rho \lambda }^{b} \notag \\ &&+\tfrac{1}{4}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }\left( V_{A}^{\rho }\eta ^{a}C_{\mu \nu \rho }^{b}-4V_{A}^{\rho }A^{a\lambda }C_{\mu \nu \rho \lambda }^{b}\right) , \label{f3}\end{aligned}$$$$X_{A,m_{1}m_{2}m_{3}}^{ab}=\tfrac{1}{3}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }H_{m_{3}}^{\rho }V_{A}^{\lambda }\eta ^{a}C_{\mu \nu \rho \lambda }^{b}, \label{f4}$$$$\begin{aligned} X_{Ac}^{ab} &=&\tfrac{1}{4!}\left( C_{A}^{\ast }\eta ^{a}-2C_{A\alpha }^{\ast }A^{a\alpha }\right) \eta ^{b}\eta _{c\mu \nu \rho \lambda }\varepsilon ^{\mu \nu \rho \lambda }-\tfrac{1}{4!}C_{A\mu }^{\ast }\eta ^{a}\eta ^{b}\eta _{c\nu \rho \lambda }\varepsilon ^{\mu \nu \rho \lambda } \notag \\ &&+V_{A}^{\ast \mu \nu }\left( 2B^{\ast a\rho \lambda }\eta ^{b}-A^{a\rho }A^{b\lambda }\right) \eta _{c\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{12}\left( V_{A}^{\ast \mu \nu }\eta ^{a}-2V_{A}^{\mu }A^{a\nu }\right) \eta ^{b}B_{c\mu \nu } \notag \\ &&-\tfrac{1}{2}\left( V_{A}^{\ast \mu \nu }A^{a\rho }-V_{A}^{\mu }B^{\ast a\nu \rho }\right) \eta ^{b}\eta _{c\mu \nu \rho } \notag \\ &&-2V_{A}^{\mu }\left( \tfrac{1}{3}\eta ^{\ast a\nu \rho \lambda }\eta ^{b}-B^{\ast a\nu \rho }A^{b\lambda }\right) \eta _{c\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{12}V_{A}^{\mu }\eta ^{a}\eta ^{b}A_{c\mu }^{\ast }-\tfrac{1}{4}V_{A}^{\mu }A^{a\nu }A^{b\rho }\eta _{c\mu \nu \rho }, \label{f5}\end{aligned}$$$$\begin{aligned} X_{Ac,m_{1}}^{ab} &=&-\tfrac{1}{4!}\left( H_{m_{1}}^{\ast \alpha }C_{A\alpha }^{\ast }\varepsilon ^{\mu \nu \rho \lambda }-12C_{m_{1}}^{\ast \mu \nu }V_{A}^{\ast \rho \lambda }-4C_{m_{1}}^{\ast \mu \nu \rho }V_{A}^{\lambda }\right) \eta ^{a}\eta ^{b}\eta _{c\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{4}\left( 2H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }\right) \eta ^{a}\eta ^{b}\eta _{c\mu \nu \rho }+\tfrac{1}{12}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }\eta ^{a}\eta ^{b}B_{c\mu \nu } \notag \\ &&-\left[ \left( 2H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }\right) A^{a\lambda }-2H_{m_{1}}^{\ast \mu }V_{A}^{\nu }B^{\ast a\rho \lambda }\right] \eta ^{b}\eta _{c\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{2}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }A^{a\rho }\left( \eta ^{b}\eta _{c\mu \nu \rho }+2A^{b\lambda }\eta _{c\mu \nu 
\rho \lambda }\right) , \label{f6}\end{aligned}$$$$\begin{aligned} X_{Ac,m_{1}m_{2}}^{ab} &=&\tfrac{1}{6}\left( 3H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\ast \rho \lambda }+C_{m_{1}}^{\ast \lbrack \mu \nu }H_{m_{2}}^{\ast \rho ]}V_{A}^{\lambda }\right) \eta ^{a}\eta ^{b}\eta _{c\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{8}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\rho }\left( \eta ^{a}\eta ^{b}\eta _{c\mu \nu \rho }-8A^{a\lambda }\eta ^{b}\eta _{c\mu \nu \rho \lambda }\right) , \label{f7}\end{aligned}$$$$X_{Ac,m_{1}m_{2}m_{3}}^{ab}=\tfrac{1}{6}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }H_{m_{3}}^{\ast \rho }V_{A}^{\lambda }\eta ^{a}\eta ^{b}\eta _{c\mu \nu \rho \lambda }, \label{f8}$$$$\begin{aligned} X_{Aab} &=&-\left( C_{A}^{\ast }\eta _{a\mu \nu \rho \lambda }+2C_{A\mu }^{\ast }\eta _{a\nu \rho \lambda }\right) \eta _{b}^{\mu \nu \rho \lambda }+\tfrac{3}{4}V_{A}^{\ast \mu \nu }\eta _{a\alpha \beta }^{\;\rho }\eta _{b}^{\lambda \alpha \beta }\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+2\left( V_{A}^{\ast \alpha \beta }B_{a\alpha \beta }-\tfrac{1}{12}V_{A}^{\alpha }A_{a\alpha }^{\ast }\right) \eta _{b\mu \nu \rho \lambda }\varepsilon ^{\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{12}V_{A}^{\alpha }B_{a\alpha \mu }\eta _{b\nu \rho \lambda }\varepsilon ^{\mu \nu \rho \lambda }, \label{f9}\end{aligned}$$$$\begin{aligned} X_{Aab,m_{1}} &=&\tfrac{1}{6}\left( 6H_{m_{1}}^{\ast \mu }C_{A\mu }^{\ast }+3C_{m_{1}}^{\ast \mu \nu }V_{A}^{\ast \rho \lambda }\varepsilon _{\mu \nu \rho \lambda }+C_{m_{1}}^{\ast \mu \nu \rho }V_{A}^{\lambda }\varepsilon _{\mu \nu \rho \lambda }\right) \eta _{a\alpha \beta \gamma \delta }\eta _{b}^{\alpha \beta \gamma \delta } \notag \\ &&+\tfrac{1}{4}\left( 2H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }\right) \eta _{a\mu \nu \rho }\eta _{b\alpha \beta \gamma \delta }\varepsilon ^{\alpha \beta \gamma \delta } \notag \\ &&+\tfrac{3}{4}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }\eta _{a\alpha \beta }^{\;\rho }\eta _{b}^{\lambda \alpha \beta }\varepsilon _{\mu \nu \rho \lambda }+\tfrac{1}{6}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }B_{a\mu \nu }\eta _{b\alpha \beta \gamma \delta }\varepsilon ^{\alpha \beta \gamma \delta }, \label{f10}\end{aligned}$$$$\begin{aligned} X_{Aab,m_{1}m_{2}} &=&\tfrac{1}{6}\left( 3H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\ast \rho \lambda }+C_{m_{1}}^{\ast \lbrack \mu \nu }H_{m_{2}}^{\ast \rho ]}V_{A}^{\lambda }\right) \eta _{a\alpha \beta \gamma \delta }\eta _{b}^{\alpha \beta \gamma \delta }\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{4}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\rho }\eta _{a\mu \nu \rho }\eta _{b\alpha \beta \gamma \delta }\varepsilon ^{\alpha \beta \gamma \delta }, \label{f11}\end{aligned}$$$$X_{Aab,m_{1}m_{2}m_{3}}=\tfrac{1}{6}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }H_{m_{3}}^{\ast \rho }V_{A}^{\lambda }\eta _{a\alpha \beta \gamma \delta }\eta _{b}^{\alpha \beta \gamma \delta }\varepsilon _{\mu \nu \rho \lambda }, \label{f12}$$$$\begin{aligned} X_{a}^{AB} &=&-4\left( C^{\ast A}C^{B}+C_{\alpha }^{\ast A}C^{B\alpha }\right) \eta _{a\mu \nu \rho \lambda }\varepsilon ^{\mu \nu \rho \lambda } \notag \\ &&+4\left( C_{\mu }^{\ast A}C^{B}\varepsilon ^{\mu \nu \rho \lambda }-6V^{\ast A\nu \rho }C^{B\lambda }\right) \eta _{a\nu \rho \lambda }-8V^{\ast A\mu \nu }C^{B}B_{a\mu \nu } \notag \\ &&+8V_{\mu }^{A}C^{B}A_{a}^{\ast \mu }-\left( V_{\alpha }^{\ast A}V^{B\alpha }+4V^{\ast A\alpha \beta }V_{\alpha \beta }^{B}\right) \eta _{a\mu \nu \rho \lambda }\varepsilon 
^{\mu \nu \rho \lambda } \notag \\ &&+4V^{A\alpha }V_{\alpha \mu }^{B}\varepsilon ^{\mu \nu \rho \lambda }\eta _{a\nu \rho \lambda }-8V^{A\mu }C^{B\nu }B_{a\mu \nu }, \label{f30}\end{aligned}$$$$\begin{aligned} X_{a,m_{1}}^{AB} &=&4\left( H_{m_{1}}^{\ast \alpha }C_{\alpha }^{\ast A}\varepsilon ^{\mu \nu \rho \lambda }-12C_{m_{1}}^{\ast \mu \nu }V^{\ast A\rho \lambda }-4C_{m_{1}}^{\ast \mu \nu \rho }V^{A\lambda }\right) C^{B}\eta _{a\mu \nu \rho \lambda } \notag \\ &&-12\left( 2H_{m_{1}}^{\ast \mu }V^{\ast A\nu \rho }+C_{m_{1}}^{\ast \mu \nu }V^{A\rho }\right) \left( C^{B}\eta _{a\mu \nu \rho }+4C^{B\lambda }\eta _{a\mu \nu \rho \lambda }\right) \notag \\ &&-4H_{m_{1}}^{\ast \mu }V^{A\nu }\left( 6C^{B\rho }\eta _{a\mu \nu \rho }+V_{\mu \nu }^{B}\eta _{a\alpha \beta \gamma \delta }\varepsilon ^{\alpha \beta \gamma \delta }+2C^{B}B_{a\mu \nu }\right) , \label{f31}\end{aligned}$$$$\begin{aligned} X_{a,m_{1}m_{2}}^{AB} &=&-16\left( 3H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V^{\ast A\rho \lambda }+C_{m_{1}}^{\ast \lbrack \mu \nu }H_{m_{2}}^{\ast \rho ]}V^{A\lambda }\right) C^{B}\eta _{a\mu \nu \rho \lambda } \notag \\ &&-12H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V^{A\rho }\left( C^{B}\eta _{a\mu \nu \rho }+4C^{B\lambda }\eta _{a\mu \nu \rho \lambda }\right) , \label{f32}\end{aligned}$$$$X_{a,m_{1}m_{2}m_{3}}^{AB}=-16H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }H_{m_{3}}^{\ast \rho }V_{A}^{\lambda }C^{B}\eta _{a\mu \nu \rho \lambda }. \label{f33}$$The objects denoted by $X_{m_{1}\ldots m_{p}}^{aABC}$, $X_{AB,m_{1}\ldots m_{p}}^{abc}$, $X_{AB,m_{1}\ldots m_{p}}^{a}$, and $X_{ABa,m_{1}\ldots m_{p}}^{b}$, with $p=\overline{0,2}$, read as $$\begin{aligned} X^{aABC} &=&-\tfrac{1}{4!}\left( 2C^{\ast A\mu }V_{\mu }^{B}+V^{\ast A\mu \nu }V^{\ast B\rho \lambda }\varepsilon _{\mu \nu \rho \lambda }\right) C^{C}\eta ^{a} \notag \\ &&+\tfrac{1}{12}V^{\ast A\mu \nu }V^{B\rho }\left( C^{C}A^{a\lambda }-C^{C\lambda }\eta ^{a}\right) \varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{4!}V^{A\mu }V^{B\nu }C^{C}B^{\ast a\rho \lambda }\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{4!}V^{A\mu }V^{B\nu }\left( C^{C\rho }A^{a\lambda }\varepsilon _{\mu \nu \rho \lambda }+V_{\mu \nu }^{C}\eta ^{a}\right) , \label{f23}\end{aligned}$$$$\begin{aligned} X_{m_{1}}^{aABC} &=&\tfrac{1}{2\cdot 4!}\left( C_{m_{1}}^{\ast \mu \nu }V^{A\rho }+4H_{m_{1}}^{\ast \mu }V^{\ast A\nu \rho }\right) V^{B\lambda }C^{C}\eta ^{a}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&+\tfrac{1}{4!}H_{m_{1}}^{\ast \mu }V^{A\nu }V^{B\rho }\left( C^{C}A^{a\lambda }-C^{C\lambda }\eta ^{a}\right) \varepsilon _{\mu \nu \rho \lambda }, \label{f24}\end{aligned}$$$$X_{m_{1}m_{2}}^{aABC}=\tfrac{1}{2\cdot 4!}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V^{A\rho }V^{B\lambda }C^{C}\eta ^{a}\varepsilon _{\mu \nu \rho \lambda }, \label{f25}$$$$\begin{aligned} X_{AB}^{abc} &=&\tfrac{1}{2\cdot 4!}\left[ \left( V_{A}^{\ast \mu \nu }V_{B}^{\rho }A^{a\lambda }+V_{A}^{\mu }V_{B}^{\nu }B^{\ast a\rho \lambda }\right) \eta ^{b}-V_{A}^{\mu }V_{B}^{\nu }A^{a\rho }A^{b\lambda }\right] \eta ^{c}\varepsilon _{\mu \nu \rho \lambda } \notag \\ &&-\tfrac{1}{6\cdot 4!}\left( 2C_{A\mu }^{\ast }V_{B}^{\mu }+V_{A}^{\ast \mu \nu }V_{B}^{\ast \rho \lambda }\varepsilon _{\mu \nu \rho \lambda }\right) \eta ^{a}\eta ^{b}\eta ^{c}, \label{f34}\end{aligned}$$$$\begin{aligned} X_{AB,m_{1}}^{abc} &=&\tfrac{1}{12\cdot 4!}\left[ \left( 4H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }V_{B}^{\lambda }+C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }V_{B}^{\lambda }\right) \eta ^{a}\right. 
\notag \\ &&\left. +6H_{m_{1}}^{\ast \mu }V_{A}^{\nu }V_{B}^{\rho }A^{a\lambda }\right] \eta ^{b}\eta ^{c}\varepsilon _{\mu \nu \rho \lambda }, \label{f35}\end{aligned}$$$$X_{AB,m_{1}m_{2}}^{abc}=\tfrac{1}{12\cdot 4!}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\rho }V_{B}^{\lambda }\eta ^{a}\eta ^{b}\eta ^{c}\varepsilon _{\mu \nu \rho \lambda }, \label{f36}$$$$\begin{aligned} X_{AB}^{a} &=&-\tfrac{1}{12\cdot 4!}\left( C_{A\alpha }^{\ast }V_{B}^{\alpha }\varepsilon ^{\mu \nu \rho \lambda }-12V_{A}^{\ast \mu \nu }V_{B}^{\ast \rho \lambda }\right) C_{\mu \nu \rho \lambda }^{a} \notag \\ &&-\tfrac{1}{2\cdot 4!}V_{A}^{\ast \mu \nu }V_{B}^{\rho }C_{\mu \nu \rho }^{a}-\tfrac{1}{2\cdot 4!}V_{A}^{\mu }V_{B}^{\nu }C_{\mu \nu }^{a}, \label{f13}\end{aligned}$$$$\begin{aligned} X_{AB,m_{1}}^{a} &=&-\tfrac{1}{12}\left( H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }V_{B}^{\lambda }+\tfrac{1}{4}C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }V_{B}^{\lambda }\right) C_{\mu \nu \rho \lambda }^{a} \notag \\ &&-\tfrac{1}{4\cdot 4!}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }V_{B}^{\rho }C_{\mu \nu \rho }^{a}, \label{f14}\end{aligned}$$$$X_{AB,m_{1}m_{2}}^{a}=-\tfrac{1}{2\cdot 4!}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\rho }V_{B}^{\lambda }C_{\mu \nu \rho \lambda }^{a}, \label{f15}$$$$\begin{aligned} X_{ABa}^{b} &=&-\tfrac{1}{12\cdot 4!}\left( C_{A\alpha }^{\ast }V_{B}^{\alpha }\varepsilon ^{\mu \nu \rho \lambda }-12V_{A}^{\ast \mu \nu }V_{B}^{\ast \rho \lambda }\right) \eta _{a\mu \nu \rho \lambda }\eta ^{b} \notag \\ &&+\tfrac{1}{2\cdot 4!}V_{A}^{\ast \mu \nu }V_{B}^{\rho }\left( \eta _{a\mu \nu \rho }\eta ^{b}-4\eta _{a\mu \nu \rho \lambda }A^{b\lambda }\right) -\tfrac{1}{4!}V_{A}^{\mu }V_{B}^{\nu }\eta _{a\mu \nu \rho \lambda }B^{\ast b\rho \lambda } \notag \\ &&-\tfrac{1}{12\cdot 4!}V_{A}^{\mu }V_{B}^{\nu }\left( B_{a\mu \nu }\eta ^{b}+3\eta _{a\mu \nu \rho }A^{b\rho }\right) , \label{f16}\end{aligned}$$$$\begin{aligned} X_{ABa,m_{1}}^{b} &=&-\tfrac{1}{12}\left( H_{m_{1}}^{\ast \mu }V_{A}^{\ast \nu \rho }+\tfrac{1}{4}C_{m_{1}}^{\ast \mu \nu }V_{A}^{\rho }\right) V_{B}^{\lambda }\eta _{a\mu \nu \rho \lambda }\eta ^{b} \notag \\ &&+\tfrac{1}{4\cdot 4!}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }V_{B}^{\rho }\left( \eta _{a\mu \nu \rho }\eta ^{b}-4\eta _{a\mu \nu \rho \lambda }A^{b\lambda }\right) , \label{f17}\end{aligned}$$$$X_{ABa,m_{1}m_{2}}^{b}=-\tfrac{1}{2\cdot 4!}H_{m_{1}}^{\ast \mu }H_{m_{2}}^{\ast \nu }V_{A}^{\rho }V_{B}^{\lambda }\eta _{a\mu \nu \rho \lambda }\eta ^{b}. 
\label{f18}$$At the end of this section we list the remaining type-$X$ objects from (\[so3\]), namely $X_{ABCD,m_{1}\ldots m_{p}}$, $X_{ABC,m_{1}\ldots m_{p}}^{ab}$, and $X_{a,m_{1}\ldots m_{p}}^{ABC}$, with $p=\overline{0,1}$, as well as $X_{ABCD}^{a}$: $$X_{ABCD}=\tfrac{1}{12}\left( V_{A}^{\ast \mu \nu }V_{B}^{\rho }V_{C}^{\lambda }C_{D}+\tfrac{1}{3}V_{A}^{\mu }V_{B}^{\nu }V_{C}^{\rho }C_{D}^{\lambda }\right) \varepsilon _{\mu \nu \rho \lambda }, \label{f37}$$
$$X_{ABCD,m_{1}}=\tfrac{2}{3\cdot 4!}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }V_{B}^{\rho }V_{C}^{\lambda }C_{D}\varepsilon _{\mu \nu \rho \lambda }, \label{f38}$$
$$X_{ABC}^{ab}=\tfrac{1}{4!}\left( V_{A}^{\ast \mu \nu }V_{B}^{\rho }V_{C}^{\lambda }\eta ^{a}\eta ^{b}-\tfrac{2}{3}V_{A}^{\mu }V_{B}^{\nu }V_{C}^{\rho }A^{a\lambda }\eta ^{b}\right) \varepsilon _{\mu \nu \rho \lambda }, \label{f42}$$
$$X_{ABC,m_{1}}^{ab}=\tfrac{1}{3\cdot 4!}H_{m_{1}}^{\ast \mu }V_{A}^{\nu }V_{B}^{\rho }V_{C}^{\lambda }\eta ^{a}\eta ^{b}\varepsilon _{\mu \nu \rho \lambda }, \label{f43}$$
$$X_{a}^{ABC}=-\tfrac{1}{12}V^{\ast A\mu \nu }V^{B\rho }V^{C\lambda }\eta _{a\mu \nu \rho \lambda }-\tfrac{1}{6\cdot 4!}V^{A\mu }V^{B\nu }V^{C\rho }\eta _{a\mu \nu \rho }, \label{f40}$$
$$X_{a,m_{1}}^{ABC}=-\tfrac{2}{3\cdot 4!}H_{m_{1}}^{\ast \mu }V^{A\nu }V^{B\rho }V^{C\lambda }\eta _{a\mu \nu \rho \lambda }, \label{f41}$$
$$X_{ABCD}^{a}=-\tfrac{1}{3\cdot 4!}V_{A}^{\mu }V_{B}^{\nu }V_{C}^{\rho }V_{D}^{\lambda }\eta ^{a}\varepsilon _{\mu \nu \rho \lambda}. \label{f44}$$

Gauge generators of the deformed model \[appendixB\]
====================================================

From the terms of antighost number $1$ present in (\[defsolmast\]) we determine the deformed gauge generators that produce the deformed gauge transformations (\[gaugeA\])–(\[gaugeV1\]). We have added a supplementary index between parentheses to the gauge generators so as to distinguish among the fields with which the gauge generators are associated. We list below only the nonvanishing generators of the various fields, which read as: $$(\bar{Z}_{a(\varphi )})_{b}=-\lambda W_{ab}, \label{g1}$$
$$\begin{aligned} (\bar{Z}_{\mu (H)}^{a})_{b} &=&\tfrac{\lambda }{2}\varepsilon _{\mu \nu \rho \lambda }\left[ \left( -\tfrac{1}{12}\frac{\partial M_{bcde}}{\partial \varphi _{a}}A^{c\nu }+\frac{\partial f_{bde}^{A}}{\partial \varphi _{a}}V_{A}^{\nu }\right) A^{d\rho }\right. \notag \\ &&\left. +\frac{\partial g_{be}^{AB}}{\partial \varphi _{a}}V_{A}^{\nu }V_{B}^{\rho }\right] A^{e\lambda }+\lambda \left[ -\frac{\partial W_{bc}}{\partial \varphi _{a}}H_{\mu }^{c}+\frac{\partial f_{bB}^{A}}{\partial \varphi _{a}}V_{A}^{\nu }V_{\mu \nu }^{B}\right. \notag \\ &&\left. +\left( \frac{\partial M_{bc}^{d}}{\partial \varphi _{a}}A^{c\nu }+\tfrac{1}{12}\frac{\partial f_{\;\;b}^{Ad}}{\partial \varphi _{a}}V_{A}^{\nu }\right) B_{d\mu \nu }\right] , \label{g2a}\end{aligned}$$
$$(\bar{Z}_{\mu (H)}^{a})_{b}^{\alpha \beta }=-\delta _{b}^{a}\partial _{\left. {}\right. }^{\left[ \alpha \right. }\delta _{\mu }^{\left. \beta \right] }+\lambda \left( \frac{\partial W_{bc}}{\partial \varphi _{a}}A_{\left. {}\right. }^{c[\alpha }\delta _{\mu }^{\beta ]}-\tfrac{1}{12}\frac{\partial f_{Ab}}{\partial \varphi _{a}}V_{\left. {}\right. }^{A[\alpha }\delta _{\mu }^{\beta ]}\right) , \label{g2b}$$
$$\begin{aligned} (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{b} &=&-\tfrac{\lambda }{2}\frac{\partial M_{cd}^{b}}{\partial \varphi _{a}}\sigma _{\mu \lbrack \alpha }^{\left. {}\right.
}A_{\beta }^{c}A_{\gamma ]}^{d}+2\lambda \frac{\partial M^{bc}}{\partial \varphi _{a}}\sigma _{\mu \rho }B_{c}^{\rho \lambda }\varepsilon _{\lambda \alpha \beta \gamma } \notag \\ &&+\tfrac{\lambda }{4!}\left( \frac{\partial f_{Ac}^{b}}{\partial \varphi _{a}}\sigma _{\mu \lbrack \alpha }V_{\beta }^{A}A_{\gamma ]}^{c}-\frac{\partial g_{AB}^{b}}{\partial \varphi _{a}}\sigma _{\mu \lbrack \alpha }V_{\beta }^{A}V_{\gamma ]}^{B}\right) , \label{g2c}\end{aligned}$$
$$(\bar{Z}_{\mu (H)}^{a})_{A}^{\sigma }=\lambda \varepsilon _{\mu \nu \rho \lambda }\sigma ^{\lambda \sigma }\left( \frac{\partial f_{bAB}}{\partial \varphi _{a}}V^{B\nu }A^{b\rho }+\tfrac{1}{2}\frac{\partial g_{\quad A}^{BC}}{\partial \varphi _{a}}V_{B}^{\nu }V_{C}^{\rho }\right) , \label{g2d}$$
$$(\bar{Z}_{\mu (A)}^{a})_{b}=\delta _{b}^{a}\partial _{\mu }-\lambda M_{bc}^{a}A_{\mu }^{c}-\tfrac{\lambda }{12}f_{Ab}^{a}V_{\mu }^{A}, \label{g3a}$$
$$(\bar{Z}_{\mu (A)}^{a})_{\alpha \beta \gamma }^{b}=-2\lambda M^{ab}\varepsilon _{\mu \alpha \beta \gamma }, \label{g3b}$$
$$\begin{aligned} (\bar{Z}_{a(B)}^{\mu \nu })_{b} &=&\lambda \varepsilon ^{\mu \nu \rho \lambda }\left( \tfrac{1}{8}M_{abcd}A_{\rho }^{c}A_{\lambda }^{d}+f_{Aabc}V_{\rho }^{A}A_{\lambda }^{c}-\tfrac{1}{2}g_{ABab}V_{\rho }^{A}V_{\lambda }^{B}\right) \notag \\ &&-\lambda M_{ab}^{c}B_{c}^{\mu \nu }, \label{g4a}\end{aligned}$$
$$(\bar{Z}_{a(B)}^{\mu \nu })_{b}^{\alpha \beta }=\lambda W_{ab}\sigma ^{\mu \lbrack \alpha }\sigma ^{\beta ]\nu }, \label{g4b}$$
$$(\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{b}=-\tfrac{1}{2}\delta _{a}^{b}\partial _{\left[ \alpha \right. }^{\left. {}\right. }\delta _{\beta }^{\mu }\delta _{\left. \gamma \right] }^{\nu }-\tfrac{\lambda }{2}\left( M_{ac}^{b}\delta _{\lbrack \alpha }^{\mu }\delta _{\beta }^{\nu }A_{\gamma ]}^{c}+\tfrac{1}{12}f_{Aa}^{b}\delta _{\lbrack \alpha }^{\mu }\delta _{\beta }^{\nu }V_{\gamma ]}^{A}\right) , \label{g4c}$$
$$(\bar{Z}_{a(B)}^{\mu \nu })_{A}^{\lambda }=-\lambda \varepsilon ^{\mu \nu \rho \lambda }f_{aAB}V_{\rho }^{B}, \label{g4d}$$
$$(\bar{Z}_{\mu (V)}^{A})_{a}=\lambda f_{aB}^{A}V_{\mu }^{B}, \label{g5}$$
$$(\bar{Z}_{\mu \nu (V)}^{A})_{a}=\lambda f_{aB}^{A}V_{\mu \nu }^{B}+\tfrac{\lambda }{12}f_{\;\;a}^{Ab}B_{b\mu \nu }+\lambda \varepsilon _{\mu \nu \rho \lambda }\left( \tfrac{1}{2}f_{abc}^{A}A^{b\rho }+g_{ac}^{AB}V_{B}^{\rho }\right) A^{c\lambda }, \label{g6a}$$
$$(\bar{Z}_{\mu \nu (V)}^{A})_{a}^{\alpha \beta }=\tfrac{\lambda }{4!}f_{a}^{A}\delta _{\mu }^{[\alpha }\delta _{\nu }^{\beta ]}, \label{g6b}$$
$$(\bar{Z}_{\mu \nu (V)}^{A})_{\alpha \beta \gamma }^{a}=\tfrac{\lambda }{4!}\left( f_{\;\;b}^{Aa}A_{\sigma }^{b}-g^{aAB}V_{B\sigma }\right) \sigma _{\mu \rho }\sigma _{\nu \lambda }\delta _{\lbrack \alpha }^{\rho }\delta _{\beta }^{\lambda }\delta _{\gamma ]}^{\sigma }, \label{g6c}$$
$$(\bar{Z}_{\mu \nu (V)}^{A})_{B\lambda }=\varepsilon _{\mu \nu \rho \lambda }\left( \delta _{B}^{A}\partial ^{\rho }-\lambda f_{aB}^{A}A^{a\rho }+\lambda g_{\quad B}^{AC}V_{C}^{\rho }\right) . \label{g6d}$$

Reducibility of the deformed gauge transformations \[appendixC\]
================================================================

From the terms of antighost number $2$ in (\[defsolmast\]) that are simultaneously linear in the ghosts for ghosts and in the antifields of the ghosts we identify the first-order reducibility functions for the coupled model as$$(\bar{Z}_{\alpha \beta }^{(1)a})_{b}^{\mu \nu \rho }=-\tfrac{1}{2}\left( \delta _{b}^{a}\partial _{\left. {}\right.
}^{[\mu }\delta _{\alpha }^{\nu }\delta _{\beta }^{\rho ]}-\lambda \frac{\partial W_{bc}}{\partial \varphi _{a}}A_{\left. {}\right. }^{c[\mu }\delta _{\alpha }^{\nu }\delta _{\beta }^{\rho ]}\right) -\tfrac{\lambda }{2\cdot 4!}\frac{\partial f_{b}^{A}}{\partial \varphi _{a}}\delta _{\alpha }^{[\mu }\delta _{\beta }^{\nu }\delta _{\gamma }^{\rho ]}V_{A}^{\gamma }, \label{r1}$$$$\begin{aligned} (\bar{Z}_{\alpha \beta }^{(1)a})_{\mu \nu \rho \lambda }^{b} &=&\tfrac{\lambda }{8}\frac{\partial M_{cd}^{b}}{\partial \varphi _{a}}\sigma _{\alpha ^{\prime }[\alpha }\sigma _{\beta ]\beta ^{\prime }}\delta _{\lbrack \mu }^{\alpha ^{\prime }}\delta _{\nu }^{\beta ^{\prime }}A_{\rho }^{c}A_{\lambda ]}^{d}+\lambda \varepsilon _{\mu \nu \rho \lambda }\frac{\partial M^{bc}}{\partial \varphi _{a}}B_{c\alpha \beta } \notag \\ &&-\tfrac{\lambda }{4\cdot 4!}\varepsilon _{\mu \nu \rho \lambda }\varepsilon _{\alpha \beta \gamma \delta }\left( \frac{\partial g^{bAB}}{\partial \varphi _{a}}V_{A}^{\gamma }V_{B}^{\delta }-2\frac{\partial f_{\;\;c}^{Ab}}{\partial \varphi _{a}}V_{A}^{\gamma }A^{c\delta }\right) , \label{r2}\end{aligned}$$$$(\bar{Z}_{\alpha \beta }^{(1)a})_{A}=\tfrac{\lambda }{2} \varepsilon _{\alpha \beta \rho \lambda } \left( \frac{\partial f_{bA}^{B}}{\partial \varphi _{a}}V_{B}^{\rho }A^{b\lambda }-\tfrac{1}{2}\frac{\partial g_{\quad A}^{BC}}{\partial \varphi _{a}}V_{B}^{\rho }V_{C}^{\lambda }\right) , \label{r2a}$$$$\begin{aligned} (\bar{Z}_{a}^{(1)\alpha \beta \gamma })_{\mu \nu \rho \lambda }^{b} &=&-\tfrac{1}{6}\left( \delta _{a}^{b}\partial _{\lbrack \mu }^{\left. {}\right. }\delta _{\nu }^{\alpha }\delta _{\rho }^{\beta }\delta _{\lambda ]}^{\gamma }+\lambda M_{ac}^{b}A_{[\mu }^{c}\delta _{\nu }^{\alpha }\delta _{\rho }^{\beta }\delta _{\lambda ]}^{\gamma }\right) \notag \\ &&+\tfrac{\lambda }{3\cdot 4!}f_{\;\;a}^{Ab}\delta _{\lbrack \mu }^{\alpha }\delta _{\nu }^{\beta }\delta _{\rho }^{\gamma }\delta _{\lambda ]}^{\delta }V_{A\delta }, \label{r3}\end{aligned}$$$$(\bar{Z}_{a}^{(1)\alpha \beta \gamma })_{b}^{\mu \nu \rho }=-\tfrac{\lambda }{3}W_{ab}\left( \sigma ^{\alpha \lbrack \mu }\sigma ^{\nu ]\beta }\sigma ^{\rho \gamma }+\sigma ^{\alpha \lbrack \nu }\sigma ^{\rho ]\beta }\sigma ^{\mu \gamma }+\sigma ^{\alpha \lbrack \rho }\sigma ^{\mu ]\beta }\sigma ^{\nu \gamma }\right) , \label{r4}$$$$(\bar{Z}_{a}^{(1)\alpha \beta \gamma })_{A}=-\tfrac{\lambda }{3}\varepsilon ^{\alpha \beta \gamma \delta }f_{aA}^{B}V_{B\delta }, \label{r4a}$$$$(\bar{Z}_{\mu }^{(1)A})_{B}=\delta _{B}^{A}\partial _{\mu }-\lambda f_{aB}^{A}A_{\mu }^{a}+\lambda g_{\quad B}^{AC}V_{C\mu }, \label{r5}$$$$(\bar{Z}_{\mu }^{(1)A})_{a}^{\alpha \beta \gamma }=\tfrac{\lambda }{4!}f_{a}^{A}\sigma _{\mu \nu }\varepsilon ^{\nu \alpha \beta \gamma }, \label{r6}$$$$(\bar{Z}_{\mu }^{(1)A})_{\alpha \beta \gamma \delta }^{a}=-\tfrac{\lambda }{4!}\varepsilon _{\alpha \beta \gamma \delta }\left( f_{\;\;b}^{Aa}A_{\mu }^{b}-g^{aAB}V_{B\mu }\right) , \label{r7}$$$$(\bar{Z}^{(1)a})_{\alpha \beta \gamma \delta }^{b}=-2\lambda \varepsilon _{\alpha \beta \gamma \delta }M^{ab}. 
\label{r8}$$The first-order reducibility relations of the coupled theory result from the components of (\[defsolmast\]) with the antighost number equal to $2$ that are simultaneously linear in the ghosts for ghosts and quadratic in the antifields of the original fields, being expressed in De Witt condensed form as $$(\bar{Z}_{\mu (A)}^{a})_{e}(\bar{Z}^{(1)e})_{\alpha \beta \gamma \delta }^{b}+(\bar{Z}_{\mu (A)}^{a})_{\nu \rho \lambda }^{e}(\bar{Z}_{e}^{(1)\nu \rho \lambda })_{\alpha \beta \gamma \delta }^{b}=-2\lambda \varepsilon _{\alpha \beta \gamma \delta }\frac{\partial M^{ab}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H^{c\mu }}, \label{r10}$$$$\begin{aligned} &&(\bar{Z}_{a(B)}^{\mu \nu })_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{b}^{\alpha \beta \gamma }+(\bar{Z}_{a(B)}^{\mu \nu })_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{b}^{\alpha \beta \gamma }+(\bar{Z}_{a(B)}^{\mu \nu })_{A}^{\sigma }(\bar{Z}_{\sigma }^{(1)A})_{b}^{\alpha \beta \gamma } \notag \\ &=&\lambda \frac{\partial W_{ab}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{c}}\sigma ^{\mu \mu ^{\prime }}\sigma ^{\nu \nu ^{\prime }}\delta _{\mu ^{\prime }}^{[\alpha }\delta _{\nu ^{\prime }}^{\beta }\delta _{\rho }^{\gamma ]}, \label{r12}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{a(B)}^{\mu \nu })_{e}(\bar{Z}^{(1)e})_{\alpha \beta \gamma \delta }^{b}+(\bar{Z}_{a(B)}^{\mu \nu })_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{\alpha \beta \gamma \delta }^{b} \notag \\ &&+(\bar{Z}_{a(B)}^{\mu \nu })_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{\alpha \beta \gamma \delta }^{b}+(\bar{Z}_{a(B)}^{\mu \nu })_{A}^{\sigma }(\bar{Z}_{\sigma }^{(1)A})_{\alpha \beta \gamma \delta }^{b} \notag \\ &=&-\tfrac{\lambda }{2}\delta _{\lbrack \alpha }^{\mu }\delta _{\beta }^{\nu }\delta _{\gamma }^{\rho }\delta _{\delta ]}^{\lambda }\left( \frac{\partial M_{ac}^{b}}{\partial \varphi _{d}}\frac{\delta S^{\mathrm{L}}}{\delta H^{d\rho }}A_{\lambda }^{c}+M_{ac}^{b}\frac{\delta S^{\mathrm{L}}}{\delta B_{c}^{\rho \lambda }}\right) \notag \\ &&-\tfrac{\lambda }{4!}\delta _{\lbrack \alpha }^{\mu }\delta _{\beta }^{\nu }\delta _{\gamma }^{\rho }\delta _{\delta ]}^{\lambda }\left( f_{\;\;a}^{Ab}\frac{\delta S^{\mathrm{L}}}{\delta V^{A\rho \lambda }}+\frac{\partial f_{\;\;a}^{Ab}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H^{c\rho }}V_{A\lambda }\right) , \label{r13}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{a(B)}^{\mu \nu })_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{A}+(\bar{Z}_{a(B)}^{\mu \nu })_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{A}+(\bar{Z}_{a(B)}^{\mu \nu })_{B}^{\sigma }(\bar{Z}_{\sigma }^{(1)B})_{A} \notag \\ &=&\lambda \varepsilon ^{\mu \nu \rho \lambda }\left( f_{aA}^{B}\frac{\delta S^{\mathrm{L}}}{\delta V^{B\rho \lambda }}+\frac{\partial f_{aA}^{B}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H^{c\rho }}V_{B\lambda }\right) , \label{r13a}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\mu \nu (V)}^{A})_{C}^{\sigma }(\bar{Z}_{\sigma }^{(1)C})_{B}+(\bar{Z}_{\mu \nu (V)}^{A})_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{B}+(\bar{Z}_{\mu \nu (V)}^{A})_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{B} \notag \\ &=&-\lambda \varepsilon _{\mu \nu \rho \lambda }\left( f_{aB}^{A}\frac{\delta S^{\mathrm{L}}}{\delta B_{a\rho \lambda }}+\frac{\partial f_{aB}^{A}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{c}}A^{a\lambda }\right) 
\notag \\ &&+\lambda \varepsilon _{\mu \nu \rho \lambda }\left( g_{\quad B}^{AC}\frac{\delta S^{\mathrm{L}}}{\delta V_{\rho \lambda }^{C}}+\frac{\partial g_{\quad B}^{AC}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{c}}V_{C}^{\lambda }\right) , \label{r14}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\mu \nu (V)}^{A})_{B}^{\sigma }(\bar{Z}_{\sigma }^{(1)B})_{a}^{\alpha \beta \gamma }+(\bar{Z}_{\mu \nu (V)}^{A})_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{a}^{\alpha \beta \gamma }+(\bar{Z}_{\mu \nu (V)}^{A})_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{a}^{\alpha \beta \gamma } \notag \\ &=&\tfrac{\lambda }{4!}\delta _{\mu }^{[\alpha }\delta _{\nu }^{\beta }\delta _{\rho }^{\gamma ]}\frac{\partial f_{a}^{A}}{\partial \varphi _{b}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{b}}, \label{r15}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\mu \nu (V)}^{A})_{B}^{\sigma }(\bar{Z}_{\sigma }^{(1)B})_{\alpha \beta \gamma \delta }^{a}+(\bar{Z}_{\mu \nu (V)}^{A})_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{\alpha \beta \gamma \delta }^{a} \notag \\ &&+(\bar{Z}_{\mu \nu (V)}^{A})_{e}(\bar{Z}^{(1)e})_{\alpha \beta \gamma \delta }^{a}+(\bar{Z}_{\mu \nu (V)}^{A})_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{\alpha \beta \gamma \delta }^{a} \notag \\ &=&\tfrac{\lambda }{4!}\sigma _{\mu \mu ^{\prime }}\sigma _{\nu \nu ^{\prime }}\delta _{\lbrack \alpha }^{\mu ^{\prime }}\delta _{\beta }^{\nu ^{\prime }}\delta _{\gamma }^{\rho }\delta _{\delta ]}^{\lambda }\left( f_{\;\;b}^{Aa}\frac{\delta S^{\mathrm{L}}}{\delta B_{b}^{\rho \lambda }}+\frac{\partial f_{\;\;b}^{Aa}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H^{c\rho }}A_{\lambda }^{b}\right) \notag \\ &&-\tfrac{\lambda }{4!}\sigma _{\mu \mu ^{\prime }}\sigma _{\nu \nu ^{\prime }}\delta _{\lbrack \alpha }^{\mu ^{\prime }}\delta _{\beta }^{\nu ^{\prime }}\delta _{\gamma }^{\rho }\delta _{\delta ]}^{\lambda }\left( g^{aAB}\frac{\delta S^{\mathrm{L}}}{\delta V^{B\rho \lambda }}+\frac{\partial g^{aAB}}{\partial \varphi _{b}}\frac{\delta S^{\mathrm{L}}}{\delta H^{b\rho }}V_{B\lambda }\right) , \label{r15a}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\mu (H)}^{a})_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{b}^{\alpha \beta \gamma }+(\bar{Z}_{\mu (H)}^{a})_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{b}^{\alpha \beta \gamma }+(\bar{Z}_{\mu (H)}^{a})_{B}^{\sigma }(\bar{Z}_{\sigma }^{(1)B})_{b}^{\alpha \beta \gamma } \notag \\ &=&\lambda \delta _{\mu }^{[\alpha }\delta _{\nu }^{\beta }\delta _{\rho }^{\gamma ]}\left( \frac{\partial W_{bc}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta B_{c\nu \rho }}+\frac{\partial ^{2}W_{bc}}{\partial \varphi _{a}\partial \varphi _{e}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{e}}A^{c\rho }\right) \notag \\ &&-\tfrac{\lambda }{4!}\delta _{\mu }^{[\alpha }\delta _{\nu }^{\beta }\delta _{\rho }^{\gamma ]}\left( \frac{\partial f_{b}^{A}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta V_{\nu \rho }^{A}}+\frac{\partial ^{2}f_{b}^{A}}{\partial \varphi _{a}\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{c}}V_{A}^{\rho }\right) , \label{r17}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\mu (H)}^{a})_{e}(\bar{Z}^{(1)e})_{\alpha \beta \gamma \delta }^{b}+(\bar{Z}_{\mu (H)}^{a})_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{\alpha \beta \gamma \delta }^{b} \notag \\ &&+(\bar{Z}_{\mu (H)}^{a})_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda 
}^{(1)e})_{\alpha \beta \gamma \delta }^{b}+(\bar{Z}_{\mu (H)}^{a})_{B}^{\sigma }(\bar{Z}_{\sigma }^{(1)B})_{\alpha \beta \gamma \delta }^{b} \notag \\ &=&\tfrac{\lambda }{2}\sigma _{\mu \lbrack \alpha }^{\left. {}\right. }\delta _{\beta }^{\nu }\delta _{\gamma }^{\rho }\delta _{\delta ]}^{\lambda }\left( \frac{\partial M_{cd}^{b}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta B_{c}^{\nu \rho }}A_{\lambda }^{d}+\tfrac{1}{2}\frac{\partial M_{cd}^{b}}{\partial \varphi _{a}\partial \varphi _{e}}\frac{\delta S^{\mathrm{L}}}{\delta H^{e\nu }}A_{\rho }^{c}A_{\lambda }^{d}\right) \notag \\ &&+2\lambda \varepsilon _{\alpha \beta \gamma \delta }\left( \frac{\partial M^{bc}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta A^{c\mu }}+\frac{\partial ^{2}M^{bc}}{\partial \varphi _{a}\partial \varphi _{d}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{d}}B_{c\mu \nu }\right) \notag \\ &&-\tfrac{\lambda }{4!}\sigma _{\mu \lbrack \alpha }^{\left. {}\right. }\delta _{\beta }^{\nu }\delta _{\gamma }^{\rho }\delta _{\delta ]}^{\lambda }\left[ \frac{\partial ^{2}f_{\;\;c}^{Ab}}{\partial \varphi _{a}\partial \varphi _{d}}\frac{\delta S^{\mathrm{L}}}{\delta H^{d\nu }}V_{A\rho }A_{\lambda }^{c}+\frac{\partial f_{\;\;c}^{Ab}}{\partial \varphi _{a}}\left( \frac{\delta S^{\mathrm{L}}}{\delta V^{A\nu \rho }}A_{\lambda }^{c}\right. \right. \notag \\ &&\left. \left. -\frac{\delta S^{\mathrm{L}}}{\delta B_{c}^{\nu \rho }}V_{A\lambda }\right) -\left( \frac{\partial ^{2}g^{bAB}}{\partial \varphi _{a}\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H^{c\nu }}V_{A\rho }+\frac{\partial g^{bAB}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta V^{A\nu \rho }}\right) V_{B\lambda }\right] , \label{r18}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\mu (H)}^{a})_{C}^{\sigma }(\bar{Z}_{\sigma }^{(1)C})_{A}+(\bar{Z}_{\mu (H)}^{a})_{e}^{\rho \lambda }(\bar{Z}_{\rho \lambda }^{(1)e})_{A}+(\bar{Z}_{\mu (H)}^{a})_{\rho \lambda \sigma }^{e}(\bar{Z}_{e}^{(1)\rho \lambda \sigma })_{A} \notag \\ &=&\lambda \varepsilon _{\mu \nu \rho \lambda }\left[ \frac{\delta S^{\mathrm{L}}}{\delta V_{\nu \rho }^{B}}\left( \frac{\partial f_{bA}^{B}}{\partial \varphi _{a}}A^{b\lambda }-\frac{\partial g_{\quad A}^{BC}}{\partial \varphi _{a}}V_{C}^{\lambda }\right) -\frac{\partial f_{bA}^{B}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta B_{b\nu \rho }}V_{B}^{\lambda }\right. \notag \\ &&\left. +\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{c}}\left( \frac{\partial ^{2}f_{bA}^{B}}{\partial \varphi _{a}\partial \varphi _{c}}V_{B}^{\rho }A^{b\lambda }-\tfrac{1}{2}\frac{\partial ^{2}g_{\quad A}^{BC}}{\partial \varphi _{a}\partial \varphi _{c}}V_{B}^{\rho }V_{C}^{\lambda }\right) \right] . \label{r18a}\end{aligned}$$The deformed gauge generators are given in (\[g1\])–(\[g6d\]) and $S^{\mathrm{L}}$ represents the deformed Lagrangian action (\[ldef\]). The pieces of antighost number $3$ from (\[defsolmast\]) that are simultaneously linear in the ghosts for ghosts for ghosts and in the antifields of the ghosts for ghosts offer us the second-order reducibility functions for the interacting model of the form $$(\bar{Z}^{(2)A})_{a}^{\mu \nu \rho \lambda }=\tfrac{\lambda }{4!}f_{a}^{A}\varepsilon ^{\mu \nu \rho \lambda }, \label{r19}$$$$\begin{aligned} (\bar{Z}_{\alpha \beta \gamma }^{(2)a})_{b}^{\mu \nu \rho \lambda } &=&-\tfrac{1}{6}\left( \delta _{b}^{a}\partial _{\left. {}\right. 
}^{[\mu }\delta _{\alpha }^{\nu }\delta _{\beta }^{\rho }\delta _{\gamma }^{\lambda ]}+\lambda \frac{\partial W_{cb}}{\partial \varphi _{a}}A_{\left. {}\right. }^{c[\mu }\delta _{\alpha }^{\nu }\delta _{\beta }^{\rho }\delta _{\gamma }^{\lambda ]}\right) \notag \\ &&+\tfrac{\lambda }{3!\cdot 4!}\delta _{\alpha }^{[\mu }\delta _{\beta }^{\nu }\delta _{\gamma }^{\rho }\delta _{\delta }^{\lambda ]}\frac{\partial f_{b}^{A}}{\partial \varphi _{a}}V_{A}^{\delta }, \label{r20}\end{aligned}$$$$(\bar{Z}_{a}^{(2)\mu _{1}\mu _{2}\mu _{3}\mu _{4}})_{b}^{\mu \nu \rho \lambda }=\tfrac{\lambda }{12}W_{ab}\sum\limits_{\pi \in S_{4}}\left( -\right) ^{\pi }\sigma ^{\mu _{\pi (1)}\mu }\sigma ^{\mu _{\pi (2)}\nu }\sigma ^{\mu _{\pi (3)}\rho }\sigma ^{\mu _{\pi (4)}\lambda }, \label{r21}$$where $S_{4}$ denotes the set of permutations of $\left\{ 1,2,3,4\right\} $ and $\left( -\right) ^{\pi }$ is the signature of a given permutation $\pi $. By means of the terms with the antighost number equal to $3$ present in (\[defsolmast\]) that are linear in the ghosts for ghosts for ghosts and also quadratic in antifields we infer the second-order reducibility relations for the interacting model in condensed De Witt form, which read as $$\begin{aligned} &&(\bar{Z}_{\mu }^{(1)A})_{B}(\bar{Z}^{(2)B})_{a}^{\alpha \beta \gamma \delta }+(\bar{Z}_{\mu }^{(1)A})_{b}^{\nu \rho \lambda }(\bar{Z}_{\nu \rho \lambda }^{(2)b})_{a}^{\alpha \beta \gamma \delta } \notag \\ &&+(\bar{Z}_{\mu }^{(1)A})_{\nu \rho \lambda \sigma }^{b}(\bar{Z}_{b}^{(2)\nu \rho \lambda \sigma })_{a}^{\alpha \beta \gamma \delta } \notag \\ &=&\tfrac{\lambda }{4!}\varepsilon ^{\alpha \beta \gamma \delta }\frac{\partial f_{a}^{A}}{\partial \varphi _{b}}\frac{\delta S^{\mathrm{L}}}{\delta H^{b\mu }}, \label{r22}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{a}^{(1)\alpha \beta \gamma })_{A}(\bar{Z}^{(2)A})_{b}^{\mu \nu \rho \lambda }+(\bar{Z}_{a}^{(1)\alpha \beta \gamma })_{e}^{\delta \sigma \varepsilon }(\bar{Z}_{\delta \sigma \varepsilon }^{(2)e})_{b}^{\mu \nu \rho \lambda } \notag \\ &&+(\bar{Z}_{a}^{(1)\alpha \beta \gamma })_{\delta \sigma \varepsilon \eta }^{e}(\bar{Z}_{e}^{(2)\delta \sigma \varepsilon \eta })_{b}^{\mu \nu \rho \lambda } \notag \\ &=&\tfrac{\lambda }{3}\delta _{\alpha ^{\prime }}^{[\mu }\delta _{\beta ^{\prime }}^{\nu }\delta _{\gamma ^{\prime }}^{\rho }\delta _{\delta ^{\prime }}^{\lambda ]}\sigma ^{\alpha \alpha ^{\prime }}\sigma ^{\beta \beta ^{\prime }}\sigma ^{\gamma \gamma ^{\prime }}\frac{\partial W_{ab}}{\partial \varphi _{c}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\delta ^{\prime }}^{c}}, \label{r23}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\mu \nu }^{(1)a})_{A}(\bar{Z}^{(2)A})_{b}^{\alpha \beta \gamma \delta }+(\bar{Z}_{\mu \nu }^{(1)a})_{e}^{\delta \sigma \varepsilon }(\bar{Z}_{\delta \sigma \varepsilon }^{(2)e})_{b}^{\alpha \beta \gamma \delta } \notag \\ &&+(\bar{Z}_{\mu \nu }^{(1)a})_{\delta \sigma \varepsilon \eta }^{e}(\bar{Z}_{e}^{(2)\delta \sigma \varepsilon \eta })_{b}^{\alpha \beta \gamma \delta } \notag \\ &=&\tfrac{\lambda }{2}\delta _{\mu }^{[\alpha }\delta _{\nu }^{\beta }\delta _{\rho }^{\gamma }\delta _{\lambda }^{\delta ]}\left[ \frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{d}}\left( \frac{\partial ^{2}W_{bc}}{\partial \varphi _{a}\partial \varphi _{d}}A^{c\lambda }-\tfrac{1}{4!}\frac{\partial ^{2}f_{b}^{A}}{\partial \varphi _{a}\partial \varphi _{d}}V_{A}^{\lambda }\right) \right. \notag \\ &&\left. 
+\frac{\partial W_{bc}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta B_{c\rho \lambda }}-\tfrac{1}{4!}\frac{\partial f_{b}^{A}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta V_{\rho \lambda }^{A}}\right] . \label{r25}\end{aligned}$$

Gauge algebra of the deformed model \[appendixD\]
=================================================

The nonvanishing commutators among the deformed gauge transformations (\[gaugeA\])–(\[gaugeV1\]) result from the terms quadratic in the ghosts with pure ghost number $1$ present in (\[defsolmast\]). By analyzing these terms and taking into account the expressions (\[g1\])–(\[g6d\]), we deduce the following nonvanishing relations: $$(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{a(\varphi )})_{c}}{\delta \varphi _{e}}-(\bar{Z}_{e(\varphi )})_{c}\frac{\delta (\bar{Z}_{a(\varphi )})_{b}}{\delta \varphi _{e}}=\lambda M_{bc}^{e}(\bar{Z}_{a(\varphi )})_{e}, \label{co1}$$
$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{\sigma (V)}^{A})_{b}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{c}}{\delta V_{\sigma }^{A}} \notag \\ &&-(\bar{Z}_{e(\varphi )})_{c}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{b}}{\delta \varphi _{e}}-(\bar{Z}_{\sigma (A)}^{m})_{c}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{b}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{\sigma (V)}^{A})_{c}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{b}}{\delta V_{\sigma }^{A}} \notag \\ &=&\lambda \left[ M_{bc}^{d}(\bar{Z}_{\mu (A)}^{a})_{d}+\tfrac{1}{12}M_{dbce}\varepsilon ^{\alpha \beta \gamma \delta }A_{\delta }^{e}(\bar{Z}_{\mu (A)}^{a})_{\alpha \beta \gamma }^{d}\right. \notag \\ &&\left. -\tfrac{1}{3}f_{Abcd}\varepsilon ^{\alpha \beta \gamma \delta }V_{\delta }^{A}(\bar{Z}_{\mu (A)}^{a})_{\alpha \beta \gamma }^{d}-\frac{\delta S^{\mathrm{L}}}{\delta H^{d\mu }}\frac{\partial M_{bc}^{a}}{\partial \varphi _{d}}\right] , \label{co2}\end{aligned}$$
$$(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{\alpha \beta \gamma }^{c}}{\delta \varphi _{e}}-(\bar{Z}_{\sigma (A)}^{m})_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{\mu (A)}^{a})_{b}}{\delta A_{\sigma }^{m}}=-\lambda M_{bd}^{c}(\bar{Z}_{\mu (A)}^{a})_{\alpha \beta \gamma }^{d}, \label{co3}$$
$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{c}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &&+(\bar{Z}_{\sigma (V)}^{A})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{c}}{\delta V_{\sigma }^{A}}-(\bar{Z}_{e(\varphi )})_{c}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta \varphi _{e}}-(\bar{Z}_{\sigma (A)}^{m})_{c}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta A_{\sigma }^{m}} \notag \\ &&-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{c}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta B_{m}^{\sigma \varepsilon }}-(\bar{Z}_{\sigma (V)}^{A})_{c}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta V_{\sigma }^{A}} \notag \\ &=&\lambda \left\{ M_{bc}^{d}(\bar{Z}_{a(B)}^{\mu \nu })_{d}-\tfrac{1}{3}f_{Abcd}\varepsilon ^{\alpha \beta \gamma \delta }V_{\delta }^{A}(\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{d}\right.
\notag \\ &&+\tfrac{1}{12}M_{dbce}\varepsilon ^{\alpha \beta \gamma \delta }A_{\delta }^{e}(\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{d} \notag \\ &&-\tfrac{1}{2}\left[ \frac{\partial M_{bc}^{d}}{\partial \varphi _{e}}B_{d\alpha \beta }-\varepsilon _{\alpha \beta \gamma \delta }\left( \tfrac{1}{8}\frac{\partial M_{bcdf}}{\partial \varphi _{e}}A^{d\gamma }+\frac{\partial f_{bcf}^{A}}{\partial \varphi _{e}}V_{A}^{\gamma }\right) A^{f\delta }\right. \notag \\ &&\left. +\tfrac{1}{2}\varepsilon _{\alpha \beta \gamma \delta }\frac{\partial g_{bc}^{AB}}{\partial \varphi _{e}}V_{A}^{\gamma }V_{B}^{\delta }\right] (\bar{Z}_{a(B)}^{\mu \nu })_{e}^{\alpha \beta }+\left( g_{bc}^{AB}V_{B\lambda }-f_{bcd}^{A}A_{\lambda }^{d}\right) (\bar{Z}_{a(B)}^{\mu \nu })_{A}^{\lambda } \notag \\ &&-\lambda \varepsilon ^{\mu \nu \rho \lambda }\left[ \frac{\delta S^{\mathrm{L}}}{\delta H^{m\rho }}\left( \frac{\partial f_{abc}^{A}}{\partial \varphi _{m}}V_{A\lambda }-\tfrac{1}{4}\frac{\partial M_{abcd}}{\partial \varphi _{m}}A_{\lambda }^{d}\right) +f_{abc}^{A}\frac{\delta S^{\mathrm{L}}}{\delta V^{A\rho \lambda }}\right. \notag \\ &&\left. \left. -\tfrac{1}{4}M_{abcd}\frac{\delta S^{\mathrm{L}}}{\delta B_{d}^{\rho \lambda }}\right] \right\} , \label{co5}\end{aligned}$$$$(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{c}^{\alpha \beta }}{\delta \varphi _{e}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{c}^{\alpha \beta }\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta B_{m}^{\sigma \varepsilon }}=\lambda \frac{\partial W_{bc}}{\partial \varphi _{d}}(\bar{Z}_{a(B)}^{\mu \nu })_{d}^{\alpha \beta }, \label{co6}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{\sigma (V)}^{A})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{c}}{\delta V_{\sigma }^{A}} \notag \\ &&-(\bar{Z}_{\sigma (A)}^{m})_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &=&\lambda \left[ -\tfrac{1}{4}\left( \frac{\partial M_{bd}^{c}}{\partial \varphi _{e}}A_{[\alpha }^{d}\delta _{\beta }^{\rho }\delta _{\gamma ]}^{\lambda }+\tfrac{1}{12}\frac{\partial f_{Ab}^{c}}{\partial \varphi _{e}}V_{[\alpha }^{A}\delta _{\beta }^{\rho }\delta _{\gamma ]}^{\lambda }\right) (\bar{Z}_{a(B)}^{\mu \nu })_{e\rho \lambda }\right. \notag \\ &&+M_{eb}^{c}(\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{e}+\tfrac{1}{2}\frac{\partial M_{ab}^{c}}{\partial \varphi _{m}}\frac{\delta S^{\mathrm{L}}}{\delta H^{m\rho }}\delta _{\lbrack \alpha }^{\mu }\delta _{\beta }^{\nu }\delta _{\gamma ]}^{\rho } \notag \\ &&\left. 
+\tfrac{1}{4!}\varepsilon _{\lambda \alpha \beta \gamma }f_{\;\;b}^{Mc}(\bar{Z}_{a(B)}^{\mu \nu })_{M}^{\lambda }\right] , \label{c7}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\sigma (A)}^{m})_{\alpha \beta \gamma }^{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{\sigma (A)}^{m})_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{\alpha \beta \gamma }^{b}}{\delta A_{\sigma }^{m}} \notag \\ &=&-\tfrac{\lambda }{2}\frac{\partial M^{bc}}{\partial \varphi _{e}}\varepsilon ^{\rho \lambda \delta \varepsilon }\varepsilon _{\delta \alpha \beta \gamma }\varepsilon _{\varepsilon \alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}(\bar{Z}_{a(B)}^{\mu \nu })_{e\rho \lambda }, \label{co8}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{A}^{\lambda }}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (V)}^{B})_{b}\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{A}^{\lambda }}{\delta V_{\sigma }^{B}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{A}^{\lambda }\frac{\delta (\bar{Z}_{a(B)}^{\mu \nu })_{b}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &=&-\lambda f_{bA}^{B}(\bar{Z}_{a(B)}^{\mu \nu })_{B}^{\lambda }+\tfrac{\lambda }{2}\varepsilon ^{\alpha \beta \rho \lambda }\frac{\partial f_{bMA}}{\partial \varphi _{e}}V_{\rho }^{M}(\bar{Z}_{a(B)}^{\mu \nu })_{e\alpha \beta }, \label{co9}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu (V)}^{A})_{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (V)}^{B})_{b}\frac{\delta (\bar{Z}_{\mu (V)}^{A})_{c}}{\delta V_{\sigma }^{B}}-(\bar{Z}_{e(\varphi )})_{c}\frac{\delta (\bar{Z}_{\mu (V)}^{A})_{b}}{\delta \varphi _{e}} \notag \\ &&-(\bar{Z}_{\sigma (V)}^{B})_{c}\frac{\delta (\bar{Z}_{\mu (V)}^{A})_{b}}{\delta V_{\sigma }^{B}}=\lambda M_{bc}^{d}(\bar{Z}_{\mu (V)}^{A})_{d}, \label{co10}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{c}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &&+(\bar{Z}_{\sigma (V)}^{B})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{c}}{\delta V_{\sigma }^{B}}+(\bar{Z}_{\sigma \varepsilon (V)}^{B})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{c}}{\delta V_{\sigma \varepsilon }^{B}}-(\bar{Z}_{e(\varphi )})_{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta \varphi _{e}} \notag \\ &&-(\bar{Z}_{\sigma (A)}^{m})_{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta B_{m}^{\sigma \varepsilon }}-(\bar{Z}_{\sigma (V)}^{B})_{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta V_{\sigma }^{B}} \notag \\ &&-(\bar{Z}_{\sigma \varepsilon (V)}^{B})_{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta V_{\sigma \varepsilon }^{B}} \notag \\ &=&\lambda \left\{ M_{bc}^{d}(\bar{Z}_{\mu \nu (V)}^{A})_{d}-\tfrac{1}{3}\varepsilon ^{\alpha \beta \gamma \delta }\left[ f_{Mbcd}V_{\delta }^{M}-\tfrac{1}{4}M_{dbce}A_{\delta }^{e}\right] (\bar{Z}_{\mu \nu (V)}^{A})_{\alpha \beta \gamma }^{d}\right. 
\notag \\ &&-\tfrac{1}{2}\left[ \frac{\partial M_{bc}^{d}}{\partial \varphi _{e}}B_{d\alpha \beta }-\varepsilon _{\alpha \beta \gamma \delta }\left( \tfrac{1}{8}\frac{\partial M_{bcdf}}{\partial \varphi _{e}}A^{d\gamma }+\frac{\partial f_{bcf}^{M}}{\partial \varphi _{e}}V_{M}^{\gamma }\right) A^{f\delta }\right. \notag \\ &&\left. +\tfrac{1}{2}\varepsilon _{\alpha \beta \gamma \delta }\frac{\partial g_{bc}^{BC}}{\partial \varphi _{e}}V_{B}^{\gamma }V_{C}^{\delta }\right] (\bar{Z}_{\mu \nu (V)}^{A})_{e}^{\alpha \beta }+\left( g_{bc}^{MB}V_{B\lambda }-f_{bcd}^{M}A_{\lambda }^{d}\right) (\bar{Z}_{\mu \nu (V)}^{A})_{M}^{\lambda } \notag \\ &&+\varepsilon _{\mu \nu \rho \lambda }\left[ \frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{m}}\left( \frac{\partial f_{bcd}^{A}}{\partial \varphi _{m}}A^{d\lambda }-\frac{\partial g_{bc}^{AB}}{\partial \varphi _{m}}V_{B}^{\lambda }\right) +f_{bcd}^{A}\frac{\delta S^{\mathrm{L}}}{\delta B_{d\rho \lambda }}\right. \notag \\ &&\left. \left. -g_{bc}^{AB}\frac{\delta S^{\mathrm{L}}}{\delta V_{\rho \lambda }^{B}}\right] \right\} , \label{c11}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{\alpha \beta \gamma }^{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{\alpha \beta \gamma }^{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{\sigma (V)}^{B})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{\alpha \beta \gamma }^{c}}{\delta V_{\sigma }^{B}} \notag \\ &&-(\bar{Z}_{\sigma (A)}^{m})_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta B_{m}^{\sigma \varepsilon }}-(\bar{Z}_{\sigma \varepsilon (V)}^{B})_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta V_{\sigma \varepsilon }^{B}} \notag \\ &=&\lambda \left[ -\tfrac{1}{4}\left( \frac{\partial M_{bd}^{c}}{\partial \varphi _{e}}A_{[\alpha }^{d}\delta _{\beta }^{\rho }\delta _{\gamma ]}^{\lambda }+\tfrac{1}{12}\frac{\partial f_{Mb}^{c}}{\partial \varphi _{e}}V_{[\alpha }^{M}\delta _{\beta }^{\rho }\delta _{\gamma ]}^{\lambda }\right) (\bar{Z}_{\mu \nu (V)}^{A})_{e\rho \lambda }\right. \notag \\ &&+M_{eb}^{c}(\bar{Z}_{\mu \nu (V)}^{A})_{\alpha \beta \gamma }^{e}+\tfrac{1}{4!}\varepsilon _{\lambda \alpha \beta \gamma }f_{\;\;b}^{Mc}(\bar{Z}_{\mu \nu (V)}^{A})_{M}^{\lambda } \notag \\ &&\left. 
-\tfrac{1}{4!}\varepsilon _{\mu \nu \rho \lambda }\varepsilon _{\sigma \alpha \beta \gamma }\sigma ^{\lambda \sigma }\frac{\partial f_{\;\;b}^{Ac}}{\partial \varphi _{m}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{m}}\right] , \label{co11}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{c}^{\alpha \beta }}{\delta \varphi _{e}}-(\bar{Z}_{\sigma \varepsilon (V)}^{B})_{c}^{\alpha \beta }\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta V_{\sigma \varepsilon }^{B}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{c}^{\alpha \beta }\frac{\delta (\bar{Z}_{\mu \nu (V)})_{b}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &=&\lambda \frac{\partial W_{bc}}{\partial \varphi _{d}}(\bar{Z}_{\mu \nu (V)}^{A})_{d}^{\alpha \beta }, \label{c12}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\sigma (A)}^{m})_{\alpha \beta \gamma }^{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{\sigma (A)}^{m})_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{\alpha \beta \gamma }^{b}}{\delta A_{\sigma }^{m}} \notag \\ &=&-\tfrac{\lambda }{2}\frac{\partial M^{bc}}{\partial \varphi _{e}}\varepsilon ^{\rho \lambda \delta \varepsilon }\varepsilon _{\delta \alpha \beta \gamma }\varepsilon _{\varepsilon \alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}(\bar{Z}_{\mu \nu (V)}^{A})_{e\rho \lambda }, \label{co13}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{B}^{\lambda }}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{B}^{\lambda }}{\delta A_{\sigma }^{m}}+(\bar{Z}_{\sigma (V)}^{C})_{b}\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{B}^{\lambda }}{\delta V_{\sigma }^{C}} \notag \\ &&-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{B}^{\lambda }\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta B_{m}^{\sigma \varepsilon }}-(\bar{Z}_{\sigma \varepsilon (V)}^{C})_{B}^{\lambda }\frac{\delta (\bar{Z}_{\mu \nu (V)}^{A})_{b}}{\delta V_{\sigma \varepsilon }^{C}} \notag \\ &=&-\lambda f_{bB}^{M}(\bar{Z}_{\mu \nu (V)}^{A})_{M}^{\lambda }+\tfrac{\lambda }{2}\varepsilon ^{\alpha \beta \rho \lambda }\frac{\partial f_{bMB}}{\partial \varphi _{e}}V_{\rho }^{M}(\bar{Z}_{\mu \nu (V)}^{A})_{e\alpha \beta } \notag \\ &&+\lambda \sigma ^{\lambda \sigma }\varepsilon _{\mu \nu \rho \sigma }\frac{\partial f_{bB}^{A}}{\partial \varphi _{m}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\rho }^{m}}, \label{c14a}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{\sigma (H)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}}{\delta H_{\sigma }^{m}} \notag \\ &&+(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}}{\delta B_{m}^{\sigma \varepsilon }}+(\bar{Z}_{\sigma (V)}^{A})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}}{\delta V_{\sigma }^{A}}+(\bar{Z}_{\sigma \varepsilon (V)}^{A})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}}{\delta V_{\sigma \varepsilon }^{A}} \notag \\ &&-(\bar{Z}_{e(\varphi )})_{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta \varphi _{e}}-(\bar{Z}_{\sigma (A)}^{m})_{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{\sigma (H)}^{m})_{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta H_{\sigma }^{m}} \notag \\ &&-(\bar{Z}_{m(B)}^{\sigma 
\varepsilon })_{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta B_{m}^{\sigma \varepsilon }}-(\bar{Z}_{\sigma (V)}^{A})_{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta V_{\sigma }^{A}}-(\bar{Z}_{\sigma \varepsilon (V)}^{A})_{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta V_{\sigma \varepsilon }^{A}} \notag \\ &=&\lambda \left\{ M_{bc}^{d}(\bar{Z}_{\mu (H)}^{a})_{d}-\tfrac{1}{3}\varepsilon ^{\alpha \beta \gamma \delta }\left[ f_{Mbcd}V_{\delta }^{M}-\tfrac{1}{4}M_{dbce}A_{\delta }^{e}\right] (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{d}\right. \notag \\ &&-\tfrac{1}{2}\left[ \frac{\partial M_{bc}^{d}}{\partial \varphi _{e}}B_{d\alpha \beta }-\varepsilon _{\alpha \beta \gamma \delta }\left( \tfrac{1}{8}\frac{\partial M_{bcdf}}{\partial \varphi _{e}}A^{d\gamma }+\frac{\partial f_{bcf}^{M}}{\partial \varphi _{e}}V_{M}^{\gamma }\right) A^{f\delta }\right. \notag \\ &&\left. +\tfrac{1}{2}\varepsilon _{\alpha \beta \gamma \delta }\frac{\partial g_{bc}^{BC}}{\partial \varphi _{e}}V_{B}^{\gamma }V_{C}^{\delta }\right] (\bar{Z}_{\mu (H)}^{a})_{e}^{\alpha \beta }+\left( g_{bc}^{MB}V_{B\lambda }-f_{bcd}^{M}A_{\lambda }^{d}\right) (\bar{Z}_{\mu (H)}^{a})_{M}^{\lambda } \notag \\ &&+\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{m}}\left[ \frac{\partial ^{2}M_{bc}^{d}}{\partial \varphi _{m}\partial \varphi _{a}}B_{d\mu \nu }+\tfrac{1}{2}\varepsilon _{\mu \nu \rho \lambda }\left( \frac{\partial ^{2}g_{bc}^{AB}}{\partial \varphi _{m}\partial \varphi _{a}}V_{A}^{\rho }V_{B}^{\lambda }\right. \right. \notag \\ &&\left. \left. -2\frac{\partial ^{2}f_{bcd}^{A}}{\partial \varphi _{m}\partial \varphi _{a}}V_{A}^{\rho }A^{d\lambda }-\tfrac{1}{4}\frac{\partial ^{2}M_{bcde}}{\partial \varphi _{m}\partial \varphi _{a}}A^{d\rho }A^{e\lambda }\right) \right] \notag \\ &&+\varepsilon _{\mu \nu \rho \lambda }\frac{\delta S^{\mathrm{L}}}{\delta B_{d\rho \lambda }}\left( \frac{\partial f_{bcd}^{A}}{\partial \varphi _{a}}V_{A}^{\nu }-\tfrac{1}{8}\frac{\partial M_{bcde}}{\partial \varphi _{a}}A^{e\nu }\right) \notag \\ &&\left. 
+\varepsilon _{\mu \nu \rho \lambda }\frac{\delta S^{\mathrm{L}}}{\delta V_{\rho \lambda }^{A}}\left( \frac{\partial g_{bc}^{AB}}{\partial \varphi _{a}}V_{B}^{\nu }-\frac{\partial f_{bcd}^{A}}{\partial \varphi _{a}}A^{d\nu }\right) +\frac{\partial M_{bc}^{d}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta A^{d\mu }}\right\} , \label{c14}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{c}}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{c}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &&+(\bar{Z}_{\varepsilon (V)}^{A})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{c}}{\delta V_{\varepsilon }^{A}}-(\bar{Z}_{\sigma (A)}^{m})_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta A_{\sigma }^{m}} \notag \\ &&-(\bar{Z}_{\sigma (H)}^{m})_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta H_{\sigma }^{m}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{\alpha \beta \gamma }^{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &=&\lambda \left\{ -\tfrac{1}{4}\left( \frac{\partial M_{bd}^{c}}{\partial \varphi _{e}}A_{[\alpha }^{d}\delta _{\beta }^{\rho }\delta _{\gamma ]}^{\lambda }+\tfrac{1}{12}\frac{\partial f_{Ab}^{c}}{\partial \varphi _{e}}V_{[\alpha }^{A}\delta _{\beta }^{\rho }\delta _{\gamma ]}^{\lambda }\right) (\bar{Z}_{\mu (H)}^{a})_{e\rho \lambda }\right. \notag \\ &&+M_{eb}^{c}(\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{e}+\tfrac{1}{4!}\varepsilon _{\lambda \alpha \beta \gamma }f_{\;\;b}^{Mc}(\bar{Z}_{\mu (H)}^{a})_{M}^{\lambda } \notag \\ &&+\tfrac{1}{2}\sigma _{\mu \lbrack \alpha }^{\left. {}\right. }\delta _{\beta }^{\nu }\delta _{\gamma ]}^{\rho }\left[ \frac{\delta S^{\mathrm{L}}}{\delta H^{m\nu }}\left( \frac{\partial ^{2}M_{bd}^{c}}{\partial \varphi _{a}\partial \varphi _{m}}A_{\rho }^{d}+\tfrac{1}{12}\frac{\partial ^{2}f_{Ab}^{c}}{\partial \varphi _{a}\partial \varphi _{m}}V_{\rho }^{A}\right) \right. \notag \\ &&\left. \left. 
+\frac{\partial M_{bd}^{c}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta B_{d}^{\nu \rho }}+\tfrac{1}{12}\frac{\partial f_{Ab}^{c}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta V_{A}^{\nu \rho }}\right] \right\} , \label{co15}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}^{\alpha \beta }}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}^{\alpha \beta }}{\delta A_{\sigma }^{m}}+(\bar{Z}_{\sigma (V)}^{A})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{c}^{\alpha \beta }}{\delta V_{\sigma }^{A}} \notag \\ &&-(\bar{Z}_{\sigma (H)}^{m})_{c}^{\alpha \beta }\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta H_{\sigma }^{m}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{c}^{\alpha \beta }\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta B_{m}^{\sigma \varepsilon }}-(\bar{Z}_{\sigma \varepsilon (V)}^{A})_{c}^{\alpha \beta }\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta V_{\sigma \varepsilon }^{A}} \notag \\ &=&\lambda \left[ \frac{\partial W_{bc}}{\partial \varphi _{d}}(\bar{Z}_{\mu (H)}^{a})_{d}^{\alpha \beta }-\frac{\partial ^{2}W_{bc}}{\partial \varphi _{a}\partial \varphi _{d}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{d}}\delta _{\mu }^{[\alpha }\delta _{\nu }^{\beta ]}\right] , \label{co16}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{\sigma (A)}^{m})_{\alpha \beta \gamma }^{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}}{\delta A_{\sigma }^{m}}+(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{\alpha \beta \gamma }^{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &&-(\bar{Z}_{\sigma (A)}^{m})_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{b}}{\delta A_{\sigma }^{m}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{\alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{c}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{\alpha \beta \gamma }^{b}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &=&-\tfrac{\lambda }{2}\frac{\partial M^{bc}}{\partial \varphi _{e}}\varepsilon ^{\rho \lambda \delta \varepsilon }\varepsilon _{\delta \alpha \beta \gamma }\varepsilon _{\varepsilon \alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}(\bar{Z}_{\mu (H)}^{a})_{e\rho \lambda } \notag \\ &&+\lambda \varepsilon _{\mu \nu \rho \lambda }\varepsilon _{\delta \alpha \beta \gamma }\varepsilon _{\varepsilon \alpha ^{\prime }\beta ^{\prime }\gamma ^{\prime }}\sigma ^{\rho \delta }\sigma ^{\varepsilon \lambda }\frac{\partial ^{2}M^{bc}}{\partial \varphi _{a}\partial \varphi _{d}}\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{d}}, \label{co17}\end{aligned}$$$$\begin{aligned} &&(\bar{Z}_{e(\varphi )})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{B}^{\lambda }}{\delta \varphi _{e}}+(\bar{Z}_{\sigma (A)}^{m})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{B}^{\lambda }}{\delta A_{\sigma }^{m}}+(\bar{Z}_{\sigma (V)}^{C})_{b}\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{B}^{\lambda }}{\delta V_{\sigma }^{C}} \notag \\ &&-(\bar{Z}_{\sigma \varepsilon (V)}^{C})_{B}^{\lambda }\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta V_{\sigma \varepsilon }^{C}}-(\bar{Z}_{\sigma (H)}^{m})_{B}^{\lambda }\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta H_{\sigma }^{m}}-(\bar{Z}_{m(B)}^{\sigma \varepsilon })_{B}^{\lambda }\frac{\delta (\bar{Z}_{\mu (H)}^{a})_{b}}{\delta B_{m}^{\sigma \varepsilon }} \notag \\ &=&-\lambda f_{bB}^{M}(\bar{Z}_{\mu (H)}^{a})_{M}^{\lambda 
}+\tfrac{\lambda }{2}\varepsilon ^{\alpha \beta \rho \lambda }\frac{\partial f_{bMB}}{\partial \varphi _{e}}V_{\rho }^{M}(\bar{Z}_{\mu (H)}^{a})_{e\alpha \beta } \notag \\ &&-\lambda \varepsilon _{\mu \nu \rho \sigma }\sigma ^{\sigma \lambda }\left( \frac{\partial ^{2}f_{bMB}}{\partial \varphi _{a}\partial \varphi _{e}}V^{M\rho }\frac{\delta S^{\mathrm{L}}}{\delta H_{\nu }^{e}}+\frac{\partial f_{bMB}}{\partial \varphi _{a}}\frac{\delta S^{\mathrm{L}}}{\delta V_{M\nu \rho }}\right) . \label{co19}\end{aligned}$$
[^1]: E-mail address: bizdadea@central.ucv.ro [^2]: E-mail address: manache@central.ucv.ro [^3]: E-mail address: osaliu@central.ucv.ro [^4]: E-mail address: scsararu@central.ucv.ro
--- abstract: 'We extend recent results of Ono and Raji, relating the number of self-conjugate $7$-core partitions to Hurwitz class numbers. Furthermore, we give a combinatorial explanation for the curious equality $2\operatorname{sc}_7(8n+1) = \operatorname{c}_4(7n+2)$. We also conjecture that an equality of this shape holds if and only if $t=4$, proving the cases $t\in\{2,3,5\}$ and giving partial results for $t>5$.' address: - 'Department of Mathematics and Computer Science, Division of Mathematics, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany' - 'Department of Mathematics, University of Hong Kong, Pokfulam, Hong Kong' - 'Department of Mathematics and Computer Science, Division of Mathematics, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany' author: - Kathrin Bringmann - Ben Kane - Joshua Males title: 'On $t$-core and self-conjugate $(2t-1)$-core partitions in arithmetic progressions' --- Introduction and Statement of Results ===================================== A *partition* $\Lambda$ of $n\in{\mathbb{N}}$ is a non-increasing sequence $\Lambda \coloneqq (\lambda_1, \lambda_2, \dots, \lambda_s)$ of non-negative integers $\lambda_j$ such that $\sum_{1\leq j\leq s} \lambda_j = n$. The *Ferrers–Young diagram* of $\Lambda$ is the $s$-rowed diagram $$\begin{matrix} \text{\large $\bullet$} & \text{\large $\bullet$} & \cdots & \text{\large $\bullet$} & \qquad \text{ $\lambda_1$ dots} \\ \text{\large $\bullet$} & \text{\large $\bullet$} & \cdots & \text{\large $\bullet$} & \qquad \text{ $\lambda_2$ dots} \\ \cdot \\ \cdot \\ \text{\large $\bullet$} & \cdots & \text{\large $\bullet$} & {} & \qquad \text{ $\lambda_s$ dots.} \end{matrix}$$ We label the cells of the Ferrers–Young diagram as if it were a matrix, and let $\lambda_k'$ denote the number of dots in column $k$. The *hook length* of the cell $(j,k)$ in the Ferrers–Young diagram of $\Lambda$ equals $$h(j,k) \coloneqq \lambda_j + \lambda_k' -k-j+1.$$ If no hook length in any cell of a partition $\Lambda$ is divisible by $t$, then $\Lambda$ is a *$t$-core partition*. A partition $\Lambda$ is said to be *self-conjugate* if it remains the same when rows and columns are switched. The partition $\Lambda = (3,2,1)$ of $6$ has the Ferrers–Young diagram $$\begin{matrix} \text{\large $\bullet$} & \text{\large $\bullet$} & \text{\large $\bullet$} \\ \text{\large $\bullet$} & \text{\large $\bullet$} \\ \text{\large $\bullet$} \end{matrix}$$ and has hook lengths $h(1,1)=5$, $h(1,2) = 3$, $h(1,3) = 1$, $h(2,1) = 3$, $h(2,2) = 1$, and $h(3,1) = 1$. Therefore, $\Lambda$ is a $t$-core partition for all $t \not\in\{ 1,3,5 \}$. Furthermore, switching rows and columns leaves $\Lambda$ unaltered, and so $\Lambda$ is self-conjugate. The theory of $t$-core partitions is intricately linked to various areas of number theory and beyond. For example, Garvan, Kim, and Stanton [@garvan1990cranks] used $t$-core partitions to investigate special cases of the famous Ramanujan congruences for the partition function $p(n)$. Furthermore, $t$-core partitions encode the modular representation theory of symmetric groups $S_n$ and $A_n$ (see e.g. [@MR1321575; @MR671655]). For $t,n \in {\mathbb{N}}$, we let $\operatorname{c}_t(n)$ denote the number of $t$-core partitions of $n$ and $\operatorname{sc}_t(n)$ the number of self-conjugate $t$-core partitions of $n$. In 1997, Ono and Sze [@ono19974] investigated the relation between $4$-core partitions and class numbers.
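For instance, both partitions of $2$, namely $(2)$ and $(1,1)$, have hook lengths $2$ and $1$ and are therefore $4$-cores, so $\operatorname{c}_4(2)=2$, while $(1)$ is the only self-conjugate $7$-core partition of $1$, so $\operatorname{sc}_7(1)=1$; these small values already satisfy $2\operatorname{sc}_7(1)=\operatorname{c}_4(2)$, the case $n=0$ of the equality (\[Equation: 2sc7 = c4\]) below.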
Denote by $H(|D|)$ ($D<0$ a discriminant) the $|D|$-th Hurwitz class number, which counts the number of ${{\text {\rm SL}}}_2({\mathbb{Z}})$-equivalence classes of integral binary quadratic forms of discriminant $D$, weighted by $\frac{1}{2}$ times the order of their automorphism group.[^1] Then Ono and Sze proved the following theorem. \[Theorem: Ono-Sze\] If $8n+5$ is square-free, then $$\operatorname{c}_4(n) = \frac{1}{2} H(32n+20).$$ More recently, Ono and Raji [@OnoRaji] showed similar relations between self-conjugate $7$-core partitions and class numbers. To state their result, let $$D_n \coloneqq \begin{cases} 28n+56 & \text{ if } n \equiv 1 {\ \, \left( \mathrm{mod} \, 4 \right)}, \\ 7n+14 & \text{ if } n \equiv 3 {\ \, \left( \mathrm{mod} \, 4 \right)}. \end{cases}$$ \[Theorem: Ono-Raji\] Let $n \not\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$ be a positive odd integer. Then $$\operatorname{sc}_7(n) = \begin{cases} \frac{1}{4} H(D_n) & \text{ if } n \equiv 1 {\ \, \left( \mathrm{mod} \, 4 \right)}, \\ \frac{1}{2} H(D_n) & \text{ if } n \equiv 3 {\ \, \left( \mathrm{mod} \, 8 \right)}, \\ 0 & \text{ if } n \equiv 7 {\ \, \left( \mathrm{mod} \, 8 \right)}. \end{cases}$$ In particular, by combining Theorems \[Theorem: Ono-Sze\] and \[Theorem: Ono-Raji\] and using elementary congruence conditions, one may easily show that for $n \not\equiv 5 {\ \, \left( \mathrm{mod} \, 7 \right)}$ and $8n+5$ square-free, $$\label{Equation: 2sc7 = c4} 2 \operatorname{sc}_7(8n+1) = \operatorname{c}_4(7n+2).$$ This fact hints at a deeper relationship between $\operatorname{sc}_{2t-1}$ and $\operatorname{c}_t$, which we investigate. Our main results pertain to the case of $t=4$. We begin by extending recent results of Ono and Raji [@OnoRaji]. Letting $\operatorname{sc}_7(n)$ denote the number of self-conjugate $7$-core partitions of $n$ and $(\frac{\cdot}{\cdot})$ denote the extended Jacobi symbol, we may state our first theorem. For this, for $n\in{\mathbb{Q}}$ we set $H(n):=0$ if $n\notin{\mathbb{Z}}$ or $-n$ is not a discriminant. \[thm:sc7manyH\] For every $n\in {\mathbb{N}}$, we have $$\operatorname{sc}_7(n)=\frac{1}{4}\left(H(28n+56)-H\left(\frac{4n+8}{7}\right)-2H(7n+14)+2H\left(\frac{n+2}{7}\right)\right).$$ While Theorem \[thm:sc7manyH\] gives a uniform formula for $\operatorname{sc}_7(n)$ as a linear combination of Hurwitz class numbers, it is also desirable to obtain a formula in terms of a single class number. For this, let $\ell\in{\mathbb{N}}_0$ be chosen maximally such that $n\equiv -2{\ \, ( \mathrm{mod} \, 2^{2\ell} )}$ and extend the definition of $D_n$ to $$\label{eqn:Dndef} D_n:=\begin{cases}28n+56&\text{if }n\equiv 0,1{\ \, \left( \mathrm{mod} \, 4 \right)},\\ 7n+14&\text{if }n\equiv 3{\ \, \left( \mathrm{mod} \, 4 \right)},\\ D_{\frac{n+2}{2^{2\ell}}-2}&\text{if }n\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}, \end{cases}$$ and $$\label{eqn:nudef} \nu_{n}:=\begin{cases}\frac{1}{4} &\text{if }n\equiv 0,1{\ \, \left( \mathrm{mod} \, 4 \right)},\\ \frac{1}{2}&\text{if }n\equiv 3{\ \, \left( \mathrm{mod} \, 8 \right)},\\ \nu_{\frac{n+2}{2^{2\ell}}-2}&\text{if }n\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)},\\ 0&\text{otherwise}. \end{cases}$$ A binary quadratic form $[a,b,c]$ is called primitive if $\gcd(a,b,c)=1$ and, for a prime $p$, $p$-primitive if $p\nmid \gcd(a,b,c)$. We let $H_{p}(D)$ count the number of $p$-primitive classes of integral binary quadratic forms of discriminant $-D$, with the same weighting as $H(D)$.
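As a quick illustration of Theorem \[thm:sc7manyH\] and of the definitions (\[eqn:Dndef\]) and (\[eqn:nudef\]): for $n=1$ the only self-conjugate $7$-core partition of $1$ is $(1)$, so $\operatorname{sc}_7(1)=1$, while $H(84)=4$ and $H\left(\frac{12}{7}\right)=H(21)=H\left(\frac{3}{7}\right)=0$ (note that $-21\equiv 3 {\ \, \left( \mathrm{mod} \, 4 \right)}$ is not a discriminant), so the right-hand side of Theorem \[thm:sc7manyH\] equals $\frac{1}{4}\cdot 4=1$; moreover $D_1=84$ and $\nu_1=\frac{1}{4}$.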
\[cor:counting\] For every $n\in{\mathbb{N}}$ we have $$ \operatorname{sc}_7(n)=\nu_{n}H_7\left(D_n\right).$$ For $n\not\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$, one has $H(D_n)=H_7(D_n)$ and hence the cases $n\equiv 1,3 {\ \, \left( \mathrm{mod} \, 4 \right)}$ of Corollary \[cor:counting\] with $n\not\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$ are covered by Theorem \[Theorem: Ono-Raji\]. For $n+2$ squarefree, we may use Dirichlet’s class number formula to obtain another representation for $\operatorname{sc}_7(n)$; Ono and Raji [@OnoRaji Corollary 2] covered the case that $n\not\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$ is odd. \[cor:Cor2\] If $n\in{\mathbb{N}}$ is an integer for which $n+2$ is squarefree, then $$\operatorname{sc}_7(n)= -\frac{\nu_{n}}{D_n} \begin{cases}\vspace{3pt} \sum_{m=1}^{D_n-1}\left(\frac{-D_n}{m}\right) m&\text{if }n\not\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)},\\ \vspace{3pt}7^2\left(7+\left(\frac{\frac{D_n}{7^2}}{7}\right)\right)\sum_{m=1}^{\frac{D_n}{7^2}-1}\left(\frac{-\frac{D_n}{7^2}}{m}\right) m&\text{if }n\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}. \end{cases}$$ The following corollary relates $\operatorname{sc}_7(m)$ with $m+2$ not necessarily squarefree to $\operatorname{sc}_7(n)$ with $n+2$ squarefree, for which Corollary \[cor:Cor2\] applies. The cases $\ell=r=0$ with $n\not\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$ odd were proven in [@OnoRaji Corollary 3]. For this, $\mu$ denotes the Möbius function and $\sigma(n):=\sum_{d\mid n} d$. \[cor:Cor3\] If $n\in{\mathbb{N}}$ is such that $n+2$ is squarefree, $\ell,r\in{\mathbb{N}}_0$, and $f\in{\mathbb{N}}$ with $\gcd(f,14)=1$, then $$\operatorname{sc}_7\left((n+2) 2^{2\ell}f^27^{2r} -2\right)=7^r\operatorname{sc}_7(n)\sum_{d|f} \mu(d)\left(\frac{-D_n}{d}\right)\sigma \left(\frac{f}{d}\right).$$ We also provide a combinatorial explanation for Corollary \[cor:counting\]. To do so, we first extend techniques of Ono and Sze [@ono19974] and explicitly describe the possible abaci (defined in Section \[Section: combinatorics\]) of self-conjugate $7$-core partitions. Then, below, we construct an explicit map $\phi$ sending self-conjugate $7$-core partitions to binary quadratic forms, via abaci and extended $t$-residue diagrams (defined in Section \[Section: combinatorics\]). In order to describe the image of this map, for a prime $p$ and a discriminant $D=\Delta f^2$ with $\Delta$ fundamental, we call a binary quadratic form of discriminant $D$ $p$-totally imprimitive if the power of $p$ dividing $\gcd(a,b,c)$ equals the power of $p$ dividing $f$ (i.e., if the power of $p$ dividing $\gcd(a,b,c)$ is maximal). Furthermore, recall that two binary quadratic forms of discriminant $D$ are said to be in the same [*genus*]{} if they represent the same values in $({\mathbb{Z}}/D{\mathbb{Z}})^*$. We call the genus containing the principal binary quadratic form of discriminant $D$ the [*principal genus*]{}. The image of $\phi$ is then described in the following theorem. \[Theorem: main - quad forms\] For every $n\in{\mathbb{N}}$, the image of $\phi$ is a unique non-principal genus of $7$-primitive and $2$-totally imprimitive binary quadratic forms with discriminant $-28n-56$. Moreover, suppose that $\ell$ is chosen maximally such that $n\equiv -2{\ \, ( \mathrm{mod} \, 2^{2\ell} )}$ and $\frac{7n+14}{2^{2\ell}}$ has $r$ distinct prime divisors. Then every equivalence class in this genus is the image of $\nu_n 2^r$ many self-conjugate $7$-cores of $n$.
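Although none of the proofs below rely on it, the identities above are easy to test numerically for small $n$. The following brute-force sketch (our own illustration in Python, not part of the original argument; helper names such as `hurwitz` are ours) counts (self-conjugate) $t$-cores directly from hook lengths, computes $H(N)$ by listing reduced binary quadratic forms of discriminant $-N$, and prints both sides of Theorem \[thm:sc7manyH\] and of (\[Equation: 2sc7 = c4\]).

```python
# Brute-force numerical check of the class number identities (illustrative sketch only).
from fractions import Fraction

def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    """Conjugate partition (column lengths of the Ferrers-Young diagram)."""
    return tuple(sum(1 for p in lam if p > k) for k in range(lam[0])) if lam else ()

def is_t_core(lam, t):
    """True if no hook length of lam is divisible by t."""
    col = conjugate(lam)
    return all((lam[j] + col[k] - j - k - 1) % t != 0
               for j in range(len(lam)) for k in range(lam[j]))

def c(t, n):
    return sum(1 for lam in partitions(n) if is_t_core(lam, t))

def sc(t, n):
    return sum(1 for lam in partitions(n)
               if lam == conjugate(lam) and is_t_core(lam, t))

def hurwitz(N):
    """Hurwitz class number H(N): reduced forms [a,b,c] of discriminant -N, weighted."""
    if N <= 0 or N % 4 not in (0, 3):
        return Fraction(0)
    total, a = Fraction(0), 1
    while 3 * a * a <= N:
        for b in range(-a + 1, a + 1):
            if (b * b + N) % (4 * a):
                continue
            cc = (b * b + N) // (4 * a)
            if cc < a or (b < 0 and a == cc):
                continue  # not reduced, or a boundary form already counted with b > 0
            if a == b == cc:
                total += Fraction(1, 3)   # class of a multiple of x^2 + x*y + y^2
            elif b == 0 and a == cc:
                total += Fraction(1, 2)   # class of a multiple of x^2 + y^2
            else:
                total += 1
        a += 1
    return total

def H(x, d=1):
    """H(x/d) with the convention H(q) = 0 for non-integral q."""
    return hurwitz(x // d) if x % d == 0 else Fraction(0)

for n in range(1, 6):
    rhs = Fraction(1, 4) * (H(28 * n + 56) - H(4 * n + 8, 7)
                            - 2 * H(7 * n + 14) + 2 * H(n + 2, 7))
    print(n, sc(7, n), rhs, 2 * sc(7, 8 * n + 1), c(4, 7 * n + 2))
```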
Note that Theorem \[Theorem: main - quad forms\] along with [@ono19974 Theorem 6] provides a combinatorial explanation for (\[Equation: 2sc7 = c4\]). The cases $t\in\{2,3\}$ are simple to describe, and immediately imply that relationships similar to (\[Equation: 2sc7 = c4\]) along arithmetic progressions do not exist for $t\in\{2,3\}$, as we show in Section \[Sec: t=2,3\]. We prove a similar result for $t=5$ in Proposition \[Prop: t=5\]. Based on these results, we offer the following conjecture, along with partial results in Section \[Section: t>5\] on the possible values of $t {\ \, \left( \mathrm{mod} \, 6 \right)}$ and on the possible shapes of the arithmetic progressions. \[Theorem: main intro\] The only occurrence of arithmetic progressions for which $\operatorname{c}_t$ and $\operatorname{sc}_{2t-1}$ agree up to integer multiples non-trivially (even asymptotically) is when $t=4$. The paper is organised as follows. In Section \[sec:manyone\], we provide proofs for Theorem \[thm:sc7manyH\] and Corollary \[cor:Cor2\]; Corollaries \[cor:counting\] and \[cor:Cor3\] are shown in Section \[sec:squarepartcounting\]. Section \[Section: combinatorics\] is dedicated to providing a combinatorial explanation of Theorem \[Theorem: Ono-Raji\] and its generalization in Corollary \[cor:counting\]. In Section \[Section: other t and conjecture\] we prove Conjecture \[Theorem: main intro\] in the cases $t\in\{2,3,5\}$ and provide partial results for larger $t$. Acknowledgments {#acknowledgments .unnumbered} =============== The research of the first author is supported by the Alfried Krupp Prize for Young University Teachers of the Krupp foundation. The research of the second author was supported by grants from the Research Grants Council of the Hong Kong SAR, China (project numbers HKU 17301317 and 17303618). The authors thank Ken Ono for insightful discussions pertaining to this paper, and for hosting the third author at the University of Virginia, during which visit this research was partially completed. The authors also thank Andreas Mono for useful comments on an earlier draft. Proofs of Theorem \[thm:sc7manyH\] and Corollary \[cor:Cor2\] {#sec:manyone} ============================================================= Our investigation for the case $t=4$ begins by packaging the number of self-conjugate $7$-cores into a generating function and using the fact that it is a modular form to relate $\operatorname{sc}_7(n)$ to class numbers. We thus define $$S(\tau):=\sum_{n\geq 0} \operatorname{sc}_7(n) q^{n+2}.$$ As stated on [@OnoRaji page 4], $S$ is a modular form of weight $\frac{3}{2}$ on $\Gamma_0(28)$ with character $(\frac{28}{\cdot})$. Proof of Theorem \[thm:sc7manyH\] --------------------------------- To prove Theorem \[thm:sc7manyH\], we let $$\mathcal{H}_{\ell_1,\ell_2}(\tau):=\mathcal{H}\big|(U_{\ell_1,\ell_2}-\ell_2U_{\ell_1}V_{\ell_2})(\tau).$$ Here, for $f(\tau):=\sum_{n\in{\mathbb{Z}}} c_f(n)q^n$, we set $$f\big|U_d(\tau):=\sum_{n\in{\mathbb{Z}}} c_f(dn)q^n,\qquad f\big|V_d(\tau):=\sum_{n\in{\mathbb{Z}}} c_f(n)q^{dn},$$ and $$\mathcal{H}(\tau):=\sum_{\substack{D\geq 0\\ D\equiv 0,3{\ \, \left( \mathrm{mod} \, 4 \right)}}}H(D)q^D.$$ Shifting $n\mapsto n-2$ in Theorem \[thm:sc7manyH\] and taking the generating function of both sides, the claim of the theorem is equivalent to $$\label{eqn:toprove} S=\frac{1}{4}\mathcal{H}_{1,2}\big|\left(U_{14}-U_2\big|V_7\right).$$ By [@BKPeterssonQF Lemma 2.3 and Lemma 2.6], both sides of (\[eqn:toprove\]) are modular forms of weight $\frac{3}{2}$ on $\Gamma_0(56)$ with character $ (\frac{28}{\cdot}) $.
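Note that $[{{\text {\rm SL}}}_2({\mathbb{Z}}):\Gamma_0(56)]=56\prod_{p\mid 56}\left(1+\tfrac{1}{p}\right)=96$, so for weight $\frac{3}{2}$ the valence formula yields the bound $\frac{3}{2}\cdot\frac{96}{12}=12$ on the number of Fourier coefficients that need to be compared.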
By the valence formula, it thus suffices to check (\[eqn:toprove\]) for the first $12$ coefficients; this has been done by computer, yielding (\[eqn:toprove\]) and hence Theorem \[thm:sc7manyH\]. Rewriting $\operatorname{sc}_7(n)$ in terms of representation numbers --------------------------------------------------------------------- The next lemma rewrites $\operatorname{sc}_7(n)$ in terms of the representation numbers ($m\in {\mathbb{N}}_0$) $$r_{3}(m):=\#\left\{ \bm{x}\in {\mathbb{Z}}^3: x_1^2+x_2^2+x_3^2=m\right\}.$$ For $m\in{\mathbb{Q}}\setminus{\mathbb{N}}_0$, we furthermore set $r_3(m):=0$ for ease of notation. \[lem:sc7Theta\^3\] 1. For $n\in {\mathbb{N}}$, we have $$ \operatorname{sc}_7(n)=\frac{1}{48}\left(r_3(7n+14)-r_3\left(\frac{n+2}{7}\right)\right).$$ 2. If $n\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$, then we have $$ \operatorname{sc}_7(n)=\frac{1}{48}\left(\left(7+\left(\frac{\frac{D_n}{7^2}}{7}\right)\right)r_3\left(\frac{n+2}{7}\right)-7r_3\left(\frac{n+2}{7^3}\right)\right).$$ \(1) By the proof of [@BKPeterssonQF Lemma 4.1] we have $$\Theta^3(\tau)=\sum_{n\geqslant0} r_3(n) q^n=12 \mathcal{H}_{1,2}\big|U_2(\tau),$$ where $\Theta(\tau):=\sum_{n\in{\mathbb{Z}}} q^{n^2}$ is the usual theta function. Plugging this into (\[eqn:toprove\]), the claim follows after picking off the Fourier coefficients and shifting $n\mapsto n+2$.\ (2) Recall that for $f(\tau) =\sum_{n\in{\mathbb{Z}}} c_f(n) q^n$ a modular form of weight $\lambda +\frac{1}{2}\in{\mathbb{Z}}+\frac 12$, the *$p^2$-th Hecke operator* is defined as $$ f| T_{p^2}(\tau) = \sum_{n\geqslant 0}\left(c_f\left(p^2n\right) +\left(\frac{(-1)^{\lambda}n}{p}\right)p^{\lambda -1}c_f(n)+p^{2\lambda-1}c_f\left(\frac{n}{p^2}\right)\right)q^n.$$ It is well-known that $$\label{Hecke1} \Theta^3|T_{p^2}=(p+1)\Theta^3.$$ Rearranging and comparing coefficients, we obtain, by (\[Hecke1\]), for $m:=n+2\equiv 0{\ \, \left( \mathrm{mod} \, 7 \right)}$, $$r_3(7m)= 8r_3\left(\frac{m}{7}\right)-\left(\frac{-\frac{m}{7}}{7}\right) r_3\left(\frac{m}{7}\right) - 7r_3\left(\frac{m}{7^3}\right).$$ The claim follows by (1). Formulas in terms of single class numbers ----------------------------------------- We next turn to formulas for $\operatorname{sc}_7(n)$ in terms of a single class number. \[cor:sc7oneH\] 1. For $n\not\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$ and $n\not\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$, we have $$ \operatorname{sc}_7(n)=\nu_{n}H(D_n).$$ 2. For $n\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$, $n\not\equiv -2{\ \, \left( \mathrm{mod} \, 7^3 \right)}$, and $n\not\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$, we have $$ \operatorname{sc}_7(n)=\left(7+\left(\frac{\frac{D_n}{7^2}}{7}\right)\right)\nu_{n}H\left(\frac{D_n}{7^2}\right).$$ 3. If $n\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$, then $$\operatorname{sc}_7(n)=\operatorname{sc}_7\left(\frac{n+2}{4}-2\right).$$ 4. If $n\equiv -2{\ \, \left( \mathrm{mod} \, 7^2 \right)}$, then $$\operatorname{sc}_7(n)=7\operatorname{sc}_7\left(\frac{n+2}{7^2}-2\right).$$ For $n\not\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$, we have $7(n+2)\mid D_n$, so $n\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$ implies that $7^2\mid D_n$, and hence Corollary \[cor:sc7oneH\] (2) is meaningful. \(1) Since $n\not\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$, the final term in Lemma \[lem:sc7Theta\^3\] (1) vanishes, giving $$ \operatorname{sc}_7(n)= \frac{1}{48}r_3(7n+14).$$ The claim then follows immediately by plugging in the well-known formula of Gauss (see e.g.
[@OnoBook Theorem 8.5]) $$\label{eqn:r3Gauss} r_3(n)=\begin{cases} 12 H(4n)&\text{if }n\equiv 1,2{\ \, \left( \mathrm{mod} \, 4 \right)},\\ 24H(n)&\text{if }n\equiv 3{\ \, \left( \mathrm{mod} \, 8 \right)},\\ r_3\left(\frac{n}{4}\right)&\text{if }4\mid n,\\ 0&\text{otherwise}. \end{cases}$$\ (2) Since $7^3\nmid (n+2)$, the final term in Lemma \[lem:sc7Theta\^3\] (2) vanishes, giving $$\operatorname{sc}_7(n)= \frac{1}{48}\left(7+\left(\frac{\frac{D_n}{7^2}}{7}\right)\right) r_3\left(\frac{n+2}{7}\right).$$ The claim then immediately follows by plugging in (\[eqn:r3Gauss\]).\ (3) Since $n\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$, we have $4\mid (n+2)$, and hence (\[eqn:r3Gauss\]) and Lemma \[lem:sc7Theta\^3\] (1) imply the claim. \(4) Since $n\equiv -2{\ \, \left( \mathrm{mod} \, 7^2 \right)}$, $7^3\mid D_n$, so $7\mid \frac{D_n}{7^2}$. Hence Lemma \[lem:sc7Theta\^3\] (1), (2) imply the claim. Proof of Corollary \[cor:Cor2\] ------------------------------- We next consider the special case that $n+2$ is squarefree and use Dirichlet’s class number formula to obtain another formula for $\operatorname{sc}_7(n)$. Note that since $n+2$ is squarefree, either $-D_n$ is fundamental (for $n\not\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$) or $-\frac{D_n}{7^2}$ is fundamental (for $n\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$). Dirichlet’s class number formula (see e.g. [@ZagierZeta Satz 3]) states that $$\label{eqn:ClassNumberFormulaCorrect} H(|D|)= -\frac{1}{|D|} \sum_{m=1}^{|D|-1} \left(\frac{D}{m}\right)m.$$ By Corollary \[cor:sc7oneH\] (1), (2) (the conditions given there are satisfied because $n+2$ is squarefree and thus neither $n\equiv 2 {\ \, \left( \mathrm{mod} \, 4 \right)}$ nor $n\equiv-2{\ \, ( \mathrm{mod} \, 7^3 )}$), we have $$\label{SH}\operatorname{sc}_7(n)=\nu_{n} \begin{cases} H(D_n) & \text{if } n\not\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)},\\ \left(7+\left(\frac{\frac{D_n}{7^2}}{7}\right)\right) H\left(\frac{D_n}{7^2}\right) & \text{if } n\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}. \end{cases}$$ Since $-D_n$ is fundamental in the first case and $-\frac{D_n}{7^2}$ is fundamental in the second case, we may plug in (\[eqn:ClassNumberFormulaCorrect\]) with $D= -D_n$ in the first case and $D=-\frac{D_n}{7^2}$ in the second case. Thus for $n\not\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$ we plug $$H\left(D_n\right)=-\frac{1}{D_n} \sum_{m=1}^{D_n-1} \left(\frac{-D_n}{m}\right)m$$ into (\[SH\]), while for $n\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$ we plug in $$H\left(\frac{D_n}{7^2}\right)=-\frac{7^2}{D_n} \sum_{m=1}^{\frac{D_n}{7^2}-1}\left(\frac{-\frac{D_n}{7^2}}{m}\right)m.$$ This yields the claim. Proofs of Corollaries \[cor:counting\] and \[cor:Cor3\] {#sec:squarepartcounting} ======================================================= This section relates $\operatorname{sc}_7(m)$ and $\operatorname{sc}_7(n)$ if $\frac{m+2}{n+2}$ is a square. A recursion for $\operatorname{sc}_7(n)$ ---------------------------------------- In this subsection, we consider the case $\frac{m+2}{n+2}=2^{2j}7^{2\ell}$. \[lem:div4\] Let $\ell\in{\mathbb{N}}_0$ and $n\in{\mathbb{N}}$. 1. We have $$ \operatorname{sc}_7\left((n+2)2^{2\ell}-2\right)=\operatorname{sc}_7(n).$$ 2. We have $$\operatorname{sc}_7\left((n+2)7^{2\ell}-2\right)=7^\ell\operatorname{sc}_7(n).$$ \(1) Corollary \[cor:sc7oneH\] (3) gives inductively that for $0\leq j\leq \ell$ we have $$\operatorname{sc}_7\left((n+2)2^{2\ell}-2\right) =\operatorname{sc}_7\left((n+2)2^{2(\ell-j)}-2\right).$$ In particular, $j=\ell$ yields the claim.\ (2) The claim is trivial if $\ell=0$.
For $\ell\geq 1$, Corollary \[cor:sc7oneH\] (4) inductively yields that for $0\leq j\leq \ell$ $$\operatorname{sc}_7\left((n+2)7^{2\ell}-2\right)=7^j\operatorname{sc}_7\left((n+2)7^{2(\ell-j)}-2\right).$$ The case $j=\ell$ is precisely the claim. Proof of Corollary \[cor:Cor3\] ------------------------------- We are now ready to prove Corollary \[cor:Cor3\]. We first use Lemma \[lem:div4\] (1), (2) to obtain that $$\label{eqn:2and7gone} \operatorname{sc}_7\left((n+2) 2^{2\ell} f^27^{2r} -2\right) =7^r\operatorname{sc}_7\left((n+2)f^2-2\right).$$ We split into the case $n\not\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$ (in which case $-D_n$ is fundamental) and $n\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$ (in which case $-\frac{D_n}{7^2}$ is fundamental). First suppose that $n\not\equiv -2 {\ \, \left( \mathrm{mod} \, 7 \right)}$. We use Corollary \[cor:sc7oneH\] (1) to obtain $$ \operatorname{sc}_7\left((n+2)f^2-2\right)=\nu_{n}H\left(D_nf^2\right).$$ We then plug in [@Cohen p. 273] ($-D$ a fundamental discriminant) $$\label{eqn:Df^2} H\left(Df^2\right)=H(D)\sum_{1\leq d|f} \mu(d)\left(\frac{-D}{d}\right)\sigma\left(\frac{f}{d}\right).$$ Hence by Corollary \[cor:sc7oneH\] (1) $$\operatorname{sc}_7\left((n+2)f^2-2\right)=\operatorname{sc}_7(n)\sum_{1\leq d|f} \mu(d)\left(\frac{-D_n}{d}\right)\sigma\left(\frac{f}{d}\right),$$ and plugging back into (\[eqn:2and7gone\]) yields the corollary in that case. We next suppose that $n\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$. First note that since $7\nmid f$ and $n+2$ is squarefree, $(n+2)f^2-2\not\equiv -2{\ \, \left( \mathrm{mod} \, 7^3 \right)}$ and $n\not\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$. We plug in Corollary \[cor:sc7oneH\] (2), use (\[eqn:Df\^2\]) (recall that $-\frac{D_n}{7^2}$ is fundamental), and note that $(\frac{\frac{D_n f^2}{7^2}}{7}) = (\frac{\frac{D_n}{7^2}}{7})$ to obtain that $$\operatorname{sc}_7\left((n+2)f^2-2\right)= \left(7+\left(\frac{\frac{D_n}{7^2}}{7}\right)\right)\nu_{n}H\left(\frac{D_n}{7^2}\right)\sum_{1\leq d|f} \mu(d)\left(\frac{-\frac{D_n}{7^2}}{d}\right)\sigma\left(\frac{f}{d}\right).$$ We then use Corollary \[cor:sc7oneH\] (2) again and plug back into (\[eqn:2and7gone\]) to conclude that $$\operatorname{sc}_7\left((n+2) 2^{2\ell}f^27^{2r} -2\right)=7^r\operatorname{sc}_7(n)\sum_{1\leq d|f} \mu(d)\left(\frac{-\frac{D_n}{7^2}}{d}\right)\sigma\left(\frac{f}{d}\right).$$ Since $7\nmid f$, we have $(\frac{-\frac{D_n}{7^2}}{d})=(\frac{-D_n}{d})$ for $d\mid f$. Therefore the corollary follows. Proof of Corollary \[cor:counting\] ----------------------------------- We next rewrite Corollary \[cor:sc7oneH\] (2) in order to uniformly package Corollary \[cor:sc7oneH\] (1), (2), and (3). We first require a lemma relating the $7$-primitive class numbers $H_7$ and the Hurwitz class numbers. \[lem:H7diff\] For a discriminant $-D$, we have $$H_7(D)=H(D)-H\left(\frac{D}{7^2}\right).$$ To rewrite the right-hand side, we write $D=\Delta 7^{2\ell} f^2$ with $7\nmid f$ and $-\Delta$ a fundamental discriminant and then plug in the well-known identity $$H(D)=\sum_{d^2\mid D} \frac{h\left(-\frac{D}{d^2}\right)}{\omega_{-\frac{D}{d^2}}},$$ where as usual $h(-\tfrac{D}{d^2})$ counts the number of classes of primitive quadratic forms $[a,b,c]$ with discriminant $-\frac{D}{d^2}$ and $\gcd(a,b,c)=1$.
This yields $$\begin{aligned} \nonumber H(D)-H\left(\frac{D}{7^2}\right)&= \sum_{d\mid 7^\ell f} \frac{h\left(-\frac{ D }{d^2}\right)}{\omega_{-\frac{ D }{d^2}}}-\sum_{d\mid 7^{\ell-1}f}\frac{h\left( -\frac{D}{7^2d^2} \right)}{\omega_{ -\frac{D}{7^2d^2} }} \\ \label{eqn:finalprimitive} &= \sum_{d\mid 7^\ell f} \frac{h\left( -\frac{D}{d^2} \right)}{\omega_{ -\frac{D}{d^2} }} -\sum_{\substack{d\mid 7^{\ell}f\\ 7\mid d}}\frac{h\left(-\frac{D}{d^2}\right)}{\omega_{-\frac{D}{d^2}}} \\ &= \sum_{\substack{d\mid 7^\ell f\\ 7\nmid d}} \frac{h\left(-\frac{D }{d^2}\right)}{\omega_{-\frac{D}{d^2}}}. \end{aligned}$$ The claim of the lemma is thus equivalent to showing that the right-hand side of (\[eqn:finalprimitive\]) equals $H_7(D)$. Multiplying each form counted by $h(-\tfrac{D}{d^2})$ by $d$, we see that (\[eqn:finalprimitive\]) precisely counts those quadratic forms $[a,b,c]$ of discriminant $-D$ with $7\nmid \gcd(a,b,c)$, weighted in the usual way. To finish the proof of Corollary \[cor:counting\], for a fundamental discriminant $-\Delta$, we also require the evaluation of $$C_{r,\Delta}:=\sum_{d\mid 7^r} \mu(d)\left(\frac{-\Delta}{d}\right)\sigma\left(\frac{7^r}{d}\right)-\sum_{d\mid 7^{r-1}} \mu(d)\left(\frac{-\Delta}{d}\right)\sigma\left(\frac{7^{r-1}}{d}\right).$$ A straightforward calculation gives the following lemma. \[lem:Creval\] For $r\in{\mathbb{N}}$ we have $$C_{r,\Delta}=7^{r-1}\left(7+\left(\frac{\Delta}{7}\right)\right).$$ We are now ready to prove Corollary \[cor:counting\]. We first consider the case that $n\not\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$. If $n\not \equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$, then Corollary \[cor:counting\] follows directly from Corollary \[cor:sc7oneH\] (1) and Lemma \[lem:H7diff\]. For $n\equiv -2{\ \, \left( \mathrm{mod} \, 7 \right)}$, we choose $r_n\in{\mathbb{N}}_0$ maximally such that $n\equiv -2{\ \, \left( \mathrm{mod} \, 7^{2r_n+1} \right)}$ and proceed by induction on $r_n$. For $r_n=0$ we have $D_n=\Delta_n f^27^2$ with $-\Delta_n$ a fundamental discriminant and $7\nmid f$. Since $7\nmid f$, we have $$\left(\frac{-\Delta_{n}f^2}{7}\right)=\left(\frac{-\Delta_{n}}{7}\right),$$ and hence combining Corollary \[cor:sc7oneH\] (2), (\[eqn:Df\^2\]), and Lemma \[lem:Creval\] gives $$\operatorname{sc}_7(n)=\nu_{n} H(\Delta_n)\left(\sum_{d\mid 7} \mu(d)\left(\frac{-\Delta_n}{d}\right)\sigma\left(\frac{7}{d}\right)-1\right)\sum_{d\mid f} \mu(d)\left(\frac{-\Delta_n}{d}\right)\sigma\left(\frac{f}{d}\right).$$ Noting that $7\nmid f$ and $$\label{eqn:multiplicative} \sum_{d\mid f} \mu(d)\left(\frac{-\Delta_n}{d}\right)\sigma\left(\frac{f}{d}\right)$$ is multiplicative, we obtain $$\operatorname{sc}_7(n)=\nu_{n} H(\Delta_n)\left(\sum_{d\mid 7f} \mu(d)\left(\frac{-\Delta_n}{d}\right)\sigma\left(\frac{7f}{d}\right)-\sum_{d\mid f} \mu(d)\left(\frac{-\Delta_n}{d}\right)\sigma\left(\frac{f}{d}\right)\right).$$ We then apply (\[eqn:Df\^2\]) again and use Lemma \[lem:H7diff\] to obtain Corollary \[cor:counting\] in this case. This completes the base case $r_n=0$ of the induction. Let $r\geq 1$ be given and assume the inductive hypothesis that Corollary \[cor:counting\] holds for all $n$ with $r_n<r$. We then let $n$ be arbitrary with $r_n=r$ and show that Corollary \[cor:counting\] holds for $n$.
By Corollary \[cor:sc7oneH\] (4), we have $$\label{eqn:toinduct} \operatorname{sc}_7(n)=7\operatorname{sc}_7\left(\frac{n+2}{7^2}-2\right).$$ By the maximality of $r_n$, $7^{2r-1}\mid \frac{n+2}{7^2}$ but $7^{2r+1}\nmid \frac{n+2}{7^2}$, so $r_{\frac{n+2}{7^2}-2}=r-1<r$ and hence by induction we may plug Corollary \[cor:counting\] into the right-hand side of (\[eqn:toinduct\]) to obtain $$\label{eqn:inductstep} \operatorname{sc}_7(n)= 7\nu_{\frac{n+2}{7^2}-2}H_7\left(D_{\frac{n+2}{7^2}-2}\right).$$ A straightforward calculation shows that $$\nu_{\frac{n+2}{7^2}-2}=\nu_n\quad \text{ and }\quad D_{\frac{n+2}{7^2}-2}=\frac{D_n}{7^2}$$ and hence (\[eqn:inductstep\]) implies that $$\operatorname{sc}_7(n)=7\nu_{n}H_7\left(\frac{D_n}{7^2}\right).$$ Hence Corollary \[cor:counting\] in this case is equivalent to showing that $$\label{eqn:remarktoshow} H_7\left(D_n\right)= 7H_7\left(\frac{D_{n}}{7^2}\right).$$ Plugging Lemma \[lem:H7diff\] and then (\[eqn:Df\^2\]) into both sides of (\[eqn:remarktoshow\]), cancelling $H(\Delta_n)$, and again using the multiplicativity of (\[eqn:multiplicative\]), one obtains that (\[eqn:remarktoshow\]) is equivalent to $C_{r+1,\Delta_n}=7C_{r,\Delta_n}$. Since $r\geq 1$, we have $r+1\geq 2$, and Lemma \[lem:Creval\] implies that $C_{r+1,\Delta_n}=7C_{r,\Delta_n}$, yielding Corollary \[cor:counting\] for all $n\not\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$. We finally consider the case $n\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$. We choose $\ell$ maximally such that $n\equiv -2{\ \, \left( \mathrm{mod} \, 2^{2\ell} \right)}$. Lemma \[lem:div4\] (1) implies that $$\operatorname{sc}_7(n)=\operatorname{sc}_7\left(\left({\frac{n+2}{2^{2\ell}}-2} +2\right)2^{2\ell}-2\right)=\operatorname{sc}_7\left({\frac{n+2}{2^{2\ell}}-2}\right).$$ The choice of $\ell$ implies that $\frac{n+2}{2^{2\ell}}-2\not\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$. We may therefore plug in Corollary \[cor:counting\] and the definitions (\[eqn:Dndef\]) and (\[eqn:nudef\]) to conclude that $$\operatorname{sc}_7\left(\frac{n+2}{2^{2\ell}}-2\right)= \nu_{\frac{n+2}{2^{2\ell}}-2} H_7\left(D_{\frac{n+2}{2^{2\ell}}-2}\right)=\nu_{n}H_7\left(D_n\right).\qedhere$$ A combinatorial explanation of Corollary \[cor:counting\] {#Section: combinatorics} ========================================================= Here we provide a combinatorial explanation for Corollary \[cor:counting\]. We use the theory of abaci, following the construction in [@ono19974]. Abaci, extended $t$-residue diagrams, and self-conjugate $t$-cores ------------------------------------------------------------------ Given a partition $\Lambda = (\lambda_1, \lambda_2, \dots, \lambda_s)$ with $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_s >0$ of a positive integer $n$ and a positive integer $t$, we next describe the $t$-abacus associated to $\Lambda$. This consists of $s$ beads on $t$ rods constructed in the following way [@ono19974]. For every $1 \leq j \leq s$ define *structure numbers* by $$B_j \coloneqq \lambda_j - j+s.$$ For each $B_j$ there are unique integers $(r_j,c_j)$ such that $$B_j = t(r_j-1) +c_j,$$ and $0 \leq c_j \leq t-1$. The *abacus* for the partition $\Lambda$ is then formed by placing one bead for each $B_j$ in row $r_j$ and column $c_j$. The [*extended $t$-residue diagram*]{} associated to a $t$-core partition $\Lambda$ is constructed as follows (see [@garvan1990cranks page 3]). Label a cell in the $j$-th row and $k$-th column of the Ferrers–Young diagram of $\Lambda$ by $k-j {\ \, \left( \mathrm{mod} \, t \right)}$. We also label the cells in column $0$ in the same way. A cell is called [*exposed*]{} if it is at the end of a row.
The [*region $r$*]{} of the extended $t$-residue diagram of $\Lambda$ is the set of cells $(j,k)$ satisfying $t(r-1) \leq k-j < tr$. Then we define $n_j$ to be the maximum region of $\Lambda$ which contains an exposed cell labeled $j$. As noted in [@garvan1990cranks], this is well-defined since column $0$ contains infinitely many exposed cells. Let $t=4$ and construct the abacus and $4$-residue diagram for the partition $\Lambda = (3,2,1)$. We begin with the abacus, computing the structure numbers $B_1 = 5$, $B_2 = 3$, and $B_3 = 1$. Then diagrammatically the abacus is $$\begin{matrix} {}\vphantom{\begin{smallmatrix}a\\ a\end{smallmatrix}} & \text{\large{$0$}} & \text{\large{$1$}} & \text{\large{$2$}} &\text{\large{$3$}} \\ \text{\large{$1$}} \vphantom{\begin{matrix}a\\ a\end{matrix}}& {} & \text{\large{$B_3$}} & {} & \text{\large{$B_2$}} \\ \text{\large{$2$}}\vphantom{\begin{matrix}a\\ a\end{matrix}} & {} & \text{\large{$B_1$}} & & \end{matrix}$$ The extended $4$-residue diagram of the partition is $$\begin{matrix} {} & \text{\large{$0$}}\vphantom{\begin{smallmatrix}a\\ a\end{smallmatrix}} & \text{\large{$1$}} & \text{\large{$2$}} & \text{\large{$3$}} \\ \text{\large{$1$}}\vphantom{\begin{matrix}a\\ a\end{matrix}} & {}_3 & \text{\large $\bullet$}_0 & \text{\large $\bullet$}_1 & \text{\large $\bullet$}_2 \\ \text{\large{$2$}}\vphantom{\begin{matrix}a\\ a\end{matrix}} & {}_2 & \text{\large $\bullet$}_3 & \text{\large $\bullet$}_0 & {} \\ \text{\large{$3$}}\vphantom{\begin{matrix}a\\ a\end{matrix}} & {}_1 & \text{\large $\bullet$}_2 & {} & {} \\ \end{matrix}$$ Then the exposed cells in this diagram are $(1,3)$, $(2,2)$, and $(3,1)$. One may then determine the region of these cells in the prescribed fashion. For example, the exposed cell $(1,3)$ labeled by $2$ belongs to the region $1$, and hence $n_2 = 1$. Using this construction, [@ono19974 Theorem 4] reads as follows. \[thm:t-coreabacus\] Let $A$ be an abacus for a partition $\Lambda$, and let $m_j$ denote the number of beads in column $j$. Then $\Lambda$ is a $t$-core partition if and only if the $m_j$ beads in column $j$ are the beads in positions $ (1,j), (2,j), \dots, (m_j,j). $ Furthermore, using extended $t$-residue diagrams, the authors of [@garvan1990cranks] showed the following result. \[Lemma: Garvan size of lists\] Let $P_t(n)$ be the set of $t$-core partitions of $n$. There is a bijection $P_t(n) \rightarrow \{ N \coloneqq [n_0, \dots, n_{t-1}] \colon n_j \in {\mathbb{Z}}, n_0 + \dots + n_{t-1} = 0 \}$ such that $$|\Lambda| = \frac{t|N|^2}{2} + B \cdot N, \hspace{20pt} B \coloneqq [0,1, \dots, t-1].$$ When computing the norm and dot-product, we consider $N,B$ as elements in ${\mathbb{Z}}^t$. We call $N$ the [*list associated*]{} to the $t$-residue diagram. We now show a relationship between abaci and lists of a partition. \[Proposition: list to abacus\] Let $N = [n_0,\dots, n_{t-1}]$ be the list associated to the extended $t$-residue diagram of a $t$-core partition $\Lambda$. Let $\ell + s = \alpha_\ell t + \beta_\ell$ with $0 \leq \beta_\ell \leq t-1$. Then $N$ also uniquely represents the abacus $ ( \dots,n_{t-1} +\alpha_{t-1}, n_0 + \alpha_0, n_1 + \alpha_1, \dots ), $ where $n_\ell+\alpha_\ell$ occurs in position $\beta_\ell$ of the abacus. The largest part $\lambda_1$ corresponds to the maximum region of the $t$-residue diagram, and also the lowest right-hand bead on the abacus. Let $m_1 \coloneqq \max\{n_0, \dots, n_{t-1}\}$ be achieved at $n_{\ell_1}$. Then $\lambda_1 = t(m_1-1) + \ell_1 + 1$. 
For the abacus, we correspondingly find $B_1 = \lambda_1 -1+s = t(m_1-1) +\ell_1 +s = t(m_1 + \alpha_1 -1) +\beta_1$, where $\ell_1 + s = \alpha_1 t + \beta_1$ with $0 \leq \beta_1 \leq t-1$. Hence we place a bead in the abacus at the slot $(m_1 +\alpha_1, \beta_1)$. Since this is a $t$-core partition, we also know that there are beads in all places above this slot. These beads correspond to other parts in the partition whose labels of exposed cells in the $t$-residue diagram are $\ell_1$ but where the exposed cells themselves lie in a lower region. Thus the $\beta_1$-th entry in the abacus takes value $m_1+\alpha_1$. Then removing the element $n_{\ell_1}$ from the list we are left with $[n_0, \dots, n_{\ell_1 -1}, n_{\ell_1 +1}, \dots, n_{t-1}]$. We use the same technique as before, identifying $m_2 \coloneqq \max\{n_0, \dots, n_{\ell_1 -1}, n_{\ell_1 +1}, \dots, n_{t-1}\}$, achieved at $n_{\ell_2}$. We have $k-j \equiv \ell_2 {\ \, \left( \mathrm{mod} \, t \right)}$ such that $t(m_2-1) \leq k-j < tm_2$, meaning that $\lambda_j = k = t(m_2-1)+\ell_2 +j$. Plugging this into the formula for the structure numbers we find that $B_j = t(m_2 -1) +\ell_2 + s = t(m_2 + \alpha_2 - 1) + \beta_2$, where $\ell_2 + s = \alpha_2 t + \beta_2$ with $0 \leq \beta_2 \leq t-1$. Hence we place a bead in the abacus in the slot $(m_2+\alpha_2, \beta_2)$ and all other slots vertically above this, and so the $\beta_2$-th entry in the abacus list is given by $m_2+\alpha_2$. This process continues for each entry of the list. If this process gives a non-positive value for the slots of the abacus in which beads are to be placed, we define the value in that column of the abacus list to be $0$ (it is seen that these values arise from the exposed cells in column $0$ of the extended $t$-residue diagram and hence are not a part of the partition). It is clear that the $\beta_\ell$ run through exactly a complete set of residues modulo $t$, and hence each column in the abacus is represented exactly once. It is easily seen that this process defines a unique abacus for each list $N$ (up to equivalence by Lemma \[Lemma: Abacus can have 0 first column\]). The converse is also seen to hold. If the resulting abacus $A$ that appears under an application of Proposition \[Proposition: list to abacus\] has a non-zero first column, we may use Lemma \[Lemma: Abacus can have 0 first column\] to rewrite $A$ as an equivalent abacus with a $0$ in the first place. We use Proposition \[Proposition: list to abacus\] to restrict the possible shapes of abaci associated to self-conjugate $t$-core partitions. \[Lemma: shape of self-conj abaci\] With the notation defined as in Proposition \[Proposition: list to abacus\], an abacus is self-conjugate if and only if it is of the form $ (\dots, -n_1 + \alpha_1, -n_0+\alpha_0, n_0+\alpha_0, n_1+\alpha_1, \dots). $ The proof of [@garvan1990cranks Bijection 2] implies that the elements in the list $[n_0, \dots, n_{t-1}]$ associated to a self-conjugate partition satisfy the relations $n_{\ell} = -n_{t-\ell-1}$ for every $0 \leq \ell \leq t-1$. Combining this with Proposition \[Proposition: list to abacus\] immediately yields the claim.

Self-conjugate $7$-cores
------------------------

We now restrict our attention to abaci of self-conjugate $7$-cores. We require [@ono19974 Lemma 1], which allows us to form a system of canonical representatives for abaci associated to $7$-core partitions.
\[Lemma: Abacus can have 0 first column\] The two abaci $A_1 = (m_0,m_1, \dots, m_6)$ and $A_2 = (m_{6}+1,m_0,\dots, m_{5})$ represent the same $7$-core partition. Thus every $7$-core partition may be represented by an abacus of the form $(0,a,b,c,d,e,f)$. Then in a similar fashion to Ono and Sze, we find that there is a one-to-one correspondence $$(0,a,b,c,d,e,f) \leftrightarrow \{ \text{all } 7\text{-core partitions} \},$$ where $a,b,c,d,e,$ and $f$ are non-negative integers. We thus assume that the first column in each abacus has no beads. We next use Lemma \[Lemma: shape of self-conj abaci\] to considerably reduce the number of abaci we need to consider. \[Lemma: conditions on A,B..\] Assume that $A = (0,a,b,c,d,e,f)$ is an abacus for a self-conjugate $7$-core partition and recall that $s = a+b+c+d+e+f$. Let $s \not\equiv 4 {\ \, \left( \mathrm{mod} \, 7 \right)}$ and $r\in{\mathbb{N}}_0$. 1. Assume that $s=7r$. Then $ f=2r,$ $ a+e=2r,$ $ b+d = 2r,$ $ c=r.$ 2. Assume that $s=7r +1$. Then $ a=2r+1,$ $ b+f=2r,$ $ c+e=2r,$ $ d = r.$ 3. Assume that $s=7r+2$. Then $ b+a=2r+1,$ $ c=2r+1,$ $ d+f=2r,$ $ e = r.$ 4. Assume that $s=7r+3$. Then $ b+c = 2r+1,$ $ a+d=2r+1,$ $ e=2r+1,$ $ f = r.$ 5. Assume that $s=7r+5$. Then $ d+e=2r+1,$ $ c+f=2r+1,$ $ b=2r+2,$ $ a= r+1.$ 6. Assume that $s=7r+6$. Then $ e+f=2r+1,$ $ d=2r+2,$ $ a+c=2r+2,$ $ b = r +1.$ We prove (1). By Proposition \[Proposition: list to abacus\] we see that $A$ corresponds to the list $[-r,a-r,b-r,c-r,d-r,e-r,f-r]$. Using Lemma \[Lemma: shape of self-conj abaci\] and the fact that $s = 7r$, the conditions are easy to determine. The other cases follow in the same way. 1. It is clear how a similar result to Lemma \[Lemma: conditions on A,B..\] may be obtained for all self-conjugate $t$-cores. 2. The lack of the case $s \equiv 4 {\ \, \left( \mathrm{mod} \, 7 \right)}$ in Lemma \[Lemma: conditions on A,B..\] follows from the fact that there are no self-conjugate $2t-1$-core partitions with $s \equiv t {\ \, \left( \mathrm{mod} \, (2t-1) \right)}$, which may be seen by inspecting the upper-left cell in the Ferrers–Young diagram of such a partition. Lemma \[Lemma: conditions on A,B..\] shows that the abaci of self-conjugate $7$-core partitions naturally fall into one of the distinct families given in Table \[Table: acabi families\], enumerated with parameters $a,b,r \in {\mathbb{N}}_0$. ----------------------- ---------------------------------- [Type of Partition]{} [Shape of Abaci]{} \[5pt\] I $(0,a,b,r,2r-b,2r-a,2r)$ \[5pt\] II $ (0,2r+1,a,b,r,2r-b,2r-a)$ \[5pt\] III $(0,a,2r+1-a,2r+1,b,r,2r-b)$ \[5pt\] IV $(0,a,b,2r+1-b,2r+1-a,2r+1,r)$ \[5pt\] V $(0,r+1,2r+2,a,b,2r+1-b,2r+1-a)$ \[5pt\] VI $(0,a,r+1,2r+2-a,2r+2,b,2r+1-b)$ \[10pt\] ----------------------- ---------------------------------- : \[Table: acabi families\] The different types of abaci for self-conjugate $7$-core partitions. We relate the families of partitions to quadratic forms, with the relationship shown in the following proposition. For brevity, we write only triples without $\pm$ signs - it is clear that changing the sign on any entry preserves the result. \[Prop: lists to quad forms\] Let $n \in {\mathbb{N}}$ and $a,b,r\in{\mathbb{N}}_0$ be given. 1. The Type I partition with parameters $a$, $b$, and $r$ is a partition of $n$ if and only if $$7n+14 =(7r+3)^2 + (7r+2-7a)^2 + ( 7r+1-7b)^2.$$ 2. The Type II partition with parameters $a$, $b$, and $r$ is a partition of $n$ if and only if $$7n+14 = ( 7r+4 )^2 +( 7r+2-7a)^2 + (7r+1-7b )^2.$$ 3. 
The Type III partition with parameters $a$, $b$, and $r$ is a partition of $n$ if and only if $$7n+14 = (7r +5)^2 + (7r+4-7a )^2 + (7r+1-7b )^2.$$ 4. The Type IV partition with parameters $a$, $b$, and $r$ is a partition of $n$ if and only if $$7n+14 = (7r+6 )^2 + (7r+5-7a )^2 + (7r+4-7b)^2.$$ 5. The Type V partition with parameters $a$, $b$, and $r$ is a partition of $n$ if and only if $$7n+14 = (7r+8 )^2 + (7r+5-7a )^2 + (7r+4-7b )^2.$$ 6. The Type VI partition with parameters $a$, $b$, and $r$ is a partition of $n$ if and only if $$7n+14 = (7r+9 )^2 + (7r+8-7a )^2 + (7r+4-7b )^2.$$ We only prove (1). Combining the definition with Proposition \[Proposition: list to abacus\], the Type I partition $\Lambda$ with parameters $a$, $b$, and $r$ has the associated list $[-r,a-r,b-r,0,r-b,r-a,r]$. By Lemma \[Lemma: Garvan size of lists\], we thus have $$n=|\Lambda|=7\left(r^2+(a-r)^2 +(b-r)^2\right) + (a-r)+2(b-r)+4(r-b)+5(r-a)+6r.$$ Hence we see that $$\begin{aligned} 7n+14 = & 49\left( r^2 + (a-r)^2 +(b-r)^2 \right) + 7\left(a-r + 2(b-r) +4(r-b) +5(r-a) +6r\right) + 14 \\ =& 147r^2 + 49a^2 + 49b^2 + 84r - 98ar - 98br - 28a - 14b +14. \end{aligned}$$ This is exactly the expansion of $$(7r+3)^2 + (7r+2-7a)^2 + (7r+1-7b)^2.$$ The other cases follow in the same way, using the associated lists in Table \[Table: list families\]. ----------------------- ------------------------------------ [Type of Partition]{} [Shape of Associated list]{} \[5pt\] I $[-r,a-r,b-r,0,r-b,r-a,r]$ \[5pt\] II $[r+1,a-r,b-r,0,r-b,r-a,-r-1]$ \[5pt\] III $[a-r,r+1,b-r,0,r-b,-r-1,r-a]$ \[5pt\] IV $[a-r,b-r,r+1,0,-r-1,r-b,r-a]$ \[5pt\] V $[a-r,b-r,-r-1,0,r+1,r-b,r-a]$ \[5pt\] VI $[b-r,-r-1,a-r-1,0,r+1-a,r+1,r-b]$ \[10pt\] ----------------------- ------------------------------------ : \[Table: list families\] The different types of associated lists for self-conjugate $7$-core partitions. Proposition \[Prop: lists to quad forms\] shows that for each self-conjugate $7$-core of $n$ there is a representation of $7n+14 = x^2 + y^2 + z^2$ as the sum of three squares with none of $x,y,z$ divisible by $7$. Define $$\begin{aligned} J(7n+14) \coloneqq \{ (x,y,z) \in {\mathbb{Z}}^3 \colon & x^2+y^2+z^2 = 7n+14, \text{ and } x,y,z \not\equiv 0 {\ \, \left( \mathrm{mod} \, 7 \right)} \}. \end{aligned}$$ Let $K(7n+14) \coloneqq J(7n+14)/ \sim$ where $(x,y,z) \sim (x',y',z')$ if $(x',y',z')$ is any permutation of the triple $(x,y,z)$, including minus signs i.e., $(-x,y,z) \sim (x,y,z)$. Then it is easy to see that we obtain the following corollary. \[cor:sc7K\] There is a bijection between the set of self-conjugate $7$-core partitions of $n$ and $K(7n+14)$. Corollary \[cor:sc7K\] gives a combinatorial explanation for Lemma \[lem:sc7Theta\^3\] (1). We then obtain an explanation of Corollary \[cor:counting\] via the following exposition, using Gauss’ bijective map from solutions of the equation $x^2+y^2+z^2 = 7n+14$ to primitive binary quadratic forms in certain class groups. We elucidate the case $n \equiv 0,1 {\ \, \left( \mathrm{mod} \, 4 \right)}$ (a similar story holds for $n \equiv 3 {\ \, \left( \mathrm{mod} \, 8 \right)}$). By Gauss’s [@10.2307/j.ctt1cc2mnd article 278], for each representation of $7n+14$ as the sum of three squares there corresponds a primitive binary quadratic form of discriminant $-28n-56$. This correspondence is invariant under a pair of simultaneous sign changes on the triple $(x,y,z)$. Explicitly, the correspondence is given by the following.
For $(x,y,z) \in J(7n+14)$ let $(m_0, m_1, m_2, n_0,n_1,n_2)$ be an integral solution to $$x=m_1n_2 - m_2n_1, \hspace{20pt} y = m_2n_0 - m_0n_2 \hspace{20pt} z = m_0n_1- m_1n_0,$$ where a solution is guaranteed by Gauss’s [@10.2307/j.ctt1cc2mnd article 279]. Then $$(m_0u +n_0v)^2 + (m_1u +n_1v)^2 + (m_2u+n_2v)^2$$ is a form in $\operatorname{CL}(-28n-56)$. Further, this map is independent of $(m_0,m_1,m_2,n_0,n_1,n_2)$. Hence, similarly to [@ono19974], we find a map $\phi$ taking self-conjugate $7$-cores to binary quadratic forms of discriminant $-28n-56$ given by $$\label{Equation: defnintion phi} \phi \colon \Lambda \rightarrow A \rightarrow N \rightarrow (x,y,z) \rightarrow (m_0,m_1,m_2,n_0,n_1,n_2) \rightarrow \text{binary quadratic form}.$$ We are now in a position to prove Theorem \[Theorem: main - quad forms\]. We first assume that $n \equiv 0,1 {\ \, \left( \mathrm{mod} \, 4 \right)}$. It is well-known (see e.g. [@10.2307/j.ctt1cc2mnd article 291]) that we have $|\operatorname{CL}(-28n-56)| = 2^{r-1}k$, where $k$ is the number of classes per genus, and $2^{r-1}$ is the number of genera in $\operatorname{CL}(-28n-56)$. Fix $f_1, \dots, f_k$ to be representatives of the $k$ classes of the unique genus of $\operatorname{CL}(-28n-56)$ that $\phi$ maps onto. As in [@ono19974] we say that $(x,y,z)$ and $f_j$ are *represented* by $(m_0,m_1,m_2,n_0,n_1,n_2)$ if $$x = m_1n_2 - m_2n_1, \hspace{20pt} y= m_2n_0 - m_0n_2, \hspace{20pt} z = m_0n_1 - m_1n_0,$$ and $$(m_0u + n_0v)^2 + (m_1u +n_1v)^2 + (m_2u +n_2v)^2 = f_j,$$ respectively. Let $\mathfrak{M}$ denote the set of all tuples $(m_0,m_1,m_2,n_0,n_1,n_2)$ that represent some pair $(x,y,z)$ and $f_j$. By Gauss’s [@10.2307/j.ctt1cc2mnd article 291], we have $|\mathfrak{M}| = 3\cdot2^{r+3}k$, and each $f_j$ is representable by $3\cdot 2^{r+3}$ elements in $\mathfrak{M}$. It is clear that all representatives $f_j$ have $(x,y,z) \in J(7n+14)$. Note that the elements $(m_0,m_1,m_2,n_0,n_1,n_2)$ and $(-m_0,-m_1,-m_2, -n_0,-n_1,-n_2)$ both map to the same form in $K(7n+14)$, and there are no other such relations. Since each class in $K(7n+14)$ corresponds to $8\cdot 6$ different triples, we see that each element in $K(7n+14)$ has $\frac{3 \cdot 2^{r+3}}{8\cdot 6 \cdot 2} = 2^{r-2}$ different preimages. Hence the set of self-conjugate $7$-cores is a $2^{r-2}$-fold cover of this genus. To see that the genus is non-principal, we note as in [@ono19974 Remark 3 ii)] that to be in the principal genus, one of $x,y,z$ would need to vanish. However, this is guaranteed to not happen by the congruence conditions on elements in $K(7n+14)$. The case where $n \equiv 3 {\ \, \left( \mathrm{mod} \, 8 \right)}$ is similar. Finally, for $n\equiv 2{\ \, \left( \mathrm{mod} \, 4 \right)}$, one uses the simple fact that if the sum of three squares is congruent to $0$ modulo $4$, then all squares must be even. Iterating this eventually reduces it to one of the cases covered above or the $n\equiv 7{\ \, \left( \mathrm{mod} \, 8 \right)}$ case. Other $t$ and Conjecture \[Theorem: main intro\] {#Section: other t and conjecture} ================================================ In this section we consider other values of $t$, proving Conjecture \[Theorem: main intro\] in the cases $t\in\{2,3,5\}$ and offering partial results if $t>5$. 
The cases $t\in\lbrace 2,3 \rbrace$ {#Sec: t=2,3} ----------------------------------- With $\eta(\tau):=q^{\frac{1}{24}}\prod_{n\geq 1}(1-q^n)$ the usual [*Dedekind eta-function*]{}, [@ono19974 (3)] and [@alpoge2014self Theorem 13] give the generating functions of $\operatorname{c}_2(n)$ and $\operatorname{sc}_3(n)$ as $$\begin{aligned} \sum_{n \geq 1} \operatorname{c}_2(n)q^n = q^{-\frac{1}{8}}\frac{\eta(2\tau)^2}{\eta(\tau)}, \qquad \sum_{n \geq 1} \operatorname{sc}_3(n)q^n = q^{-\frac{1}{3}} \frac{\eta(2\tau)^2 \eta(3\tau) \eta(12\tau)}{\eta(\tau) \eta(4\tau) \eta(6\tau)}.\end{aligned}$$ These are modular forms of weight $\frac{1}{2}$ and levels $2$ and $12$, respectively. It is a classical fact that each is a lacunary series, i.e., that the asymptotic density of its non-zero coefficients is zero (for example, see the discussion after [@MR1321575 (2)]). We immediately see that $$\operatorname{c}_2(n) = \begin{cases} 1 &\text{ if } n = \frac{j(j+1)}{2} \text{ for some } j \in {\mathbb{N}}, \\ 0 &\text{ otherwise}. \end{cases}$$ Furthermore, [@garvan1990cranks (7.4)] stated that $$\operatorname{sc}_3(n) = \begin{cases} 1 &\text{ if } n = j(3j \pm 2) \text{ for some } j \in {\mathbb{N}}, \\ 0 &\text{ otherwise}. \end{cases}$$ From these, we immediately obtain the following corollary. \[Cor: sc3 = c2\] For any $n\in \mathbb N$ that is both a triangular number and satisfies $n=j(3j \pm 2)$ for some $j \in {\mathbb{Z}}$ we have that $\operatorname{sc}_3(n) = \operatorname{c}_2(n) = 1$. Clearly, there are progressions on which both $\operatorname{sc}_3(n)$ and $\operatorname{c}_2(n)$ trivially vanish. For example, we have $\operatorname{sc}_3(4n+3) = \operatorname{c}_2(3n+2) = 0$. For $t=3$ we simply observe the following corollary. \[Cor: t=3\] There are no arithmetic progressions on which $\operatorname{c}_3$ and $\operatorname{sc}_5$ are integer multiples of one-another, even asymptotically. Comparing the explicit descriptions for $\operatorname{c}_3(n)$ and $\operatorname{sc}_5(n)$ given in [@hirschhorn2009elementary Theorem 6] and [@garvan1990cranks Theorem 7] respectively immediately yields the claim. The case $t =5$ --------------- In [@garvan1990cranks Theorem 4], Garvan, Kim, and Stanton proved that $$\operatorname{c}_5(n) = \sigma_5(n+1),$$ where $\sigma_5(n) \coloneqq \sum_{d | n} (\frac{d}{5}) \frac nd $ denotes the usual twisted divisor sum. Furthermore, Alpoge provided an exact formula for $\operatorname{sc}_9(n)$ in [@alpoge2014self Theorem 10]: $$\begin{aligned} 27 \operatorname{sc}_9(n) = \begin{cases} \sigma(3n+10) + a_{3n+10} (36a) -a_{3n+10}(54a) - a_{3n+10}(108a) & \text{ if } n \equiv 1,3 {\ \, \left( \mathrm{mod} \, 4 \right)},\\ \sigma(3n+10) + a_{3n+10} (36a) -3 a_{3n+10}(54a) - a_{3n+10}(108a) & \text{ if } n \equiv 0 {\ \, \left( \mathrm{mod} \, 4 \right)}, \\ \sigma(k) + a_{3n+10} (36a) -3 a_{3n+10}(54a) - a_{3n+10}(108a) & \text{ if } n \equiv 2 {\ \, \left( \mathrm{mod} \, 4 \right)}, \end{cases}\end{aligned}$$ where $k$ is odd and is defined by $3n+10 = 2^e k$ where $e \in {\mathbb{N}}_0$ is maximal such that $2^e \mid (3n+10)$. Here, the $a_n(E)$ are the coefficients appearing in the Dirichlet series for the $L$-function of the elliptic curve $E$. The curve $36a$ is $y^2 = x^3+1$, the curve $54a$ is $y^2+xy = x^3-x^2+12x+8$, and the curve $108a$ is $y^2 = x^3 +4$. \[Prop: t=5\] There are no arithmetic progressions on which $27 \operatorname{sc}_9(n) $ and $\operatorname{c}_5(n)$ are asymptotically equal up to an integral multiplicative factor. 
Applying the Hasse–Weil bound for counting points on elliptic curves as in [@alpoge2014self (13)] and letting $n \rightarrow \infty$ we have, for $n \not\equiv 2 {\ \, \left( \mathrm{mod} \, 4 \right)}$, that $$\begin{aligned} \frac{27 \operatorname{sc}_9(n)}{\operatorname{c}_5(3n+9)} \sim \frac{\sigma(3n+10)}{\sigma_5(3n+10)}, \end{aligned}$$ and for $n \equiv 2 {\ \, \left( \mathrm{mod} \, 4 \right)}$ $$\begin{aligned} \frac{27 \operatorname{sc}_9(n)}{\operatorname{c}_5(n)} \sim \frac{\sigma(k)}{\sigma_5(n+1)}. \end{aligned}$$ It is then enough to show that $\sigma_5$ is never constant along arithmetic progressions, i.e., the limit is not constant. To see this, consider an arithmetic progression $n \equiv m {\ \, \left( \mathrm{mod} \, M \right)}$. Let $\ell$ be any prime which does not divide $(3m+10)M$ and for which $\left(\frac{\ell}{5}\right)=-1$. For each prime $p\neq \ell$ that lies in the congruence class of the inverse of $\ell{\ \, \left( \mathrm{mod} \, 3M \right)}$ and is relatively prime to $5(3m+10)$ we may construct $n(p)=n_{\ell}(p)$ such that $$3n(p)+10=(3m+10)p\ell.$$ Note that $3n(p)+10$ lies in the arithmetic progression. A straightforward calculation shows that if the limit exists, then $$\lim_{p\to\infty}\frac{\sigma(3n(p)+10)}{\sigma_5(3n(p)+10)} = \pm \frac{1+\ell}{1-\ell}\frac{\sum_{d\mid (3m+10)} d}{\sum_{d\mid (3m+10)}\left(\frac{d}{5}\right) d}.$$ Since $\ell$ is arbitrary and there are infinitely many choices of $\ell$ by Dirchlet’s primes in arithmetic progressions theorem, this is a contradiction. The case $t \geq 6$ {#Section: t>5} ------------------- Anderson showed in [@anderson2008asymptotic Theorem 2] that, for $t \geq 6$ and $n \rightarrow \infty$, $$\label{Equation: asymps c_t} \operatorname{c}_t(n) = \frac{(2\pi)^{\frac{t-1}{2}} A_t(n) }{t^{\frac{t}{2}} \Gamma\left(\frac{t-1}{2}\right)}\left(n+\frac{t^2-1}{24}\right)^{\frac{t-3}{2}} +O\left(n^{\frac{t-1}{2}} \right),$$ where $${A}_t(n) \coloneqq \sum_{\substack{k \geq 1 \\ \gcd(t,k)=1}} k^{\frac{1-t}{2}} \sum_{\substack{0 \leq h <k \\ \gcd(h,k)=1 }} e\left( - \frac{hn}{k}\right) \psi_{h,k}$$ for a certain $24k$-th root of unity $\psi_{h,k}$ independent of $n$. As Anderson remarked, it is possible to show that $0.05<A_t(n)<2.62$, although $A_t$ varies depending on both $t$ and $n$. In a similar vein, Alpoge showed in [@alpoge2014self Theorem 3] that, for $r \geq 10$ odd and $n \rightarrow \infty$, we have $$\label{Equation: asymps for sc_t} \operatorname{sc}_r(n) = \frac{(2\pi)^{\frac{r-1}{4}} \mathcal{A}_r(n)}{(2r)^{\frac{r-1}{4}} \Gamma \left(\frac{r-1}{4} \right) } \left( n+ \frac{r^2 -1}{24} \right)^{\frac{r-1}{4}-1} + O_r\left(n^{\frac{r-1}{8}}\right),$$ where $$\mathcal{A}_r(n) \coloneqq \sum_{\substack{\gcd(k,r) =1 \\ k \not\equiv 2 {\ \, \left( \mathrm{mod} \, 4 \right)}}} (2,k)^{\frac{r-1}{2}} k^{\frac{1-r}{4}} \sum_{\substack{0 \leq h < k \\ \gcd(h,k) = 1}} e\left( - \frac{hn}{k}\right) \chi_{h,k}$$ with $\chi_{h,k}$ a particular $24$-th root of unity independent of $n$. Moreover, [@alpoge2014self (86) and (87)] imply that $ 0.14 < \mathcal{A}_r(n) < 1.86$. Inspecting the asymptotic behaviours given in and , it is clear that the only possibility of arithmetic progressions where the two asymptotics of $\operatorname{c}_t(n)$ and $\operatorname{sc}_r(n)$ are integer multiples of one another is $r=2t-1$. The following lemma provides partial results on Conjecture \[Theorem: main intro\]. 
For $t \geq 6$ and $t \not \equiv 1 {\ \, \left( \mathrm{mod} \, 6 \right)}$ there are no arithmetic progressions on which $\operatorname{c}_t(n)$ and $\operatorname{sc}_{2t-1}(n)$ are integer multiples of one another. Using equations and we find that, for $M_1,M_2,m_1,m_2 \in {\mathbb{N}}$, $$\begin{aligned} \frac{\operatorname{c}_t(M_1n+m_1)}{\operatorname{sc}_{2t-1} (M_2n+m_2)} \sim \frac{(4t-2)^{\frac{t-1}{2}} A_t(M_1n+m_1)}{4^{\frac{t-3}{2}} t^{\frac{t}{2}} \mathcal{A}_{2t-1}(M_2n+m_2)} \frac{\left( 24( M_1n+m_1) + t^2 - 1\right)^{\frac{t-3}{2}}}{\left(6 (M_2 n+m_2) + t^2 - t\right)^{\frac{t-3}{2}}} , \end{aligned}$$ as $n \rightarrow \infty$. Furthermore, for the two growing powers of $n$ to be equal and cancel on arithmetic progressions, it is not difficult to see that we must also have that $t \equiv 1 {\ \, \left( \mathrm{mod} \, 6 \right)}$. To prove Conjecture \[Theorem: main intro\] it remains to consider the cases where $t \equiv 1 {\ \, \left( \mathrm{mod} \, 6 \right)}$. We easily find that for the powers of $n$ to be equal we must have $$M_2 = 4M_1, \qquad m_2 = 4m_1+\frac{t^2-1}{6}.$$ It would therefore suffice to show that $$ \frac{ (4t-2)^{\frac{t-1}{2}} A_t(M_1n+m_1)}{4^{\frac{t-3}{2}} t^{\frac{t}{2}} \mathcal{A}_{2t-1}\left(4M_1n+4m_1+\frac{t^2-1}{6}\right)}$$ is never constant as $n$ runs. However, this appears to be a difficult problem, and we leave Conjecture \[Theorem: main intro\] open. [99]{} K. Bringmann and B. Kane, Class numbers and representations by ternary quadratic forms with congruence conditions , submitted for publication. H. Cohen, Sums involving the values at negative integers of $L$-functions of quadratic characters , Math. Ann. **217** (1975), 271–285. G. James and A. Kerber, The representation theory of the symmetric group , Encyclopedia of Mathematics and its Applications **16** , Addison-Wesley Publishing Co., Reading, Mass., 1981. K. Ono, The web of modularity: arithmetic of the coefficients of modular forms and $q$-series , CMBS Regional Conference Series in Mathematics **102** (2004), American Mathematical Society, Providence, RI, USA. K. Ono and W. Raji, Class numbers and self-conjugate $7$-cores , submitted for publication. D. Zagier, Zetafunktionen und quadratische Körper , Springer–Verlag, Berlin, 1981. [^1]: Some authors write $H(D)$ instead of $H(|D|)$; in particular this notation was used in [@OnoRaji].
--- abstract: 'Continuing the series of papers on a new model for a barred galaxy, we investigate the heteroclinic connections between the two normally hyperbolic invariant manifolds sitting over the two index-1 saddle points of the effective potential. The heteroclinic trajectories and the nearby periodic orbits of similar shape populate the bar region of the galaxy and a neighbourhood of its nucleus. Thereby we see a direct relation between the important structures of the interior region of the galaxy and the projection of the heteroclinic tangle into the position space. As a side result, we obtain a detailed picture of the primary heteroclinic intersection surface in the phase space.' author: - | Euaggelos E. Zotos$^1$[^1] and Christof Jung$^2$[^2]\ $^1$ Department of Physics, School of Science, Aristotle University of Thessaloniki, GR-541 24, Thessaloniki, Greece\ $^2$ Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México Av. Universidad s/n, 62251 Cuernavaca, Mexico date: 'Accepted 2019 May 9. Received 2019 May 3; in original form 2019 January 23' title: | Orbital and escape dynamics in barred galaxies - IV.\ Heteroclinic connections --- =1 \[firstpage\] stellar dynamics – galaxies: kinematics and dynamics – galaxies: spiral – galaxies: structure Introduction {#intro} ============ In disc galaxies, which contain a rotating bar, the index-1 saddle points $L_2$ and $L_3$ are very important for the whole dynamics of the galactic system. These Lagrange points are directly associated with the corresponding Lyapunov orbits [@L07; @L49] and the respective normally hyperbolic invariant manifolds (NHIMs). We can imagine the stable and unstable manifolds of the NHIMs as tubes inside the phase space, which guide and control the motion of stars through the Lagrange points $L_2$ and $L_3$. Therefore, the NHIMs are very important for the escape dynamics of barred galaxies. Moreover, the manifolds are also related with the observed stellar structures, such as rings and spirals, in galaxies with a bar. @RGMA06 discussed how the manifolds affect the shape and the velocity of rings. In the same vein, the analysis was expanded in a series of papers in an attempt to determine the correlations between the manifolds and the rings and spirals in barred galaxies [@RGAM07; @ARGM09; @ARGBM09], while a comparison with related observational data has been performed in @ARGBM10. In another series of papers, $N$-body simulations revealed the role of the manifolds in the observed stellar structures [@VTE06], the effect of “stickiness", which slows down the rate of escape [@TEV08], and the role of non-axisymmetric components [@TKEC09]. For many years, the Ferrers’ triaxial model [@F77] was the only realistic model for describing the motion of stars in barred galaxies. However, the main disadvantage of this model is its high mathematical complexity, regarding the corresponding potential (which is not known in closed form) and the equation of motion [@P84]. On this basis, in @JZ15 we introduced a new barred galaxy model with a much simpler bar potential, which requires significantly less computational time compared to the Ferrers’ potential. Our model potential for a single barred galaxy has been introduced and explained in all details in @JZ15 and used in @JZ16a (hereafter ), @JZ16b (hereafter ) and @ZJ18 (hereafter ). 
Therefore, mainly for saving space, we do not repeat the presentation of the model, we just give some short remarks on its important properties: The total gravitational potential consists of four parts which describe the nucleus, the bar, the disc and the halo, respectively. Because we use a description of the dynamics in a rotating frame of reference, the effective potential consists of the sum of the total gravitational potential and the centrifugal potential. We use a coordinate system where the plane of the disk lies in the $(x,y)$ plane. All the saddle points of the effective potential lie in this plane $z=0$. A plot of the effective potential in this horizontal plane has been given in Fig. 1 in . The most important saddle points of index-1 are the Lagrange points $L_2$ and $L_3$. The numerical parameter values in the potential are chosen with the galaxy NGC 1300 in mind. Over the saddle points of index-1 we expect to find normally hyperbolic invariant manifolds of codimension 2 (see also the detailed explanation in section 4 of ). More details regarding the NHIMs can be found in @W94. These NHIMs have stable manifolds and unstable manifolds of codimension 1 which direct and channel the global behaviour of the dynamics of the whole system to a large extent. From each saddle NHIM there is a branch of its stable manifold and a branch of its unstable manifold going to the outside and also another branch of its stable manifold and another branch of its unstable manifold going to the inside. A main topic of was, to show that the unstable manifolds going to the outside determine the structure of rings or spirals of the galaxy. Now we show the role of the branches of the stable and unstable manifolds going to the inside. The main direction of arguments will be to show how these inner branches and their heteroclinic connections are related to the shape of the nucleus and of the bar of the galaxy. The Lyapunov orbits, over a saddle point, are the most important periodic orbits within the corresponding saddle NHIM. In our case of a 3 degrees of freedom (3-dof) system we have one horizontal Lyapunov orbit (in the following called $l_h$) and one vertical Lyapunov orbit (in the following called $l_v$) over each one of the index-1 saddles. These particular periodic orbits and their development scenarios as a function of the energy have been described in detail in subsection 4.3 of and plots of these orbits in the position space have been given in Fig. 6 of . Of course, it should be clear that the NHIMs are of essential importance for the dynamics only for energies close to and slightly above the saddle energy. For such energies the NHIMs direct the complete escape processes and they fix also the global dynamics and the structures formed in the system to a large extent. For the parameter case used mainly in previous publications and also here (named the standard model) the saddle energy is $E_s = -3242$. In the present text we will restrict all considerations to the energy value $E = -3200$. This is still an energy typical for the escape processes and we find a qualitatively equal behaviour for all other energy values, a little above the saddle energy. In the following, we rely on some properties of the saddle NHIMs which are important for the argumentation: The NHIMs are invariant and therefore the restriction of the Poincaré map to the NHIMs exists, we call it the restricted map $M_{\rm res}$ (for details on how to construct this $M_{\rm res}$ see @GDJ14). 
As in the and , we use $z = 0$ as intersection condition for the Poincaré map. Because of symmetry reasons the intersection orientation is irrelevant. For a 3-dof system the $M_{\rm res}$ acts on a 2-dimensional domain and is very similar to a usual Poincaré map for a 2-dof system, it is the Poincaré map for the internal 2-dof dynamics of the NHIM. Therefore it is the ideal graphical representation for this internal dynamics of the NHIMs. In the whole development scenario of the $M_{\rm res}$, as function of the energy, has been presented and discussed in detail, see Figs. 7 and 8 in . Because in the present text we restrict all considerations to the energy level -3200 we repeat in Fig. \[map\] the restricted map for just this single energy value. ![Plot of the restricted Poincaré map on the NHIM in the coordinates $\phi = \arctan(y/x)$ and $L = x p_y - y p_x$. Many iterations of a moderate number of initial points are plotted. The structures belonging to the various initial points are labelled. The red boundary curve is $l_h$.[]{data-label="map"}](map.jpg){width="\hsize"} The plot is represented in the canonical coordinates $\phi = \arctan(y/x)$ and $L = x p_y - y p_x$. As usual, for the Poincaré maps we show many iterations under $M_{res}$ of a moderate number of initial points. For more detailed explanations see subsection 4.4 of . In the following, the words tangential and normal always refer to directions relative to the NHIM surface. In Fig. \[map\] note the following properties: We are still close to the saddle energy, therefore the map looks rather regular, it is still close to an integrable map, there are no large scale chaos regions. The fixed point in the middle represents $l_v$, while the boundary represents $l_h$. The fixed points at $\phi \approx \pm 0.06$ and $L \approx 486$ represent a pair of tilted loop orbits split off from $l_v$ at $E \approx -3223$. These orbits are tangentially stable. At the energy $E \approx -3214$ $l_v$ splits off another pair of tilted loop orbits, they are tangentially unstable and are represented in the $M_{\rm res}$ as the centres of a fine chaos strip which appears in the diagram like a separatrix. This separatrix separates 3 systems of concentric KAM curves: First, the curves around the central fixed point. Second, the curves around the tilted loop orbits. Note that because of the $z$ reflection symmetry the two tilted loop islands can be identified and treated as a single island structure. And as a third set of curves we have the curves running parallel and near to the boundary. Later we will refer to the individual curves seen in Fig. \[map\] and we give these curves the following names also included in the figure: The curves in the inner island are called $i_1$ to $i_7$, from the inside counting outwards. The curves in the tilted loop islands are called $t_1$ to $t_7$ again outgoing from the centre to the outside and the curves running parallel to the boundary are called a1 and a2, again counting outwards. Periodic orbits approaching heteroclinic connections {#per} ==================================================== The two NHIMs of codimension 2 sit over the index-1 saddle points $L_2$ and $L_3$ of the effective potential. In the following we call NHIM2 the NHIM over $L_2$ and NHIM3 the NHIM over $L_3$. The inner branches of their stable and unstable manifolds run into the region of the bar and they form heteroclinic intersections. The corresponding heteroclinic trajectories start on one NHIM and end on the other one. 
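All orbits, manifold branches and intersection sets discussed in the following sections are obtained numerically, by integrating the equations of motion in the frame co-rotating with the bar and, for the maps, by recording crossings of the section $z = 0$. As a purely illustrative sketch of this recipe (it is not the production code of this series), the following Python fragment integrates a single trajectory in a toy axisymmetric placeholder potential (the full nucleus + bar + disc + halo model of @JZ15 is deliberately not reproduced here) and collects its upward crossings of $z = 0$ in the $(\phi, L)$ coordinates of Fig. \[map\]. All numerical values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy axisymmetric stand-in (a Plummer sphere) for the full effective potential;
# G*M, the softening length and the pattern speed Omega_b are placeholder values.
G_M, B_SOFT, OMEGA_B = 1.0, 0.5, 0.25

def grad_phi_eff(x, y, z):
    """Gradient of Phi_eff = Phi - 0.5*Omega_b^2*(x^2 + y^2) for the toy Phi."""
    d = (x*x + y*y + z*z + B_SOFT**2) ** 1.5
    return (G_M*x/d - OMEGA_B**2 * x,
            G_M*y/d - OMEGA_B**2 * y,
            G_M*z/d)

def rhs(t, w):
    """Equations of motion in the frame co-rotating with the bar."""
    x, y, z, vx, vy, vz = w
    gx, gy, gz = grad_phi_eff(x, y, z)
    return [vx, vy, vz,
             2.0*OMEGA_B*vy - gx,     # Coriolis term plus force
            -2.0*OMEGA_B*vx - gy,
            -gz]

def z_crossing(t, w):                 # event: intersection with the plane z = 0
    return w[2]
z_crossing.direction = 1              # keep upward crossings only
z_crossing.terminal = False

w0 = [2.0, 0.0, 0.05, 0.0, 0.4, 0.0]  # placeholder initial condition
sol = solve_ivp(rhs, (0.0, 500.0), w0, events=z_crossing,
                rtol=1e-10, atol=1e-12, max_step=0.05)

x, y, vx, vy = (sol.y_events[0][:, i] for i in (0, 1, 3, 4))
phi = np.arctan2(y, x)                        # section coordinates as in Fig. [map]
L = x*(vy + OMEGA_B*x) - y*(vx - OMEGA_B*y)   # x*p_y - y*p_x, assuming p = v + Omega cross r
print(np.column_stack((phi, L))[:5])
```

Producing the restricted map $M_{\rm res}$ itself requires, in addition, confining the initial conditions to the NHIM; that step follows @GDJ14 and is not attempted in this sketch.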
We can even do more. We can look for heteroclinic connections not just between the NHIMs, we can look for heteroclinic connections between individual substructures identified in the two NHIMs. For stable and unstable manifolds of NHIMs a foliation theorem holds (see chapter 5 in @W94) which shows that the internal structures of the NHIMs are transported along these manifolds. In addition to the heteroclinic connections themselves we study the periodic orbits running in the neighbourhood of heteroclinic connections. Remember that heteroclinic trajectories are accumulation points of periodic orbits. They are periodic orbits which oscillate between the neighbourhoods of NHIM2 and NHIM3, i.e. between the two saddle points $L_2$ and $L_3$ of the effective potential. Because of discrete symmetry NHIM3 is obtained from NHIM2 by a rotation of the system around the $z$-axis by an angle $\pi$. Let us call this symmetry operation $D_z(\pi)$ in the following. This discrete symmetry leads to a corresponding symmetry in many heteroclinic structures and in many periodic orbits and we will exploit this symmetry whenever we can do so. The system has another important discrete symmetry which will also be useful. It is the reflection symmetry in $z$ direction. These two discrete symmetries together lead to the existence of the following classes of most basic, most simple periodic orbits and also to corresponding classes of heteroclinic trajectories. We call a periodic orbit simple if it intersects the plane $x = 0$ only once in each one of the two orientations, during one period. This implies that in a classification of these orbits by resonance relations in the 3 degrees of freedom (the 3 coordinate directions) the resonance number of $x$ for simple periodic orbits is always 1. Of course, in addition there are also periodic orbits and heteroclinic trajectories without discrete symmetry and with various intersection numbers of the plane $x = 0$ in each one of the two orientations. But the simple symmetric ones are also the shortest ones and it makes sense to study first and mainly the simple ones having some additional symmetry properties, with respect to the $z$ reflection. First, there is a class of periodic orbits which we will call the horizontal class 1: These orbits always have $z \equiv 0$ and $ p_z \equiv 0$ and in addition as a point set (not taking care of the orientation of motion) these orbits are symmetric under $x$ reflection and under $y$ reflection. Fig. \[orbs\] shows some periodic orbits of this class 1. We observe that parts (a), (b), (c) and (d) represent the $x : y$ resonances 1:3, 1:5, 1:7, 1:9, respectively of a sequence of orbits which are relatively wide at their moment of crossing the line $x = 0$. Of course, also the corresponding continuations of this sequence, with resonances $x:y=1:n$ for all larger odd integers $n$ exist. It should be obvious from the plot how this sequence converges to a horizontal heteroclinic trajectory going from $L_2$ to $L_3$ and its $D_z(\pi)$ rotated counterpart which goes from $L_3$ to $L_2$. Parts (e) and (f) are the beginning of another sequence which is rather narrow at the moment of the crossing of the line $x = 0$. The 1:3 and the 1:5 resonances are plotted. Also this sequence has its continuation and converges against another horizontal heteroclinic trajectory. More on the limiting horizontal heteroclinic trajectories comes below. Next let us consider periodic orbits which also perform motion in the $z$ direction. 
The simplest ones are periodic orbits whose projection into the horizontal plane are qualitatively equal to the horizontal orbits of Fig. \[orbs\] and of the continuation of this sequence. With respect to the phase relation between the horizontal motion and the $z$ motion we find two particularly simple possibilities. First, we can have that $p_z = 0$ when the trajectories cross the plane $x = 0$. We call such orbits symmetric $z$ excitations and put them into class 2. Under a $x$ reflection the point set of these orbits is $z$ invariant. Second, we can have that $z = 0$ when the orbit crosses the plane $x = 0$. We call such orbits antisymmetric $z$ excitations and put them into class 3. Under $x$ reflection the point set of these orbits is also $z$ reflected. In Fig. \[orb1\] we present an orbit of class 2 (perspective view in part (a) and the projections into the three coordinate planes $(x,y)$, $(x,z)$ and $(y,z)$ in parts (b), (c) and (d), respectively), it shows a $x:y = 1:9$ resonance in its horizontal projection and this horizontal projection is qualitatively similar to the periodic orbit in Fig. \[orbs\]d, i.e. to the one representing the $x:y = 1:9$ resonance. For comparison, this horizontal orbit is also included in part (b) in red colour. In the 3 dimensional position space the orbit from Fig. \[orb1\] performs the $x:y:z$ resonance $1:9:14$. Fig. \[orb2\] presents the corresponding orbit of class 3 with the same resonance ratio $x:y:z = 1:9:14$. Again, for comparison the horizontal 1:9 orbit is included in part (b) in red colour. To each periodic orbit of class 2 and of class 3 exists also the $z$ reflected periodic orbit. Panel (d) of Fig. \[orb2\] helps us to make the following comment evident. We see trajectory segments running close to the $y - z$ diagonal and the corresponding antidiagonal. Clearly, along the diagonal or the antidiagonal the $y$ and $z$ degrees of freedom run in a 1:1 resonance. These trajectory segments are the ones running in the outer parts of the bar. We also see a trajectory segment running in an approximate $y:z = 1:5$ resonance. It is the segment coming from the inner part of the bar, where the trajectory makes a large semi-loop around the nucleus. And the total $y:z$ resonance ratio depends on the relative length of the time intervals in which the orbit runs in the approximate 1:1 resonance and the one in which it runs in the approximate 1:5 resonance. In the particular example shown in this figure this time ratio happens to turn out such that the resulting total resonance ratio becomes $y:z = 9:14$. It is obvious how the logical continuation of the sequence of orbits presented in Fig. \[orbs\] approaches a heteroclinic connection. With increasing $y$ number in their resonance relation these orbits come closer to the saddles and spend more time near the saddles making more loops in the saddle region. In the limit the time over the saddles diverges to infinity and thereby the limit of these orbits turns into horizontal heteroclinic connections. The corresponding sequences of periodic orbits with $z$ excitation converge to heteroclinic trajectories with $z$ excitation. Heteroclinic trajectories {#het} ========================= When we are looking for simple heteroclinic trajectories then we can again look first at a class 1, which contains horizontal trajectories, i.e. trajectories which lie completely on the horizontal $(x,y)$ plane. They are the most simple and most symmetric heteroclinic connections between $l_h(L_2)$ and $l_h(L_3)$. 
There are two trajectories of this type and they are presented in Fig. \[pn\]. They both start in the past near $l_h(L_2)$ and end in the future near $l_h(L_3)$. Of course, there also exist the two corresponding heteroclinic trajectories going from $L_3$ to $L_2$. They are obtained by an application of $D_z(\pi)$ to Fig. \[pn\]. Note that each one of the heteroclinic trajectories from Fig. \[pn\] taken together with its rotated counterpart has the same symmetry properties as each one of the periodic orbits from Fig. \[orbs\]. In analogy to the periodic orbits we call a heteroclitic trajectory simple if it intersects the plane $x = 0$ only once, it does it in negative orientation if it goes from NHIM2 to NHIM3 and it does it in positive orientation if it goes from NHIM3 to NHIM2. As initial conditions for the two heteroclinic trajectories we take their point of intersection with the line $x = 0$ and integrate forward (green and orange colour in the plot) and backward (red and purple colour in the plot). The limit sets, namely $l_h$ over the saddles, are also included in blue colour. ![The two simple horizontal heteroclinic trajectories connecting $l_h(L_2)$ and $l_h(L_3)$. The parts of the trajectories with $x > 0$ (past parts) are plotted in red and in purple, respectively. The parts with $x < 0$ (future parts) are plotted in green and in orange, respectively. Also included in blue colour are $l_h(L_2)$ and $l_h(L_3)$. The saddle points $L_2$ and $L_3$ themselves are included as black dots. (Colour figure online).[]{data-label="pn"}](PN.jpg){width="\hsize"} The following considerations are the logical initial steps for our search of simple symmetric and antisymmetric heteroclinic trajectories, i.e. heteroclinic trajectories crossing the plane $x = 0$ only once and having the same symmetry properties as the periodic orbits shown in Fig. \[orb1\] or Fig. \[orb2\], respectively. We will again use the labels class 2 and class 3 respectively for these two groups of trajectories. First, it should be clear that a single heteroclinic trajectory going from $L_2$ to $L_3$ can never have the reflection symmetry in $y$. However, to any trajectory going from $L_2$ to $L_3$ there is the rotated trajectory going from $L_3$ to $L_2$. And we should take these two $D_z(\pi)$ related trajectories together as counterpart for a single periodic trajectory. The two rotation related heteroclinic trajectories taken together have the correct symmetry properties. Second, to obtain the desired symmetry property in $x$ it should be clear that we have to look for heteroclinic trajectories connecting equivalent (rotation symmetry related) substructures in the two NHIMs. Let us identify simple heteroclinic trajectories from $L_2$ to $L_3$ by giving their coordinates in the moment when they cross the plane $x = 0$ in negative orientation. In the intersection point they have some value $y_0$, because of symmetry reasons (when they belong to class 2 or to class 3) they have $p_y = 0$. For the symmetric class 2 they have $p_{z,0} = 0$ and a value $z_0 \ne 0$, and for the antisymmetric class 3 they have $z_0 = 0$ and a value $p_{z,0} \ne 0$. That is, heteroclinic trajectories from class 2 are identified by giving $y_0$ and $z_0$ and heteroclinic trajectories from class 3 are identified by giving $y_0$ and $p_{z,0}$ at the moment of crossing the plane $x = 0$. In Fig. \[pts\] we present the initial conditions of these symmetric heteroclinic trajectories. 
Part (a) gives the initial conditions of class 2 on the $(y,z)$ plane and part (b) gives the initial conditions of class 3 on the $(y,p_z)$ plane. The points are marked with labels corresponding to the substructures of the NHIMs included and labelled in Fig. \[map\]. Note that not all substructures lead to simple symmetric and/or antisymmetric heteroclinic connections and that some substructures of type $i_n$ and $a_n$ lead to 2 different heteroclinic connections of each symmetry class while the separatrix and some substructures in the tilted loop islands lead to four different heteroclinic connections of each symmetry class. The local branches (segments leading to the first intersections with the plane $x = 0$) of the stable and unstable manifolds of the substructures $i_4$, $i_3$, $i_2$, $i_1$, $t_1$, $t_2$ and the ones of the tilted loop orbits and of $l_v$ do not reach any point of the plane $x = 0$ with $p_y = 0$ and $z = 0$ or with $p_y = 0$ and $p_z = 0$. Therefore, we do not find corresponding simple symmetric or simple antisymmetric heteroclinic trajectories. In addition for $t_3$, $t_4$ and $t_5$ we do not find simple antisymmetric heteroclinic connections whereas the symmetric ones exist. In Fig. \[map\] only a small number of the substructures has been included and labelled. In total there is an infinity of further substructures and many of them lead to simple symmetric and antisymmetric heteroclinic connections. They are indicated in Fig. \[pts\] by the green curves. The red stars on the green curves mark the boundary points between different branches, where the sequence of labels turns its orientation and repeats labels. This means a collision of two heteroclinic trajectories, i.e. a heteroclinic bifurcation. The symmetric connections between the two horizontal Lyapunov orbits (the trajectories presented in Fig. \[pn\]) can be considered limiting cases as well for class 2 as for class 3. A horizontal trajectory fulfills at the same time the defining conditions of class 2 and of class 3. These symmetric horizontal heteroclinic trajectories are the end points of the curves plotted in both parts of Fig. \[pts\], i.e. in both symmetry classes. In the same figure only contributions for positive values of $z$ in part (a) or positive values of $p_z$ in part (b) are included. Because of symmetry reasons also the corresponding contributions with negative values exist. Therefore we can supplement the two parts of the figure by the vertically reflected plots and thereby in both parts the green curve turns into a closed loop with the topology of a circle. Then there are no longer any end points of the green curves. In Figs. \[orb3\] and \[orb4\] we present, as numerical examples, the symmetric and antisymmetric heteroclinic trajectories respectively in position space for the substructure $i_6$ of the NHIMs. Of course, for all plots the heteroclinic trajectories are truncated at some finite time, when they are already close to the limit sets. This holds in the past and in the future. Because of symmetry reasons in Fig. \[orb3\](d) the green segment (future segment of the trajectory) coincides exactly with the red segment (past segment of the trajectory) and is covered by the red segment and is invisible. Also included by blue colour in Fig. \[orb3\](a) is a projection into the position space of the limit sets over the two saddles, which are the substructures $i_6$ of the NHIMs. In the full dimensional phase space these substructures have the topology of a 2 dimensional torus. 
The projection into the position space still gives an impression of this torus shape. In Fig. \[orb3\]a this limit set is not well resolved, therefore we repeat this limit structure over the saddle $L_2$ in better resolution in Fig. \[torus\]. To produce this plot the following has been done. First 500 points on the substructure $i_6$ in Fig. \[map\] have been picked. All these points have $z = 0$ and have been used as initial conditions for the trajectories. Each one of these trajectories has been integrated until the next intersection with the plane $z = 0$ in the same orientation. Point sequences along these 500 trajectory segments are plotted in order to visualize the projected torus. The corresponding projected torus over the saddle $L_3$ is obtained by an application of $D_z(\pi)$ to Fig. \[torus\]. The two limit sets in Fig. \[orb4\] coincide with the ones in Fig. \[orb3\] and are again given by the torus magnified in Fig. \[torus\]. To each heteroclinic trajectory of class 2 and of class 3 exists also the $z$ reflected heteroclinic trajectory. We do not distinguish these two trajectories. So far we have treated the most simple heteroclinic trajectories with particular symmetry properties. There are also simple nonsymmetric heteroclinic trajectories. Just consider a heteroclinic trajectory starting on NHIM2 on the substructure $s_1$ and ending on NHIM3 on substructure $s_2$, where these two substructures are different, i.e. are not identified by an application of $D_z(\pi)$. Then it is immediately clear that this heteroclinic trajectory can not belong to the classes 1 or 2 or 3 considered so far. Fig. \[orb5\] is an example where the past limit set is close to $i_6(L_2)$ and the future limit set is close to $i_5(L_3)$. In part (c) we see clearly that we have a nonsymmetric trajectory. In the moment of the crossing of the plane $x = 0$ the value of $z$ is neither zero nor is it an extremal value, i.e. also $p_z$ is different from zero. We also observe that at the moment of this crossing the value of $p_y$ is rather small and the projection into the $(x,y)$ plane is close to symmetric. As a consequence, the trajectory connects substructures of the NHIMs which lie rather close in Fig. \[map\]. All simple heteroclinic trajectories have this property because of the following explanation. The simple heteroclinic trajectories always have rather short trajectory segments in the central region of the bar and spend very little time in this central region. As seen in the NHIM plot of Fig. \[map\] the dynamics over the saddles is almost regular and this means that the partition of the available total energy between horizontal motion and vertical motion is almost constant. This energy distribution can only be changed significantly along some trajectory segment clearly distant from the saddle region. However, also in other regions of the position space this energy transfer between horizontal and vertical motion is slow. Therefore, to obtain a significant energy transfer we need a trajectory which stays away from the saddle regions for a sufficiently long time. As we have just seen, the simple heteroclinic trajectories like the one shown in Fig. \[orb5\] do not do this. Therefore, these trajectories end over the saddle $L_3$ with almost the same vertical energy with which they have started over the saddle $L_2$. This explains why they connect equal or neighbouring substructures of the NHIMs. 
In addition, $l_v$ and the substructures close to it, like $i_1$ or $i_2$, or also the tilted loop orbits and structures close to them, like $t_1$ or $t_2$, do not contribute to the simple heteroclinic trajectories at all. In this context see again Fig. \[pts\]. To get heteroclinic trajectories connecting more distant substructures of the NHIMs or having $l_v$ or the tilted loop orbits and their neighbourhoods as past or future limits these trajectories must make some extra loops away from the saddle regions. This means they must have multiple intersections with the plane $x = 0$ and can not be simple heteroclinic trajectories. As a numerical example we show in Fig. \[orb6\] a nonsimple heteroclinic trajectory starting close to $l_v(L_2)$ and ending over $L_3$, close to the substructure $i_6$ of NHIM3. In order not to overload the plot we did not include these limit sets into the figure. To make it easier to follow the trajectory we have cut it into 3 time segments and plotted the first segment $(t \in [0,3.2])$ in black, the second segment $(t \in [3.2, 5.1])$ in red and the third segment $(t \in [5.1,10])$ in green. Remember that in the plot we only show a finite segment of the heteroclinic trajectory which in principle runs for ever into the past and into the future. Global description of the set of simple heteroclinic trajectories {#glb} ================================================================= Now we consider the set of all simple heteroclinic trajectories connecting NHIM2 and NHIM3, let us call this set $\tilde{S}$. For the moment, we concentrate on the ones which have NHIM2 as past limit and NHIM3 as future limit. Each one of these trajectories intersects only once the intersection surface $R$ defined by the condition $x = 0$. Therefore, we can represent each element of $\tilde{S}$ by a point in $R$. Let us call this corresponding set of intersection points $S$. There is a 1:1 relation between trajectories from $\tilde{S}$ and points from $S$. First let us discuss the dimension of $S$. We still consider a single value $E$ of the total energy only. The dimension of the corresponding energy shell in the phase space is 5. The dimension of the intersection surface $R$ is 4, as coordinates in $R$ we naturally use $y$, $z$, $p_y$, and $p_z$. The dimension of NHIM1 and NHIM2 for fixed energy is 3. The dimension of the stable and unstable manifolds of the NHIMs is 4. The dimension of the intersections between the stable and unstable manifolds of the NHIMs and $R$ is 3. Let us call these intersections $M_2$ and $M_3$ for the intersection of the local branch of the unstable manifold of NHIM2 and the local branch of the stable manifold of NHIM3, respectively. Local means here that we only consider first intersections between $R$ and trajectories running along the stable and unstable manifolds and we ignore possible later additional intersections. Simple heteroclinic trajectories are then given by intersections between $M_2$ and $M_3$. In a nondegenerate case this is the transverse intersection between two 3 dimensional sets located in a 4 dimensional embedding set. This intersection is the set $S$ defined before. We can interpret it as the primary heteroclinic intersection set between NHIM2 and NHIM3. And according to the previous considerations its dimension is 2. Next we need the argument that the topology of $S$ is the one of a 2 dimensional sphere. We start our considerations in the 5 dimensional energy shell in the phase space. 
There the NHIMs are 3 dimensional surfaces and have the topology of a 3 dimensional sphere $S^3$. For the stable and the unstable manifolds of NHIMs we have the foliation theorem which shows that the internal structure of these manifolds is essentially a Cartesian product of the NHIM and a line. Then the transverse intersection between the stable or the unstable manifold and the hypersurface $R$ reproduces a continuous image of the NHIM and therefore it also has the topology of $S^3$. This holds for the unstable manifold of NHIM2 and also for the stable manifold of NHIM3. Then transverse nonempty intersections of these two manifolds within $R$ are nonempty transverse intersections between 2 copies of $S^3$ embedded in the 4 dimensional manifold $R$. And this intersection (i.e. $S$) is 2 dimensional and has the topology of a 2 dimensional sphere $S^2$. The intersection of $S$ with the plane ($p_y = 0$, $p_z =0$) is the green curve in Fig. \[pts\]a representing simple symmetric heteroclinic trajectories together with its $z$ reflected mirror image. Now it should no longer be surprising that the intersection between a surface $S$ with the topology of $S^2$ and a plane gives a curve with the topology of a circle. Let us call this curve $C_s$. We can imagine the curve $C_s$ as the curve on $S^2$ with constant azimuth angle 0 and $\pi$. We can define this azimuth angle as $\phi = \arctan ( p_z / z )$. And the intersection of $S$ with the plane ($p_y = 0$, $z = 0$) is the green curve in Fig. \[pts\]b representing simple antisymmetric heteroclinic trajectories together with its $p_z$ reflected mirror image. It is again a curve with the topology of a circle. Let us call this curve $C_a$. We can imagine the curve $C_a$ as the curve on $S^2$ with constant azimuth angle $\pm \pi/2$. The two horizontal simple heteroclinic trajectories shown in Fig. \[pn\] are the intersection between $S$ and the $y$ axis, i.e. they are the two points on $S$ fulfilling simultaneously $z = 0$, $p_y = 0$ and $p_z = 0$. They are the two intersection points between the curves $C_s$ and $C_a$. We can imagine these two points as the two poles of $S^2$. The nonsymmetric simple heteroclinic trajectories fill the rest of $S$, i.e. the part not belonging to the two curves $C_s$ and $C_a$. Let us call this complement $C_n$. We can imagine $C_n$ as all points on $S^2$ with an azimuth angle which is not an integer multiple of $\pi/2$. Next we can imagine that we cover $S$ by two different systems of 1 dimensional curves. We call these systems of curves $SC_2$ and $SC_3$. Elements of these two sets are labelled by the substructures of the NHIMs. A curve $SC_2(s_n)$ from the set $SC_2$ contains points on $S$ which have the substructure $s_n$ of NHIM2 as past limit set. And a curve $SC_3(s_k)$ from the set $SC_3$ contains points on $S$ which have the substructure $s_k$ of NHIM3 as future limit set. Note that not all existing substructures on the NHIMs have corresponding curves in the sets $SC_2$ and $SC_3$; this happens, for example, for the substructures $i_1$, $i_2$, $i_3$, $i_4$, $t_1$, $t_2$, $l_v$ and for the tilted loop orbits. In principle it can be allowed that some of these curves consist of various connected components. The intersection points between $SC_2(s_n)$ and $SC_3(s_k)$ represent the simple heteroclinic trajectories going from the substructure $s_n$ of NHIM2 to the substructure $s_k$ on NHIM3, while crossing the plane $x = 0$ only once.
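The classification of points of $S$ just described (the two poles, the curves $C_s$ and $C_a$, and the nonsymmetric remainder $C_n$) is easy to encode. The fragment below is only a schematic illustration; the tolerance and the function name are arbitrary choices of ours.

```python
# Hedged sketch: classify a point of the primary intersection set S, given in
# the coordinates (y, z, p_y, p_z) of the surface R (x = 0), according to the
# curves C_s, C_a and the complement C_n described in the text.
import math

def classify_point_of_S(y, z, p_y, p_z, tol=1e-9):
    phi = math.atan2(p_z, z)                    # azimuth angle phi = arctan(p_z / z)
    on_Cs = abs(p_y) < tol and abs(p_z) < tol   # plane p_y = 0, p_z = 0
    on_Ca = abs(p_y) < tol and abs(z) < tol     # plane p_y = 0, z  = 0
    if on_Cs and on_Ca:
        return "pole of S^2: horizontal heteroclinic trajectory"
    if on_Cs:
        return "C_s: simple symmetric trajectory (phi = 0 or pi)"
    if on_Ca:
        return "C_a: simple antisymmetric trajectory (phi = +-pi/2)"
    return "C_n: nonsymmetric simple trajectory, phi = %.3f" % phi
```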
The intersections between $SC_2(s_n)$ and $SC_3(s_k)$, where $s_n$ and $s_k$ are the same substructures on NHIM2 and NHIM3 (i.e. when $s_n$ on NHIM2 is transformed into $s_k$ on NHIM3 by $D_z(\pi)$) are exactly the points along the curves $C_s$ and $C_a$, i.e. they represent the simple symmetric and antisymmetric heteroclinic trajectories. Curves from the set $SC_2$ really have intersections only with a part of the curves from $SC_3$. Because of the discrete symmetries of the system the number of intersection points between a curve from $SC_2$ with a curve from $SC_3$ can be either 0 or 4 or 8 or 12 or 16, when we count all symmetry related copies of simple heteroclinic trajectories. For the simple heteroclinic trajectories going from NHIM3 to NHIM2 we have an equivalent sphere. These two equivalent spheres are transformed into each other by $D_z(\pi)$. We have given numerical examples for the energy $E = -3200$ only. The qualitative description of $S$ is similar for all energy values a little higher than the saddle energy. Of course, it depends on the energy value exactly which substructures from NHIM2 and NHIM3 are connected by symmetric or by antisymmetric or by nonsymmetric simple heteroclinic trajectories. Besides the surface $S$ representing the primary heteroclinic intersection surface and consisting of simple heteroclinic trajectories there is an infinity of other heteroclinic intersection surfaces representing more complicated heteroclinic trajectories making additional loops and intersecting the surface $x = 0$ several times. Such heteroclinic trajectories can connect substructures from NHIM2 and NHIM3 which are not connected by simple heteroclinic trajectories. Remember the example shown in Fig. \[orb6\]. The description given here relies heavily on the discrete symmetries of the system. But it should be clear that the qualitative picture remains valid under small perturbations of the symmetry. Interpretation of described periodic orbits and heteroclinic trajectories as x1 orbits {#x1} ====================================================================================== By looking over the various figures showing heteroclinic trajectories and periodic orbits close to the heteroclinic tangle we note the following common feature: All these trajectories are confined to a narrow strip in $y$ (approximately between -1.2 and +1.2) whenever $|x| > 5$. In addition, when $|x| < 5 $ then only a small relative fraction of these trajectories enters the interior of the nucleus. These trajectories have a large density in a narrow shell around the nucleus. In this sense this set of trajectories traces out the outer parts of the bar together with a shell around the nucleus. The individual trajectories from the neighbourhood of this set are rather unstable. However, when we perturb some trajectory from this set, then it switches to a similar trajectory of the same set. In this sense this whole set of trajectories is rather robust, dynamically and also structurally. All heteroclinic trajectories run along the bar axis (which coincides with the $x$-axis), with small values of $y$, while they stay rather close to the horizontal plane $(x,y)$, i.e. they only make small oscillations in $z$ direction $(|z| < 1)$. This means that all heteroclinic trajectories are excellent candidates of x1 type of orbits, which support the barred structure of the galaxy. Fig. \[dist\] has been constructed to demonstrate the mentioned distribution in an additional form. 
We have introduced cylindrical coordinates with the $x$ axis as the cylinder axis and the cylindrical radius $d = \sqrt{y^2 + z^2}$. Next we have initiated 1000 trajectories near NHIM2 and let them run to the interior region, close to the inner branch of the unstable manifold of NHIM2. We let these trajectories run until a time $t = 10$. The figure shows the density of this collection of trajectories over the $(x,d)$ plane. We obtain a very high density in the two outer parts of the bar, although the density is only moderate in the direct neighbourhood of the cylinder axis. We have a small density in the interior of the nucleus and a larger density in a shell around the nucleus. This plot gives a good impression of how the unstable manifolds of the saddle NHIMs and their heteroclinic tangle confine the bar and the nucleus. In a real galaxy, stars can escape from this confinement by close encounters causing a change of momentum and energy. ![Probability distribution of trajectories running close to the unstable manifold of NHIM2, plotted in the cylinder coordinates $x$ and $d = \sqrt{y^2 + z^2}$. For more details see the main text.[]{data-label="dist"}](dist.jpg){width="\hsize"} Next let us have a look at relevant time scales. The short, simple periodic orbits (like the ones shown in Fig. \[orbs\]) have periods of a few of our dimensionless time units; the time unit corresponds to approximately 100 million years, i.e. it is of the order of magnitude of the rotation period of the bar. In this sense, when a trajectory remains near the heteroclinic tangle for several time units then it remains in the bar region for several rotation periods. That is, such trajectories belong to the set of trajectories which populate the bar for some time, form and stabilize the bar. They behave like x1 orbits. For the energy $E = -3200$ we did not find any dynamically stable periodic x1 orbits, but as just explained the heteroclinic tangle seems to take over the job which is usually attributed to the periodic x1 orbits, namely to shape and stabilize the bar structure. The heteroclinic tangle certainly is not uniformly hyperbolic. This should already be clear from the almost integrable internal dynamics of the NHIMs. That is, we have a mixed phase space. Usually in homoclinic/heteroclinic tangles with mixed phase spaces, we find stable periodic orbits, at least of high periods. Therefore, it would not surprise us if stable periodic orbits of high period also existed in our present heteroclinic tangle. Now let us check how the calculated periodic and heteroclinic trajectories fit into the real galaxy NGC 1300. Remember that the parameters of our potential model are chosen to fit the properties of this particular galaxy. To construct Fig. \[sml\] we have rotated 6 horizontal periodic orbits (the ones from Fig. \[orbs\]) and 2 simple horizontal heteroclinic trajectories (the ones from Fig. \[pn\]) into the appropriate direction of view and have included them, using the correct scale, into a real image of the barred galaxy NGC 1300. According to plate 10 in @BT08 the semi-major axis of NGC 1300 is about 10 kpc. Using this as a scale measure we created a tilted frame of reference, so as to appropriately fit all trajectories on top of the real image, at the correct size and position. Fig. \[sml\] contains horizontal trajectories only, because for them it is easy to imagine how they are running in the full 3 dimensional position space. In contrast, in the 6 panels of Fig.
\[sml2\] we present an analogous plot for the 6 trajectories already shown in Figs. \[orb1\], \[orb2\], \[orb3\], \[orb4\], \[orb5\], \[orb6\], respectively. These trajectories explore all 3 degrees of freedom, and to understand their motion well in these pictures of the real galaxy we need to consult the projections of these trajectories into the various coordinate planes given in the previous plots. From Figs. \[sml\] and \[sml2\] it becomes evident how the periodic orbits and the heteroclinic trajectories populate the bar and the neighbourhood of the nucleus and how they form the skeleton of the bar and the surrounding of the nucleus. At the same time these combined plots indicate that our dynamical model is realistic for barred galaxies with features similar to NGC 1300. Many barred galaxies have dust stripes in their bars. These lines start near the saddle point (the connection between bar and outer spirals) and run mainly in the longitudinal direction of the bar, but not along the symmetry axis of the bar; they are shifted to one side. When we compare these dust lines with Figs. 13a and 13b in or Fig. 9a in , then it becomes evident that these dust stripes run along the local segments of the inner branches of the unstable manifolds of the saddle NHIMs. Good examples of real galaxies showing these patterns are the following NGC numbers: 613, 1097, 1300, 1365, 1530, 4303, 5236, 5921, 6221, 6907, 6951, 7552 [see e.g., @KB19]. The relation of these patterns to the NHIM properties might be understood by the following arguments: Many barred galaxies have an inflow of gas and dust from the outer parts to the interior. Then it is obvious that this gas and dust first approaches the saddle from the outside along the local segments of the outer branches of the stable manifolds of the saddle NHIMs. Next, it flows over the saddle and then continues along the inward going branches of the unstable manifolds. In this sense, the dust stripes visualise the projection into the position space of the local segments of the inner branches of the unstable saddle manifolds. Of course, a part of this gas and dust which has come close to the saddle points returns to the outside and leaves along the outer branches of the unstable saddle manifolds. This outer dust pattern is also clearly visible in some real barred galaxies; good examples are the NGC numbers 1097, 1300, 5236, 6951, 7479. Discussion and conclusions {#disc} ========================== The NHIMs of codimension 2 sitting over the two index-1 saddle points of the effective galactic potential together with their stable and unstable manifolds direct the global orbital dynamics of the whole system to a large extent. The nearby periodic orbits of similar shape also contribute. In previous publications we have already explained how the outer branches of the unstable manifolds determine the structure of outer rings or spirals (it should be mentioned that in the case of rings formed by the outer branches of the unstable manifolds there are in addition spirals in the outer part of the disk caused by other mechanisms not related to the NHIMs and their invariant manifolds). In the present article we explain how the inner branches and the corresponding heteroclinic connections are related to the bar and to a neighbourhood of the nucleus.
In this sense, the important visible structures in the interior parts of barred galaxies are directly related to the projection into the $(x,y,z)$ position space of the inner branches of unstable manifolds of saddle NHIMs. The barred galaxy is thereby the most beautiful example known so far in which the structure of these mathematical objects shows up directly in position space and is easily accessible to observations. Figs. \[sml\] and \[sml2\] demonstrate how the important periodic orbits close to the heteroclinic tangle, and the heteroclinic trajectories themselves, fit into the observed structure of NGC 1300. We consider this an important confirmation of our model potential, where the part describing the bar has been introduced by us as a simpler alternative to the long established standard Ferrers’ triaxial model [@F77]. Figs. \[sml\] and \[sml2\] of the present article together with Fig. 13 from give a rather complete picture of how the NHIMs and the corresponding stable and unstable manifolds are the dominating subsets for the formation of structures in the system. From the dynamical system theory point of view, we have made important progress in the detailed understanding of the primary heteroclinic intersection surface, i.e. the one representing simple heteroclinic trajectories. This 2 dimensional surface has the form of a sphere $S^2$, where the poles represent the two horizontal heteroclinic trajectories, and the circle of azimuth angle 0 and $\pi$ on the one hand and the circle of azimuth angle $+\pi/2$ and $-\pi/2$ on the other hand represent the symmetric and the antisymmetric heteroclinic trajectories, respectively. The mentioned symmetry properties refer to the $z$ motion relative to the $x$ motion. The heteroclinic trajectories with symmetry always connect equivalent substructures on the two NHIMs. The rest of the simple heteroclinic trajectories connect different substructures on the 2 NHIMs, i.e. substructures which are not identified by the discrete symmetries of the system. This description should also hold for other systems with the same discrete symmetries and many of its qualitative features should survive small perturbations of the symmetry. To our knowledge such a detailed description of the primary heteroclinic intersection surface between two NHIMs of 3-dof systems is new. Moreover, it was also shown that heteroclinic trajectories are excellent candidates for x1-type orbits, which support the barred structure and geometry of the galaxy. For numerically integrating the equations of motion we used a Bulirsch-Stoer routine implemented in standard `FORTRAN 77` [e.g., @PTVF92], with double precision. The relative error regarding the conservation of the orbital energy was of the order of $10^{-14}$, using a fixed time step equal to 0.001 and a Quad-Core i7 vPro 4.0 GHz processor. All the graphics of the paper have been constructed using version 11.3 of the Mathematica$^{\circledR}$ software [@W03]. Acknowledgments {#acknowledgments .unnumbered} =============== One of the authors (CJ) thanks DGAPA for financial support under grant number IG-100819. The authors would like to thank the anonymous referee for all the apt suggestions and comments which improved both the quality and the clarity of the paper.
Athanassoula E., Romero-Gómez M., Masdemont J.J., 2009a, MNRAS, 394, 67
Athanassoula E., Romero-Gómez M., Bosma A., Masdemont J.J., 2009b, MNRAS, 400, 1706
Athanassoula E., Romero-Gómez M., Bosma A., Masdemont J.J., 2010, MNRAS, 407, 1433
Binney J., Tremaine S., 2008, Galactic Dynamics, 2nd edn., Princeton Univ. Press, Princeton
Ferrers N.M., 1877, Q. J. Pure Appl. Math., 14, 1
Gonzalez F., Drotos G., Jung C., 2014, J. Phys. A: Math. Theor., 47, 045101
Jung Ch., Zotos E.E., 2015, PASA, 32, e042
Jung Ch., Zotos E.E., 2016a, MNRAS, 457, 2583 (Part I)
Jung Ch., Zotos E.E., 2016b, MNRAS, 463, 3965 (Part II)
König M., Binnewies S., 2019, Bildatlas der Galaxien, 2nd edn., Kosmos Verlag, Stuttgart
Lyapunov A.M., 1907, Ann. Fac. Sci. Toulouse, 9, 203
Lyapunov A.M., 1949, Annals of Mathematical Studies, Vol. 17
Pfenniger D., 1984, A&A, 134, 373
Press W.H., Teukolsky S.A., Vetterling W.T., Flannery B.P., 1992, Numerical Recipes in FORTRAN 77, 2nd edn., Cambridge Univ. Press, Cambridge
Romero-Gómez M., Masdemont J.J., Athanassoula E., García-Gómez C., 2006, A&A, 453, 39
Romero-Gómez M., Athanassoula E., Masdemont J.J., García-Gómez C., 2007, A&A, 472, 63
Tsoutsis P., Efthymiopoulos C., Voglis N., 2008, MNRAS, 387, 1264
Tsoutsis P., Kalapotharakos C., Efthymiopoulos C., Contopoulos G., 2009, A&A, 495, 743
Voglis N., Tsoutsis P., Efthymiopoulos C., 2006, MNRAS, 373, 280
Wiggins S., 1994, Normally Hyperbolic Invariant Manifolds in Dynamical Systems, Springer Verlag, Berlin
Wolfram S., 2003, The Mathematica Book, Wolfram Media, Champaign
Zotos E.E., Jung Ch., 2018, MNRAS, 473, 806 (Part III)

[^1]: E-mail: evzotos@physics.auth.gr

[^2]: E-mail: jung@fis.unam.mx
--- abstract: 'A new definition of Lie invariance for nonlinear multi-dimensional boundary value problems (BVPs) is proposed by generalizing known definitions to much wider classes of BVPs. The class of (1+3)-dimensional nonlinear BVPs of the Stefan type, modeling the process of melting and evaporation of metals, is studied in detail. Using the definition proposed, the group classification problem for this class of BVPs is solved and some reductions (with physical meaning) to BVPs of lower dimensionality are made. Examples of how to construct exact solutions of the (1+3)-dimensional nonlinear BVP with correctly specified coefficients are presented.' --- [**Lie symmetries and reductions of multi-dimensional boundary value problems of the Stefan type**]{} [**Roman Cherniha$^{\dag,}$ $^\ddag$**]{} [**and Sergii Kovalenko$^\dag$**]{}\ [*$^\dag$ Institute of Mathematics, Ukrainian National Academy of Sciences,\ 3 Tereshchenkivs’ka Street, Kyiv 01601, Ukraine*]{}\ [*$^\ddag$ Department of Mathematics, National University ‘Kyiv-Mohyla Academy’,\ 2 Skovoroda Street, Kyiv 04070, Ukraine*]{}\ E-mail: cherniha@imath.kiev.ua and kovalenko@imath.kiev.ua **Introduction** ================ Currently, Lie symmetries are widely applied to study partial differential equations (PDEs) (including multi-component systems of multi-dimensional PDEs), notably, for their reductions to ordinary differential equations (ODEs) and constructing exact solutions. There is a vast number of papers and many excellent books (see, e.g., [@bl-anco02; @b-k; @fss; @olv; @ovs] and references cited therein) devoted to such applications. However, one may note that the authors usually do not pay any attention to the application of Lie symmetries for solving boundary value problems (BVPs). To the best of our knowledge, the first papers that did so were published at the beginning of the 1970s (see [@pukh-72] and [@bl-1974] and their extended versions presented in books [@pukh-et-al-98] and [@b-k], respectively). The books, which highlight the essential role of Lie symmetries in solving BVPs and present several examples, were published much later [@b-k; @rog-ames-89; @ibr96]. The main object of this paper is a class of (1+3)-dimensional nonlinear BVPs of the Stefan type. These problems are widely used in the mathematical modeling of a wide range of processes arising in physics, biology, chemistry and industry [@alex93; @bri-03; @crank84; @ready; @rub71; @anisimov70]. Although these processes can be very different, from the formal point of view they share a common peculiarity: unknown moving boundaries (free boundaries). The movement of the unknown boundaries is described by the famous Stefan boundary conditions [@st; @rub71; @gupta]. It is well known that exact solutions of BVPs of the Stefan type can be derived only in exceptional cases and the relevant list is not very long at the present time (see [@alex93; @ch-od90; @ch93; @br-tr02; @br-tr10; @br-tr07; @voller10; @barry08; @ch-kov-09] and references cited therein). Notably, those exact solutions were constructed under additional conditions on their form and/or the coefficients arising in the relevant BVP. It should also be stressed that all analytical results derived in those papers are concerned with two-dimensional BVPs.
To the best of our knowledge, there are no invariant solutions (with physical meaning) of multidimensional BVPs with free boundaries, except for particular problems with radial symmetry, for which analytical solutions have been found (see, e.g., [@alex93; @crank75] and references cited therein). Perhaps [@pukh-06] (see also references by the same author cited therein) is unique because it contains such exact solutions for (1+3)-dimensional hydrodynamical problems with free boundaries. Of course, there are many interesting papers devoted to the rigorous asymptotic analysis of such BVPs, leading to relevant analytical results (see, e.g., [@kartashov-01; @king-et-all-03; @king-et-all-05] and references cited therein). From the mathematical point of view, BVPs with free boundaries are more complicated objects than the standard BVPs with fixed boundaries. In particular, each BVP with Stefan boundary conditions is nonlinear, even though the basic equations may be linear [@rub71; @cr-jg59]. Thus, the classical methods of solving linear BVPs (the Fourier method, the Laplace transformation and so forth) cannot be directly applied for solving any BVP with free boundaries. On the other hand, it can be noted that the Lie symmetry method can be even more applicable to solving problems with moving boundaries than to other BVPs. In fact, the structure of unknown boundaries may depend on invariant variable(s) and this provides the possibility of reducing the BVP in question to one of lower dimensionality. This is the reason why some authors have applied the Lie symmetry method to BVPs with free boundaries [@bl-1974; @pukh-72; @ben-olv-82; @pukh-06; @ch-2003; @ch-kov-09; @ch-kov-11]. The paper is organized as follows. In section 2, we propose a new definition of Lie invariance for any BVPs with basic evolution equations, which generalizes the known definitions, and formulate the algorithm for solving the group classification problem. In section 3, we apply the definition and the algorithm to the class of (1+3)-dimensional BVPs of the Stefan type, used to describe melting and evaporation of materials in the case when their surface is exposed to a powerful flux of energy. The main result is presented in Theorem 2, which is a highly non-trivial generalization of that derived for two-dimensional BVPs in [@ch-kov-09]. In section 4, all possible systems of subalgebras (optimal systems of subalgebras) for a subclass of (1+3)-dimensional BVPs admitting a five-dimensional Lie algebra of invariance are constructed. We reduce such problems to two-dimensional BVPs via the non-conjugate two-dimensional subalgebra. Moreover, we show that this reduction admits a clear physical interpretation. Examples of how to construct exact solutions of the (1+3)-dimensional nonlinear BVP with specified diffusion coefficients are also presented. Finally, we present conclusions in section 5. **Definition of Lie invariance for a BVP with free boundaries** =============================================================== Consider a BVP for a system of $n$ evolution equations ($n \geq 2$) with $m + 1$ independent $(t, x)$ (hereafter $x = (x_1, x_2, \ldots, x_m)$) and $n$ dependent $u = (u_1, u_2, \ldots, u_n)$ variables. Let us assume that the basic equations possess the form $$\label{1} u_t^i=F^i \left(x, u, u_x, \ldots , u_{x}^{(k_i)}\right), \ i = 1, \ldots, n$$ and are defined on a domain $(0,+\infty)\times{\Omega} \subset \mathbb{R}^{m + 1} $, where ${\Omega}$ is an open domain with smooth boundaries.
Hereafter, the lower subscripts $t$ and $x$ denote differentiation with respect to these variables and $u_{x}^{(k_i)}$ denotes a totality of partial derivatives with respect to $x$ of order $k_i$ (for example, $u_{x}^{(1)}=u_{x_1}, \ldots, u_{x_m}$). Consider three types of boundary and initial conditions, which widely arise in applications: $$\label{2} s_a(t,x)=0: \ B^{j}_a \left(t,x, u, u_x, \ldots , u_{x}^{(k_{a}^j)}\right) = 0,\ a = 1, \ldots, p, \, j =1,\ldots,n_a,$$ $$\label{3} S_b(t,x)=0: \ B^{l}_b \left(t,x, u, \ldots , u_{x}^{(k_b^l)}, S_b^{(1)}, \ldots , S^{(K^l_b)}_b \right) = 0, \ b = 1, \ldots, q, \, l =1,\ldots,n_b,$$ and $$\label{4} \gamma_c(t,x)=\infty: \ \Gamma^{m}_c \left(t,x, u, u_x, \ldots , u_{x}^{(k_{c}^m)}\right) = 0, \ c = 1, \ldots, r, \, m = 1, \ldots, n_c.$$ Here $k_{a}^j < k=\max\{k_1,\ldots,k_n\}, \ k_{b}^l< k, \ k_{c}^m < k$ and $K^l_b$ are the given numbers, $s_a(t,x)$ and $\gamma_c(t,x)$ are the known functions, while the functions $S_b(t,x)$ defining free boundary surfaces must be found. In (\[3\]), the function $S^{(K^l_b)}_b$ denotes a totality of derivatives with respect to $t$ and $x$ of order $K^l_b$ (for instance, $S_b^{(1)} = \frac{\partial S_b}{\partial t}, \frac{\partial S_b}{\partial x_1}, \ldots, \frac{\partial S_b}{\partial x_m}$). We also assume that all functions arising in (\[1\])–(\[4\]) are sufficiently smooth so that a classical solution exists for this BVP. We note that boundary conditions (\[4\]) essentially differ from those (\[2\]) because they are defined on the non-regular manifolds ${\cal{M}}_c = \{ \gamma_c(t,x)=\infty \}$. Such conditions appears if one considers BVPs in the unbounded domains and often leads to difficulties. For example, one may check that the definition of BVP invariance presented in [@b-k; @bl-anco02] is not valid for such conditions. Consider an $N$–parameter (local) Lie group $G_N$ of point transformations of variables $(t,x,u)$ in the Euclidean space $\mathbb{R}^{n+m+1}$ (open subset of $\mathbb{R}^{n+m+1}$) defined by the equations $$\label{5} t^{\ast} = T(t,x,\varepsilon), \ \ x^{\ast}_i = X_i (t,x,\varepsilon), \ \ u^{\ast}_j = U_j(t,x,u,\varepsilon), \ i = 1, \ldots, m, \ j = 1, \ldots, n,$$ where $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_N)$ are the group parameters. According to the general Lie group theory, one may construct the corresponding $N$-dimensional Lie algebra $L_N$ with the basic generators $$\label{6} X_\alpha = \xi^0_\alpha \frac{\partial}{\partial t}+\xi^1_\alpha \frac{\partial}{\partial x_1} + \ldots + \xi^m_\alpha \frac{\partial}{\partial x_m} + \eta^1_\alpha \frac{\partial}{\partial u_1}+ \ldots +\eta^n_\alpha \frac{\partial}{\partial u_n}, \ \alpha = 1,2, \ldots, N,$$ where $\xi^0_\alpha = \left. \frac{\partial T(t,x,\varepsilon)}{\partial \varepsilon_\alpha}\right \vert_{\varepsilon = 0}, \ \xi^i_\alpha = \left. \frac{\partial X_i(t,x,\varepsilon)}{\partial \varepsilon_\alpha}\right \vert_{\varepsilon = 0}, \ \eta^j_\alpha = \left. \frac{\partial U_j(t,x,u,\varepsilon)}{\partial \varepsilon_\alpha}\right \vert_{\varepsilon = 0}$. In the extended space $\mathbb{R}^{n+m+q+1}$ of the variables $(t,x,u,S)$ (hereafter $S = (S_1, ..., S_q)$ ), the Lie algebra $L_N$ defines the Lie group $\widetilde{G}_N$: $$\label{7} t^{\ast} = T(t,x,\varepsilon), \ x_i^{\ast} = X_i(t,x,\varepsilon), \ u^{\ast}_j = U_j(t,x,u,\varepsilon), \ S^{\ast}_b = S_b,$$ where $i = 1, \ldots ,m, \ j = 1, \ldots, n, \ b = 1, \ldots, q$. 
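As a simple illustration of formulae (\[5\])–(\[7\]) (the concrete group below is chosen only as an example), consider the one-parameter scaling group
$$t^{\ast} = e^{2\varepsilon} t, \quad x^{\ast}_i = e^{\varepsilon} x_i, \quad u^{\ast}_j = u_j, \quad S^{\ast}_b = S_b.$$
Differentiating with respect to $\varepsilon$ at $\varepsilon = 0$ as in (\[6\]) gives the coordinates $\xi^0 = 2t$, $\xi^i = x_i$, $\eta^j = 0$, i.e. the single basic generator
$$X = 2t \frac{\partial}{\partial t} + x_a \frac{\partial}{\partial x_a},$$
which is exactly the dilatation operator $D_0$ appearing below in Table \[tab1\].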
Now we propose a new definition, which is based on the standard definition of differential equation invariance as an invariant manifold ${\cal{M}}$ [@olv] and generalizes the previous definitions of BVP invariance (see, e.g., [@ben-olv-82; @b-k; @bl-anco02; @ibr92]). The BVP of the form (\[1\])–(\[4\]) is called invariant with respect to the Lie group $\widetilde{G}_N$ (\[7\]) if - the manifold determined by equation (\[1\]) in the space of variables $\left(t,x,u, \ldots, u_x^{(k)}\right)$ is invariant with respect to the $k$th-order prolongation of the group $G_N$; - each manifold determined by conditions (\[2\]) with any fixed number $a$ is invariant with respect to the $k_a$th-order prolongation of the group $G_N$ in the space of variables $\left(t,x,u, \ldots, u_x^{(k_a)}\right)$, where $k_a = \max \{k_{a}^j, \ j = 1, \ldots, n_a \}$; - each manifold determined by conditions (\[3\]) with any fixed number $b$ is invariant with respect to the $k_b$th-order prolongation of the group $\widetilde{G}_N$ in the space of variables $\left(t,x,u, \ldots, u_x^{(k_b)}, S_b, \ldots , S^{(k_b)}_b \right)$, where $k_b=\max \{k_{b}^l, \ K_b^l, \ l =1,\ldots,n_b \}$; - each manifold determined by conditions (\[4\]) with any fixed number $c$ is invariant with respect to the $k_c$th-order prolongation of the group $G_N$ in the space of variables $\left(t,x,u, \ldots, u_x^{(k_c)}\right)$, where $k_c = \max \{k_{c}^m, \ m = 1, \ldots, n_c \}$. The functions $u_j = \Phi_j(t,x), \,j = 1, \ldots, n$ and $S_b = \Psi_b(t,x),\, b = 1, \ldots, q$ form an invariant solution $(u,S)$ of the BVP of the form (\[1\])–(\[4\]) corresponding to the Lie group (\[7\]) if - $(u,S)$ satisfies equations (\[1\]) and conditions (\[2\])–(\[4\]); - the manifold ${\cal{M}} = \{u_j = \Phi_j(t,x), \ j = 1, \ldots, n; \, S_b = \Psi_b(t,x), \ b = 1, \ldots, q \}$ is an invariant manifold of the Lie group (\[7\]). Definition 1 can be straightforwardly generalized to BVPs with governing systems of equations of hyperbolic, elliptic and mixed types. However, one should additionally assume that the $n$-component governing system of PDEs is presented in a ’canonical’ form (some authors use the notation ’involution form’ in this context), i.e. it possesses the simplest form and there are no non-trivial differential consequences. If the system of differential equations contains arbitrary functions as coefficients (formally speaking, they can be constants), the group classification problem arises. Such a problem was first formulated and solved for a class of non-linear heat equations (NHEs) in a pioneering work [@ovs-59] (see also [@ovs]). At the present time, there are algorithms for the rigorous solving of group classification problems (see, e.g., [@ch-se-ra-08] and references cited therein), which were successfully applied to different classes of PDEs. Thus, if system (\[1\]) and/or the boundary conditions (\[2\])–(\[4\]) contain arbitrary functions as coefficients, then we should formulate and solve the group classification problem for the BVP of the form (\[1\])–(\[4\]).
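Before proceeding to the algorithm, a small computational illustration of the first item of the definition above may be useful. The sketch below checks, with a computer algebra system, that a single one-dimensional nonlinear heat equation $u_t = (u^n u_x)_x$ (a toy example of ours, with the concrete exponent $n = 3$) is mapped into itself by the finite scaling transformation $x^{\ast} = \lambda x$, $u^{\ast} = \lambda^{2/n} u$; this is an equivalent finite-transformation form of the invariance of the manifold defined by the equation, and it is the one-dimensional analogue of the dilatation operator arising in case 7 of Table \[tab1\] below.

```python
# Hedged sketch (sympy): invariance of the toy equation u_t = (u**n * u_x)_x
# under the scaling x -> lam*x, u -> lam**(2/n)*u, written in transformed variables.
import sympy as sp

t, xt, lam = sp.symbols('t x_new lambda', positive=True)
n = sp.Rational(3)                       # concrete exponent chosen only for the check
v = sp.Function('u_new')(t, xt)          # transformed dependent variable

# original variables through the new ones: x = lam*x_new, u = lam**(2/n)*u_new
u = lam**(2 / n) * v
Dx = lambda f: sp.diff(f, xt) / lam      # d/dx = (1/lam) d/dx_new

residual = sp.diff(u, t) - Dx(u**n * Dx(u))
target = lam**(2 / n) * (sp.diff(v, t) - sp.diff(v**n * sp.diff(v, xt), xt))

# the residual equals lam**(2/n) times the same equation written for u_new,
# so the manifold defined by the equation is invariant under the scaling
print(sp.simplify(residual - target))    # -> 0
```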
We propose the following algorithm of the group classification for the class of BVPs (\[1\])–(\[4\]): - to construct the equivalence group $E_{\mathrm{eq}}$ of local transformations, which transform the governing system of equations into itself; - to extend the space of $E_{\mathrm{eq}}$ action on the variables $S = (S_1, ..., S_q)$ by adding the identity transformations for them (see the last formula in (\[7\])) and to denote the group obtained as $\widetilde{E}_{\mathrm{eq}}$; - to find the equivalence group $\widetilde{E}_{\mathrm{eq}}^{\mathrm{BVP}}$ of local transformations, which transform the class of BVPs (\[1\])–(\[4\]) into itself, one extends the space of the $\widetilde{E}_{\mathrm{eq}}$ action on the prolonged space, where all arbitrary elements arising in boundary conditions (\[2\])–(\[4\]) are treated as new variables. - to perform the group classification of the governing system (\[1\]) up to local transformations generated by the group $\widetilde{E}_{\mathrm{eq}}^{\mathrm{BVP}}$; - using Definition 1, to find the principal group of invariance $\widetilde{G}^{0}$, which is admitted by each BVP belonging to the class in question; - using Definition 1 and the results obtained at step (IV), to describe all possible $\widetilde{E}_{\mathrm{eq}}^{\mathrm{BVP}}$-inequivalent BVPs of the form (\[1\])–(\[4\]) admitting maximal invariance groups of higher dimensionality than $\widetilde{G}^{0}$. **Lie invariance of a class of (1+3)-dimensional nonlinear BVPs with free boundaries** ====================================================================================== **Mathematical model of melting and evaporation under a powerful flux of energy** --------------------------------------------------------------------------------- Consider the process of melting and evaporation in half-space $\Omega = \{\textbf{x} = (x_1, x_2, x_3): x_3 > 0\}$ occupied by a solid material, when its surface (initially it is the plane $x_3=0$) is exposed to a powerful flux of energy. We neglect the initial short-time non-equilibrium stage of the process and consider the process at the stage when three phases already take place and assume that this occurs at any moment $t \in \mathfrak{T}=(t_{\ast},+ \infty)$, where $t_{\ast}$ is a positive real number. Thus, the heat transfer domain $\Omega(t)= \Omega \times \mathfrak{T}$ consists of three sub-domains occupied by the gas, liquid and solid phases, which will be denoted by $\Omega_0(t)$, $\Omega_1(t)$ and $\Omega_2(t),$ respectively, and the phase division boundary surfaces, $S_1(t, \textbf{x})=0$ and $S_2(t, \textbf{x})=0$ (see Fig.1). 
In other words, the domain $\Omega(t)$ admits the disjoint decomposition ![A scheme for the process of melting and evaporation of a solid material which is exposed to a powerful energy flux.](pic1a.eps){width="8cm"} $$\Omega(t) = \Omega_0(t) \cup \Gamma_1(t) \cup \Omega_1(t) \cup \Gamma_2(t) \cup \Omega_2(t),$$ where $$\Gamma_k(t) = \{(t, \textbf{x}): S_k(t, \textbf{x}) = 0, \ t \in \mathfrak{T}, \ \textbf{x} \in \Omega \}, \ k = 1,2,$$ $$\Omega_0(t) = \{(t, \textbf{x}): S_1(t, \textbf{x}) < 0, \ S_2(t, \textbf{x}) < 0, \ t \in \mathfrak{T}, \ \textbf{x} \in \Omega \},$$ $$\Omega_1(t) = \{(t, \textbf{x}): S_1(t, \textbf{x}) > 0, \ S_2(t, \textbf{x}) < 0, \ t \in \mathfrak{T}, \ \textbf{x} \in \Omega \},$$ $$\Omega_2(t) = \{(t, \textbf{x}): S_1(t, \textbf{x}) > 0, \ S_2(t, \textbf{x}) > 0, \ t \in \mathfrak{T}, \ \textbf{x} \in \Omega \}.$$ Let us consider a class of (1+3)-dimensional nonlinear BVPs of the Stefan type used to describe melting and evaporation of materials in the case when their surface is exposed to a powerful flux of energy [@anisimov70; @ready; @gupta; @ch-od91]: $$\begin{aligned} & & \nabla \left(\lambda_{1}(T_{1}) \nabla T_1 \right) = C_{1}(T_{1}) \frac{\partial T_{1}}{\partial t}, \ \ (t, \textbf{x}) \in \Omega_1(t), \label{8} \\ & & \nabla \left(\lambda_{2}(T_{2}) \nabla T_2 \right) = C_{2}(T_{2}) \frac{\partial T_{2}}{\partial t}, \ \ (t, \textbf{x}) \in \Omega_2(t), \label{9} \\ & & \qquad S_{1}(t,\textbf{x}) = 0:\ \lambda_{1}(T_{v}) \frac{\partial T_{1}}{\partial \textbf{n}_1} = H_v \textbf{V}_1 \cdot \textbf{n}_1- \textbf{Q}(t) \cdot \textbf{n}_1, \ T_1 = T_v,\label{10} \\ & & \qquad S_{2}(t,\textbf{x}) = 0: \ \lambda_{2}(T_{m}) \frac{\partial T_{2}}{\partial \textbf{n}_2} = \lambda_{1}(T_{m}) \frac{\partial T_{1}}{\partial \textbf{n}_2} + H_m \textbf{V}_2 \cdot \textbf{n}_2,\ T_{1} = T_{2} = T_{m},\label{11} \\ & & \qquad |\textbf{x}| = +\infty: \ T_{2} = T_{\infty}, \ \ t \in \mathfrak{T},\label{12}\end{aligned}$$ where $T_v$, $T_{m}$ and $T_{\infty}$ are the known temperatures of evaporation, melting and solid phases of the material, respectively; $\lambda_{k}(T_k), \, k = 1,2$ are the positive thermal conductivities; $C_k(T_k)$, $H_v$, $H_m$ are the positive specific heat values per unit volume; $\textbf{Q}(t) = (Q_1(t),Q_2(t),Q_3(t))$ is the energy flux being absorbed by the material; $S_{k}(t,\textbf{x}) = 0, \, k = 1,2$ are the phase division boundary surfaces to be found; $\textbf{V}_k(t,\textbf{x}), \, k = 1,2$ are the phase division boundary velocities; $\textbf{n}_k, \, k = 1,2$ are the unit outward normals to the surfaces $S_k(t,\textbf{x}) = 0, \, k = 1,2$; $T_{k}(t,\textbf{x}), \, k = 1,2$ are the unknown temperature fields; $\nabla = \left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_3} \right)$; the subscripts $k = 1$ and $k =2$ correspond to the liquid and solid phases, respectively. Here equations (\[8\]) and (\[9\]) are basic and describe the heat transfer process in liquid and solid phases, the boundary conditions (\[10\]) present evaporation dynamics on the surface $S_{1} = 0$, and the boundary conditions (\[11\]) are the well-known Stefan conditions on the surface $S_{2} = 0$ dividing the liquid and solid phases. Since the liquid phase thickness is considerably less than the solid phase thickness, one may use the Dirichlet condition (\[12\]), where $T_{\infty}$ can be treated as the initial temperature of material. 
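It is perhaps worth recalling explicitly (this remark is ours and uses only standard facts about level-set representations of moving surfaces) how the geometric quantities in (\[10\])–(\[11\]) are expressed through the functions $S_k$. Choosing the orientation $\textbf{n}_k = \nabla S_k / |\nabla S_k|$, the normal velocity of the moving surface $S_k(t,\textbf{x}) = 0$ is
$$\textbf{V}_k \cdot \textbf{n}_k = - \frac{1}{|\nabla S_k|} \frac{\partial S_k}{\partial t}, \qquad k = 1, 2,$$
which follows from differentiating the identity $S_k(t, \textbf{x}(t)) = 0$ along the surface. With these formulae the boundary conditions (\[10\])–(\[11\]) can be rewritten entirely in terms of $S_k$ and the temperature fields.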
We also assume that the gas phase does not interact with liquid and solid phases; hence the problem in question does not involve any equation for the gas phase. From the mathematical and physical points of view, we should impose the same additional conditions on the functions and constants arising in the BVP class in question, which guarantee existing classical solutions. Namely, we assume that all functions in (\[8\])–(\[12\]) are sufficiently smooth; the free boundary surfaces $S_k(t, \textbf{x}) = 0$ satisfy the restrictions $\frac{\partial S_k}{\partial t} \neq 0$ and $|\nabla S_k| \neq 0$, $k = 1,2$; the projection of the heat flux vector $\textbf{Q}(t)$ on the normal $\textbf{n}_1$ is nonzero, i.e. $\textbf{Q}(t) \cdot \textbf{n}_1 \neq 0$, and $\textbf{V}_k \cdot \textbf{n}_k \neq~0, \ k=1,2 $. Finally, the constants $T_v$, $ T_m$ and $T_{\infty}$ satisfy the natural inequalities $T_v > T_m >T_{\infty}$. First of all, we simplify the governing system of equations (\[8\]) and (\[9\]) using the Goodman substitution [@kozdoba; @goodman] $$\label{13} u = \phi_1(T_1) \equiv \int\limits_0^{T_{1}} {C_{1}(\zeta)}\,d\zeta, \quad v = \phi_2(T_2) \equiv \int\limits_0^{T_{2}} {C_{2}(\xi)}\,d\xi.$$ Substituting (\[13\]) into (\[8\])–(\[12\]) and making the relevant calculations, we arrive at the equivalent class of BVPs $$\begin{aligned} & & \frac{\partial u}{\partial t} = \nabla \left(d_{1}(u) \nabla u \right), \ \ (t, \textbf{x}) \in \Omega_1(t),\label{14} \\ & & \frac{\partial v}{\partial t} = \nabla \left(d_{2}(v) \nabla v \right) , \ \ (t, \textbf{x}) \in \Omega_2(t),\label{15} \\ & & \qquad S_{1}(t,\textbf{x}) = 0:\ d_{1v} \frac{\partial u}{\partial \textbf{n}_1} = H_v \textbf{V}_1 \cdot \textbf{n}_1- \textbf{Q}(t) \cdot \textbf{n}_1, \ u = u_v,\label{16} \\ & & \qquad S_{2}(t,\textbf{x}) = 0: \ d_{2m} \frac{\partial v}{\partial \textbf{n}_2} = d_{1m} \frac{\partial u}{\partial \textbf{n}_2} + H_m \textbf{V}_2 \cdot \textbf{n}_2,\ u = u_m, \ v = v_m,\label{17} \\ & & \qquad |\textbf{x}| = +\infty: \ v = v_{\infty}, \ \ t \in \mathfrak{T},\label{18}\end{aligned}$$ where $u_v = \int\limits_0^{T_{v}} {C_{1}(\zeta)}\,d\zeta$, $u_m = \int\limits_0^{T_{m}} {C_{1}(\zeta)}\,d\zeta$, $v_m = \int\limits_0^{T_{m}} {C_{2}(\xi)}\,d\xi$, $v_{\infty} = \int\limits_0^{T_{\infty}} {C_{2}(\xi)}\,d\xi$; $d_1(u) = \frac{\lambda_1(\phi^{-1}_1(u))}{C_1(\phi^{-1}_1(u))}$, $d_2(v) = \frac{\lambda_2(\phi^{-1}_2(v))}{C_2(\phi^{-1}_2(v))}$; $d_{1v} = d_1(u_v)$, $d_{1m} = d_1(u_m)$, $d_{2m} = d_2(v_m)$ (here $\phi^{-1}_k, \, k=1,2$ are the inverse functions of $\phi_k$, the functions $d_1(u)$ and $d_2(v)$ are strictly positive and $u_v \neq u_m$, $v_m \neq v_{\infty}$). **Group classification of the basic equations (\[14\])–(\[15\])** ------------------------------------------------------------------ One sees that the BVP of the form (\[14\])–(\[18\]) consists of two standard NHEs with arbitrary smooth functions $d_1(u)$ and $d_2(v)$ and the boundary conditions (\[16\])–(\[18\]) containing an arbitrary vector function $\textbf{Q}(t)$ and a number of arbitrary parameters. Thus, we deal with a class of BVPs, and to carry out the group classification the algorithm formulated in Section 2 can be used. According to item (I), we need to find the group of equivalent transformations of the non-coupled system of NHEs (\[14\])–(\[15\]). Note that this group is well-known in the case of a single NHE (see, e.g., [@dor-svi]). 
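As a brief aside, the substitution (\[13\]) is easy to carry out explicitly for simple material laws. The constitutive functions in the sketch below are placeholders chosen only for illustration (they are not taken from a real material or from the original problem):

```python
# Hedged sketch (sympy): the substitution (13) for placeholder material laws
# C_1(T) = c_0 (constant) and lambda_1(T) = lambda_0*T.
import sympy as sp

T, u = sp.symbols('T u', positive=True)
c0, lam0 = sp.symbols('c_0 lambda_0', positive=True)

C1 = c0                                   # placeholder specific heat per unit volume
lam1 = lam0 * T                           # placeholder thermal conductivity

phi1 = sp.integrate(C1, (T, 0, T))        # u = phi_1(T) = c_0*T
T_of_u = sp.solve(sp.Eq(u, phi1), T)[0]   # inverse function: T = u/c_0

d1 = sp.simplify((lam1 / C1).subs(T, T_of_u))
print(d1)   # lambda_0*u/c_0**2
```

For these placeholder laws the diffusivity $d_1(u)$ turns out to be proportional to $u$, i.e. it falls into the power-law case of Table \[tab1\] below with $n = 1$.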
However, one cannot extend this result in the case of system (\[14\])–(\[15\]) in a formal way because the group obtained may be incomplete. Thus, we carefully check the group of equivalent transformations for the non-coupled system (\[14\])–(\[15\]). The equivalence transformations group $E_{\mathrm{eq}}$ of system (\[14\])–(\[15\]) consists of the group $\mathcal{E}_{\mathrm{eq}}$ of continuous equivalence transformations $$\begin{aligned} & & \nonumber \bar{t} = \alpha t + \gamma_0, \ \bar{X} = \beta A_i(\beta_1) A_j(\beta_2) A_k(\beta_3) X + \Gamma \ (i,j,k = 1,2,3; \ i \neq j, i \neq k, j \neq k), \\ & & \nonumber \bar{u} = \delta_1 u + \gamma_4, \ \bar{v} = \delta_2 v + \gamma_5, \ \bar{d}_1 = \frac{\beta^2}{\alpha} \ d_1, \ \bar{d}_2 = \frac{\beta^2}{\alpha} \ d_2,\end{aligned}$$ and the group of discrete equivalence transformations - $t \rightarrow -t$,  $x_i \rightarrow (-1)^jx_i \ (i = 1, 2, 3, \, j=0,1)$,  $u \rightarrow -u$,  $v \rightarrow -v$,  $d_1 \rightarrow -d_1$,  $d_2 \rightarrow -d_2$; - $t \rightarrow t$,  $x_i \rightarrow (-1)^jx_i \ (i = 1, 2, 3, \, j=0,1)$,  $u \rightarrow v$,  $v \rightarrow u$,  $d_1 \rightarrow d_2$,  $d_2 \rightarrow d_1$. Here $\alpha>0, \beta>0, \beta_1, \ldots, \beta_3, \gamma_0, \ldots, \gamma_5, \delta_1>0, \delta_2>0$ are arbitrary constants; $$A_1(\theta) = \left (\begin{array}{ccc} \cos\theta & \sin\theta & 0 \\ - \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{array} \right ), A_2(\theta) = \left (\begin{array}{ccc} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{array} \right), A_3(\theta) = \left (\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & - \sin\theta & \cos\theta \end{array} \right)$$ are the matrix of rotation in space, and $$X = \left (\begin{array}{c} x_1 \\ x_2 \\ x_3 \end{array} \right), \ \Gamma = \left (\begin{array}{c} \gamma_1 \\ \gamma_2 \\ \gamma_3 \end{array} \right).$$ [**Proof.**]{} To find the group $\mathcal{E}_{\mathrm{eq}}$, we use the standard infinitesimal method [@ovs; @ibr91], i.e., search for the generators of the form $$Y = \xi^0 \frac{\partial}{\partial t}+\xi^a \frac{\partial}{\partial x_a} + \eta^1 \frac{\partial}{\partial u} + \eta^2 \frac{\partial}{\partial v} + \mu^1 \frac{\partial}{\partial d_1} + \mu^2 \frac{\partial}{\partial d_2},$$ where $a = 1, 2, 3$ (hereafter, summation is assumed from 1 to 3 over repeated indices $a$ or $b$). The generator $Y$ defines the group $\mathcal{E}_{\mathrm{eq}}$ of equivalence transformations $$\bar{t} = \phi(t, \textbf{x}, u, v), \ \bar{x}_a = \psi_a(t, \textbf{x}, u, v), \ \bar{u} = \Phi_1(t, \textbf{x}, u, v), \ \bar{v} = \Phi_2(t, \textbf{x}, u, v),$$ $$\bar{d_1} = \Psi_1(t, \textbf{x}, u, v, d_1, d_2), \ \bar{d_2} = \Psi_2(t, \textbf{x}, u, v, d_1, d_2)$$ for the class of systems (\[14\])–(\[15\]) iff $Y$ obeys the condition of invariance of the following system: $$\label{lem1} \frac{\partial u}{\partial t} - \nabla \left(d_{1} \nabla u \right) = 0, \ \frac{\partial d_1}{\partial t} = 0, \ \frac{\partial d_1}{\partial x_a} = 0, \ \frac{\partial d_1}{\partial v} = 0,$$ $$\label{lem2} \frac{\partial v}{\partial t} - \nabla \left(d_{2} \nabla v \right) = 0, \ \frac{\partial d_2}{\partial t} = 0, \ \frac{\partial d_2}{\partial x_a} = 0, \ \frac{\partial d_2}{\partial u} = 0.$$ Here, the variables $u$ and $v$ are considered in the space $(t, \textbf{x})$, while $d_k$ in the extended space $(t, \textbf{x}, u, v)$. 
The coordinates $\xi^0, \xi^a$ and $\eta^k$ of the operator $Y$ are the functions of the variables $t, \textbf{x}, u, v$, while the coordinates $\mu^k$ are the functions of $t, \textbf{x}, u, v, d_1, d_2$. Thus, the invariance criterium for system (\[lem1\])–(\[lem2\]) is given by the formulae $$\label{lem3} Y^{(2)} \left.\left(\frac{\partial u}{\partial t} - \nabla \left(d_{1} \nabla u \right)\right)\right\vert_{\cal S} = 0, \ Y^{(2)} \left.\left(\frac{\partial d_1}{\partial t}\right)\right\vert_{\cal S} = 0, \ Y^{(2)} \left.\left(\frac{\partial d_1}{\partial x_a}\right)\right\vert_{\cal S} = 0, \ Y^{(2)} \left.\left(\frac{\partial d_1}{\partial v}\right)\right\vert_{\cal S} = 0,$$ $$\label{lem4} Y^{(2)} \left.\left(\frac{\partial v}{\partial t} - \nabla \left(d_{2} \nabla v \right)\right)\right\vert_{\cal S} = 0, \ Y^{(2)} \left.\left(\frac{\partial d_2}{\partial t}\right)\right\vert_{\cal S} = 0, \ Y^{(2)} \left.\left(\frac{\partial d_2}{\partial x_a}\right)\right\vert_{\cal S} = 0, \ Y^{(2)} \left.\left(\frac{\partial d_2}{\partial v}\right)\right\vert_{\cal S} = 0,$$ where $S$ is the manifold, defined by system (\[lem1\])–(\[lem2\]), $Y^{(2)}$ is the second prolongation of the operator $Y$ calculated via the known formulae [@ovs; @ibr91]. Using these formulae and (\[lem3\])–(\[lem4\]) and carrying out the relevant calculations one obtains the 13-dimensional Lie algebra with the basic operators $$Y_1 = \frac{\partial}{\partial t}, \ Y_2 = \frac{\partial}{\partial x_1}, \ Y_3 = \frac{\partial}{\partial x_2}, \ Y_4 = \frac{\partial}{\partial x_3},$$ $$Y_5 = x_2 \frac{\partial}{\partial x_1} - x_1 \frac{\partial}{\partial x_2}, \ Y_6 = x_3 \frac{\partial}{\partial x_1} - x_1 \frac{\partial}{\partial x_3}, \ Y_7 = x_3 \frac{\partial}{\partial x_2} - x_2 \frac{\partial}{\partial x_3},$$ $$Y_8 = \frac{\partial}{\partial u}, \ Y_9 = \frac{\partial}{\partial v}, \ Y_{10} = u \frac{\partial}{\partial u}, \ Y_{11} = v \frac{\partial}{\partial v},$$ $$Y_{12} = t \frac{\partial}{\partial t} - d_1 \frac{\partial}{\partial d_1} - d_2 \frac{\partial}{\partial d_2}, \ Y_{13} = x_a \frac{\partial}{\partial x_a} + 2 d_1 \frac{\partial}{\partial d_1} + 2 d_2 \frac{\partial}{\partial d_2}.$$ One easily checks that this algebra generates the group $\mathcal{E}_{\mathrm{eq}}$. Finally, to obtain the group of equivalence transformations $E_{\mathrm{eq}}$, we should add the discrete transformations 1) and 2) listed in Lemma 1. It is easy to verify by the direct calculations that system (\[14\])–(\[15\]) is invariant under these discrete transformations. The proof is now complete. $\blacksquare$ According to item (II) of the algorithm, we should now construct the group $\widetilde{E}_{\mathrm{eq}}$ by adding the identity transformations $\widetilde{S}_1=S_1, \widetilde{S}_2=S_2$. To realize item (III), one needs to apply the transformations generated by the group $\widetilde{E}_{\mathrm{eq}}$ to the boundary conditions (\[16\])–(\[18\]) and to find the group $\widetilde{E}^{\mathrm{BVP}}_{\mathrm{eq}}$. The result obtained is formulated as follows. 
The class of BVPs (\[14\])–(\[18\]) admits the group of equivalence transformations $\widetilde{E}^{\mathrm{BVP}}_{\mathrm{eq}}$: $$\begin{aligned} & & \nonumber \tilde{t} = \alpha t + \gamma_0, \ \widetilde{X} = \beta A_i(\beta_1) A_j(\beta_2) A_k(\beta_3) X + \Gamma \ (i,j,k = 1,2,3; \ i \neq j, i \neq k, j \neq k),\\ & & \nonumber \tilde{u} = \delta_1 u + \gamma_4, \ \tilde{v} = \delta_2 v + \gamma_5, \ \widetilde{S}_1 = S_1, \ \widetilde{S}_2 = S_2, \ \tilde{d}_1 = \frac{\beta^2}{\alpha} \ d_1, \ \tilde{d}_2 = \frac{\beta^2}{\alpha} \ d_2, \\ & & \nonumber \tilde{d}_{1v} = \frac{\beta}{\delta_1} \ d_{1v}, \ \tilde{d}_{1m} = \frac{\beta}{\delta_1} \ d_{1m}, \ \tilde{d}_{2m} = \frac{\beta}{\delta_2} \ d_{2m}, \ \widetilde{H}_v = \frac{\alpha}{\beta} H_v, \ \widetilde{H}_m = \frac{\alpha}{\beta} H_m, \\ & & \nonumber \tilde{u}_v = \delta_1 u_v + \gamma_4, \ \tilde{u}_m = \delta_1 u_m + \gamma_4, \ \tilde{v}_m = \delta_2 v_m + \gamma_5, \ \tilde{v}_{\infty} = \delta_2 v_{\infty} + \gamma_5\\ & & \nonumber \widetilde{Q} = A_i(\beta_1) A_j(\beta_2) A_k(\beta_3) Q \ (i,j,k = 1,2,3; \ i \neq j, i \neq k, j \neq k),\end{aligned}$$ with arbitrary coefficients $\alpha, \beta, \beta_1, \ldots, \beta_3, \gamma_0, \ldots, \gamma_5, \delta_1, \delta_2$ obeying only the condition $$\alpha \beta \delta_1 \delta_2 \neq 0.$$ =5 pt ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [no]{} [$d_1(u)$]{} [$d_2(v)$]{} [Basic operators of MAI]{} -------- -------------------- -------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1. $\forall$ $\forall$ $ AE(1,3) = \langle \partial_{t}, \partial_{x_a}, x_a \partial_{x_b} - x_b \partial_{x_a}, 2t \partial_{t} + x_a \partial_{x_a} \rangle$ 2. $k_1$ $\forall$ $ AE(1,3), u \partial_u, \alpha(t,\textbf{x}) \partial_u $ 3. $\forall$ $k_2$ $ AE(1,3), v \partial_v, \beta(t,\textbf{x}) \partial_v $ 4. $e^u$ $e^v$ $ AE(1,3), x_a \partial_{x_a} + 2 \partial_{u} + 2 \partial_{v}$ 5. $e^u$ $v^m$ $ AE(1,3), x_a \partial_{x_a} + 2 \partial_u + \frac{2}{m} v \partial_{v} $ 6. $u^n$ $e^v$ $ AE(1,3), x_a \partial_{x_a} + \frac{2}{n} u \partial_u + 2 \partial_{v} $ 7. $u^n$ $v^m$ $ AE(1,3), D=x_a \partial_{x_a} + \frac{2}{n} u \partial_u + \frac{2}{m} v \partial_{v} $ 8. $u^{-\frac{4}{5}}$ $v^{-\frac{4}{5}}$ $ AE(1,3), |\textbf{x}|^2 \partial_{x_b} - 2 x_b x_a \partial_{x_a} + 5 x_b u \partial _u + 5 x_b v \partial_v, \, D$ with $m=n= -\frac{4}{5} $ 9. $k_1$ $k_2$ $AE(1,3), u \partial_u, v \partial_v, \alpha(t,\textbf{x}) \partial_u, \beta(t,\textbf{x}) \partial_v, G_a=t \partial_{x_a} - x_a \left(\frac{1}{2k_1} u \partial_u + \frac{1}{2k_2}v \partial_v \right) $, $\Pi= t^2 \partial_t + t x_a \partial_{x_a} - \frac{1}{4k_1}(|\textbf{x}|^2 + 6 k_1 t) u \partial_u - \frac{1}{4k_2}(|\textbf{x}|^2 + 6 k_2 t) v \partial_v$ 10. 
$k_1$ $k_1$ $AE(1,3), u \partial_u, v \partial_v, v \partial_u, u \partial_v, \alpha(t,\textbf{x}) \partial_u, \beta(t,\textbf{x}) \partial_v, G_a$ and $\Pi$ with $ k_2=k_1$ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- : Lie algebras of the NHE system (\[14\])–(\[15\]) ($k_1, k_2, m$ and $n$ are arbitrary non-zero constants, $b < a = 1, 2, 3$; while $\alpha(t,x)$ and $\beta(t,x)$ are arbitrary solutions of the linear heat equations $\alpha_t = k_1 \alpha_{x x}$ and $\beta_t = k_2 \beta_{x x}$, respectively.)[]{data-label="tab1"} According to item (IV) of the algorithm, we should now perform the group classification of the governing system (\[14\])–(\[15\]) up to local transformations generated by the group $\widetilde{E}_{\mathrm{eq}}^{\mathrm{BVP}}$. The result can be presented as follows. All possible maximal algebras of invariance (MAIs) (up to equivalent representations generated by transformations from the group $\widetilde{E}^{\mathrm{BVP}}_{\mathrm{eq}}$) of system (\[14\])–(\[15\]) for any fixed vector function $(d_1, d_2)$ with strictly positive functions $d_1(u)$ and $d_2(v)$ are presented in Table \[tab1\]. Any other system of the form (\[14\])–(\[15\]) is reduced to one of those with diffusivities from Table \[tab1\] by an equivalence transformation from the group $\widetilde{E}^{\mathrm{BVP}}_{\mathrm{eq}}$. [**Proof.**]{} First of all, we remind the reader that a complete description of Lie symmetries for the class of multi-dimensional nonlinear reaction-diffusion systems $$\begin{aligned} & & \nonumber \frac{\partial u}{\partial t} = \nabla \left(d_{1}(u) \nabla u \right) + F_1(u,v), \\ & & \nonumber \frac{\partial v}{\partial t} = \nabla \left(d_{2}(v) \nabla v \right) + F_2(u,v),\end{aligned}$$ where $F_1$ and $F_2$ are arbitrary nonzero smooth functions, was obtained in [@ch-king00; @ch-king03] (the case of constant diffusivities) and [@ch-king06] (the case of non-constant diffusivities). Although the detailed examination of the special case $F_1 = F_2 = 0$ was omitted in the papers [@ch-king00; @ch-king03; @ch-king06], we can use the relevant system of determining equations with $F_1 = F_2 = 0$ to find all possible MAIs of system (\[14\])–(\[15\]). Let us assume that the MAI in question is generated by the infinitesimal operator $$X = \xi^0(t, \textbf{x}, u, v) \frac{\partial}{\partial t}+\xi^a(t, \textbf{x}, u, v) \frac{\partial}{\partial x_a} + \eta^1(t, \textbf{x}, u, v) \frac{\partial}{\partial u} + \eta^2(t, \textbf{x}, u, v) \frac{\partial}{\partial v},$$ where $ \xi^0, \ \xi^a, \ \eta^1$ and $ \eta^2$ are unknown smooth functions. The form of the system of determining equations to find these functions essentially depends on the derivatives $\frac{d d_1}{d u}$ and $\frac{d d_2}{d v}$. Let us consider the most general case $\frac{d d_1}{d u} \cdot \frac{d d_2}{d v} \neq 0$.
The relevant system of determining equations has the form (see formulae (11)–(16) with $F_1 =F_ 2 = 0$ in [@ch-king06]) $$\label{p6} \xi^0_{x_a} = \xi^0_{u} = \xi^0_{v} = 0, \ \xi^a_{u} = \xi^a_{v} = \eta^1_v = \eta^2_u = 0, \ a = 1, 2, 3,$$ $$\label{p7} \xi^1_{x_1} = \xi^2_{x_2} = \xi^3_{x_3}, \ \xi^a_{x_b} + \xi^b_{x_a} = 0, \ b < a = 1, 2, 3,$$ $$\label{p8} \xi^0_t = 2 \xi^a_{x_a} - \eta^1 \frac{d}{d u} \log d_1, \ \xi^0_t = 2 \xi^a_{x_a} - \eta^2 \frac{d}{d v} \log d_2, \ a = 1, 2, 3,$$ $$\label{p9} 2 \frac{\partial^2 \eta^1}{\partial x_a \partial u} + 2 \eta^1_{x_a} \frac{d}{d u} \log d_1 = \Delta \xi^a - \xi^a_t \left(d_1 \right)^{-1}, \ 2 \frac{\partial^2 \eta^2}{\partial x_a \partial v} + 2 \eta^2_{x_a} \frac{d}{d v} \log d_2 = \Delta \xi^a - \xi^a_t \left(d_2 \right)^{-1},$$ $$\label{p10} 2 \xi^a_{x_a} - \xi^0_t - \eta^1_u = \eta^1_{u u} \left(\frac{d}{d u} \log d_1 \right)^{-1} + \eta^1 \frac{d}{d u} \log \frac{d d_1}{d u}, \ a = 1, 2, 3,$$ $$\label{p11} 2 \xi^a_{x_a} - \xi^0_t - \eta^2_v = \eta^2_{v v} \left(\frac{d}{d v} \log d_2 \right)^{-1} + \eta^2 \frac{d}{d v} \log \frac{d d_2}{d v}, \ a = 1, 2, 3,$$ $$\label{p12} \eta^1_t - \Delta \eta^1 d_1 = 0, \ \eta^2_t - \Delta \eta^2 d_2 = 0.$$ If $d_1(u)$ and $d_2(v)$ are the arbitrary smooth functions, then system (\[p6\])–(\[p12\]) can be easily solved resulting the eight-dimensional Lie algebra $AE(1,3)$ with the basic operators $$P_t = \frac{\partial}{\partial t}, \ P_a = \frac{\partial}{\partial x_a}, \ J_{a b} = x_a P_b - x_b P_a, \ D_0 = 2 t \frac{\partial}{\partial t} + x_a \frac{\partial}{\partial x_a}, \, b < a = 1, 2, 3.$$ Now, we should find all possible pairs of the function $d_1(u)$ and $d_2(v)$ leading to extensions of the algebra $AE(1,3)$. It is evident that equations (\[p6\]) and (\[p8\]) can be easily solved and the functions $$\label{p13} \xi^0 = A(t) , \ \xi^a = B^a(t,x), \ \ a = 1, 2, 3,$$ $$\label{p14} \eta^1 = (2 B^a_a - A_t) \left(\frac{d}{d u} \log d_1 \right)^{-1}, \ \eta^2 = (2 B^a_a - A_t) \left(\frac{d}{d v} \log d_2 \right)^{-1}, \ \ a = 1, 2, 3$$ are obtained (here $A$ and $B^a$ are the arbitrary functions at the moment). However, the functions $B^a$ can be specified using (\[p7\]) as follows: $$\label{p15} B^a = c_{0a} + c_{ab} x_b + k_a(t) |\textbf{x}|^2 - 2 k_b(t) x_b x_a, \ a, b = 1, 2, 3,$$ where $c_{ab} + c_{ba} = 0 \, (a \neq b), c_{11} = \ldots = c_{33}$ and $c_{0a}, c_{ab}, k_a(t), k_b(t)$ are arbitrary functions. Substituting formulae (\[p13\]) and (\[p14\]) in (\[p10\]) and (\[p11\]) we arrive at the system of classification equations to find pairs of the functions $(d_1, d_2)$: $$\label{p16} \frac{d^2}{du^2} \left(\frac{d}{d u} \log d_1 \right)^{-1} = 0, \ \frac{d^2}{dv^2} \left(\frac{d}{d v} \log d_2 \right)^{-1} = 0,$$ otherwise, the conditions $$\label{p17} 2 B^a_a - A_t = 0, \ a = 1, 2, 3,$$ must hold. However, substituting (\[p15\]) in conditions (\[p17\]) and taking into account (\[p9\]) and (\[p13\])–(\[p14\]) we immediately obtain only the Lie algebra $AE(1,3)$. System (\[p16\]), which consists of two independent ODEs, can be easy solved: $$\label{p18} d_1(u) = \left \{\begin{array}{lll} D_1 (u + C_1)^{\alpha_1}, \\ D_1 \exp(\alpha_1 u), \end{array} \right.$$ $$\label{p19} d_2(v) = \left \{\begin{array}{lll} D_2 (v + C_2)^{\alpha_2}, \\ D_2 \exp(\alpha_1 v), \end{array} \right.$$ where $D_k \neq 0, \ \alpha_k \neq 0$ and $C_k$ are arbitrary constants ($k = 1, 2$). 
Substituting the functions $\xi^0, \xi^a, \eta^1$ and $\eta^2$ from (\[p13\])–(\[p14\]) into (\[p9\]), we arrive at the equations $$B_t^a = - \Delta B^a \left (5 + 4 \frac{d^2}{d u^2} \log d_1 \right) \frac{d d_1}{d u}, \ \ B_t^a = - \Delta B^a \left (5 + 4 \frac{d^2}{d v^2} \log d_2 \right) \frac{d d_2}{d v}.$$ Since the functions $B^a, \ (a = 1, 2, 3)$ depend only on $t$ and $x$, there are only two possibilities. The first one is $\Delta B^a = 0$; then, applying (\[p15\]), we obtain $$B^a = c_{a b} x_b + c_{a 0}, \ \ c_{a b}, c_{a 0} \in \mathbb{R}.$$ The second possibility, with $\Delta B^a \neq 0$, requires $$\label{p20} \left (5 + 4 \frac{d^2}{d u^2} \log d_1 \right) \frac{d d_1}{d u} = m_1, \ \ \left (5 + 4 \frac{d^2}{d v^2} \log d_2 \right) \frac{d d_2}{d v} = m_2,$$ where $m_1$ and $m_2$ are some constants. Using (\[p18\])–(\[p19\]), it is easily seen that conditions (\[p20\]) can be satisfied only for $m_1 = m_2 = 0$, and then $$\label{p21} d_1(u) = D_1 \left(u + C_1 \right)^{- \frac{4}{5}}, \ \ d_2(v) = D_2 \left(v + C_2 \right)^{- \frac{4}{5}}.$$

Thus, we have obtained all possible forms of the functions $d_1(u)$ and $d_2(v)$, namely formulae (\[p18\])–(\[p19\]) and (\[p21\]), leading to extensions of the invariance algebra $AE(1,3)$ of the nonlinear system (\[14\])–(\[15\]) when both diffusivities are non-constant. Finally, taking into account the point transformations from the group $\widetilde{E}^{\mathrm{BVP}}_{\mathrm{eq}}$, we immediately obtain cases 4–8 from Table \[tab1\]. The other possibility, when at least one of the functions $d_1(u)$ and $d_2(v)$ is constant, was examined in a similar way, and cases 2, 3 and 9, 10 from Table \[tab1\] were obtained. The proof is now complete. $\blacksquare$

Cases 2 and 5 from Table \[tab1\] are equivalent to 3 and 6, respectively, if one takes into account the discrete transformations 2) from the group $E_{\mathrm{eq}}$. However, these transformations do not belong to the group $\widetilde{E}^{\mathrm{BVP}}_{\mathrm{eq}}$, because the boundary conditions (\[16\]) and (\[18\]) are not invariant under them.

**Group classification of the class of BVPs (\[14\])–(\[18\])**
----------------------------------------------------------------

We note that each MAI from Table \[tab1\] generates the corresponding maximal group of invariance (MGI), whose transformations can be easily derived from the basic generators listed therein. These transformations are well known and are used below. Taking this into account and following items (V) and (VI) of the algorithm, we formulate the main theorem giving the complete list of Lie symmetries of the BVP class (\[14\])–(\[18\]).

The BVP of the form (\[14\])–(\[18\]), with arbitrary given functions $d_1(u)$, $d_2(v)$ ($d_1(u)\not= d_2(v)$) and $Q_a(t), \ a = 1, 2, 3$, is invariant under the three-parameter Lie group (the trivial Lie group) presented in case 1 of Table \[tab2\]. The MGI of any BVP of the form (\[14\])–(\[18\]) does not depend on the form of $d_1(u)$ and $d_2(v)$. There are only five BVPs from class (\[14\])–(\[18\]), with correctly specified functions $Q_a(t), \ a = 1, 2, 3$, admitting an MGI of higher dimensionality, namely four- or five-parameter Lie groups of invariance (up to equivalent representations generated by equivalence transformations from the group $\widetilde{E}^{\mathrm{BVP}}_{\mathrm{eq}}$). These MGIs and the relevant functions $Q_a(t), \ a = 1, 2, 3$ are presented in cases 2–6 of Table \[tab2\].
[no]{} [$Q_1(t)$]{} [$Q_2(t)$]{} [$Q_3(t)$]{} [MGI]{} -------- ----------------------------------------------------------------------- ----------------------------------------------------------------------- ----------------------- -------------------------------------------------------------------------------------------- 1. $ \forall $ $ \forall $ $ \forall $ $ \widetilde{T}_1, \widetilde{T}_2, \widetilde{T}_3 $ 2. $0$ $0$ $q(t)$ $ \widetilde{T}_1, \widetilde{T}_2, \widetilde{T}_3, \widetilde{T}_5 $ 3. $\Theta_1(\lambda t)$ $\Theta_2(\lambda t)$ $q_3$ $ \widetilde{T}_1, \widetilde{T}_2, \widetilde{T}_3, \widetilde{T}_{6}$ 4. $\frac{1}{\sqrt t} \ \Theta_1\left(\frac{1}{2} \lambda \log t\right)$ $\frac{1}{\sqrt t} \ \Theta_2\left(\frac{1}{2} \lambda \log t\right)$ $\frac{q_3}{\sqrt t}$ $ \widetilde{T}_1, \widetilde{T}_2, \widetilde{T}_3, \widetilde{T}_{7}$ 5. $0$ $0$ $q$ $ \widetilde{T}_0, \widetilde{T}_1, \widetilde{T}_2, \widetilde{T}_{3}, \widetilde{T}_{5}$ 6. $0$ $0$ $\frac{q}{\sqrt t}$ $ \widetilde{T}_1, \widetilde{T}_2, \widetilde{T}_3, \widetilde{T}_{4}, \widetilde{T}_{5}$ : Lie invariance of BVP (\[14\])–(\[18\])[]{data-label="tab2"} In Table \[tab2\], the following designations are used: $q \neq 0$, $q_3$, $\lambda$ are arbitrary constants, $q(t) \neq 0$ is an arbitrary function while the functions $$\Theta_1(\tau) = q_1 \cos{\tau} + q_2 \sin{\tau}, \ \ \Theta_2(\tau) = - q_1 \sin{\tau} + q_2 \cos{\tau},$$ where $q_1$, $q_2$ are arbitrary constants satisfying the condition $q_1^2 + q_2^2 \neq 0$ if $\tau \neq 0$. The explicit form of the transformations generating the MGI are: $$\widetilde{T}_0: t^{\ast} = t + \varepsilon_0, \ x_1^{\ast} = x_1, \ x_2^{\ast} = x_2, \ x_3^{\ast} = x_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_1: t^{\ast} = t, \ x_1^{\ast} = x_1 + \varepsilon_1, \ x_2^{\ast} = x_2, \ x_3^{\ast} = x_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_2: t^{\ast} = t, \ x_1^{\ast} = x_1, \ x_2^{\ast} = x_2 + \varepsilon_2, \ x_3^{\ast} = x_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_3: t^{\ast} = t, \ x_1^{\ast} = x_1, \ x_2^{\ast} = x_2, \ x_3^{\ast} = x_3 + \varepsilon_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_4: t^{\ast} = e^{2 \varepsilon_4} t, \ x_1^{\ast} = e^{\varepsilon_4} x_1, \ x_2^{\ast} = e^{\varepsilon_4} x_2, \ x_3^{\ast} = e^{\varepsilon_4} x_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_5: t^{\ast} = t, \ x_1^{\ast} = \theta_1(\varepsilon_5), \ x_2^{\ast} = \theta_2(\varepsilon_5), \ x_3^{\ast} = x_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_6: t^{\ast} = t + \varepsilon_6, \ x_1^{\ast} = \theta_1(\lambda \varepsilon_6), \ x_2^{\ast} = \theta_2(\lambda \varepsilon_6), \ x_3^{\ast} = x_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_7: t^{\ast} = e^{2 \varepsilon_7}t, \ x_1^{\ast} = e^{\varepsilon_7}\theta_1(\lambda \varepsilon_7), \ x_2^{\ast} = e^{\varepsilon_7}\theta_2(\lambda \varepsilon_7), \ x_3^{\ast} =e^{\varepsilon_7} x_3, \ u^{\ast} = u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ where $$\theta_1(\tau) = x_1 \cos{\tau} + x_2 \sin{\tau}, \ \ \theta_2(\tau) = - x_1 \sin{\tau} + x_2 \cos{\tau}.$$ [**Proof.**]{} Let us consider the case of the arbitrary function $d_1(u)$ and $d_2(v)$. 
According to Theorem 1 (see case 1 of Table 1), in this case the NHE system (\[14\])–(\[15\]) admits the eight-parameter group $G_8^{1}$ of invariance generated by the groups $T_0, T_1, T_2, T_3, T_4$ and the rotation groups $$T_{{ab}}: \ t^{\ast} = t, \ x_a^{\ast} = - x_b \sin\varepsilon_{ab} + x_a \cos\varepsilon_{ab}, \ x_b^{\ast} = x_b \cos\varepsilon_{ab} + x_a \sin\varepsilon_{ab}, \ x_c^{\ast} = x_c, \ u^{\ast} = u, \ v^{\ast} = v.$$ where $b < a = 1, 2, 3, \ c \neq a, \ c \neq b$, and $\varepsilon_{ab}$ are the group parameters. Since the BVP of the form (\[14\])–(\[18\]) has two free boundaries, $S_1(t, \textbf{x}) = 0$ and $S_2(t, \textbf{x}) = 0$, we need to extend the group $G_8^{1}$ by adding identical transformations for the new variables $S_1^{\ast} = S_1$ and $S_2^{\ast} = S_2$. We will denote the obtained group by $\widetilde{G}_8^{1}$ (the relevant sub-groups will be denoted in the same way, for instance, $\widetilde{T}_{{31}}$). By straightforward calculations, one easily checks that the boundary conditions (\[17\]) and (\[18\]) are invariant under the group $\widetilde{G}_8^{1}$. The situation is essentially different if one examines the invariance of the boundary condition (\[16\]) with respect to $\widetilde{G}_8^{1}$. To simplify calculations, the boundary conditions (\[16\]) should be rewritten in the form (see the monograph [@crank84], P. 18) $$\label{pp1} S_1(t, \textbf{x}) = 0: \ d_{1v} \nabla u \cdot \nabla S_1 = - H_v \frac{\partial S_1}{\partial t} - \textbf{Q}(t) \cdot \nabla S_1, \ u = u_v$$ (we remind the reader that $|\nabla S_k| \neq 0$, $k = 1, 2$). It will be shown below that the form of the vector function $ \textbf{Q}(t) $ in (\[pp1\]) plays a crucial role. First of all, we examine the invariance with respect to the one-parameter Lie groups forming the group $\widetilde{G}_8^{1}$. Obviously, conditions (\[pp1\]) are invariant with respect to the groups $\widetilde{T}_1, \widetilde{T}_2$ and $\widetilde{T}_3$ for the [*arbitrary*]{} smooth vector function $\textbf{Q}(t)$ . To be invariant under the group $\widetilde{T}_0$, the conditions must take place $$\label{pp1a} \left. d_{1v} \nabla u^{\ast} \cdot \nabla S_1^{\ast} + H_v \frac{\partial S_1^{\ast}}{\partial t^{\ast}} + \textbf{Q}(t^{\ast}) \cdot \nabla S_1^{\ast} \right \vert_{(39)}= 0, \ \left. u^{\ast} - u_v \right \vert_{(39)} = 0,$$ which lead to the requirement $$\label{pp2} Q_a(t + \varepsilon_0) = Q_a(t), \ a = 1, 2, 3.$$ Since (\[pp2\]) must hold for arbitrary real values of $t$ and $\varepsilon_T$, we conclude that $$\label{pp3} Q_a(t) = q_a, \ a = 1, 2, 3,$$ where $q_a$ are arbitrary constants. Thus, the BVP of the form (\[14\])–(\[18\]) is invariant with respect to the Lie group $\widetilde{T}_0$ if and only if conditions (\[pp3\]) take place. In a quite similar way, one examines the group $\widetilde{T}_4$ and, as a result, arrives at the requirement $$Q_a(t e^{2 \varepsilon_4}) e^{\varepsilon_4} = Q_a(t), \ a = 1, 2, 3$$ what implies $$\label{pp4} Q_a(t) = \frac{q_a}{\sqrt{t}}, \ a = 1, 2, 3.$$ Thus, the BVP of the form (\[14\])–(\[18\]) is invariant with respect to the Lie group $\widetilde{T}_4$ if and only if conditions (\[pp4\]) hold. 
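The scaling requirement $Q_a(t e^{2 \varepsilon_4}) e^{\varepsilon_4} = Q_a(t)$ and its solution (\[pp4\]) admit an elementary symbolic sanity check. A minimal sketch (Python/SymPy; an illustration only, not part of the proof):

```python
import sympy as sp

t, q, eps = sp.symbols('t q varepsilon_4', positive=True)

Q = q / sp.sqrt(t)   # candidate flux component from (pp4)
# invariance requirement under T_4:  Q(t * exp(2*eps)) * exp(eps) = Q(t)
print(sp.simplify(Q.subs(t, t * sp.exp(2 * eps)) * sp.exp(eps) - Q))   # expected: 0
```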
Prior to examine the invariance of the boundary condition (\[16\]) with respect to the group $\widetilde{T}_{{ab}}$, we find how the first derivations of the variables $u$ and $S_1$ are transformed: $$\label{pp5} \frac{\partial u^{\ast}}{\partial x_a^{\ast}} = - \frac{\partial u}{\partial x_b} \sin\varepsilon_{ab} + \frac{\partial u}{\partial x_a} \cos\varepsilon_{ab}, \ \frac{\partial u^{\ast}}{\partial x_b^{\ast}} = \frac{\partial u}{\partial x_b} \cos\varepsilon_{ab} + \frac{\partial u}{\partial x_a} \sin\varepsilon_{ab}, \ \frac{\partial u^{\ast}}{\partial x_c^{\ast}} = \frac{\partial u}{\partial x_c},$$ $$\label{pp6} \frac{\partial S_1^{\ast}}{\partial x_a^{\ast}} = - \frac{\partial S_1}{\partial x_b} \sin\varepsilon_{ab} + \frac{\partial S_1}{\partial x_a} \cos\varepsilon_{ab}, \ \frac{\partial S_1^{\ast}}{\partial x_b^{\ast}} = \frac{\partial S_1}{\partial x_b} \cos\varepsilon_{ab} + \frac{\partial S_1}{\partial x_a} \sin\varepsilon_{ab}, \ \frac{\partial S_1^{\ast}}{\partial x_c^{\ast}} = \frac{\partial S_1}{\partial x_c}, \ \frac{\partial S_1^{\ast}}{\partial t^{\ast}} = \frac{\partial S_1}{\partial t}.$$ Substituting (\[pp5\]) and (\[pp6\]) into (\[pp1a\]) and making relevant calculations, we arrive at the equality $$\left(Q_a(t) \cos\varepsilon_{ab} + Q_b(t) \sin\varepsilon_{ab} \right) \frac{\partial S_1}{\partial x_a} + \left(- Q_a(t) \sin\varepsilon_{ab} + Q_b(t) \cos\varepsilon_{ab} \right) \frac{\partial S_1}{\partial x_b} + Q_c(t) \frac{\partial S_1}{\partial x_c} = \textbf{Q}(t) \cdot \nabla S_1,$$ leading to the algebraic equations $$Q_a(t) \cos\varepsilon_{ab} + Q_b(t) \sin\varepsilon_{ab} = Q_a(t),\ Q_a(t) \sin\varepsilon_{ab} - Q_b(t) \cos\varepsilon_{ab} = -Q_b(t).$$ Since $\varepsilon_{ab}$ is an arbitrary parameter, we immediately conclude that $$\label{pp7} Q_a(t) = Q_b(t) \equiv 0.$$ It means that the BVP of the form (\[14\])–(\[18\]) is invariant with respect to the Lie group $\widetilde{T}_{{ab}}$ if and only if conditions (\[pp7\]) hold, while the function $Q_c(t)$ is an arbitrary smooth function, i.e. the vector function $ \textbf{Q}(t) $ contains only one non-vanish component. Thus, we proved that the BVP of the form (\[14\])–(\[18\]) is invariant under $\widetilde{T}_0, \widetilde{T}_4$ and $\widetilde{T}_{{ab}} $ iff the vector function $ \textbf{Q}(t) $ satisfies the restrictions (\[pp3\]), (\[pp4\]) and (\[pp7\]), respectively. To finish the examination of the BVP with arbitrary functions $d_1(u)$ and $d_2(v)$, we need to investigate the case of the arbitrary one-parameter Lie group from the group $\tilde{g}_5 = (\widetilde{T}_0, \widetilde{T}_4, \widetilde{T}_{{21}}, \widetilde{T}_{{31}}, \widetilde{T}_{{32}})$. According to the general Lie theory, each one-parameter group of the group $\tilde{g}_5$ corresponds to a linear combination of the generators $P_t$, $D_0$ and $J_{ab}$ of the form $$L = \lambda_1 J_{21} + \lambda_2 J_{31} + \lambda_3 J_{32} + \lambda_4 P_t + \lambda_5 D_0, \ \lambda_1^2 + \ldots + \lambda_5^2 \neq 0,$$ where $\lambda_1, \ldots, \lambda_5$ are the given real constants. It should be stressed that the form of the operator $L$ can be simplified by using the transformations of variables $x_a, \ a = 1, 2, 3$ from the group $\widetilde{E}_{\mathrm{eq}}^{\mathrm{BVP}}$, namely the linear combination $\lambda_1 J_{21} + \lambda_2 J_{31} + \lambda_3 J_{32}$ may be reduced to a single operator of rotation, for example, to the operator $J_{12}$. 
Indeed, using the transformations $$t \rightarrow t, \ X \rightarrow A_1(\beta_1) A_2(\beta_2) A_3(\beta_3) X$$ where the rotation angles $\beta_a, \ a =1, 2, 3$ satisfy the conditions $$\lambda_1 \cos\beta_2 \sin\beta_3 - \lambda_2 \left(\cos\beta_1 \cos\beta_3 - \sin\beta_1 \sin\beta_2 \sin\beta_3 \right) - \lambda_3 \left( \sin\beta_1 \cos\beta_3 + \cos\beta_1 \sin\beta_2 \sin\beta_3 \right) = 0,$$ $$\lambda_1 \sin\beta_2 - \lambda_2 \sin\beta_1 \cos\beta_2 + \lambda_3 \cos\beta_1 \cos\beta_2 = 0.$$ one simplifies the operator $L$ to the form $$L = \lambda J_{21} + \lambda_4 P_t + \lambda_5 D_0, \ \lambda^2 + \lambda_4^2 + \lambda_5^2 \neq 0,$$ where $$\lambda = \lambda_1 \cos\beta_2 \cos\beta_3 + \lambda_2 \left(\cos\beta_1 \sin\beta_3 + \sin\beta_1 \sin\beta_2 \cos\beta_3 \right) + \lambda_3 \left( \sin\beta_1 \sin\beta_3 - \cos\beta_1 \sin\beta_2 \cos\beta_3 \right).$$ Now one can easily see that there are only two cases (depending on the parameters $\lambda, \lambda_4$ and $ \lambda_5$ !), leading to new one-parameter invariance groups of the BVP of the form (\[14\])–(\[18\]) with the correctly specified vector function $ \textbf{Q}(t) $. This occurs iff: [*a)*]{} $\lambda \lambda_4 \neq 0$ and $\lambda_5 = 0$, and [*b)*]{} $\lambda \lambda_5 \neq 0$. Let us consider the most troublesome case [*b)*]{}. Without loss of generality, one can assume $\lambda_4 = 0$ and $\lambda_5 = 1$, therefore, the corresponding one-parameter Lie group is $\widetilde{T}_7$ (see Remark 3). Substituting transformations from this group into invariance conditions (\[pp1a\]), we obtain the system $$\widehat{Q}_1 \cos(\lambda\varepsilon_7) - \widehat{Q}_2 \sin(\lambda\varepsilon_7) = Q_1(t),$$ $$\widehat{Q}_1 \sin(\lambda\varepsilon_7) + \widehat{Q}_2 \cos(\lambda\varepsilon_7) = Q_2(t),$$ $$\widehat{Q}_3 = Q_3(t),$$ which can be rewritten as follows: $$\widehat{Q}_1 = Q_1(t) \cos(\lambda\varepsilon_7) + Q_2(t) \sin(\lambda\varepsilon_7),$$ $$\widehat{Q}_2 = - Q_1(t) \sin(\lambda\varepsilon_7) + Q_2(t) \cos(\lambda\varepsilon_7),$$ $$\widehat{Q}_3 = Q_3(t),$$ where $\widehat{Q}_a = Q_a(e^{2 \varepsilon_7}t)e^{\varepsilon_7}, \ a = 1, 2, 3$. So unknown components of the vector function $\textbf{Q}(t)$ can be found from the system of functional equations obtained above: $$\label{f1} Q_1(t) = \frac{q_1 \cos\tau + q_2 \sin\tau}{\sqrt t}, \ Q_2(t) = \frac{- q_1 \sin\tau + q_2 \cos\tau}{\sqrt t}, \ Q_3(t) = \frac{q_3}{\sqrt t},$$ where $\tau = \frac{1}{2} \lambda \log t$, $q_a, \ a = 1, 2, 3$ are arbitrary real constants obeying the condition $q_1 q_2\lambda \neq 0$. Hence, the BVP of the form (\[14\])–(\[18\]) is invariant with respect to the Lie group $\widetilde{T}_{7}$ if and only if the conditions (\[f1\]) hold. Case [*b)*]{} has been completely investigated. Case [*a)*]{} has been examined in a quite similar way. As result, we have proved that the BVP of the form (\[14\])–(\[18\]) is invariant with respect to the Lie group $\widetilde{T}_{6}$ iff the conditions $$Q_1(t) = q_1 \cos(\lambda t) + q_2 \sin(\lambda t), \ Q_2(t) = - q_1 \sin(\lambda t) + q_2 \cos(\lambda t), \ Q_3(t) = q_3, \ \ q_1 q_2 \lambda \neq 0$$ take place. Thus, we conclude from the analysis carried out above that the trivial group of invariance $\widetilde{G}^{0}$ of the BVP of the form (\[14\])–(\[18\]) is the three-parameter Lie group, generated by the groups $\widetilde{T}_1, \widetilde{T}_2$ and $\widetilde{T}_3$ and one is listed in case 1 of Table \[tab2\]. 
$\widetilde{G}^{0}$ is extended either to four-parameter group (cases 2–4 of Table \[tab2\]) or five-parameter group (cases 5 and 6 of Table \[tab2\]) if and only if the relevant conditions at the vector function $\textbf{Q}(t)$ take place. These conditions have been derived by comparing the components of $\textbf{Q}(t)$ from formulae (\[pp3\]), (\[pp4\]) and (\[pp7\]) and those from the examination of cases [*a)*]{} and [*b)*]{}. One easily notes that there are no other possibilities to extend the group $\widetilde{G}^{0}$. It means that the case of the arbitrary functions $d_1(u)$ and $d_2(v)$ is completely investigated. The next step of the proof is to show that cases 2–9 from Table \[tab1\] do not lead to any new symmetries of the BVP of the form (\[14\])–(\[18\]). Consider case 2 when $d_1(u) = k_1$ and $d_2(v)$ is an arbitrary function. According to Theorem 1, the NHE system (\[14\])–(\[15\]) admits the infinite-dimensional Lie algebra formed by the operators from the basic algebra $AE(1,3)$ and the operators $u \partial_u,$ $\alpha(t,\textbf{x})\partial_u$. The corresponding groups to these operators are generated by the transformations $$\widetilde{T}_U: \ t^{\ast} = t, \ x^{\ast}_a = x_a, a = 1, 2, 3, \ u^{\ast} = e^{\varepsilon_U} u, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ $$\widetilde{T}_{\infty}: \ t^{\ast} = t, \ x^{\ast}_a = x_a, a = 1, 2, 3, \ u^{\ast} = u + \alpha(t,\textbf{x}) \varepsilon_{\infty}, \ v^{\ast} = v, \ S_1^{\ast} = S_1, \ S_2^{\ast} = S_2,$$ Let us prove that any BVP in question is not invariant with respect to the group $\widetilde{T}_U$. In fact, the second invariance condition in (\[pp1a\]) takes the form $$e^{\varepsilon_U} u_v = u_v$$ and can be satisfied for an arbitrary value of the parameter $\varepsilon_U$ only in the case $u_v = 0$. On the other hand, if $u_v = 0$ then $u_m \neq 0$ and, thereby, the second boundary condition in (\[17\]) is not invariant with respect to $\widetilde{T}_U$. In a similar manner, it can be shown that any BVP in question is not invariant with respect to $\widetilde{T}_{\infty}$. Now, we must examine each one-parameter group, corresponding to a linear combination of the operators $P_t$, $D_0$, $J_{ab}, \ b < a = 1,2,3$, $u \partial_u$ and $\alpha(t,\textbf{x})\partial_u$ (see case 2 of Table \[tab1\]): $$Y = \lambda_1 J_{21} + \lambda_2 J_{31} + \lambda_3 J_{32} + \lambda_4 P_t + \lambda_5 D_0 + \lambda_6 u\partial_u + \lambda_7 \alpha(t,\textbf{x})\partial_u, \ \lambda_6^2 + \lambda_7^2 \neq 0,$$ where $\lambda_1, \ldots, \lambda_7$ are arbitrary real constants. To avoid cumbersome formulae, we present in an explicit form only the point transformations for the variable $u$: $$\label{pp8} u^{\ast} = e^{\lambda_6 \varepsilon_{Y}} u + \lambda_7 \int_0^{\varepsilon_{Y}} \alpha^{\ast}(\tau) e^{\lambda_6 (\tau - \varepsilon_{Y})} d \tau,$$ where $\alpha^{\ast}(\tau) = \alpha \left(t^{\ast}(\tau), \textbf{x}^{\ast}(\tau)\right)$. Obviously, if $\lambda_6 = 0, \lambda_7 \neq 0$ or $\lambda_6 \neq 0, \lambda_7 = 0$, then the BVP under study is not invariant with respect to the relevant one-parameter Lie group because one uses the result obtained above. Hence, we should examine only the case $\lambda_6 \lambda_7 \neq 0$. 
Taking into account formula (\[pp8\]), the second invariance condition from (\[pp1a\]) takes the form $$\lambda_7 \int_0^{\varepsilon_{Y}} \alpha^{\ast}(\tau) e^{\lambda_6 (\tau - \varepsilon_{Y})} d \tau = u_v \left(1 - e^{\lambda_6 \varepsilon_{Y}}\right).$$ Thus, we obtain $$u^{\ast} = e^{\lambda_6 \varepsilon_{Y}} u + u_v \left(1 - e^{\lambda_6 \varepsilon_{Y}}\right).$$ Using this formula, the invariance criterion for the second boundary condition from (\[17\]) leads to the condition $$u_v \left(1 - e^{\lambda_6 \varepsilon_{Y}}\right) = u_m \left(1 - e^{\lambda_6 \varepsilon_{Y}}\right).$$ Since this equality must hold for arbitrary values of the group parameter $\varepsilon_{Y}$, we immediately obtain that $u_m = u_v$. It is nothing other than the contradiction because $u_m \neq u_v$ and $\lambda_6 \neq 0$. Thus, we conclude that the operator $Y$ is not a Lie symmetry operator for any BVP from class (\[14\])–(\[18\]). Case 2 from Table \[tab2\] is completely examined. Obviously, case 3 from Table \[tab1\] can be studied in a similar manner to case 2 (the boundary conditions (\[17\]) and (\[18\]) should be used) and the same result is obtained. Consider cases 4—8 from Table \[tab1\] when $d_1(u)$ and $d_2(v)$ are specified non-constant functions. It turns out that cases 4—7 from Table \[tab1\] can be examined in a similar manner as we did it in case 2 using the group $\widetilde{T}_U$. Finally, no new Lie groups of invariance are obtained. The most non-trivial is the case of conformal power $n = m = - \frac{4}{5}$ (see case 8 in Table \[tab1\]) because of the conformal operators $K_b = |\textbf{x}|^2 \partial_{x_b} - 2 x_b x_a \partial_{x_a} + 5 x_b u \partial _u + 5 x_b v \partial_v$, $b < a = 1, 2, 3,$ which generate the one-parameter Lie groups $$\begin{aligned} & & T_{K_b}: \ x_a^{\ast} = \frac{x_a}{1 - 2 x_b \varepsilon_{K_b} + |\textbf{x}|^2 \varepsilon_{K_b}^2}, \ x_b^{\ast} = \frac{x_b - |\textbf{x}|^2 \varepsilon_{K_b}}{1 - 2 x_b \varepsilon_{K_b} + |\textbf{x}|^2 \varepsilon_{K_b}^2}, \ x_c^{\ast} = \frac{x_c}{1 - 2 x_b \varepsilon_{K_b} + |\textbf{x}|^2 \varepsilon_{K_b}^2}, \nonumber \\ & & \qquad \quad t^{\ast} = t, \ u^{\ast} = u \left(1 - 2 x_b \varepsilon_{K_b} + |\textbf{x}|^2 \varepsilon_{K_b}^2 \right)^{\frac{5}{2}}, \ v^{\ast} = v \left(1 - 2 x_b \varepsilon_{K_b} + |\textbf{x}|^2 \varepsilon_{K_b}^2 \right)^{\frac{5}{2}}, \nonumber\end{aligned}$$ where $a, b, c = 1, 2, 3; \ a \neq b, a \neq c, b \neq c$. One notes that the boundary condition (\[18\]) is not invariant under $T_{K_b}$. In fact, the invariance conditions $$\label{pp9} \lim_{|\textbf{x}| \rightarrow + \infty} |\textbf{x}^{\ast}| = + \infty, \ \left. v - v_{\infty} \right \vert_{(\ref{18})} = 0.$$ are not satisfied because $$\lim_{|\textbf{x}| \rightarrow + \infty} |\textbf{x}^{\ast}| = \lim_{|\textbf{x}| \rightarrow + \infty} \frac{|\textbf{x}|}{\left(1 - 2 x_b \varepsilon_{K_b} + |\textbf{x}|^2 \varepsilon_{K_b}^2\right)^{\frac{1}{2}}}= \frac{1}{\varepsilon_{K_b}} \not = + \infty.$$ Thus,we conclude that any BVP of the form (\[14\])–(\[18\]) is not invariant with respect to the Lie group $\widetilde{T}_{K_b}$. Finally, we have examined all possible one-parameter groups, corresponding to linear combinations of the operators $K_b, \ b = 1, 2, 3$ and $D$ (see case 8 in Table \[tab1\]) $$Z = \lambda_1 K_1 + \lambda_2 K_2 + \lambda_3 K_3 + \lambda_4 D, \ \lambda_1^2 + \lambda_2^2 + \lambda_3^2 \neq 0,$$ and shown that the boundary condition (\[18\]) is not invariant under those groups. 
Nevertheless, the case of the linear governing system (\[14\])–(\[15\]) (case 9 in Table \[tab1\]) has a cumbersome algebra of invariance; it was examined in quite a similar way to case 2, and no new invariance groups were found. The proof is now complete. $\blacksquare$

In Theorem 2, the special case of system (\[14\])–(\[15\]) with $d_1(u)=d_2(v)$ (see case 10 in Table \[tab1\]) was not examined because the relevant BVP with equal diffusivities is unrealistic from the physical point of view. In fact, the vast majority of substances have different physical characteristics in the solid, liquid and gas phases.

**Symmetry reduction and invariant solutions of BVPs from class (\[14\])–(\[18\]) with the constant energy flux**
=================================================================================================================

**Optimal system of subalgebras of the invariance algebra**
-----------------------------------------------------------

Let us consider a nonlinear model of heat transfer processes in metals under the action of intense constant energy fluxes directed perpendicular to the metal surface. This model coincides with the BVP (\[8\])–(\[12\]) (see also (\[14\])–(\[18\])), where $\textbf{Q}(t) = \textbf{q} \equiv (0, 0, q), \ q = \mbox{const}$. According to Theorem 2, such a BVP admits the five-parameter Lie group $\widetilde{G}_5$ of invariance (see case 5 in Table \[tab2\]) formed by the one-parameter groups $\widetilde{T}_0, \widetilde{T}_1, \widetilde{T}_2, \widetilde{T}_3$ and $\widetilde{T}_{{12}}$. This group corresponds to the five-dimensional Lie algebra $A_5$ with the basic operators $$P_t = \frac{\partial}{\partial t}, \ P_a = \frac{\partial}{\partial x_a}, \ J_{12} = x_2 P_1 - x_1 P_2, \, a = 1, 2, 3.$$

Our primary aim is to show how, using these operators, one can reduce the BVP (\[14\])–(\[18\]), where $\textbf{Q}(t) = \textbf{q}$, to BVPs of lower dimensions. We use for this purpose the *optimal systems of $s$-dimensional subalgebras* ($s\leq5$) of $A_5$. All such subalgebras are non-conjugate up to the group of inner automorphisms of the group $\widetilde{G}_5$. To construct a full list of the optimal systems, we represent the algebra $A_5$ as follows: $A_5 = \left<P_1, P_2, J_{12}\right> \oplus \left<P_3\right> \oplus \left<P_t\right>$. Now, by using the well-known Lie–Goursat classification method for the subalgebras of algebraic sums of Lie algebras [@pathera-1] (see the monograph [@bar] for details) and the results of the subalgebra classification of low-dimensional real Lie algebras [@pathera-2], it is easy to obtain the complete list of subalgebras of the algebra $A_5$. This list can be divided into subalgebras of different dimensionality.
One-dimensional subalgebras: $$\left<P_3 \cos\phi + P_t \sin\phi \right>, \ \left<P_1 + \alpha\left(P_3 \cos\phi + P_t \sin\phi \right)\right>, \ \left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right)\right>;$$ Two-dimensional subalgebras: $$\left<P_3, P_t\right>, \ \left<P_1 + \alpha \left(P_3 \cos\phi + P_t \sin\phi \right), P_2\right>, \ \left<P_1 + \alpha \left(P_3 \cos\phi + P_t \sin\phi \right), P_3 \sin\phi - P_t \cos\phi\right>,$$ $$\left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right), P_3 \sin\phi - P_t \cos\phi\right>;$$ Three-dimensional subalgebras: $$\left<P_1, P_3, P_t\right>, \ \left<J_{12}, P_3, P_t\right>, \ \left<P_1 + \alpha \left(P_3 \cos\phi + P_t \sin\phi \right), P_2, P_3 \sin\phi - P_t \cos\phi\right>,$$ $$\left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right), P_1, P_2\right>;$$ Four-dimensional subalgebras : $$\left<P_1, P_2, P_3, P_t\right>, \ \left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right), P_1, P_2, P_3 \sin\phi - P_t \cos\phi\right>;$$ Five-dimensional subalgebra: $$\left<J_{12}, P_1, P_2, P_3, P_t\right>.$$ where $ \alpha \geq 0$ and $\beta$ are arbitrary real constants, $0 \leq \phi < \pi$. We remaind the reader that the additional conditions on the functions $S_k(t, \textbf{x})$, $\textbf{V}_k(t, \textbf{x})$ and $\textbf{Q}(t)$ arising in the BVP class (\[14\])–(\[18\]) have been imposed, namely $$\label{4.1} \frac{\partial S_k}{\partial t} \neq 0, \ |\nabla S_k| \neq 0, \ \textbf{Q}(t) \cdot \textbf{n}_1 \neq 0, \ \textbf{V}_k \cdot \textbf{n}_k \neq 0, \ k=1,2.$$ On the other hand, the Lie-Goursat classification method is a purely algebraic procedure; therefore, some subalgebras presented above lead to invariant solutions, which do not satisfy the restrictions (\[4.1\]) . For example, the algebra $\left<J_{12}, P_3, P_t\right>$ generates the ansatz $$u = u(r), \ v = v(r), \ S_k = S_k(r), \ k = 1, 2,$$ where $r = \sqrt{x_1^2 + x_2^2}$. Obviously, one sees that $\frac{\partial S_k}{\partial t} = 0$ and $\textbf{q} \cdot \textbf{n}_1 = 0$. Thus, the contradiction is obtained and we conclude that the algebra $\left<J_{12}, P_3, P_t\right>$ leads to solutions, which do not have any physical meaning. The complete list of the subalgebras, leading to invariant solutions of the BVPs in question satisfying the restrictions (\[4.1\]), reads as follows. 
One-dimensional subalgebras: $$\left<P_3 \cos\phi + P_t \sin\phi \right> \left(\phi \neq 0, \frac{\pi}{2}\right), \ \left<P_1 + \alpha\left(P_3 \cos\phi + P_t \sin\phi \right)\right>, \ \left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right)\right>;$$ Two-dimensional subalgebras: $$\left<P_1 + \alpha \left(P_3 \cos\phi + P_t \sin\phi \right), P_2\right>, \ \left<P_1 + \alpha \left(P_3 \cos\phi + P_t \sin\phi \right), P_3 \sin\phi - P_t \cos\phi\right> \left(\phi \neq 0, \frac{\pi}{2}\right),$$ $$\left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right), P_3 \sin\phi - P_t \cos\phi\right> \left(\phi \neq 0, \frac{\pi}{2}\right);$$ Three-dimensional subalgebras: $$\label{4.0a} \left<P_1 + \alpha \left(P_3 \cos\phi + P_t \sin\phi \right), P_2, P_3 \sin\phi - P_t \cos\phi\right> \left(\phi \neq 0, \frac{\pi}{2}\right),$$ $$\label{4.0b} \left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right), P_1, P_2\right> \left(\phi \neq 0, \frac{\pi}{2} \ \mbox{if} \ \beta \neq 0\right);$$ Four-dimensional subalgebras : $$\label{4.0c} \left<J_{12}, P_1, P_2, P_3 \sin\phi - P_t \cos\phi\right> \left(\phi \neq 0, \frac{\pi}{2}\right),$$ where $ \alpha \geq 0$ and $\beta$ are arbitrary real constants, $0 \leq \phi < \pi$. Symmetry reduction and invariant solutions ------------------------------------------- Now, one may use each algebra from this list for reducing the BVP of the form (\[14\])–(\[18\]) with $\textbf{Q}(t) = \textbf{q}$ to the BVP of lower dimensionality and to construct the exact solutions of the problem obtained. We also note that three- and four-dimensional subalgebras (\[4.0b\]) and (\[4.0c\]) generate the same ansatz to find the unknown functions $u, \, v, \, S_1$ and $S_2$: $$\label{4.2*} u = u(z), \ v = v(z), \ S_k = S_k(z), \ k = 1, 2, \, z = x_3 - \mu t.$$ (Hereafter, we use the designation $\mu = - \tan \phi$). The application of three-dimensional subalgebra (\[4.0a\]) also yields ansatz (\[4.2\*\]) but with the invariant variable $z = \alpha^* x_1+ x_3 - \mu t, \, \alpha^* \in \mathbb{R}$. Ansatz (\[4.2\*\]) reduces the BVP of the form (\[14\])–(\[18\]) to the BVP for ODEs, which was studied in details in our earlier papers [@ch-od90; @ch93; @ch-kov-11]. This ansatz leads to the plane wave solutions of the BVP in question, while melting and evaporation surfaces are two parallel planes moving with unknown velocity $\mu$ along the axes $0 x_3$. New non-trivial reductions occur if one applies one- and two-dimensional subalgebras. Let us consider the algebra $\left<J_{12} + \beta \left(P_3 \cos\phi + P_t \sin\phi \right), P_3 \sin\phi - P_t \cos\phi\right>$. Solving the corresponding system of characteristic equations, one obtains the ansatz $$\label{4.2} u = u(r,z), \ v = v(r,z), \ S_k = S_k(r,z), \ k = 1, 2,$$ where $z = x_3 - \mu t - \beta \arctan \frac{x_1}{x_2}, \ r = \sqrt{x_1^2 + x_2^2}$ are the invariant variables. 
Substituting ansatz (\[4.2\]) into the BVP of the form (\[14\])–(\[18\]) with $\textbf{Q}(t) = \textbf{q}$ and making the relevant calculations, we arrive at the two-dimensional BVP $$\begin{aligned}
& & \frac{1}{r} \frac{\partial}{\partial r} \left(r d_1(u) \frac{\partial u}{\partial r} \right) + \left(\frac{\beta^2}{r^2} + 1 \right)\frac{\partial}{\partial z} \left(d_1(u) \frac{\partial u}{\partial z} \right) + \mu \frac{\partial u}{\partial z} = 0,\label{4.3} \\
& & \frac{1}{r} \frac{\partial}{\partial r} \left(r d_2(v) \frac{\partial v}{\partial r} \right) + \left(\frac{\beta^2}{r^2} + 1 \right)\frac{\partial}{\partial z} \left(d_2(v) \frac{\partial v}{\partial z} \right) + \mu \frac{\partial v}{\partial z} = 0,\label{4.4} \\
& & \qquad S_{1}(r,z) = 0:\ d_{1v} \nabla'u \cdot \nabla'S_1 = \left (\mu H_v - q \right) \frac{\partial S_1}{\partial z}, \ u = u_v,\label{4.5} \\
& & \qquad S_{2}(r,z) = 0: \ d_{2m} \nabla'v \cdot \nabla'S_2 = d_{1m} \nabla'u \cdot \nabla'S_2 + \mu H_m \frac{\partial S_2}{\partial z},\ u = u_m, \ v = v_m,\label{4.6} \\
& & \qquad r^2 + z^2 = +\infty: \ v = v_{\infty},\label{4.7}\end{aligned}$$ where $\mu$ is an unknown parameter and $\nabla'= \left(\frac{\partial}{\partial r}, \sqrt{\frac{\beta^2}{r^2} + 1}\frac{\partial}{\partial z} \right)$.

Although the BVP of the form (\[4.3\])–(\[4.7\]) is a much simpler object than the initial BVP, it is still a nonlinear problem whose basic equations are two-dimensional PDEs. Our purpose is to reduce it to a BVP with basic ODEs. Of course, one may apply different techniques to realize such a reduction; however, we confine ourselves to the case when the invariant variables $r$ and $z$ admit a clear physical meaning. This happens for $\beta = 0$, because then the variable $z$ corresponds to the transition to a moving coordinate system (in the direction of the variable $x_3$) with the origin at the evaporation surface, while the variable $r$ reflects the radial symmetry of the process with respect to the variables $x_1$ and $x_2$. Obviously, such a situation takes place if the surface bounded by a circle of radius $R$ is exposed to the flux $\textbf{Q}(t) = \textbf{q}$. Thus, setting $\beta = 0$, we may consider the ansatz $$\label{4.8} u = u(\omega), \ v = v(\omega), \ S_k = S_k(\omega), \ \omega = z + \sqrt{z^2 + r^2}, \ k = 1, 2,$$ used earlier in [@ch-2003] for similar purposes. Note that it is a non-Lie ansatz, because the maximal algebra of invariance of system (\[4.3\])–(\[4.4\]) (with arbitrary non-constant functions $d_1(u)$ and $d_2(v)$) is trivial and generated by the operator $\frac{\partial }{\partial z}$. Recently, we found that ansatz (\[4.8\]) with $\omega$ determined from the cubic equation $$\label{4.8**} 1- \frac{2}{\omega}(x_3-\mu t)= \frac{x_1^2}{\omega^2} + \frac{ x_2^2}{\omega(\omega+\kappa)}, \, \kappa \in \mathbb{R}$$ was used in the paper [@lyubov] to construct the exact solution of the BVP of the form (\[14\])–(\[18\]) with $\textbf{Q}(t) = \textbf{q}$ and constant diffusivities $d_1(u)$ and $d_2(v)$. Setting $\kappa =0$ and $x_3-\mu t=z$ in (\[4.8\*\*\]), one arrives at ansatz (\[4.8\]). However, we have checked that ansatz (\[4.8\]) with $\omega$ defined from (\[4.8\*\*\]) (with any non-zero $\kappa$!) is not applicable to reduce the BVP of the form (\[14\])–(\[18\]) with non-constant diffusivity.
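Before carrying out this substitution, it is worth recording the elementary differential identities satisfied by the variable $\omega = z + \sqrt{z^2 + r^2}$: with $\beta = 0$ (so that $\nabla' = (\partial/\partial r, \partial/\partial z)$), they are exactly what reduce the two-dimensional operators in (\[4.3\])–(\[4.7\]) to ordinary derivatives with respect to $\omega$. A short symbolic check (Python/SymPy; an illustration only, not part of the derivation):

```python
import sympy as sp

z, r = sp.symbols('z r', positive=True)
rho = sp.sqrt(z**2 + r**2)
omega = z + rho

# identities underlying the reduction to ODEs:
#   d(omega)/dz = omega/rho,  d(omega)/dr = r/rho,
#   (d(omega)/dz)^2 + (d(omega)/dr)^2 = 2*omega/rho
print(sp.simplify(sp.diff(omega, z) - omega / rho))                             # 0
print(sp.simplify(sp.diff(omega, r) - r / rho))                                 # 0
print(sp.simplify(sp.diff(omega, z)**2 + sp.diff(omega, r)**2 - 2*omega/rho))   # 0
```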
Substituting ansatz (\[4.8\]) into the BVP of the form (\[4.3\])–(\[4.7\]) and taking into account the relations $$\nabla'u \cdot \nabla'S_k = \frac{2 \omega}{\sqrt{z^2 + r^2}} \frac{d u}{d \omega} \frac{d S_k}{d \omega}, \ \nabla'v \cdot \nabla'S_k = \frac{2 \omega}{\sqrt{z^2 + r^2}} \frac{d v}{d \omega} \frac{d S_k}{d \omega}, \ \frac{\partial S_k}{\partial z} = \frac{\omega}{\sqrt{z^2 + r^2}} \frac{d S_k}{d \omega}, \ k = 1, 2,$$ we obtain the BVP for ODEs: $$\begin{aligned}
& & \frac{d}{d \omega} \left(\omega d_1(u) \frac{d u}{d \omega} \right) + \mu \frac{\omega}{2} \frac{d u}{d \omega} = 0, \ \ 0 < \omega_1 < \omega < \omega_2,\label{4.9} \\
& & \frac{d}{d \omega} \left (\omega d_2(v) \frac{d v}{d \omega} \right) + \mu \frac{\omega}{2} \frac{d v}{d \omega} = 0, \ \ \omega > \omega_2,\label{4.10} \\
& & \qquad \omega = \omega_1: \ 2 d_{1v} \frac{d u}{d \omega} = \mu H_v - q, \ u = u_v,\label{4.11} \\
& & \qquad \omega = \omega_2: \ 2 d_{2m} \frac{d v}{d \omega} = 2 d_{1m} \frac{d u}{d \omega} + \mu H_m,\ u = u_m, \ v = v_m,\label{4.12} \\
& & \qquad \omega = +\infty: \ v = v_{\infty},\label{4.13}\end{aligned}$$ where $\omega_k, \ k = 1, 2$ and $\mu$ are parameters to be determined.

Now, we can define the form of the free surfaces $S_k(t, \textbf{x}) = 0, \ k = 1, 2$, because in accordance with ansatz (\[4.8\]) $$S_k(\omega) \equiv z + \sqrt{z^2 + r^2} = \omega_k, \ k = 1, 2.$$ Obviously, the last equations can be rewritten in the form $$\label{4.8*} \frac{x_1^2 + x_2^2}{\omega_k^2} = 1 - \frac{2 z}{\omega_k}, \ k = 1, 2.$$ Thus, the equations obtained define paraboloids of revolution in the space of the variables $x_1, x_2, z$ (see Fig. 1). From the physical point of view, the unknown parameters should satisfy the inequalities $\omega_2 > \omega_1 > 0$. Moreover, the parameter $\omega_1$ can be specified as follows. If one sets $z=0$ in (\[4.8\*\]), then $\omega_1= \sqrt {x_1^2 + x_2^2}$. On the other hand, only the part of the surface bounded by a circle of radius $R$ is exposed to the flux $\textbf{Q}(t) = \textbf{q}$; hence, we can set $\omega_1=R$ without loss of generality.

Let us now turn to the construction of exact solutions of problem (\[4.9\])–(\[4.13\]). In fact, the general solution of the nonlinear equations (\[4.9\]) and (\[4.10\]) is unknown; however, it is known in some special cases (see, e.g., [@pol-za]). Here, we consider two cases in detail.

[**Example 1.**]{} *The BVP of the form (\[4.9\])–(\[4.13\]) with $d_1(u) = a_1$ and $d_2(v) = a_2$, where $a_1, a_2 \in \mathbb{R}^{+}$.*

In this case, the general solutions of equations (\[4.9\]) and (\[4.10\]) are given in explicit form by the formulae $$\label{4.14} u = C_1 \Phi_1(\omega) + C_2, \ v = C_3 \Phi_2(\omega) + C_4,$$ where $\Phi_k(\omega) = \int_{\omega}^{+ \infty} \omega^{-1} e^{- \frac{\mu}{2 a_k} \omega} d\omega, \ k = 1, 2$, and $C_1, \ldots, C_4$ are constants to be determined.
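For $\mu > 0$ (an assumption made only for this remark), the function $\Phi_k$ is, up to a rescaling of its argument, the standard exponential integral: writing the integration variable as $s$, one has $\Phi_k(\omega) = \int_{\omega}^{+\infty} s^{-1} e^{-\mu s/(2 a_k)}\, ds = E_1\!\left(\frac{\mu \omega}{2 a_k}\right)$, which makes the solution easy to evaluate numerically. A minimal check (Python/SciPy; the numerical values of $\mu$, $a_k$ and $\omega$ are arbitrary illustrative choices, not data from the physical problem):

```python
import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

mu, a_k, w = 1.3, 0.8, 2.0   # illustrative values only

phi_closed = exp1(mu * w / (2.0 * a_k))                                   # E1(mu*w/(2*a_k))
phi_quad, _ = quad(lambda s: np.exp(-mu * s / (2.0 * a_k)) / s, w, np.inf)

print(phi_closed, phi_quad)  # the two values agree to quadrature accuracy
```

With this representation, the transcendental system for $\omega_2$ and $\mu$ obtained below can be solved by any standard root finder.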
Substituting solution (\[4.14\]) into the boundary conditions (\[4.11\])–(\[4.13\]) and taking into account the formulae $$\frac{d \Phi_k}{d \omega} = \omega^{-1} e^{- \frac{\mu}{2 a_k} \omega}, \ k = 1, 2,$$ we arrive at the exact solution $$\label{4.15} u = \frac{u_v - u_m}{\Phi_1(R) - \Phi_1(\omega_2)} \Phi_1(\omega) + \frac{u_m \Phi_1(R) - u_v \Phi_1(\omega_2)}{\Phi_1(R) - \Phi_1(\omega_2)}, \ v = \frac{v_m - v_{\infty}}{\Phi_2(\omega_2)} \Phi_2(\omega) + v_{\infty},$$ where the parameters $\omega_2$ and $\mu$ must be found from the transcendent equation system $$\begin{aligned} & & 2 d_{1 v} \frac{u_v - u_m}{\Phi_1(R) - \Phi_1(\omega_2)} R^{- 1} e^{- \frac{\mu}{2 a_1} R} = \mu H_v - q,\nonumber \\ & & 2 d_{2 m} \frac{v_m - v_{\infty}}{\Phi_2(\omega_2)} \omega_2^{-1} e^{- \frac{\mu}{2 a_2} \omega_2} = 2 d_{1 m} \frac{u_v - u_m}{\Phi_1(R) - \Phi_1(\omega_2)} \omega_2^{-1} e^{- \frac{\mu}{2 a_1} \omega_2} + \mu H_m,\nonumber\end{aligned}$$ Finally, using formulae (\[4.15\]) , (\[4.2\]) and (\[4.8\]), we obtain the exact solution of the BVP of the form (\[14\])–(\[18\]) with $d_1(u) = a_1$, $d_2(v) = a_2$ and $\textbf{Q}(t) = \textbf{q}$ in the explicit form: $$\begin{aligned} & & u = \frac{u_v - u_m}{\Phi_1(R) - \Phi_1(\omega_2)} \Phi_1\left(\sqrt{x_1^2 + x_2^2 + (x_3 - \mu t)^2} + x_3 - \mu t\right) + \frac{u_m \Phi_1(R) - u_v \Phi_1(\omega_2)}{\Phi_1(R) - \Phi_1(\omega_2)},\nonumber \\ & & v = \frac{v_m - v_{\infty}}{\Phi_2(\omega_2)} \Phi_2\left(\sqrt{x_1^2 + x_2^2 + (x_3 - \mu t)^2} + x_3 - \mu t\right) + v_{\infty},\nonumber \\ & & S_k \equiv \frac{x_1^2 + x_2^2}{\omega_k^2} + \frac{2 (x_3 - \mu t)}{\omega_k} - 1 = 0, \ k = 1, 2; \ \omega_1 = R.\nonumber\end{aligned}$$ Whereas the basic equations of the BVP of the form (\[14\])–(\[18\]) with $d_1(u) = a_1$, $d_2(v) = a_2$ are linear, the relevant equations of the initial BVP of the form (\[8\])–(\[12\]) may be nonlinear, but satisfying the conditions $a_1 = \frac{\lambda_1(\phi^{-1}_1(u))}{C_1(\phi^{-1}_1(u))}$, $a_2 = \frac{\lambda_2(\phi^{-1}_2(v))}{C_2(\phi^{-1}_2(v))}$ (see the functions $\phi_1$ and $\phi_2$ in (\[13\])). [**Example 2.**]{} *The BVP of the form (\[4.9\])–(\[4.13\]) with $d_1(u) = u^{-1}$ and $d_2(v) = 1$, i.e., (\[4.9\]) is the fast diffusion equation, while (\[4.10\]) is the linear diffusion equation.* In this case, the general solutions of equations (\[4.9\]) and (\[4.10\]) are given by the formulae [@pol-za] $$\label{4.16} \int_{a}^{\omega u} \frac{d \nu}{\nu \left(1 + e^{- W\left(e^A\right) + A} \right)} = \ln \omega + C_2, \ v = C_3 \Phi(\omega) + C_4,$$ where $\Phi(\omega) = \int_{\omega}^{+ \infty} \omega^{-1} e^{- \frac{\mu}{2} \omega} d\omega$, $W(x)$ is the Lambert function, $A = - \frac{\mu}{2} \nu + C_1$, $a$ is an arbitrary constant, $C_1, \ldots, C_4$ are to-be-determined constants. 
Substituting solution (\[4.16\]) into the boundary conditions (\[4.11\])–(\[4.13\]) and taking into account the formulae $$\frac{d \Phi}{d \omega} = \omega^{-1} e^{- \frac{\mu}{2} \omega}, \ \ln \left(\frac{\omega}{u} \frac{d u}{d \omega} \right) + \frac{\omega}{u} \frac{d u}{d \omega} = A,$$ we obtain the exact solution of the BVP in question: $$\label{4.17} \int_{R u_v}^{\omega u} \frac{d \nu}{\nu \left(1 + e^{- W\left(e^{\mathcal{A}}\right) + \mathcal{A}} \right)} = \ln \frac{\omega}{R}, \ v = \frac{v_m - v_{\infty}}{\Phi(\omega_2)} \Phi(\omega) + v_{\infty},$$ where the parameters $\omega_2$ and $\mu$ must be found from the system of transcendent equations $$\begin{aligned} & &\int_{R u_v}^{\omega_2 u_m} \frac{d \nu}{\nu \left(1 + e^{- W\left(e^{\mathcal{A}}\right) + \mathcal{A}} \right)} = \ln \frac{\omega_2}{R} ,\nonumber \\ & & 2 \frac{v_m - v_{\infty}}{\Phi(\omega_2)} e^{- \frac{\mu}{2} \omega_2} = 2 e^{- W\left(e^{\mathcal{A}(\omega_2)}\right) + \mathcal{A}(\omega_2)} + \mu \omega_2 H_m.\nonumber\end{aligned}$$ Here, we used the designations $$\mathcal{A} = - \frac{\mu}{2} \nu + \ln \left(\left(\mu H_v - q \right) \frac{R}{2}\right) + \left(\mu H_v - q \right) \frac{R}{2} + \frac{\mu}{2} R u_v$$ $$\mathcal{A}(\omega_2) = -\frac{\mu}{2} \omega_2 u_m + \ln \left(\left(\mu H_v - q \right) \frac{R}{2} \right)+ \left(\mu H_v - q \right) \frac{R}{2}+ \frac{\mu}{2} R u_v .$$ Finally, using formulae (\[4.17\]) , (\[4.2\]) and (\[4.8\]), we obtain the exact solution of the origin BVP (\[14\])–(\[18\]) with $d_1(u) = u^{- 1}$, $d_2(v) = 1$ and $\textbf{Q}(t) = \textbf{q}$ in the implicit form $$\begin{aligned} & & \int_{R u_v}^{\left(\sqrt{x_1^2 + x_2^2 + (x_3 - \mu t)^2} + x_3 - \mu t \right) u} \frac{d \nu}{\nu \left(1 + e^{- W\left(e^{\mathcal{A}}\right) + \mathcal{A}} \right)} = \ln \frac{\sqrt{x_1^2 + x_2^2 + (x_3 - \mu t)^2} + x_3 - \mu t}{R},\nonumber \\ & & v = \frac{v_m - v_{\infty}}{\Phi(\omega_2)} \Phi\left(\sqrt{x_1^2 + x_2^2 + (x_3 - \mu t)^2} + x_3 - \mu t\right) + v_{\infty},\nonumber \\ & & S_k \equiv \frac{x_1^2 + x_2^2}{\omega_k^2} + \frac{2 (x_3 - \mu t)}{\omega_k} - 1 = 0, \ k = 1, 2; \ \omega_1 = R. \nonumber\end{aligned}$$ It should be noted that several BVPs of the form (\[4.9\])–(\[4.13\]) can also be exactly solved for some other forms of diffusivity coefficients. Moreover, one may apply the standard program package (e.g., Maple, Mathematica) to numerically solve this BVP with arbitrary given diffusivities. **Conclusions** =============== In this paper, multi-dimensional nonlinear BVPs with the basic evolution equations by means of the classical Lie symmetry method are studied. We consider BVPs of the most general form (\[1\])–(\[4\]) , which include basic equations of the arbitrary order, boundary conditions on known and unknown moving surfaces, boundary conditions on regular and non-regular manifolds. A new definition of invariance in the Lie sense for such BVPs is formulated. The definition generalizes those proposed earlier for simpler BVPs [@ben-olv-82; @b-k; @ibr92; @ch-kov-11], and it can be extended to BVPs for hyperbolic and elliptic equations. Note that the comparison of this definition with those proposed earlier is presented in the recent paper [@ch-kov-11], where Definition 1 in the case of (1+1)-dimensional BVP was formulated. In this paper, we also propose the algorithm of the group classification for classes of BVPs, i.e., extending the well-known problem for PDEs to BVPs. 
Of course, the group classification problem for simple classes of BVPs can be solved in a straightforward way (see, e.g., [@ch-kov-09]); however, one needs to determine an appropriate algorithm in the general case. The main part of the paper is devoted to solving the group classification problem for the class of (1+3)-dimensional BVPs (\[8\])–(\[12\]), modeling processes of melting and evaporation under a powerful flux of energy. First of all, we simplified the BVPs in question to the form (\[14\])–(\[18\]) using the Goodman substitution. Since the basic equations of the problem obtained are the standard NHEs, we used the known system of determining equations [@ch-king00; @ch-king06] to derive their Lie symmetry description. Having done this and using the group of equivalence transformations, we proved Theorem 2, presenting all possible Lie groups of invariance of the BVPs of the form (\[14\])–(\[18\]). It was shown that the Lie invariance does not depend on the diffusivities $d_1(u)$, $d_2(v)$ but only on the form of the flux $\textbf{Q}(t)$. There are only five correctly specified forms of $\textbf{Q}(t)$ (see Table 2) leading to extensions of the three-dimensional Lie group of invariance (the trivial Lie group), which is admitted by an arbitrary BVP of the form (\[14\])–(\[18\]). One may note that cases 3 and 4 from Table 2 have no analogs in the (1+1)-dimensional space of independent variables, while cases 5 and 6 are generalizations of the relevant (1+1)-dimensional BVPs (see [@ch-kov-09] for comparison).

We study in detail the BVP of the form (\[14\])–(\[18\]) with arbitrary diffusivities and the special form of the flux $\textbf{Q}(t) = \textbf{q}$, which naturally arises as a mathematical model of the melting and evaporation process. Since the MGI of this problem is five-dimensional, the sets of optimal $s$-dimensional subalgebras were constructed using the known algorithm [@pathera-1; @bar]. A brief analysis of these subalgebras is presented. One of them, a two-dimensional algebra, is applied for the reduction of the problem in question to a nonlinear BVP for ODEs. Finally, the BVP obtained was exactly solved in two cases of correctly specified diffusivities; hence, the exact solutions of the BVP of the form (\[14\])–(\[18\]) with these diffusivities were found. It should be noted that the BVP of the form (\[14\])–(\[18\]) with constant diffusivities was studied earlier in [@lyubov], where the same result was obtained using an ad hoc ansatz which is not connected with any Lie symmetry. To the best of our knowledge, the exact solution of the BVP of the form (\[14\])–(\[18\]) with the fast diffusion constructed above is new. Work is in progress to apply the results obtained in this paper to the reduction and construction of exact solutions for other multi-dimensional BVPs with remarkable Lie invariance.

[99]{} Bluman G W and Anco S C 2002 *Symmetry and Integration Methods for Differential Equations* (New York: Springer) Bluman G W and Kumei S 1989 *Symmetries and Differential Equations* (Berlin: Springer) Fushchych W I, Shtelen W M and Serov M I 1993 *Symmetry Analysis and Exact Solutions of Equations of Nonlinear Mathematical Physics* (Dordrecht: Kluwer) Olver P J 1993 *Applications of Lie Groups to Differential Equations* (New York: Springer) Ovsiannikov L V 1982 *The Group Analysis of Differential Equations* (New York: Academic) Pukhnachov V V 1972 Invariant solutions of the Navier-Stokes equations describing motion with a free boundary *Dokl. Akad.
Nauk SSSR (Rep. Acad. Sci. USSR)* **202** 302–305 (in Russian) Bluman G W 1974 Application of the general similarity solution of the heat equation to boundary value problems *Q. Appl. Math.* **31** 403–415 Andreev V K, Kaptsov O V, Pukhnachov V V and Rodionov A A 1998 *Application of Group-Theoretical Methods in Hydrodynamics* (Netherlands: Kluwer) Rogers C and Ames W F 1989 *Nonlinear Boundary Value Problems in Science and Engineering* (Boston: Academic) Ibragimov N H (ed.) 1996 *CRC Handbook of Lie Group Analysis of Differential Equations, Vol. 3* (Boca Ration: CRC) Alexiades V and Solomon A D 1993 *Mathematical Modeling of Melting and Freezing Processes* (Washington: Hemisphere) Britton N F 2003 *Essential Mathematical Biology* (Berlin: Springer) Crank J 1984 *Free and Moving Boundary Problems* (Oxford: Clarendon) Anisimov S I, Imas Ya A, Romanov G S and Khodyko Yu V 1970 *The Influence of High-Power Radiation on Metals* (Moscow: Nauka) (in Russian) Ready J 1971 *Effects of High-Power Laser Radiation*, (New York: Academic) Rubinstein L I 1971 *The Stefan problem* (Providence, RI: American Mathematical Society) Stefan J 1889 Über einige Probleme der Theorie der Wärmeleitung *S. B. Wien. Akad. Mat. Natur.* **98** 173–184 Gupta S C 2003 *The Classical Stefan Problem: Basic Concepts, Modelling and Analysis* (Amsterdam: Elsevier) Briozzo A C and Tarzia D A 2002 An explicit solution for an instantaneous two-phase Stefan problem with nonlinear thermal coefficients *IMA J. Appl. Math.* **67** 249–261 Briozzo A C and Tarzia D A 2010 Exact solutions for nonclassical Stefan problems *Int. J. Diff. Eq.* **2010** 868059 Briozzo A C, Natale M F and Tarzia D A 2007 Existence of an exact solution for a one-phase Stefan problem with nonlinear thermal coefficients from Tirskii’s method *Nonlinear Analysis* **67** 1989–1998 Cherniha R M and Cherniha N D 1993 Exact solutions of a class of nonlinear boundary value problems with moving boundaries *J. Phys. A: Math. Gen.* **26** L935–940 Cherniha R M and Odnorozhenko I G 1990 Exact solutions of a nonlinear boundary value problem of melting and evaporation of metals under the action of high energy flux *Dopov. Akad. Nauk Ukr. (Rep. Acad. Sci. Ukraine) A* **12** 44–47 (in Ukrainian, summary in English) Cherniha R and Kovalenko S 2009 Exact solutions of nonlinear boundary value problems of the Stefan type *J. Phys. A: Math. Theor.* **42** 355202 Lorenzo-Trueba J and Voller V R 2010 Analytical and numerical solution of a generalized Stefan problem exhibiting two moving boundaries with application to ocean delta formation *J. Math. Anal. Appl.* **366** 538–549 Barry S I and Caunce J 2008 Exact and numerical solutions to a Stefan problem with two moving boundaries *Appl. Math. Model.* **32** 83–98 Crank J 1975 *The Mathematics of Diffusion* (Oxford: Clarendon) Pukhnachov V V 2006 Symmetry in Navier-Stokes equations *Uspekhi Mechaniki (Mechanics Successes)* **1** 6–76 (in Russian) Kartashov E M 2001 Analytical methods of solution of boundary-value problems of nonstationary heat conduction in regions with moving boundaries *J. Eng. Phys. Thermophys.* **74** 498–536 Mccue S W, King J R and Riley D S 2003 Extinction behaviour for two-dimensional inwardt solidification *Proc. Roy. Soc. A* **459** 977–999 Mccue S W, King J R and Riley D S 2005 The extinction problem for three-dimensional inwardt solidification *J. Eng. 
Math.* **52** 389–409 Carslaw H S and Jager J C 1959 *Conduction of Heat in Solids* (Oxford: Clarendon) Benjamin T B and Olver P J 1982 Hamiltonian structure, symmetries and conservation laws for water waves *J. Fluid Mech.* **125** 137–185 Cherniha R 2003 *Nonlinear Evolution Equations: Galiles Invariance, Exact Solutions and Their Applications. Thesis for the Dr. of Sci. Degree* (Kyiv: Institute of Mathematics, NAS of Ukraine) (in Ukrainian, summary in English) Cherniha R and Kovalenko S 2012 Lie symmetries of nonlinear boundary value problems *Commun. Nonlinear. Sci. Numer. Simulat.* **17** 71–84 Ibragimov N K 1992 Group analysis of ordinary differential equations and the invariance principle in mathematical physics (on the occasion of the 150th anniversary of the birth of Sophus Lie) *Russ. Math. Surv.* **47** 89–156 Ovsiannikov L V 1959 Group relations of the equation of non-linear heat conductivity *Dokl. Akad. Nauk SSSR (Rep. Akad. Sci. USSR)* **125** 492–495 (in Russian) Cherniha R, Serov M and Rassokha I 2008 Lie symmetries and form-preserving transformations of reaction-diffusion-convection equations *J. Math. Anal. Appl.* **342** 1363–1379 Cherniha R M and Odnorozhenko I G 1991 Studies of the processes of melting and evaporation of metals under the action of laser radiation pulses *Prom. Teplotekh. (Ind. Heat Tech.)* **13** 51–59 (in Russian, summary in English) Kozdoba L A 1975 *Methods for Solving Nonlinear Problems of Heat Conduction* (Moscow: Nauka) (in Russian) Goodman T R 1964 Application of integral methods to transient nonlinear heat transfer *Advances in Heat Transfer, Vol. 1* (New York: Academic) Dorodnitsyn V A, Knyazeva I V and Svirshchevskii S R 1983 Group properties of the nonlinear heat equation with source in the two- and three-dimensional cases *Differetial’niye Uravneniya* **19** 1215–1223 (in Russian) Ibragimov N H, Torrisi M and Valenti A 1991 Preliminary group classification of equations $v_{t t} = f(x, v_x) v_{x x} + g(x, v_x)$ *J. Math. Phys.* **32** 2988–2995 Cherniha R and King J R 2000 Lie symmetries of nonlinear multidimensional reaction–diffusion systems: I *J. Phys. A: Math. Gen.* **33** 267–-282 Cherniha R and King J R 2003 Lie symmetries of nonlinear multidimensional reaction–diffusion systems: II *J. Phys. A: Math. Gen.* **36** 405–-425 Cherniha R and King J R 2006 Lie symmetries and conservation laws of non-linear multidimensional reaction–diffusion systems with variable diffusivities *IMA J. Appl. Math.* **71** 391–408 Pathera J, Winternitz P and Zassenhaus H 1975 Continuous subgroups of the fundamental groups of physics. I. General method and the Poincare group *J. Math. Phys.* **16** 1597–1615 Fushchych W I , Barannyk L F and Barannyk A F 1991 *Subgroup Analysis of the Galilei and Poincare Groups and Reduction of Nonlinear Equations* (Kyiv: Naukova Dumka) (in Russian) Pathera J and Winternitz P 1977 Subalgebras of real three- and four-diensional Lie algebras *J. Math. Phys.* **18** 1449–1455 Lyubov B Ya and Sobol’ E N 1983 Heat transfer processes in phase conversions under the action of intense energy fluxes *J. Eng. Phys. Thermophys.* **45** 1192–1205 Polyanin A F and Zaitsev V F 2003 *Handbook of Exact Solutions for Ordinary Differential Equations* (Boca Raton, FL: CRC Press)
---
abstract: 'We present a real-time stereo visual-inertial-SLAM system which is able to recover online, in real time, from complicated kidnap scenarios and failures. We propose to learn the whole-image-descriptor in a weakly supervised manner based on NetVLAD and decoupled convolutions. We analyse the training difficulties in using standard loss formulations, propose an allpairloss, and show its effect through extensive experiments. Compared to standard NetVLAD, our network takes an order of magnitude fewer computations and model parameters, and as a result runs about three times faster. We evaluate the representation power of our descriptor on standard datasets with precision-recall. Unlike previous loop detection methods, which have been evaluated only on fronto-parallel revisits, we evaluate the performance of our method against competing methods on scenarios involving large viewpoint difference. Finally, we present the fully functional system with relative pose computation and handling of multiple world co-ordinate systems, which is able to reduce odometry drift and recover from complicated kidnap scenarios and random odometry failures. We open source our fully functional system as an add-on for the popular VINS-Fusion.'
address: 'Robotics Institute, Hong Kong University of Science and Technology, Clear Water Bay Road, Kowloon, Hong Kong'
author:
- Manohar Kuse
- Shaojie Shen
bibliography:
- 'root.bib'
- 'lit\_review.bib'
title: 'Learning Whole-Image Descriptors for Real-time Loop Detection and Kidnap Recovery under Large Viewpoint Difference'
---

Kidnap Recovery, Loop Closure, VINS, Whole Image Descriptor.

Introduction
============

![Shows the corrected trajectories (different colors for different worlds) merged according to the inter-world loop candidates. Note that the merging occurs live (not offline), in real time, as the loop candidates are found. We also note that such cases cannot be handled by Qin *et al.* [@qin2018relocalization], which just merges with world-0 (the first world) and ignores any inter-world loop candidates not involving world-0. The *maplab* system [@schneider2018maplab] provides an online tool, *ROVIOLI*, which is essentially a visual-inertial odometry and localization front-end. Although it provides an offline console-based interface for multi-session map merging, it cannot identify kidnaps and recover from them online. This sequence involves multiple kidnaps lasting from 10s to 30s. The video for the live run is available at the link: <https://youtu.be/3YQF4_v7AEg>. Videos of live runs on more sequences are available through this link: <https://bit.ly/2IkEh3F>. []{data-label="fig:kidnap-screenshot"}](kidnap_implementation/screenshot3.jpg){width="0.95\columnwidth"}

Over the past decade, the SLAM (Simultaneous Localization and Mapping) community has made amazing progress towards improving the accuracy of odometry and building usable maps of the environment to assist robots in various planning tasks. Systems fusing visual and inertial information have been a contemporary theme, reducing drift to less than 0.5% of the trajectory length [@cadena2016past]. Identifying a revisit to a place presents an opportunity to reduce the drift further and also to recover from kidnap scenarios. General place recognition, however, remains an extremely challenging problem [@lowry2016visual] due to the myriad ways in which the visual appearance of a place varies.
In our own daily experience, humans describe places to fellow humans as a collection of objects, their color cues, their spatial locations and so on, thereby allowing them to disambiguate places even when approaching from a very different viewpoint. Ideally, the loop detection module should describe a scene in this way. Humans probably do not rely on corner features (a common technique in use for loop detection in existing SLAM systems) to identify a place; instead, we most likely represent the scene as a whole in a semantic sense. The proposed system builds on this motivation. In this work, we propose the use of a framework which learns whole-image descriptors without explicit human labeling to represent a scene in a high dimensional subspace for detecting place revisits. We lay special emphasis on the real-time performance and evaluation of the system in the context of visual-SLAM. Popular past works have considered loopclosure under fronto-parallel scenarios. However, place revisits can happen at substantial viewpoint difference. The underlying place recognition module in SLAM systems should therefore be able to identify place revisits occurring at widely different viewpoints. Past systems based on bag-of-visual-words (BOVW) are limited by the underlying low-level feature descriptors. The learned vocabulary (for BOVW) also has difficulty generalizing under adversaries like large viewpoint difference, noise, low light, changing exposure and low texture [@lowry2016visual]. The proposed method can learn a representation that generalizes well and can identify place revisits under non fronto-parallel viewpoints. We compare our method’s run-time performance with a popular bag-of-words approach, DBOW [@galvez2012dbow], and ibow-lcd by [@garcia2018ibow], along with recently proposed CNN based approaches for place recognition [@sunderhauf2015performance], [@merrill2018lightweight], [@antequera2017]. On real sequences, our method delivers a similar recognition performance to NetVLAD but at a 3X lower computational time and an order of magnitude fewer training variables. The major advantage of our system is that it has a high place recall rate and is thus able to recover live, in real-time, from long and chained kidnaps by maintaining multiple co-ordinate systems and their relative poses. Our paper is organized as follows. In Section \[sec:lit-review\], we start by reviewing approaches in the Visual Place Recognition (VPR) community and some recent loopclosure methods used in the Visual-SLAM community. Next, in Section \[sec:learning\_core\], we identify the issue of unstable learning in the original NetVLAD implementation, which uses the tripletloss, and we propose an allpairloss function to alleviate this issue. In Section \[sec:desc\_extraction\_comparison\] we present our implementation details for deployment as a visual-SLAM subsystem, which includes the place recognition module, the datastructure for handling multiple co-ordinate systems and recovery from kidnap. In Section \[sec:all\_experiments\], we present comparative experiments to this effect. Finally, we present our entire system, which is available as a pluggable module to the popular VINS-Fusion [@vins-mono]. Literature Review {#sec:lit-review} ================= We recognize that visual place recognition (VPR) and loopclosure detection in SLAM are related problems. Here we first review recent advances from the VPR community and then review state-of-the-art loop-closure methods.
In the context of VPR, Sunderhauf *et al.* [@sunderhauf2015performance] pioneered the use of ConvNet features. Compared to SeqSLAM [@milford2012seqslam] and FAB-MAP [@cummins2008fab; @cummins2011appearance], the use of features from a pretrained network results in better precision-recall performance on standard VPR datasets (Nordland, Gardens Point, St. Lucia and Campus). In their subsequent work, Sunderhauf *et al.* [@sunderhauf2015place] proposed to use region proposals and extract ConvNet features on each of the regions. Arandjelovic *et al.* [@arandjelovic2016netvlad] proposed a trainable feature aggregation layer which mimics the popular VLAD (Vector of Locally Aggregated Descriptors). While impressive performance was obtained, these methods rely on nearest neighbour search for retrieval. The image descriptors being very high dimensional (eg. 32K dimensional for [@arandjelovic2016netvlad], 64K for [@sunderhauf2015performance]), these methods apply various dimensionality reduction techniques to make nearest neighbour search feasible in reasonable time, at some cost to retrieval performance. WPCA was used by [@arandjelovic2016netvlad], which involves storage of a 32Kx4K matrix costing about 400 MB. More recently, Khaliq *et al.* [@khaliq2018holistic] proposed an approach which makes use of region-based features from a light-weight CNN architecture and combines them with VLAD aggregation. The approach from Chen *et al.* [@chen2018learning; @chen2017only] identifies key landmark regions directly from responses of a VGG16 network pretrained on an image classification task. For regional feature encoding, bag-of-words was employed on a separate training dataset to learn the codebook. The approach by Hou *et al.* [@hou2018bocnf] is very similar to [@chen2018learning]. A disadvantage of using pretrained models learned on ImageNet object classification, for example, is that they put more emphasis on objects rather than on the nature of the scene itself. Other works in this context include [@bai2018sequence; @gao2017unsupervised; @7298790; @7989366; @7989359; @babenko2015aggregating; @arandjelovic2014dislocation; @sattler2016large]. For a more detailed summary of the works in place recognition we direct the readers to surveys on place recognition / instance retrieval [@zheng2018sift; @lowry2016visual]. We summarize the literature in Table \[tab:place-recog-survey\]. Although CNN based techniques are considered state-of-the-art in retrieval and place recognition tasks, they are still disconnected from the overall SLAM and loop-closure detection problems. Commonly employed loop detection methods in state-of-the-art SLAM systems rely on sparse point feature descriptors like SIFT, SURF, ORB, BRIEF etc. for representation and an adaptation of BoVW for retrieval. While BoVW provides for scalable indexed retrieval, the performance of the system is limited by the underlying image representation. Such factors as the quantization in clustering when building the vocabulary, occlusions, image noise and repeated structures also affect the retrieval performance. FAB-MAP [@cummins2008fab; @cummins2011appearance], DBOW2 [@galvez2012dbow] and others [@mur2014fast; @bampis2017high] rely on a visual vocabulary which is trained offline, while recent methods like OVV [@nicosevici2012automatic], IBuILD [@khan2015ibuild], iBOW-LCD [@garcia2018ibow], RTAB-MAP [@labbe2013appearance] and others [@angeli2008fast; @garcia2014use; @zhang2016learning; @stumm2016_locationmodels] rely on an online-constructed visual vocabulary.
Authors have also made use of whole-image-descriptors in the loopclosure context [@Zhang2016RobustMS; @pepperell2014_allenv; @milford2012seqslam]. Works in the context of loopclosures in SLAM which make use of learned feature descriptors are: [@merrill2018lightweight; @gao2017unsupervised]. Merrill and Huang [@merrill2018lightweight] learned an auto-encoder from the common HOG descriptors for the whole image. Other miscellaneous works related to our localization system are [@kenshimov2017deep; @DBLP:journals/corr/FeiTS15; @sizikova_eccvworkshop2016; @cieslewski2017_distributed; @cieslewski2017_wholeimagedesc]. Some works have also built full SLAM systems with multi-session map merging capability. The *maplab* system [@schneider2018maplab] provides an online tool, *ROVIOLI*, which is essentially a visual-inertial odometry and localization front-end. Although it provides a console-based interface for offline multi-session map merging, it cannot identify kidnaps and recover from them online. This is the major distinguishing point of our system. Also, while the relocalization system by Tong *et al.* [@qin2018relocalization] can merge multiple sessions live, it only merges with the first co-ordinate frame; any loop connections between co-ordinate systems not involving the first co-ordinate system are ignored. Our system, on the other hand, is able to maintain multiple co-ordinate systems and their relative poses, set associations, and merge the trajectories online in real-time. We summarize our contributions:

- A fully functional system, as an add-on for VINS-Fusion, which uses whole-image descriptors for place representation and recovery from odometry drifts, kidnaps and failures live and in real-time. Our learning code [^1] and VINS-Fusion addon ROS package [^2] are open sourced.
- A novel cost function which deals with the gradient issue observed in standard NetVLAD training.
- Decoupled convolutions instead of standard convolutions, resulting in similar performance on a precision-recall basis but at a 3X lower computation cost and with about 5-7X fewer learnable parameters, making the approach ideally suited for real-time loopclosure problems.
- Squashing channels of CNN descriptors instead of explicit dimensionality reduction of the image descriptor, for scalability. Even a 512-D image descriptor gives reasonable performance.

| Representation | Retrieval | Method Description |
|----------------|-----------|--------------------|
| SPF-real | BOW | [@cummins2008fab; @cummins2011appearance] soft-real-time (can run @5-10hz) |
| SPF-binary | BOW | [@galvez2012dbow; @mur2014fast; @bampis2017high] real-time (10hz or more) |
| SPF-real | Inc-BOW | [@angeli2008fast; @nicosevici2012automatic; @labbe2013appearance] soft-real-time (1-5 Hz). [@tsintotas2018assigning; @bampis2018fast] make use of temporal information to form visual-word scene representations. |
| SPF-binary | Inc-BOW | [@garcia2014use; @zhang2016learning; @khan2015ibuild; @garcia2018ibow] soft-real-time to 1-5Hz processing |
| SPF | graph | [@stumm2016robust] |
| Pretrained-CNN | NN | [@sunderhauf2015performance; @arroyo2016fusion; @bai2018sequence] real-time descriptor computation (10-15hz); dimensionality reduction accomplished at 5hz; NN with 64K dim is really slow; NN after dimensionality reduction (~4000d) is about 5-15 hz. |
| Pretrained-CNN | BOW | [@hou2018bocnf; @chen2018learning; @chen2017only] |
| Custom-CNN | NN | [@arandjelovic2016netvlad; @7298790; @7989366; @7989359] real-time descriptor computation; dim-reduction and NN search are bottlenecks. |
| Custom-CNN with region-proposals | regionwise-NN | [@sunderhauf2015place] very slow representation vector computation. [@khaliq2018holistic] region descriptor encoding computation 2-3 Hz; reported matching times are several seconds. |
| Unsupervised Learning | NN | [@merrill2018lightweight; @gao2017unsupervised; @antequera2017] descriptors are not descriptive enough after dim-reduction; real-time descriptor computation. |
| Intensity | NN | [@milford2012seqslam; @pepperell2014_allenv] |
| agnostic | optimization | [@Zhang2016RobustMS; @latif2014online; @han2018sequence] generally slow. |

: A summary of the place recognition literature.[]{data-label="tab:place-recog-survey"}

Learning Place Representation {#sec:learning} ============================= \[sec:learning\_core\] In this section, we describe the training procedure. We start by reviewing VLAD and NetVLAD [@arandjelovic2016netvlad] (Sec. \[sec:netvlad\]). We see these methods as a way of performing pixelwise featuremap aggregation. Next we describe the learning issues associated with the triplet ranking loss function. To mitigate this issue we propose to use a novel all-pair loss function (Sec. \[sec:cost\_function\]). We provide an intuitive explanation along with experimental evidence on why the proposed all-pair loss function leads to faster and more stable training. Review of VLAD and NetVLAD {#sec:netvlad} -------------------------- ![Notations and computations for the whole-image descriptor. An image is fed into the CNN followed by the NetVLAD layer. We experiment with VGG16 and propose to use decoupled convolutions for their speed. Additionally, for dimensionality reduction we propose channel-squashing. Our fully convolutional network, with K=16, produces a 4096-dimensional image descriptor (without channel squashing) and a 512-dimensional image descriptor (with channel squashing). In terms of the number of floating point operations (FLOPs) for a 640x480 input image our proposed network is about 25X faster; real computational time is about 3X faster. Details in Sec. \[sec:learning\_core\]. []{data-label="fig:cnn_plus_netvlad"}](im/diagram_netvlad2.pdf){width="42.00000%"} Let ${\bf{h}}^{(I)}_{\bf{u}}(d)$ be the $d^{th}$ dimension ($d = 1 \cdots D$) of the image feature descriptors for image $I$ (width $W$ and height $H$) at pixel $\mathbf{u} := (i,j), i=1,\ldots W' ; j=1,\ldots H'$. These per-pixel CNN-feature descriptors are assigned to one of the K clusters (K is a fixed parameter; we used 16, 48, 64 in our experiments), with ${{\bf{c}}_k} \in \Re^{D}, k=1 \ldots K$ as cluster centers. The VLAD [@arandjelovic2016netvlad] representation is a $D \times K$ matrix, $\mathbf{V} = [ \mathbf{\vartheta}_1, \mathbf{\vartheta}_2, \ldots, \mathbf{\vartheta}_K ]$, defined through the sum of differences between the local descriptors and their assigned cluster centers, $${\mathbf{\vartheta}}_k = \sum_{\forall \mathbf{u}} a_k( {\mathbf{h}}^{(I)}_{\mathbf{u}} ) \times ( {\mathbf{h}}^{(I)}_{\mathbf{u}} - {\mathbf{c}}_k )$$ where $a_k(.)$ denotes a scalar membership indicator function of the descriptor ${\mathbf{h}}^{(I)}_{\mathbf{u}}$ in one of the K classes. Arandjelovic *et al.* [@arandjelovic2016netvlad] proposed to mimic VLAD in a CNN-based framework. In order that the cluster assignment function, $a_k(.)$, be differentiable and hence learnable with back propagation, they defined an approximation of the assignment function $a_k(.)$ using the softmax function.
For brevity, we write ${\mathbf{h}}^{(I)}_{\mathbf{u}}$ as $\mathbf{h}$: $$\label{eq:cluster_association} \hat{a}_k( \mathbf{h} ) = \frac{e^{-\alpha \, || \mathbf{h} - \mathbf{c}_k ||^2 }}{ \sum_{k'=1}^{K} e^{-\alpha \, || \mathbf{h} - \mathbf{c}_{k'} ||^2} } = \sigma( \mathbf{r} )_k$$ where $r_k=\mathbf{w}_k^{T} \mathbf{h} + b_k$, $\mathbf{w}_k = 2\alpha \mathbf{c}_k$, $b_k = -\alpha ||\mathbf{c}_k||^2$. $\sigma( \mathbf{r} )_k$ is the softmax function. $r_k$ can be computed with convolutions. $\mathbf{w}_k$, $b_k$ and $\mathbf{c}_k$ are learnable parameters in addition to the CNN parameters $\theta$. Fig. \[fig:cnn\_plus\_netvlad\] summarizes the computations and notations. Each of the vectors corresponding to the K clusters is individually unit normalized and then the whole vector is unit normalized. This is referred to in the literature as intra-normalization, which reduces the effect of burstiness of visual features [@jegou2009burstiness]. Thus, the scene descriptor $\mathbf{\eta}^{(I)}$, of size $D \cdot K$, is produced using a CNN and the NetVLAD layer, $$\mathbf{\eta}^{(I)} = \aleph( \{ {\mathbf{h}}^{(I)}_{\mathbf{u}} \} )$$ In general any base CNN can be used, and the NetVLAD mechanism can be thought of as aggregating the CNN pixel-wise descriptors. For our experiments, we use the VGG16 network [@simonyan2014very]. Additionally, following [@howard2017mobilenets], we propose to use the decoupled convolution, ie. a convolution layer is split into two layers, the first of which does only spatial convolution independently across all the input channels. The second of the two layers does a 1x1 convolution across the channels. This has been found to boost running time at a marginal loss of accuracy for object categorization tasks. We also propose to reduce the dimensions by squashing channels with learned 1x1 convolutions rather than reducing the dimensions of the image descriptor as has been done by the original NetVLAD paper. This eliminates the need to store the whitening matrix (as done by [@arandjelovic2016netvlad]). At run time it eliminates the need for a large matrix-vector multiplication for dimensionality reduction. Proposed All-Pair Loss Function {#sec:cost_function} ------------------------------- To learn the parameters of the CNN ($\theta$) and of the NetVLAD layer ($\mathbf{w}_k$, $b_k$ and $\mathbf{c}_k$), the cost function needs to be designed such that, in the dot product space, the $\eta$ corresponding to projections of the same scene (under different viewpoints) appear as nearby points (higher dot product value, nearer to 1.0). Let $\eta^{(I_q)}$ be the descriptor of the query image $I_q$. Similarly, let $\eta^{(P_i)}$ and $\eta^{(N_j)}$ be the descriptors of the $i^{th}$ positive and $j^{th}$ negative samples respectively. By positive sample, we refer to a scene which is the same as the query image scene but imaged from a different perspective. By negative sample, we refer to a scene which is not the same place as the query image. Let the notation $\langle \eta^{(a)}, \eta^{(b)}\rangle$ denote the dot product of two vectors. Following [@arandjelovic2016netvlad] we use multiple positive and negative samples $\{I_q, \{P_i\}_{i=1,\ldots,m}, \{N_j\}_{j=1,\ldots,n}\}$ per training sample, however with a novel all-pair loss function. We provide an intuitive explanation for the superiority of the proposed loss function over the standard triplet loss used by [@arandjelovic2016netvlad] for training. We also provide corroborative experimental evidence towards our claims.
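Before turning to the details of the loss, the aggregation just described can be summarized in a short NumPy sketch. This is our own illustration, not code from the released implementation: the array names and shapes are assumptions, and the distance-based softmax used here is the form of Eq. \[eq:cluster\_association\] rather than the equivalent 1x1-convolution-plus-softmax that the actual layer computes.

```python
import numpy as np

def netvlad_aggregate(h, c, alpha=1.0):
    """h: (N, D) pixelwise CNN descriptors for one image (N = W' * H').
    c: (K, D) cluster centers. Returns the (K*D,) whole-image descriptor."""
    # soft assignment \hat{a}_k(h) via a softmax over negative squared distances
    d2 = ((h[:, None, :] - c[None, :, :]) ** 2).sum(-1)   # (N, K)
    a = np.exp(-alpha * d2)
    a = a / a.sum(axis=1, keepdims=True)                  # softmax over the K clusters
    # residual aggregation: vartheta_k = sum_u a_k(h_u) * (h_u - c_k)
    V = np.einsum('nk,nkd->kd', a, h[:, None, :] - c[None, :, :])
    # intra-normalization (per cluster), then global L2 normalization
    V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    eta = V.reshape(-1)
    return eta / (np.linalg.norm(eta) + 1e-12)
```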
The commonly used triplet loss function can be rewritten in our notation as $L_{triplet-loss}$: $$\label{eq:netvlad_original} \sum_j \mbox{max}\big( 0, \langle \eta^{(I_q)}, \eta^{(N_j)}\rangle - \mbox{min}_i ( \langle \eta^{(I_q)}, \eta^{(P_i)} \rangle ) + \epsilon \big)$$ where $\epsilon$ is a constant margin. Note that [@arandjelovic2016netvlad] preferred to define the loss function in Euclidean space rather than the dot product space. Using either space is equivalent since, for unit vectors $\bf{a}$ and $\bf{b}$, the dot product $\langle \bf{a}, \bf{b} \rangle$ and the squared Euclidean distance $d(\bf{a},\bf{b})$ are related as $d(a,b) = 2 (1 - \langle a, b \rangle)$, ie. negatively correlated. This has been taken care of by the flipped sign in our optimization problem compared to the one used by Arandjelovic *et al.* [@arandjelovic2016netvlad]. This loss function penalizes the difference between the similarity of the query with each negative sample and the similarity of the query with its worst positive sample, ie. $\mbox{min}_i \langle \eta^{(I_q)}, \eta^{(P_i)} \rangle$. In an independent study, Bengio *et al.* [@bengio2009curriculum] observed that for faster convergence it is crucial to select triplets from the training dataset that violate the triplet constraint, ie. that result in as few zero-loss samples as possible. They demonstrated that these zero-loss scenarios lead to zero gradients, which in turn slow down the training. They suggested providing easier samples in early iterations and harder samples in later iterations to speed up the learning process. To this effect, Schroff *et al.* [@schroff2015facenet] proposed a strategy to select triplets using recent network checkpoints, every $n$ (say 1000) training iterations. Instead of using a complicated strategy as done by [@schroff2015facenet], we rely on a well-designed loss function which gives this effect. Thus, we define a novel loss function based on all-pair comparisons of positive and negative samples with the query image. The proposed loss function is relatively harder to satisfy (resulting in fewer zero-loss samples), hence its higher discriminatory power compared to the triplet loss (see Fig. \[fig:batch-zero-loss-compare\]). In order to learn highly discriminative descriptors, we want the similarity of the query sample $\eta^{(I_q)}$ with the positive samples to be higher than the similarity between the query sample and the negative samples. Let us consider two cases: a) $\langle \eta^{(I_q)}, \eta^{(N_j)} \rangle > \langle \eta^{(I_q)}, \eta^{(P_i)} \rangle$; b) $\langle \eta^{(I_q)}, \eta^{(N_j)} \rangle < \langle \eta^{(I_q)}, \eta^{(P_i)} \rangle$. Case-b is what we prefer, so we do not want to have a penalty (we want zero loss) for its occurrence. Case-a is the opposite of what we prefer, thus we add a penalty proportional to the magnitude of the dot-product difference to discourage this event. We propose to add such a penalty term for all the pairs where the condition does not hold. For effective learning we propose to compute the loss over every pair of positive and negative samples. The final loss function for one learning sample ( $\{I_q, \{P_i\}_{i=1,\ldots,m}, \{N_j\}_{j=1,\ldots,n}\}$ ) is given as $L_{proposed}$: $$\label{eq:cost_final} L = \sum_{i=1}^{m} \sum_{j=1}^{n} \mbox{max}( 0, \langle \eta^{(I_q)}, \eta^{(N_j)} \rangle - \langle \eta^{(I_q)}, \eta^{(P_i)} \rangle + \epsilon )$$ The motivation and the effect of the proposed loss function at the end of learning are summarized in Fig. \[fig:cost\_function\_schematic\].
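As a concrete illustration, Eq. \[eq:cost\_final\] can be computed in a few lines of NumPy for one training tuple. This is a minimal sketch under our own assumptions (unit-normalized descriptors already available as arrays, and an illustrative margin value); it is not taken from any released code. The broadcasted (m, n) matrix of margins below corresponds to the matrix notation derived in the following subsection.

```python
import numpy as np

def all_pair_loss(eta_q, eta_P, eta_N, eps=0.1):
    """All-pair loss for one training tuple.

    eta_q : (D,)   unit-normalized query descriptor
    eta_P : (m, D) descriptors of the m positive samples
    eta_N : (n, D) descriptors of the n negative samples
    eps   : margin (hyperparameter; 0.1 is an assumed value)
    """
    pos = eta_P @ eta_q                               # (m,) dot products <eta_q, eta_Pi>
    neg = eta_N @ eta_q                               # (n,) dot products <eta_q, eta_Nj>
    # penalize every (i, j) pair where a negative scores within eps of a positive
    margins = neg[None, :] - pos[:, None] + eps       # (m, n) matrix
    return np.maximum(0.0, margins).sum()
```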
![Illustration of the effect of learning with the proposed loss function. Descriptor of the query image ($\eta^{(I_q)}$, in blue). Descriptors of the positive set ($\eta^{(P_i)}$, in green) and negative set ($\eta^{(N_j)}$, in black). See also Eq. \[eq:cost\_final\].[]{data-label="fig:cost_function_schematic"}](im/lossfunction-diagrams.pdf) We further note that the proposed loss function is harder to satisfy (giving a positive penalty) compared to the loss function used by Arandjelovic *et al.* [@arandjelovic2016netvlad] (ie. Eq. \[eq:netvlad\_original\]). This has been experimentally observed (see Fig. \[fig:batch-zero-loss-compare\]). The major drawback of using an easy-to-satisfy penalty function is the vanishing gradients problem [@bengio2009curriculum], which slows the speed of learning. This is because a zero-loss sample results in a zero gradient during back-propagation. ### Loss function in Matrix Notation {#sec:matrix_notation} For fast and efficient training we express the loss function (Eq. \[eq:cost\_final\]) in matrix notation. We first define $\varDelta^{q}_{\bf{P}}$ to represent the dot products between the query descriptor $\eta_q$ and the descriptors of each of the positive samples, $$\varDelta^{q}_{\bf{P}} = \begin{bmatrix} \langle \eta^{(I_q)}, \eta^{(P_1)} \rangle \\ \vdots \\ \langle \eta^{(I_q)}, \eta^{(P_m)} \rangle \end{bmatrix}$$ Next, we define $\varDelta^{q}_{\bf{N}}$ to represent the dot products between the query descriptor and the descriptors of each of the negative samples, $$\varDelta^{q}_{\bf{N}} = \begin{bmatrix} \langle \eta^{(I_q)}, \eta^{(N_1)} \rangle \\ \vdots \\ \langle \eta^{(I_q)}, \eta^{(N_n)} \rangle \end{bmatrix}$$ Let $\mathbf{1}_n$ denote a column-vector of size $n$ and $\mathbf{1}_m$ a column-vector of size $m$, with all entries equal to 1. Also define $\mathbf{0}_{m \times n}$ as the null matrix of dimensions $m \times n$. The max(.) operator is a point-wise operator. Now we note that Eq. \[eq:cost\_final\] can be expressed in matrix notation as: $$\mathbf{L} = \mbox{ max }( \mathbf{0}_{m \times n}, \mathbf{1}_m (\varDelta^{q}_\mathbf{N})^T - \varDelta^{q}_{\mathbf{P}} \mathbf{1}_n^T + \epsilon \mathbf{1}_m \mathbf{1}_n^T )$$ Training Data ------------- In order to train the scene descriptor, the only requirement on the data is that we be able to draw positive sample images (views of the same physical scenes) and negative sample images (images of different scenes). One possible way is to bootstrap a video sequence with existing methods for loopclosure detection. Such a preprocessed sequence might not be useful for localization but can provide enough information to draw positive and negative samples for learning a whole-image-descriptor with the proposed method. Several walking, driving and drone videos etc. available on video sharing websites can be used for learning. Another way could be to use 3D mesh-models and render views with nearby virtual-camera locations to obtain positive samples. With the advent of crowd-sourced street scenes and the availability of services like Mapillary[^3], it is easily possible to assemble a much larger dataset for training. Fast and discriminative learning is even more crucial when making use of such larger training datasets. We defer this to future work. For our experiments to be comparable with existing methods, we use the Pittsburgh (Pitts250k) [@torii2013visual] dataset, which contains 250k images from Google’s street-view engine.
It provides multiple street-level panoramic images for about 5000 unique locations in Pittsburgh, Pennsylvania over several years. Multiple panoramas are available at a particular place ($\approx$ 10m apart), sampled approximately 30 degrees apart along the azimuth. Another similar dataset is the TokyoTM dataset [@torii2013visual]. Training Hyperparameters ------------------------ The CNN-learnable parameters are initialized with the Xavier initialization [@glorot2010understanding]. We initialize the NetVLAD parameters $\mathbf{c}_k$ as unit vectors drawn randomly from the surface of a hypersphere. $b_k$ and $\mathbf{w}_k$ are coupled with $\mathbf{c}_k$ at initialization. However, as learning progresses these variables are decoupled. We use the AdaDelta solver [@zeiler2012adadelta] with a batch size of $b=4$ (each batch with $m=6$ positive samples and $n=6$ negative samples). This configuration takes about 9GB of GPU memory during training. We stop the training at 1200 epochs. One epoch, for us, is 500 randomly drawn tuples from the entire dataset. The learning rate is reduced by a factor of 0.7 if the loss function does not decrease in 50 epochs, and the regularization constant is set to 0.001 (to make the regularization loss about 1% of the fitting loss). Data augmentation (rotation, affine scale, random cropping, random intensity variation) is used for robust learning, which we begin after 400 epochs. This explains the rise in the loss function values in our experiments in Fig. \[fig:triplet-vs-allpair-on-pw13-k16\], \[fig:triplet-vs-allpair-on-vggblock4-k16\], \[fig:triplet-vs-allpair-on-pw7-squash-chnls-k16\]. The output descriptor size is $K \times D$. K is the number of clusters in NetVLAD; we use K=16,32,64. D is the number of channels of the output CNN. For both VGG16 and the decoupled network it is 512. Some other authors, notably Arandjelovic [@arandjelovic2016netvlad] and Sunderhauf [@sunderhauf2015performance], have used whitening-PCA and Gaussian random projections respectively to reduce the dimensions of the image descriptor from about 64K to 4K. We suggest instead squashing the channels (to, say, 32) with learnable channelwise convolutions before feeding the pixelwise descriptors to the NetVLAD layer, rather than reducing the dimensionality of the image descriptor. We have experimentally compared the effect of this channel squashing in Fig. \[fig:pr-plot-mynt-coffee-shop-seq\] and Fig. \[fig:pr-plot-mynt-seng-seq\]. Deployment {#sec:deploy} ========== Our system, which is able to correct the drift of VIOs and recover from long kidnaps and system failures online in real-time, is available as an add-on to the popular VINS-Fusion. In general, our system can work with any VIO system. It is worth noting that the *maplab* system’s [@schneider2018maplab] *ROVIOLI* cannot identify kidnaps and recover from them online, although it can merge multiple sequences offline (when all the sequences are known). In this section we describe our multi-threaded software architecture (Fig. \[fig:deploy-full-system\]). We use the consumer-producer paradigm for real-time deployment of our system. We start by describing the image-level descriptor extraction and comparison. After that, we describe the details of pose computation for a loopcandidate image pair. Next we describe how our system is able to recover from kidnaps by keeping track of the disjoint co-ordinate systems and switching off the visual-inertial odometry when no features can be reliably tracked.
![System Overview[]{data-label="fig:deploy-full-system"}](kidnap_implementation/proposed_system.pdf){width="0.9\columnwidth"} Descriptor Extraction and Comparison {#sec:desc_extraction_comparison} ------------------------------------ We propose a system pluggable into the popular VINS-Fusion[^4] by Qin *et al.* [@vins-mono]. Our system receives keyframes from the visual-inertial odometry sub-system and produces loopclosure candidates. We use a naive store-and-compare strategy to find loopcandidates. The descriptors of all previous keyframes, ie. $\eta^{(I_{t_i})}, \ i=1,\ldots,t$, are stored, indexed by time. When a new keyframe arrives, say $\eta^{(I_{t+1})}$, we compute $\langle \eta^{(I_{t_i})}, \eta^{(I_{t+1})} \rangle \ i=1,\ldots,t-T$. $T$ is typically 150, ie. we ignore the latest 150 frames (or 15 seconds) for loopclosure candidates. These dot products are a measure of the likelihood of a loopclosure at each of the keyframe timestamps. We accept the loopclosure hypothesis if the query score is above a set threshold (fixed for all the sequences) and if three consecutive queries retrieve descriptors within six keyframes of the first of the three queries. In a real implementation, the threshold can be set a little lower and wrong hypotheses can be eliminated with geometric verification heuristics. We note that loopcandidates with large viewpoint difference provide a formidable challenge for tracked-feature matching between the two views. In this work, for comparison of descriptor precision-recall, we do not perform any geometric verification. Our naive comparison (a matrix-vector multiplication) takes about 50 ms for comparison with 4000 keyframes (about a 10 min sequence) on a desktop CPU with an image descriptor dimension of 8192, and about 10ms for a 512-dimensional image descriptor. While the comparison times grow unbounded as the number of keyframes increases, the objective of this paper is to demonstrate the representation power of learned whole-image descriptors over the traditional BOVW-on-sparse-feature-descriptor loopclosure detection framework and the recently proposed CNN-based image descriptors, in terms of detection under large viewpoint difference. Dealing with scalability could be a future research direction. In our opinion scalability can be achieved by sophisticated product quantization approaches similar to Johnson *et al.* [@faiss2017], or by maintaining a marginalized set of scene descriptors, along with scene object labels, and performing the dot-product comparison on this smaller subset. Feature Matching and Pose Computation ------------------------------------- In this section we describe the task of metric-scale relative pose computation given a loopcandidate pair. For the computation of reliable pose estimates we make use of the GMS-Matcher [@bian2017gms] as a robust correspondence engine. The GMS-matcher uses a simple grid-based voting scheme for finding fast correspondences. This matcher provides reliable matches even under large viewpoint difference. This can be attributed to the grid-based voting scheme that it implements, which has the effect of eliminating point correspondences if nearby points in one view do not go to nearby points in the second view. Computationally it takes about 150-200 ms per image pair (image size 640x480). Since we only need to compute these correspondences on loopcandidates, and also since we use a multi-threaded implementation, this higher computational time does not stall our computational pipeline.
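For concreteness, the store-and-compare retrieval of Sec. \[sec:desc\_extraction\_comparison\] can be sketched as follows. This is a simplification under our own assumptions (the class, buffer layout, the threshold value, and the way the three-consecutive-queries-within-six-keyframes rule is coded are ours), not the released implementation.

```python
import numpy as np

class LoopCandidateFinder:
    """Naive store-and-compare retrieval over unit-normalized whole-image descriptors."""

    def __init__(self, threshold=0.8, ignore_latest=150):
        self.db = []                  # descriptors of past keyframes, in time order
        self.threshold = threshold    # dot-product acceptance threshold (assumed value)
        self.ignore_latest = ignore_latest
        self.recent_hits = []         # indices retrieved by the last consecutive queries

    def query(self, eta_new):
        self.db.append(eta_new)
        usable = len(self.db) - 1 - self.ignore_latest   # skip the newest keyframes
        if usable <= 0:
            return None
        scores = np.asarray(self.db[:usable]) @ eta_new  # dot products with old keyframes
        best = int(np.argmax(scores))
        if scores[best] < self.threshold:
            self.recent_hits = []                        # consecutiveness broken
            return None
        self.recent_hits.append(best)
        self.recent_hits = self.recent_hits[-3:]
        # accept only if three consecutive queries retrieve nearby keyframes
        if len(self.recent_hits) == 3 and max(self.recent_hits) - min(self.recent_hits) <= 6:
            return best
        return None
```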
Note that in the event of loop detections under large viewpoint difference, the tracked features in a SLAM system do not provide enough information for feature matching between the two views. Some other related recent works on CNN-based feature correspondence include InLoc [@taira2018inloc] and the Neighbourhood Consensus Network [@rocco2018neighbourhood]. Although these approaches present impressive results, they involve the storage of all-pair dense pixel-level features, which currently cannot be accomplished in realtime. Once the feature correspondences are produced we make use of the direct perspective-n-point (PNP) method by Hesch and Roumeliotis [@hesch2011direct] for pose computation [^5]. We reject a loopcandidate if an insufficient number of correspondences is produced (we use a threshold of 200 correspondences). We use random sample consensus (RANSAC) to make the PNP robust to spurious feature correspondences that may occasionally occur. For about 10 RANSAC iterations it takes on average 5 ms to compute the pose from approximately 2000 feature correspondences. We make use of stereo geometry for computing 3D points at loopclosures. In cases where the queue size is small and computational resources are sufficient, we also compute the pose by the iterative closest point (ICP) method in addition to the PNP. The 3D points for both images of the loopcandidate are obtained from stereo geometry in this case. If the pose computations from the two methods are not consistent, we reject the loopcandidate. Kidnap Detection and Recovery ----------------------------- Our system also includes a kidnap recovery mechanism. By kidnap we refer to the camera’s view being blocked and the camera teleported to another location several 10s to 100s of meters away in 10s to 100s of seconds. The teleported location may or may not be a previously seen location. We use a simple criterion, the current number of tracked features falling to a very low value (fewer than 10, say), to determine if the camera system has been kidnapped. Once we determine that the camera system has been kidnapped we stop the visual-inertial odometry subsystem. When a sufficient number of features is again being tracked, we reinitialize the odometry/sensor fusion system. It is to be noted that in such a case it starts with a new co-ordinate reference. Henceforth we refer to the new co-ordinate systems as world-0 ($w_0$), world-1 ($w_1$), world-2 ($w_2$), and world-k ($w_k$) in general. We give each node $n$ a superscript to identify the world it is in. For instance, $n_i^{(k)}$ means the $i^{th}$ node ($i$ is the global index of the node) is in the $k^{th}$ world. The odometry pose of the node is denoted as ${}^{(k)}\mathbf{T}_i$. An incoming loopcandidate pair can be categorized into two kinds (refer to figure \[fig:disjoint-set-ds\] for a visual explanation): a) intra-world (eg. $n_i^{(k)} \leftrightarrow n_j^{(k)}$) and b) inter-world ($n_i^{(k)} \leftrightarrow n_j^{(k')}$). ![The solid black squares represent the nodes in the pose graph. Arrows show the loopcandidates. The white rectangles show each of the individual worlds. The colored rectangular enclosures are the worlds belonging to the same set. []{data-label="fig:disjoint-set-ds"}](kidnap_implementation/disjoint_set.pdf) ### Direct Pose Computation Between $w_k$ and $w_{k'}$ {#sec:direct-worldpose-infer} The inter-world loopcandidates $n_i^{(k)} \leftrightarrow n_j^{(k')}$ can be used to infer the relative pose between the co-ordinate systems $w_k$ and $w_{k'}$.
In this section, we describe the computation of the relative pose between the worlds $k$ and $k'$, ie. ${}^{(k)}\mathbf{T}_{(k')}$, from the relative pose between the two nodes (${}^{i}\mathbf{T}_j$) and the odometry poses of the nodes in their respective worlds, ie. ${}^{(k)}\mathbf{T}_i$ and ${}^{(k')}\mathbf{T}_j$ respectively: $${}^{(k)}\mathbf{T}_{(k')} = {}^{(k)}\mathbf{T}_i \times {}^{i}\mathbf{T}_j \times ( {}^{(k')}\mathbf{T}_j )^{-1}$$ ### Indirect Pose Computation Between $w_k$ and $w_{k_1}$ {#sec:indirect-worldpose-infer} It is easy to see that if we have two inter-world loopcandidates like $n_i^{(k)} \leftrightarrow n_j^{(k')}$ and $n_{i_1}^{(k_1)} \leftrightarrow n_{j_1}^{(k')}$, the three worlds $w_k$, $w_{k'}$ and $w_{k_1}$ are said to be in the same set. It is also possible to indirectly infer the relative pose between the worlds $w_k$ and $w_{k_1}$ even though no loopcandidate exists between these two worlds. This estimate is needed for correctly initializing the poses to solve the pose graph optimization problem. We make use of the disjoint-set data structure [@cormen2009introduction] to keep track of which worlds are in the same set. The advantage of the disjoint-set data structure is that it provides for constant-time set union and sub-linear-time set-association queries. Each world starts in its own set; every time we encounter an inter-world loop pair we merge the sets of the two worlds into a single set. When we assert that two different worlds are in the same set, we imply that a relative transform between these two worlds can be determined. However, a loopcandidate between these two worlds may or may not exist. In case no loopcandidate exists between the two worlds but the two worlds are in the same set, the relative pose between the worlds can be determined by finding a graph path between the two worlds and chaining the relative pose estimates between the adjacent world pairs in the path. In a general scenario, this can be accomplished by constructing a directed graph of the worlds, with nodes being the worlds in the same set and edges being the relative poses between pairs of worlds, ie. ${}^{(k)}\mathbf{T}_{(k')}$. A breadth-first search on this graph is sufficient to determine an estimate of the relative pose between arbitrary pairs of worlds by chaining the relative poses along the path generated by the graph search. Implementation Details ---------------------- Our full system employs multiple threads. It uses the producer-consumer programming paradigm for managing and processing the data. In our system, thread-1 produces image descriptors for all incoming keyframe images. Thread-2 consumes the image descriptors to produce candidate matches. Thread-3 consumes the candidate matches to produce feature correspondences. Thread-4 uses the feature correspondences to produce the relative pose ${}^{i}\mathbf{T}_j$. Thread-5 monitors the number of tracked features to know if the system has been kidnapped. Thread-6 uses the loopcandidates and their relative poses to construct the disjoint-set data structure and maintain the relative poses between multiple co-ordinate systems, as detailed in Sec. \[sec:direct-worldpose-infer\] and \[sec:indirect-worldpose-infer\]. Thread-7 incrementally constructs and solves the pose graph optimization problem while carefully initializing the poses and making use of the poses between the worlds. Our pose graph solver is based upon the work of Sunderhauf *et al.* [@sunderhauf2012switchable]. A separate thread is used for visualizing the poses.
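As an illustration of the world bookkeeping performed by thread-6, the following is a minimal sketch (in Python, with 4x4 homogeneous transforms as NumPy arrays) of the mechanism described in Sec. \[sec:direct-worldpose-infer\] and \[sec:indirect-worldpose-infer\]: union-find for set membership, plus a graph of direct inter-world poses chained by breadth-first search for the indirect case. The class and variable names are ours, not those of the released package.

```python
import numpy as np
from collections import deque

class WorldGraph:
    """Disjoint-set membership plus BFS chaining of inter-world relative poses."""

    def __init__(self):
        self.parent = {}   # union-find parent pointers
        self.edges = {}    # world k -> list of (world k', pose of k' expressed in k)

    def _find(self, w):
        self.parent.setdefault(w, w)
        while self.parent[w] != w:
            self.parent[w] = self.parent[self.parent[w]]   # path halving
            w = self.parent[w]
        return w

    def add_interworld_loop(self, k, T_k_i, kp, T_kp_j, T_i_j):
        # direct relative pose between worlds:  (k)T_(k') = (k)T_i * (i)T_j * ((k')T_j)^-1
        T_k_kp = T_k_i @ T_i_j @ np.linalg.inv(T_kp_j)
        self.edges.setdefault(k, []).append((kp, T_k_kp))
        self.edges.setdefault(kp, []).append((k, np.linalg.inv(T_k_kp)))
        self.parent[self._find(k)] = self._find(kp)        # set union

    def relative_pose(self, k, kp):
        """Chain direct estimates along a BFS path; None if the worlds are in different sets."""
        if self._find(k) != self._find(kp):
            return None
        queue, seen = deque([(k, np.eye(4))]), {k}
        while queue:
            w, T_k_w = queue.popleft()
            if w == kp:
                return T_k_w
            for nb, T_w_nb in self.edges.get(w, []):
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, T_k_w @ T_w_nb))
        return None
```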
Even though we use 7 threads, the effective load factor on the system is about 2.0. This means about two cores are occupied by our system (this does not include the processing for the VINS-Fusion odometry system). We demonstrate the working of our full system (see Fig. \[fig:kidnap-screenshot\]). It is worth noting that, in addition to reducing the drift on account of loopclosures, our implementation can reliably identify and recover from kidnap scenarios lasting longer than a minute, online and in realtime. We attribute such robustness to the high recall rates of the NetVLAD-based image descriptor engine. The results regarding the operation of the full system can be found in the attached video. We record our own data for demonstrating the kidnap cases. Some of the kidnap cases are labelled hard; these require the indirect inference, which is not available in previous SLAM systems, including the relocalization system from Qin *et al.* [@qin2018relocalization]. Experiments {#sec:all_experiments} =========== In this section we describe our experiments. We evaluate the effect of using the allpair loss function compared to the commonly used triplet loss function (Sec. \[sec:exp-effect-of-allpair-loss\]), while keeping the same backend CNNs. Next, the effect of the use of decoupled convolutions on running time and memory consumption is tabulated (Sec. \[sec:exp-flops\]). In Sec. \[sec:exp-precision-recall-vpr-seq\] we evaluate the precision-recall performance of the proposed algorithm against other competing methods from the SLAM community and the visual place recognition community. Finally, in Sec. \[sec:exp-online-loop-detections\] we evaluate the performance of the proposed method on real-world SLAM sequences captured under challenging conditions, especially revisits occurring with non-fronto-parallel configurations and in-plane rotations. For a demonstration of our full system on real-world sequences refer to the video attachment. Evaluation Metrics for Loss Function {#sec:exp-effect-of-allpair-loss} ------------------------------------ We evaluate the effect of the proposed loss function on NetVLAD compared to the original triplet loss which was used in [@arandjelovic2016netvlad]. We evaluate our proposed loss function using a) relative loss declines and b) the number of correctly identified pairs from the validation tuples. We also plot the variance of the dot products of the positive samples amongst themselves. For validation we use the Pitts30K dataset and for training we use the TokyoTM dataset. Similar to the training tuples, a validation tuple contains a query image, nN ($=6$) negative samples and nP ($=6$) positive samples. For evaluation, we propose to count the number of correctly identified image pairs, ie. pairs which were actually similar to the query images (ground truth) and were identified as similar based on the image descriptor dot-product scores. Ideally, the evaluation metric should be based on the SLAM sequences; however, it is infeasible to evaluate on a SLAM sequence every, say, 20 epochs, so we stick to this workaround. We plot these metrics for the training data and the validation data as the iterations progress. See Fig. \[fig:triplet-vs-allpair-on-pw13-k16\], \[fig:triplet-vs-allpair-on-vggblock4-k16\], \[fig:triplet-vs-allpair-on-pw7-squash-chnls-k16\] for the plots.
A summary of the observations:

- Using the same backend convolutional network, the same parameters of the NetVLAD layer, and the same learning hyperparameters, the network trained with the proposed all-pair loss function performs better as evaluated against the validation metric, the count of pairs correctly identified.
- It can also be inferred that the gradients obtained from the use of the proposed all-pair loss function are more stable, hence the faster convergence.
- Our all-pair loss function was found to perform better whether using decoupled convolutions, decoupled convolutions with channel squashing, or the original VGG16 network.

### Number of Zero Loss Tuples In this experiment, we train with a batch size of 24. Since this does not fit in the GPU memory, we use gradient accumulation. We plot the number of zero-loss samples as iterations progress in Fig. \[fig:batch-zero-loss-compare\]. When using the triplet loss, we get a larger number of zero-loss samples. This results in zero-gradient updates and hence slow learning compared to the proposed allpairloss. This can be attributed to the allpairloss function being harder to satisfy, resulting in better gradients during training. ![The number of batches with zero loss as iterations progress for learning with the proposed cost function (in blue, Eq. \[eq:cost\_final\]) compared to using the triplet ranking loss [@arandjelovic2016netvlad] (in red, Eq. \[eq:netvlad\_original\]). This experiment used a batch size of 24 with gradient accumulation. Having a higher count of zero-loss samples is detrimental to learning as it leads to zero-valued gradients. Best viewed in color.[]{data-label="fig:batch-zero-loss-compare"}](exp_iros2018/relative_learning_energy/figure_2.pdf) ### Spreads of Positive and Negative Samples As experimentally observed in Fig. \[fig:pos-set-dev-analysis\], the use of the proposed allpair loss function results in a more discriminative image descriptor compared to the network trained with the triplet loss. We observe a lower spread amongst the positive samples and a larger separation between positive and negative samples. This has the effect of making the deployment as a loopclosure module less sensitive to slight changes in dot-product thresholds. ![Showing spreads ($\mu \pm \sigma$) of $\langle \eta_q, \eta_{P_i} \rangle$ (in green) and spreads of $\langle \eta_q, \eta_{N_i} \rangle$ (in red) as the learning progresses. Fig. \[fig:pos-set-dev-analysis\] (top) corresponds to [@arandjelovic2016netvlad], with the triplet loss function. Fig. \[fig:pos-set-dev-analysis\] (bottom) corresponds to the proposed allpair loss function. We observe a lower spread amongst positive samples and a larger separation between positive and negative samples.[]{data-label="fig:pos-set-dev-analysis"}](exp_iros2018/relative_learning_energy/cropped_figure_3.pdf) ### TripletLoss vs AllpairLoss on Decoupled Net We compare the effect of the different loss functions when using the decoupled network. The network trained with the allpair loss is able to correctly identify almost 60% of the pairs from the tuples drawn from the validation data, compared to the network trained with the tripletloss, which is able to identify about 35% correctly under identical conditions. See Fig. \[fig:triplet-vs-allpair-on-pw13-k16\].
![image](exp_dec_2018/tensorboard/pw13_k16/1.pdf){width="40.00000%"} ![image](exp_dec_2018/tensorboard/pw13_k16/2.pdf){width="40.00000%"} ![image](exp_dec_2018/tensorboard/pw13_k16/3.pdf){width="40.00000%"} ![image](exp_dec_2018/tensorboard/pw13_k16/4.pdf){width="40.00000%"} ### TripletLoss vs AllpairLoss on VGG16 Net Even when trained with VGG16 as the backend CNN, the allpairloss performed better than the tripletloss under identical training conditions. See Fig. \[fig:triplet-vs-allpair-on-vggblock4-k16\]. ![image](exp_dec_2018/tensorboard/vggblock4_k16/1.pdf){width="40.00000%"} ![image](exp_dec_2018/tensorboard/vggblock4_k16/2.pdf){width="40.00000%"} ![image](exp_dec_2018/tensorboard/vggblock4_k16/3.pdf){width="40.00000%"} ![image](exp_dec_2018/tensorboard/vggblock4_k16/4.pdf){width="40.00000%"} ### TripletLoss vs AllpairLoss on Decoupled Net with Channel Squashing When using the decoupled network with channel squashing (for dimensionality reduction) we observe better performance when training with the allpairloss. In this configuration the descriptor size is just 512 per image. The training was arguably more unstable in this case (we observe oscillations). Possibly with a lower learning rate this effect can be reduced. See Fig. \[fig:triplet-vs-allpair-on-pw7-squash-chnls-k16\]. ![image](exp_dec_2018/tensorboard/pw7_quash_chnls_k16/1.pdf){width="30.00000%"} ![image](exp_dec_2018/tensorboard/pw7_quash_chnls_k16/2.pdf){width="30.00000%"} ![image](exp_dec_2018/tensorboard/pw7_quash_chnls_k16/3.pdf){width="30.00000%"} ![image](exp_dec_2018/tensorboard/vgg16_block5_k16_k32_k64/com__1.pdf){width="30.00000%"} ![image](exp_dec_2018/tensorboard/vgg16_block5_k16_k32_k64/com__2.pdf){width="30.00000%"} ![image](exp_dec_2018/tensorboard/vgg16_block5_k16_k32_k64/com__3.pdf){width="30.00000%"} Running Times {#sec:exp-flops} ------------- Currently there is rapid progress in the compute capabilities of GPUs. Under such circumstances it is more appropriate to report the number of floating point operations (FLOPs) for the networks rather than absolute running times in seconds (or milliseconds). We tabulate in Table \[tabular:gflops-data\] the Giga-FLOPs of the networks under various parameter settings. For reference, the forward pass with the VGG base network can be computed in about 40-50ms and with the decoupled nets in about 10-15ms for 640x480 3-channel images on a Titan X (Pascal). VGG16 with $K=64$ is the recommended configuration from Arandjelovic *et al.* [@arandjelovic2016netvlad], which results in a 32K-dimensional descriptor that is reduced to 4096 dimensions by a linear transformation. This linear transformation takes about 400 MB of memory. On the other hand, our proposed network, which uses decoupled convolutions as the base CNN with channel squashing and $K=16$, results in a 512-dimensional descriptor (4096-dimensional if not using channel squashing). It is able to run 3-4x faster than NetVLAD [@arandjelovic2016netvlad] while having about 20x fewer floating point operations for a 640x480 image and 5x fewer learnable parameters. See Table \[tabular:gflops-data\]. It is worth noting that most of the computational load is in the computation of the per-pixel descriptors; the NetVLAD layer itself takes negligible computation compared to the base CNN. For training the networks, we use an Intel i7-6800 CPU with a Titan X (Pascal), 12GB GPU RAM. It takes about 1 second per iteration for the forward and backward pass. Note that one iteration with batchsize 4 involves 52 images ($(6+6+1) \times 4$).
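The gap between standard and decoupled convolutions in Table \[tabular:gflops-data\] can be sanity-checked with a back-of-the-envelope per-layer FLOP count. The sketch below is our own illustration (counting one multiply-add as two FLOPs, with an example feature-map size chosen by us), not code or numbers from the paper; the full-network ratios in the table also reflect differences in layer counts and channel widths.

```python
def conv_flops(h, w, c_in, c_out, k=3):
    """FLOPs of a standard k x k convolution on an h x w feature map (multiply-add = 2 FLOPs)."""
    return 2 * h * w * c_in * c_out * k * k

def decoupled_conv_flops(h, w, c_in, c_out, k=3):
    """Depthwise k x k convolution followed by a 1x1 (pointwise) convolution across channels."""
    depthwise = 2 * h * w * c_in * k * k
    pointwise = 2 * h * w * c_in * c_out
    return depthwise + pointwise

# Example: one 3x3 layer with 512 input and 512 output channels on a 40x30 feature map
std = conv_flops(40, 30, 512, 512)
dec = decoupled_conv_flops(40, 30, 512, 512)
print(std / dec)   # roughly 8-9x fewer FLOPs for the decoupled layer
```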
| CNN Layer | \# L | D-Size | Model (MB) | Runtime mem. (MB), 320x240 | Runtime mem. (MB), 640x480 | GFLOPs, 320x240 | GFLOPs, 640x480 |
|-----------|------|--------|------------|------------------|------------------|---------|---------|
| **VGG16\_K16** | | | | | | | |
| block5\_pool | 14.7M | 8192 | 56.19 | 234.94 | 767.56 | 47.04 | 188.08 |
| block4\_pool | 7.6M | 8192 | 29.19 | 174.06 | 725.32 | 47.05 | 188.117 |
| block3\_pool | 1.7M | 4096 | 6.65 | 165.47 | 641.86 | 47.05 | 188.167 |
| **VGG16\_K64** | | | | | | | |
| block5\_pool | 14.78M | 32768 | 56.38 | 234.32 | 711.46 | 47.05 | 188.11 |
| block4\_pool | 7.70M | 32768 | 29.38 | 203.53 | 696.23 | 47.08 | 188.26 |
| block3\_pool | 1.76M | 16384 | 6.75 | 158.86 | 635.26 | 47.13 | 188.46 |
| **decoup\_K16** | | | | | | | |
| pw13 | 3.2M | 8192 | 12 | 197.97 | 792.21 | 1.742 | 7.01 |
| pw10 | 1.36M | 8192 | 5 | 189.1 | 734.48 | 1.749 | 7.03 |
| pw7 | 554K | 4096 | 2 | 164.9 | 652.25 | 1.749 | 7.04 |
| **decoup\_K16\_r** | | | | | | | |
| pw13 | 3.5M | 512 | 12 | 211.58 | 793.46 | 1.742 | 7.01 |
| pw10 | 1.49M | 512 | 5 | 193.86 | 739.73 | 1.749 | 7.03 |
| pw7 | 686K | 512 | 2 | 167.67 | 657.97 | 1.749 | 7.04 |
| **decoup\_K64** | | | | | | | |
| pw13 | 3.33M | 32768 | 12 | 210.98 | 805.21 | 1.76 | 7.08 |
| pw10 | 1.40M | 32768 | 5 | 189.38 | 734.58 | 1.78 | 7.18 |
| pw7 | 600K | 16384 | 2 | 162.86 | 652.35 | 1.78 | 7.186 |

: Tabulation of run-time memory requirements, learnable parameters (\# L), descriptor size (D-Size), model size in megabytes, and giga floating point operations (GFLOPs) for various configurations. We note that $block5\_pool$ for the VGG16 network is equal in depth to $pw13$ for the decoupled network. $block4\_pool$ and pw10 have equal depth; $block3\_pool$ and pw7 have equal depth. K (eg. K16, K64) refers to the number of clusters in the NetVLAD layer. We report data for input image sizes 320x240 and 640x480. We conclude that our proposed decoupled network is 20X faster computationally, with an order of magnitude fewer parameters, while delivering about the same performance as the original NetVLAD. Our squashed-channel network ‘decoup\_K16\_r‘ gives a descriptor size of 512 with about 5% additional forward-pass memory and a 2% increase in parameter size, with a hardly noticeable increase in computation time. NetVLAD [@arandjelovic2016netvlad] uses a whitening PCA for reducing descriptor dimensionality, which needs to store a matrix of size 32Kx4K that takes about 400 MB.[]{data-label="tabular:gflops-data"}

Precision-recall Comparison {#sec:exp-precision-recall-vpr-seq} --------------------------- We evaluate the performance on the following datasets: a) the **GardensPoint** dataset, b) the **CampusLoop** dataset [@merrill2018lightweight], and c) our **CampusConcourse** dataset. Each of the datasets contains two sequences, ‘live‘ and ‘memory‘. Note that every image in the live sequence has a corresponding image in the memory sequence. For evaluation, we load the memory sequence into the database and compare this database with each of the images in the live sequence using a basic nearest neighbour search. Further, we also evaluate our performance on the Mapillary Berlin streetview dataset [@sunderhauf2015place]: i) **berlin-kundamm**, ii) **berlin-halenseeestrasse** and iii) **berlin-A100**, as has been common amongst the visual place recognition community. We compare the proposed method with the recently proposed learning-based loop detection method **CALC** by Merrill and Huang [@merrill2018lightweight]. Additionally, we also compare with **DBOW** [@galvez2012dbow] (bag-of-visual-words).
We evaluate against prominent approaches from the visual place recognition community: **AlexNet** by Sunderhauf *et al.* [@sunderhauf2015performance]; **LA-Net**, by Lopez-Antequera *et al.* [@antequera2017]; the **original NetVLAD** [@arandjelovic2016netvlad]; and Chen *et al.* [@chen2017only]. The main idea of this evaluation is to gauge the recall rates and discriminative performance of the various methods. We take a match as correctly identified if its index is within six indices of the query’s index. For our evaluation, we use the total number of positive matches as the length of the sequence, since every image in the live sequence has a correspondence in the memory sequence. The accepted matches are those which satisfy the loop hypothesis. The precision-recall curves are formed by sweeping the threshold through its full range. The results presented here differ from those presented in [@merrill2018lightweight], as it is not exactly clear how recall=1 was achieved by them, how DBoW was used to generate these results, and what heuristics, if any, were used to identify false positives. As noted by Merrill and Huang [@merrill2018lightweight] and by Sunderhauf *et al.* [@sunderhauf2018limits], superior precision-recall does not fully prove the superiority of a method in real loop-closure for a SLAM system. Factors such as repeated objects in scenes, similar-looking scenes, invariance to rotation & scale, and computation time are important when considering a place recognition system for SLAM’s loop closure. Another issue with such an evaluation is that it cannot gauge a method’s ability to return ’no matches’ in case the query scene is not found in the database. Also, all these datasets are rather small (about 80-200 frames) and we cannot evaluate the generalizability of the scene description by each of the methods. For example, a dataset with multiple similar-looking scenes is needed to thoroughly evaluate a method’s performance. Rotation and scale invariance of a method cannot be evaluated with these datasets. So, for a better perspective on the usability of the methods, we also evaluate them on live SLAM sequences with manually marked loopclosures (details in section \[sec:exp-online-loop-detections\]). ![image](post_iros2018_exp/precision_recall/pr_campusloop.pdf){width="30.00000%"} ![image](post_iros2018_exp/precision_recall/pr_gardenspoint.pdf){width="30.00000%"} ![image](post_iros2018_exp/precision_recall/pr_uniconcourse.pdf){width="30.00000%"} ### Walking-apart Sequence For precision-recall curves see Fig. \[fig:precision\_recall\_memory\_live\_sequences\]. The *CampusLoop sequence* contains appearance variation due to changing weather conditions. Our method does not explicitly deal with this kind of variation, as it is primarily based on color cues. The method CALC, for example, is based on scene structure. Our method delivers comparable performance on this sequence. For the other two testing sequences, viz. *GardenPoint* and *UniConcourse*, our method performs better than previous methods. This is attributed to the fact that the descriptors learned by our method are able to generalize well to identifying place revisits under large viewpoint difference and rotational variation, which is the case in these two sequences. When compared to NetVLAD [@arandjelovic2016netvlad], which uses the triplet loss and the VGG network, we observe a slight boost in recall rates.
Since these sequences are very small, the higher capacity of the proposed method is not observable in this case. ### AUC Performance on the Mapillary Dataset We evaluate our method against other state-of-the-art methods using the area under the curve (AUC) of the precision-recall plot on the Mapillary Berlin-streets dataset in Fig. \[fig:mappilary-AUC\]. Each of the three test sequences contains two sets of images. Note that each image in the second of the two sets has a pre-image in the first set. These datasets are 80-200 frames each. Although considerable viewpoint and lighting variation exists between the two sets, there is no rotation variation. We test our method with a VGG16 backend CNN and with a decoupled-net backend CNN. We compare with FAB-MAP [@cummins2011appearance], SEQSLAM [@milford2012seqslam], Chen *et al.* [@chen2017only] and NetVLAD [@arandjelovic2016netvlad]. In this case, Chen *et al.*’s method outperforms the others. It is worth noting that Chen’s method [@chen2017only] makes use of region proposals and is not a real-time (or near real-time) method. ![image](exp_dec_2018/mappilary_precision_recall/A100.pdf){width="30.00000%"} ![image](exp_dec_2018/mappilary_precision_recall/halenseestrasse.pdf){width="30.00000%"} ![image](exp_dec_2018/mappilary_precision_recall/kundamm.pdf){width="30.00000%"} Online Loop Detections {#sec:exp-online-loop-detections} ---------------------- We compare the performance of the descriptors produced by the proposed method to some of the relevant methods on real-world sequences. We introduce three sequences and refer to them as the ’Live Walks Dataset’; each is about 10min of walking. The main differentiating point compared to the standard KITTI dataset is that ours contains adversaries like revisits under large viewpoint difference, moving objects (people), noise, lighting changes and in-plane rotation, to name a few. Two of them were captured with a grayscale camera and one with a color camera. We also provide manually marked ground-truth labels for loop detections, along with the odometry of the poses for visualization. The odometry was not used for identifying loops. Note that for a real sequence with N keyframes there are a little less than $N^2$ pairs of loop-frames. A human annotator was shown every pair and asked to mark the pairs which were the same place. Using these manual annotations, there are 3 kinds of pairs: a) pairs not detected by the algorithm, ie. missed pairs; b) wrongly detected pairs, ie. pairs which were in reality different places but the algorithm identified them as the same place; and c) pairs correctly identified, ie. pairs which were marked by the algorithm as the same place and were in reality the same place. We compare our method with some of the relevant methods on our real-world datasets. We plot the precision-recall under various threshold settings. We define precision as the fraction of candidate loops which were actually loops. By recall we mean the fraction of actual loops identified; a short sketch of this computation is given below. We do not use any geometric verification step to boost our precision; the results shown in this section are from raw image descriptor comparison. With geometric verification, a precision of almost 100% can easily be accomplished. We use our method in various configurations: a) decoupled net as base CNN, K=16 (descriptor size of 4096), b) VGG16 as base CNN, K=16, c) decoupled net with squashed channels, K=16 (descriptor size of 512).
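The sketch below is our own illustration (not code from the paper) of how precision and recall, as defined above, are computed from descriptor dot-product scores and the manually marked ground truth by sweeping the acceptance threshold through its full range:

```python
import numpy as np

def precision_recall_curve(scores, is_true_loop, thresholds):
    """scores: (num_pairs,) descriptor dot products for candidate pairs.
    is_true_loop: (num_pairs,) boolean ground-truth annotation for each pair.
    Returns (precision, recall) arrays, one entry per threshold."""
    precision, recall = [], []
    total_true = is_true_loop.sum()
    for th in thresholds:
        accepted = scores >= th
        tp = np.logical_and(accepted, is_true_loop).sum()
        precision.append(tp / max(accepted.sum(), 1))  # fraction of accepted pairs that are real loops
        recall.append(tp / max(total_true, 1))         # fraction of real loops that were accepted
    return np.array(precision), np.array(recall)

# Example sweep over the full range of dot-product thresholds:
# prec, rec = precision_recall_curve(scores, gt, np.linspace(0.0, 1.0, 101))
```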
We compare with i) NetVLAD [@arandjelovic2016netvlad], ii) Merrill and Huang [@merrill2018lightweight], iii) Sunderhauf *et al.* [@sunderhauf2015performance], iv) DBOW [@galvez2012dbow] and v) ibow-lcd [@garcia2018ibow]. We acknowledge the superior performance of the method of Z. Chen *et al.* [@chen2017only], and possibly also of Sunderhauf *et al.* [@sunderhauf2015place], on the Mapillary dataset. However, it was not practical to test it on our datasets, which are an order of magnitude larger than those datasets. It takes about 1-1.5 s/frame for descriptor computation and about 800 ms-1.2 s/pair for descriptor comparison. For our dataset of 5000 keyframes the provided MATLAB implementation would take on the order of 100 days (the number of comparisons would be $5000+4999+4998+\ldots$). Arguably, a faster implementation could accomplish the task in about a day or two for a 5000-frame, 15-min walking video; thus this method is nowhere close to being real-time, and hence it was not compared. We also note that the running time for Sunderhauf *et al.* [@sunderhauf2015place] is in a similar range to that of Z. Chen *et al.*'s method. ![Precision-recall plot for the sequence ‘mynt\_coffee-shop‘ compared against manual annotations of loop candidates as the threshold is varied. We compare the following methods: Relja NetVLAD [@arandjelovic2016netvlad], decoupled net with channel squashing (proposed), decoupled net without channel squashing, CALC [@merrill2018lightweight], ibow-lcd [@garcia2018ibow] and DBOW [@galvez2012dbow]. []{data-label="fig:pr-plot-mynt-coffee-shop-seq"}](exp_dec_2018/precision_recall_details/pr_mynt_coffee-shop.pdf){width="\columnwidth"} ![Similar to Fig. \[fig:pr-plot-mynt-coffee-shop-seq\] but for sequence ‘mynt\_seng‘. []{data-label="fig:pr-plot-mynt-seng-seq"}](exp_dec_2018/precision_recall_details/pr_mynt_seng.pdf){width="\columnwidth"} ![image](post_iros2018_exp/seq_base_2/manual_annotated.png){width=".32\textwidth"} ![image](post_iros2018_exp/seq_base_2/proposed.png){width=".32\textwidth"} ![image](post_iros2018_exp/seq_base_2/manual_2.jpg){width=".32\textwidth"} \ ![image](post_iros2018_exp/seq_base_2/vanila_netvad_vgg6.png){width=".185\textwidth"} ![image](post_iros2018_exp/seq_base_2/calc.png){width=".185\textwidth"} ![image](post_iros2018_exp/seq_base_2/lanet.png){width=".185\textwidth"} ![image](post_iros2018_exp/seq_base_2/alexnet.png){width=".185\textwidth"} ![image](post_iros2018_exp/seq_base_2/dbow2.png){width=".185\textwidth"} \ \ \ The proposed method is able to detect revisits in all the regions of this test sequence. We attribute this to the NetVLAD architecture, which aggregates the local descriptors and thereby becomes somewhat invariant to in-plane rotations. Other learning-based methods, for example CALC, totally miss the revisits occurring under dim to moderate lighting. This can be attributed to the fact that CALC is based on HOG, which under dim lighting does not provide enough spread in the histogram to generate meaningful descriptors. CALC and DBOW, as expected, are able to work in situations with in-plane rotations (in region 1 of Fig. \[fig:intro-example\]). AlexNet and LA-Net, being essentially off-the-shelf networks trained for an object classification task, are not invariant to rotations. DBOW, however, is easily confused if two scenes share similar-looking textures, for example the texture of the ceiling in otherwise different-looking scenes; DBOW also performs poorly in low-contrast scenes. Compared to the original NetVLAD descriptor we observe a boost in recall rates for the proposed method.
A precision-recall curve comparing the methods to the human-marked loop candidates is presented in Fig. \[fig:seq\_base\_2\_various\_thresholds\]. ![image](post_iros2018_exp/seq_base_2/various_threshold_proposed/2.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_proposed/3.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_proposed/4.png){width="0.28\columnwidth"} \ ![image](post_iros2018_exp/seq_base_2/various_threshold_vanilla_netvlad/2.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_vanilla_netvlad/3.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_vanilla_netvlad/4.png){width="0.28\columnwidth"} \ ![image](post_iros2018_exp/seq_base_2/various_threshold_calc/2.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_calc/3.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_calc/4.png){width="0.28\columnwidth"} \ ![image](post_iros2018_exp/seq_base_2/various_threshold_dbow/2.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_dbow/3.png){width="0.28\columnwidth"} ![image](post_iros2018_exp/seq_base_2/various_threshold_dbow/4.png){width="0.28\columnwidth"} \ ![Loop closure candidates (in red) as we vary the threshold, overlaid on the VIO trajectory (green), for sequence ’base-2’: the proposed method (row 1), NetVLAD [@arandjelovic2016netvlad] (row 2), CALC [@merrill2018lightweight] (row 3) and DBOW [@galvez2012dbow] (row 4). Along the columns are various thresholds, from loosest (left) to tightest (right). Row 5 shows the PR curve for each method when compared to the human-marked loop candidates.[]{data-label="fig:seq_base_2_various_thresholds"}](post_iros2018_exp/seq_base_2/mani_base-2-PR.pdf){width="0.90\columnwidth"} ![image](post_iros2018_exp/kitti/kitti00.jpg){width="0.6\columnwidth"} \ ![The results of the proposed method on KITTI00 and KITTI05. The XY plane is the 2D location of the trajectory; the z-axis represents the frame number. In this dataset the revisits occur at similar viewpoints, and the performance of all the compared methods is almost the same. []{data-label="fig:kitti"}](post_iros2018_exp/kitti/kitti05.jpg){width="0.6\columnwidth"} Full System Experiments ----------------------- We also experiment with our entire system, involving relative pose computation at the loop candidates and a pose-graph solver with a kidnap-recovery mechanism. Our experimental setup involves just the ’MYNT EYE D’ [^6] camera. It includes a stereo camera pair and a 200 Hz IMU with frame-IMU synchronization of about 1 ms. We kidnap the camera by blocking its view and transporting it to another location. Additionally, we also experiment with the EuRoC MAV dataset [@Burri_euroc_dataset], which also provides stereo camera data and an IMU. A representative live real-time run of the system is shown in Fig. \[fig:kidnap-screenshot\]. Our system can identify and recover from kidnaps online and in real time. A comparison of the loop detections of the VINS-Fusion system and of our proposed system is shown in Fig. \[fig:overlay-loopedge-compare-with-vinsfusion\]. This sequence repeatedly traverses a hall at non-fronto-parallel views and with in-plane rotations. Our system is able to correctly recognize and compute relative poses at loop closures involving large viewpoint differences and in-plane rotations. We highlight the distinguishing points of our system compared to *colmap* [@schoenberger2016sfm] and *maplab* [@schneider2018maplab]. *Colmap* is a general 3D reconstruction system and involves offline processing of unordered image sets. The *maplab* system provides an online tool, *ROVIOLI*, which is essentially a visual-inertial odometry and localization front-end; although it provides a console-based interface for multi-session map merging, it cannot identify kidnaps and recover from them online. Our system essentially fills this gap.
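The online merging itself reduces to simple set bookkeeping. The sketch below is purely illustrative (hypothetical names, not the released implementation) of the disjoint-set bookkeeping such kidnap handling requires: every kidnap starts a new coordinate world, and every loop closure connecting two worlds merges their sets so the pose-graph solver knows which keyframes already share a common reference frame.

```python
# Hypothetical helper: union-find over "coordinate worlds" created at each kidnap.
class WorldSets:
    def __init__(self):
        self.parent = {}

    def add_world(self, w):
        # called whenever a kidnap starts a new, initially independent world
        self.parent.setdefault(w, w)

    def find(self, w):
        # root lookup with path halving
        while self.parent[w] != w:
            self.parent[w] = self.parent[self.parent[w]]
            w = self.parent[w]
        return w

    def union(self, a, b):
        # called when a loop edge connects keyframes living in worlds a and b
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

    def same_frame(self, a, b):
        # True if the two worlds can already be expressed in one coordinate frame
        return self.find(a) == self.find(b)
```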
![image](compare_wth_vinsfusion/cerebro.png){width="0.45\columnwidth"} ![Comparing the revisit detections of the proposed method (top) and of VINS-Fusion, which uses DBOW2 (second row). This sequence contains repeated traversals of a 15 m x 5 m hall at various rotations and viewpoints. Although the bag-of-words based method performs well under fronto-parallel views, it has a much lower recall than our method under larger viewpoint differences. A side-by-side live run of this sequence is available at <https://youtu.be/dbzN4mKeNTQ>. Rows 3 to 5 show some representative loop pairs which were identified as loops by our method but were missed by DBOW2 in VINS-Fusion.[]{data-label="fig:overlay-loopedge-compare-with-vinsfusion"}](compare_wth_vinsfusion/vinsfusion_dbow2.png){width="0.45\columnwidth"} Conclusion and Future Work ========================== We proposed a data-driven, weakly supervised approach to learning a scene representation for use in the loop-closure module of a SLAM system. Additionally, we demonstrated the use of a disjoint-set data structure to maintain the set associations of multiple coordinate systems for the online merging of multiple pose graphs. Unstable learning was observed for the original NetVLAD [@arandjelovic2016netvlad], which made use of the triplet loss for training; this was especially prominent when training with a smaller number of clusters. The issue was mitigated with the use of the proposed allpairloss function, which resulted in higher performance even with a smaller number of clusters in the NetVLAD layer. For real-time performance we made use of decoupled convolutional layers instead of standard convolutions; the network with decoupled convolutions is almost 3X faster in computation time with 5-7X fewer learnable parameters. To evaluate the precision-recall performance for loop-closure detection in a real SLAM system, we compared our method with popular BOVW-based methods along with state-of-the-art CNN-based methods on real-world sequences. Qualitative and quantitative experiments on standard datasets, as well as on self-captured challenging sequences with adverse conditions including revisits at large viewpoint difference, in-plane rotation and dim lighting, suggest that the proposed method can identify loop candidates under substantial viewpoint differences. We also observe a boost in recall rates when compared to training with the original NetVLAD, and our descriptors are found to be fairly invariant to rotation and lighting changes. In addition to the precision-recall evaluation, we also demonstrate the real-time operation of our method as a pluggable module for VINS-Fusion. Our system is not only able to reduce drift but can also identify and recover from complicated kidnap scenarios and failures. A robust place recognition module is a critical element for SLAM-dependent fields such as long-term autonomy and augmented reality.
In addition to a robust and discriminative image representation, the use of text and object information to further disambiguate similar-looking places and to provide semantic cues to the underlying planning methods could be a way forward for a truly intelligent and scalable place recognition system. To aid the development of such a system, we open-source our implementation and our dataset, along with the human-annotated loop pairs, to the research community. [^1]: <https://github.com/mpkuse/cartwheel_train> [^2]: <https://github.com/HKUST-Aerial-Robotics/VINS-kidnap> [^3]: <https://www.mapillary.com/> [^4]: <https://github.com/HKUST-Aerial-Robotics/VINS-Fusion> [^5]: Implementation from *theia-sfm* (<http://www.theia-sfm.org/>) [^6]: <https://www.mynteye.com>
--- abstract: 'The Laboratory of Metrology in Electrical Standards of Inmetro (Lampe) built, in 2010, a transformer which is able to perform inductive voltage divider (IVD) ratio calibrations using the bootstrap method. This transformer was conceived following the design adopted at [*Physikalisch-Technische Bundesanstalt*]{} (PTB), and has characteristics very similar to those of the PTB system. Lampe's bootstrap system is not yet in operation, and IVD calibrations are presently carried out using a triangular set of measurements involving standard capacitors. This method, though, has uncertainties limited to a few parts in $10^6$; the primary objective of Lampe's bootstrap system project is to achieve uncertainties at least 10 times smaller than this.' author: - 'F. A. Silveira${}^{}$[^1]' title: Bootstrap Calibration of Inductive Voltage Dividers at Inmetro --- Introduction {#intro} ============ The project of an IVD calibration system at Lampe is centered on a two-stage transformer capable of making bootstrap measurements of the ratio between inductive voltage divider (IVD) steps, as thoroughly described in [@kibble; @hall1968]. Inmetro's bootstrap transformer was constructed in 2010 in cooperation with PTB [@kyriazis2012] (and is itself based on the PTB system), in order to develop a system capable of calibrating the 10:-1 ratio of the main IVD used in coaxial 2- and 4-terminal-type impedance bridges. Lampe has two capacitance bridges which have been in operation for 5 to 6 years now. These bridges are an important link in the traceability chain that derives the capacitance unit from the quantum Hall resistance [@schurr2002]. With the intent of showing how we plan to improve the calibration uncertainty of measurements carried out with these bridges, this work presents a brief description of our bootstrap system project and points out a few important characteristics of the bootstrap transformer constructed here. System Diagram {#diagrams} ============== Following the original PTB project, the bootstrap transformer constructed at Inmetro is capable of providing voltage (both in-phase and quadrature) to two current injection systems simultaneously. Fig. \[bootstrapsys\] shows an updated schematic of the complete IVD calibration setup; in this figure, we highlight the bootstrap transformer constructed at Inmetro [@kyriazis2012], at the extreme left of the drawing. This transformer is of the two-stage type, and is designed to calibrate two-stage IVDs at the 10:-1 ratio. For a review of the two-stage transformer principle, as well as of the bootstrap method, the reader is referred to [@kibble; @hall1968]. The potential difference to be used as a reference for the taps of the object IVD is provided through the triaxial connectors [FH,FL]{}. These two output terminals, and the IVD and bootstrap windings wired to them, form the main loop of the calibration circuit. The potential difference between any given pair of IVD taps $n,n+1$, for each $n=-1,\cdots, 9$, is to be compared to the reference voltage $U_{Ref}$ at the bootstrap terminals [FH]{} and [FL]{}, and is denoted [@kibble] $${U_n-U_{n-1}=U_{Ref}\left(1+\alpha_{n}\right),} \label{diferenca}$$ where $\alpha_n$ is the reading of the ratio on the knobs of the panel of the injection inductive decade $CN_1$. $CN_1$ is a 6-decade, single-core coaxial divider with two output taps; $T_1$ is a 100:1 ratio toroidal transformer. The subsystem formed by $CN_1$, $T_1$ and the RC phase shifter makes up the voltage injection system over the main loop, to be described in Sec.
\[injection\]. In the triaxial branches, the outermost shields of the cables are taken to guard potential levels, which equal the potentials of the two IVD taps being measured. These guard potentials are selected through two cursors (labelled $SW_1$ in Fig. \[bootstrapsys\]) that run jointly between the potentials of ${\tt A_{10},A_{-1}}$. Guarding is meant to suppress the parasitic currents from the center conductors of the $\tt FH,FL$ terminals by taking them to the same potential as the innermost shield of the cables. For a review of guarding/shielding principles, see, e.g., [@morrison]. The bridge is balanced by connecting two taps of the IVD to the $\tt FH,FL$ terminals of the bootstrap transformer. Then a guard potential suited to the selected taps of the IVD should be adjusted on $SW_2$. Finally, $CN_1$ and $CN_2$ are serially adjusted. All these steps are repeated sequentially and iteratively until the voltage read through the lock-in amplifier fluctuates below 20-30 $\mu$V, with the net gain of the pre- and lock-in amplifier stages set to about $10^3$. ![image](fig1.jpg){width="\linewidth"} Supply and compensation {#supply} ----------------------- The system shown in Fig. \[bootstrapsys\] is supplied through a sine-wave generator, a power amplifier and a two-stage isolation transformer, which provide the needed voltage submultiples. The isolation transformer output provides the proper supply to the IVD to be calibrated (shown at the center of Fig. \[bootstrapsys\]) and to the bootstrap transformer (shown at the extreme left of the schematic). The transformer is supplied through the coaxial terminals ${\tt A_{10},A_{-1}}$ (magnetising stage) and $\tt D_{10},D_{-1}$ (ratio stage). Connectors labelled [TQ,MQ]{} supply in-phase and quadrature components to (the magnetising stage of) the injection system. In practice, however, the inductive decades available at Lampe to build the injection system are of the single-core type, so only the [MQ]{} output of the transformer is connected to the current injection system. IVD-based bridge comparison circuits establish the relation between the IVD ratio in one arm and unknown impedances in the other, under the constraint that equilibrium holds between two defined nodes. Parasitic admittances on the (coaxial) ports of the circuit elements cause current deviations that may be important to the final balance. In order to compensate for the effect of these parasitics, we couple between the source terminals and ground a complex RC network that balances the outer shield potential of the cables to the ground potential, thus minimising the stray currents at the ports of the circuit elements. The most widely used source compensation circuit in impedance bridges of the kind discussed here is the [*Wagner circuit*]{} [@kibble; @hague]. The isolation transformer is provided with a Wagner-type compensation subcircuit, fully contained in the chassis of the inductive decade labelled $CN_2$ in Fig. \[bootstrapsys\]. Injection/detection {#injection} ------------------- The injection decade $CN_1$ is combined with a phase shifter (made out of an RC network) at its output, which is meant to provide a small quadrature component. Both in-phase and quadrature components of the $CN_1$ full voltage are then mixed in the 100:1 injection transformer coupled to the inner conductor of [FH]{}, and the whole system adds up to inject small, adjustable in-phase and quadrature voltage components over the main loop.
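Before turning to the component values, it may help to sketch, purely as an illustration (with an index convention that may differ from the actual data-reduction procedure used at Lampe and PTB), how the step readings $\alpha_n$ obtained from successive balances of Eq. (\[diferenca\]) would combine into corrections to the tap ratios of the object IVD. Labelling the eleven steps by $n=0,\dots,10$, with $U_n-U_{n-1}=U_{Ref}(1+\alpha_n)$, the ratio of tap $m$ with respect to the full winding is $$D_m=\frac{U_m-U_{-1}}{U_{10}-U_{-1}}=\frac{\sum_{n=0}^{m}(1+\alpha_n)}{\sum_{n=0}^{10}(1+\alpha_n)}\simeq\frac{m+1}{11}\left(1+\bar\alpha_m-\bar\alpha_{10}\right),$$ to first order in the $\alpha_n$, where $\bar\alpha_m=\frac{1}{m+1}\sum_{n=0}^{m}\alpha_n$; since the $\alpha_n$ are small, the neglected terms are quadratically small.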
The impedances of the phase shifter are of the order of 100 $\Omega$ (trimmed resistance) and 1 pF (adjustable capacitance). A similar system detects the total voltage difference on the [FL]{} terminal and reads it through low-noise pre- and lock-in amplifiers, where the null condition of the bridge is read in both its in-phase and quadrature components. The potentials of the pairs of terminals [MP,MQ]{} and [TP,TQ]{} are selected through individual switches to ratios from $\pm 0.0001$ to $\pm 1$ of the potential difference between the terminals [FH]{} and [FL]{}. These potentials are direct derivations from the magnetising ([MP,MQ]{}) and ratio ([TP,TQ]{}) windings, and can provide the voltage source for the magnetising and ratio stages of a two-stage combining network. For the sake of concision, only $\tt TQ$ and $\tt MQ$ are shown in Fig. \[bootstrapsys\]; the omitted $\tt TP,MP$ derivations, coupled to the magnetising and ratio windings, are entirely symmetric to $\tt TQ,MQ$. Injection transformer detail {#detail} ============================ Aside from the $\tt FH,FL$ triaxial terminals, the connections to the bootstrap and injection/detection transformers are all of the coaxial BPO type; the $\tt FH,FL$ terminals are of the 1051 DKE/Fischer type. Fig. \[exploded\] shows an exploded view of such an injection/detection transformer. It shows the coaxial chassis with BPO- and Fischer-type connectors, the toroidal windings and the central triaxial cable that works as the one-winding coil of this device. The guarding at the point where $\tt T_1$ (alternatively, $\tt T_2$) couples to $\tt FH$ ($\tt FL$) brings about some delicate management of the internal shielding of this transformer. A special geometry of the two shield layers of the cable is required to avoid stray currents and the resulting leakage inductance. Fig. \[internview\] shows a plane section of the winding coupling. At the middle of the triaxial cable shown in this figure, there is a small section of the shields (not explicitly shown in the picture) that leaves the center conductor electrically exposed to the magnetic flux in the toroidal core. This sectioning follows the techniques described in [@kibble] for minimising stray capacitances between the shield sections. These parasitic effects are well discussed in [@kibble] and references therein, and they can be avoided (or at least greatly attenuated) with a particular superposition of the shields at the point of magnetic coupling, also applied in [@kyriazis2012]. Its application to $\tt T_1, T_2$, as well as its effectiveness, will be detailed elsewhere, after further tests have been carried out. \[exploded\] \[internview\] Conclusions {#conclusao} =========== As said before, Lampe's bootstrap measurement system is not yet in operation, and IVD calibrations are presently carried out based on the external calibration values of standard capacitors. This method has uncertainties limited to a few parts in $10^6$, which may be considered large for this type of measurement. In addition, having our standards calibrated externally involves large costs, both financial and logistical. The main objective of Lampe's bootstrap system project is to achieve uncertainties in the $10^{-7}$ or $10^{-8}$ range. The circuit element on which the system is based, the bootstrap transformer, was built in 2010.
This project has been on hold for some time, and measurements must still be made on the transformer to test some of its characteristics, such as its behaviour under loading conditions and its stability. We are only a few circuit elements short of assembling the bootstrap bridge system shown in Fig. \[bootstrapsys\], and the next stages are already in progress. We have already submitted the designs of Figs. \[exploded\] and \[internview\] to Inmetro's precision workshop; these parts should be ready soon, and we have started building the cabling and peripheral connections. For instance, we already have at our disposal a rich supply of choke cores, an important part of any precision AC coaxial circuit, used to equalize the currents in the inner and outer conductors of the cables and thus minimise the effects of electromagnetic noise on closed loops [@kibble; @morrison; @hague]. Also, most of the connectors and cabling are of the commercial coaxial BNC/RG-56, 50 $\Omega$ type, and are not currently a problem. All in all, we expect to start making measurements over the next year. Acknowledgements {#ack} ================ The author thanks G. A. Kyriazis for the support in retrieving notes on the Inmetro-PTB cooperation, as well as for important suggestions on the design of the injection/detection systems.   [10]{} B. P. Kibble and G. H. Rayner, [*Coaxial AC Bridges*]{}, Adam Hilger Ltd., Bristol (1984). H. P. Hall, [*An exercise in voltage division*]{}, General Radio Experimenter [**42**]{}, No. 5 (1968). G. A. Kyriazis, E. Afonso and J. Melcher, [*Design and Construction of a Bootstrap Transformer at Inmetro*]{}, CPEM Digest (Conference on Precision Electromagnetic Measurements) (2012). J. Schurr and J. Melcher, [*Modular System for the Calibration of Capacitance Standards Based on the Quantum Hall Effect*]{}, Physikalisch-Technische Bundesanstalt (PTB), Workshop and Project Documentation, January 2002. R. Morrison, [*Grounding and Shielding, Circuits and Interference*]{}, John Wiley & Sons, 5th ed., U.S. (2007). B. Hague (revised by T. R. Foord), [*Alternating Current Bridge Methods*]{}, Pitman, 6th ed., London (1971). [^1]: Email address: [*fsilveira@inmetro.gov.br*]{}
--- abstract: 'In 1957, De Giorgi [@De1] proved the Hölder continuity for elliptic equations in divergence form and Moser [@Mo1] gave a new proof in 1960. The next year, Moser [@Mo2] obtained the Harnack inequality. In this note, we point out that the Harnack inequality was hidden in [@De1].' address: - - 'School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China' author: - Dongsheng Li - Kai Zhang date: 'December 14, 2015' title: A note on the Harnack inequality for elliptic equations in divergence form --- [^1] The Harnack inequality ====================== Consider the following elliptic equation: $$\label{e1.1} (a^{ij}u_i)_j=0~~~~\mbox{in}~~Q_6,$$ where $(a^{ij})_{n\times n}$ is uniformly elliptic with ellipticity constants $\lambda$ and $\Lambda$. In this note, $Q_r(x_0)$ denotes the cube with center $x_0$ and side-length $r$, and $Q_r:=Q_r(0)$. In 1961, Moser [@Mo2] obtained the following Harnack inequality: \[th1.3\] Let $u\geq 0$ be a weak solution of (\[e1.1\]). Then $$\label{e1.5} \sup_{Q_{1}} u \leq C\inf_{Q_{1}} u,$$ where $C$ depends only on $n$, $\lambda$ and $\Lambda$. The method used in [@Mo2] is first to estimate the upper and lower bounds of $u$ in terms of $||u||_{L^{p_0}}$ and $||u^{-1}||_{L^{p_0}}$ respectively, by an iteration for some $p_0>0$, and then to join these two estimates together to obtain (\[e1.5\]) by the John-Nirenberg inequality. In 1957, De Giorgi [@De1] proved the Hölder continuity for weak solutions of (\[e1.1\]) and Moser [@Mo1] gave a new proof later. The following are two of the main results in [@De1] and [@Mo1] (see also [@De2]): \[th1.1\] Let $u\geq 0$ be a weak subsolution of (\[e1.1\]). Then $$\label{e1.2} \|u\|_{L^{\infty}(Q_1)}\leq C \|u\|_{L^{2}(Q_3)},$$ where $C$ depends only on $n$, $\lambda$ and $\Lambda$. \[th1.2\] Let $u\geq 0$ be a weak supersolution of (\[e1.1\]). Then for any $c_0>0$, there exists a constant $c$ depending only on $n$, $\lambda$, $\Lambda$ and $c_0$ such that $$\label{e1.3} m\{x\in Q_{1}: u(x)>c\}>c_0 \Rightarrow u>1 \mbox{ in } Q_{3},$$ where $m$ denotes the Lebesgue measure. In this note, we will prove Theorem \[th1.3\] directly from the above two theorems. That is, De Giorgi's proof implies the Harnack inequality. This was first noticed by DiBenedetto [@P1]. Some other new approaches to the Harnack inequality can be found in [@P2] and [@P3]; U. Gianazza and V. Vespri [@P3] require only a qualitative boundedness of solutions, which is different from the approach here. Proof of Theorem \[th1.3\] ============================= In the following, we present the key points for obtaining Theorem \[th1.3\] from Theorems \[th1.1\] and \[th1.2\]. First, we show that Theorem \[th1.1\] implies the following local maximum principle: \[le2.1\] Let $u\geq 0$ be a weak subsolution of (\[e1.1\]) and $p_0>0$. Then $$\label{e1.6} \|u\|_{L^{\infty}(Q_{1})}\leq C \|u\|_{L^{p_0}(Q_3)},$$ where $C$ depends only on $n$, $\lambda$, $\Lambda$ and $p_0$. By the interpolation inequality for $L^p$ functions and (\[e1.2\]), for any $\varepsilon >0$, we have $$\|u\|_{L^{\infty}(Q_{1})}\leq \varepsilon \|u\|_{L^{\infty}(Q_{3})}+C(\varepsilon) \|u\|_{L^{p_0}(Q_3)}$$ whose scaling version is $$\label{e2.2} r^{\frac{n}{p_0}}\|u\|_{L^{\infty}(Q_{r}(x_0))}\leq \varepsilon r^{\frac{n}{p_0}}\|u\|_{L^{\infty}(Q_{3r}(x_0))}+C(\varepsilon) \|u\|_{L^{p_0}(Q_{3r}(x_0))}$$ for any $Q_{6r}(x_0)\subset Q_6$. Given $x_0\in Q_3$, denote by $d_{x_0}$ the distance between $x_0$ and $\partial Q_3$.
Then the cube $Q_{6r}(x_0)\subset Q_3$ where $r=d_{x_0}/3\sqrt{n}$, and we have $$\begin{aligned} d_{x_0}^{\frac{n}{p_0}}|u(x_0)|&\leq C(n)r^{\frac{n}{p_0}}\|u\|_{L^{\infty}(Q_{r}(x_0))}\\ &\leq \varepsilon C(n) r^{\frac{n}{p_0}}\|u\|_{L^{\infty}(Q_{3r}(x_0))}+C(\varepsilon) \|u\|_{L^{p_0}(Q_{3r}(x_0))} (\mbox{By } \eqref{e2.2})\\ &\leq \varepsilon C(n) \sup_{x\in Q_3} d_{x}^{\frac{n}{p_0}}|u(x)|+C(\varepsilon) \|u\|_{L^{p_0}(Q_{3})}. \end{aligned}$$ Take the supremum over $Q_3$ and choose $\varepsilon$ small such that $\varepsilon C(n)<1/2$. Then, $$\sup_{x\in Q_3} d_{x}^{\frac{n}{p_0}}|u(x)|\leq C\|u\|_{L^{p_0}(Q_{3})},$$ which implies (\[e1.6\]). Next, we show that Theorem \[th1.2\] implies the weak Harnack inequality: \[le2.2\] Let $u\geq 0 $ be a weak supersolution of (\[e1.1\]). Then $$\label{e1.7} \|u\|_{L^{p}(Q_{1})}\leq C\inf_{Q_{3}} u,$$ where $p>0$ and $C$ depend only on $n$, $\lambda$ and $\Lambda$. Without loss of generality, we assume that $\inf_{Q_3} u=1$ and we only need to prove that there exists a constant $c$ depending only on $n$, $\lambda$ and $\Lambda$ such that $$\label{e1.8} m\{x\in Q_1: u(x)>c^k\} \leq \frac{1}{2^k},$$ for $k=1,2,...$. We prove (\[e1.8\]) by induction. Take $c_0=1/2$ in Theorem \[th1.2\]. Then there exists a constant $c$ depending only on $n$, $\lambda$ and $\Lambda$ such that (\[e1.3\]) holds. Hence, (\[e1.8\]) holds for $k=1$ since we assume that $\inf_{Q_3} u=1$. Suppose that (\[e1.8\]) holds for $k\leq k_0-1$. Let $$A:=\{x\in Q_1: u(x)>c^{k_0}\} \mbox{ and } B:=\{x\in Q_1: u(x)>c^{k_0-1}\}.$$ We need to prove $m(A)\leq m(B)/2$. By the Calderón-Zygmund cube decomposition (see [@C-C Lemma 4.2]), we only need to prove that for any $Q_{r}(x_0)\subset Q_1$, $$\label{e1.10} m(A\cap Q_r(x_0))>\frac{1}{2} m(Q_r(x_0)) \Rightarrow Q_{3r}(x_0)\cap Q_1\subset B,$$ which is exactly the scaling version of (\[e1.3\]) for $v=u/c^{k_0-1}$. Now, Theorem \[th1.3\] follows clearly by combining (\[e1.6\]) and (\[e1.7\]). “$u\geq 0$” can be removed in Lemma \[le2.1\], and a corresponding estimate for $u^+$ holds. For elliptic equations in non-divergence form, we also have the local maximum principle (Lemma \[le2.1\]) and the weak Harnack inequality (Lemma \[le2.2\]), respectively (see [@C-C Theorem 4.8]). In fact, this note is inspired by [@C-C]. [10]{} L. Ambrosio, G. Dal Maso, M. Forti, M. Miranda and S. Spagnolo, *Ennio De Giorgi Selected Papers*, Springer-Verlag, Berlin, 2006, pp. 149–174. L. A. Caffarelli and X. Cabré, *Fully nonlinear elliptic equations*, Amer. Math. Soc., Providence, RI, 1995. E. De Giorgi, *Sulla differenziabilità e l'analiticità delle estremali degli integrali multipli regolari (Italian)*, Mem. Accad. Sci. Torino. Cl. Sci. Fis. Mat. Nat. (3) **3** (1957) 25–43. E. DiBenedetto, *Harnack Estimates in Certain Function Classes*, Atti Sem. Mat. Fis. Univ. Modena **37** (1989), 173-182. E. DiBenedetto, U. Gianazza and V. Vespri, *Harnack's inequality for degenerate and singular parabolic equations*, Springer Monographs in Mathematics, Springer, 2011. U. Gianazza and V. Vespri, *Parabolic De Giorgi classes of order $p$ and the Harnack inequality*, Calc. Var. Partial Differential Equations **26** (2006), 379-399. J. Moser, *A new proof of De Giorgi's theorem concerning the regularity problem for elliptic differential equations*, Comm. Pure Appl. Math. **13** (1960) 457–468. J. Moser, *On Harnack's theorem for elliptic differential equations*, Comm. Pure Appl. Math. **14** (1961), 577–591. [^1]: This research is supported by NSFC 11171266.
--- author: - 'C. Chatelain' - 'D. Voliotis' title: 'Numerical evidence of the double-Griffiths phase of the random quantum Ashkin-Teller chain' --- Introduction {#Sec1} ============ Classical and quantum phase transitions are affected differently by the introduction of homogeneous disorder. In the former, it is well established that, when no frustration is induced, disorder is a relevant perturbation at a critical point when thermal fluctuations grow slower than disorder ones inside the correlation volume. It follows that the critical behavior is unchanged when the specific heat exponent $\alpha$ of the pure model is negative [@Harris]. This criterion, due to Harris, has been extensively tested on classical toy models such as the 2D Ashkin-Teller model [@Domany] or the 2D $q$-state Potts model [@RBPM; @RBPM2]. In the latter, disorder is relevant for $q>2$ and the new random fixed point depends on the number of states $q$.\ Quantum phase transitions, i.e. transitions driven by quantum fluctuations rather than thermal ones, involve new phenomena. First, randomness can never be considered as homogeneous because time plays the role of an additional dimension. Therefore, in contrast to the classical case, even when random couplings are homogeneously distributed on the lattice, they are always infinitely correlated in the time direction. Indeed, the random quantum Ising chain in a transverse field (RTFIM), for instance, is equivalent to the celebrated McCoy-Wu model, a classical 2D Ising model with couplings that are randomly distributed in one direction but perfectly correlated in the second one [@McCoy; @McCoy2]. As a consequence, scale invariance is broken even after averaging over disorder. The random quantum fixed point is usually invariant under anisotropic scaling transformations. Correlation length $\xi$ and autocorrelation time $\xi_t$ grow differently when approaching the random quantum critical point: $$\xi_t\sim \xi^z, \label{Defz}$$ where $z$ is the dynamical exponent. In the RTFIM, or in any model whose critical behavior is described by the same fixed point, this dynamical exponent increases algebraically when approaching the critical point and diverges at the critical point.\ Another feature of quantum phase transitions in presence of disorder is the existence of Griffiths phases [@Griffiths]. In the paramagnetic phase, there may exist large regions with a high concentration of strong couplings which can therefore order ferromagnetically earlier than the rest of the system. Even though the probability of such regions is exponentially small, they can cause a singular behavior of the free energy with respect to the magnetic field in a finite range of values of the quantum control parameter. The region of the paramagnetic phase where this phenomena occurs is called a disordered Griffiths phase. A similar phenomena takes place in the ferromagnetic phase. The singular behavior is due to regions of the system with a high concentration of weak bonds at their boundaries. They are therefore only weakly coupled to the rest of the system and can order independently [@VojtaReview]. Because the tunneling time of these rare regions grows exponentially fast with their size, they have a drastic effect on the average autocorrelation functions of the system. Instead of the usual exponential decay, the latter displays an algebraic decay [@RiegerYoung] $$\overline{A(t)}\sim t^{-1/z} \label{AutocorrGrf}$$ involving the dynamical exponent $z$. 
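As a purely illustrative aside (a toy calculation with arbitrary parameters, not taken from the cited references), the mechanism behind Eq. (\[AutocorrGrf\]) is easy to reproduce numerically: a rare region of size $\ell$ occurs with probability $\sim p^{\ell}$ but relaxes on a time $\tau(\ell)\sim e^{c\ell}$, and summing these exponentially rare, exponentially slow contributions yields a power law with $1/z=-\ln p/c$.

```python
# Toy check of the rare-region argument for the Griffiths power law (illustrative
# parameters only): A(t) = sum_l P(l) exp(-t/tau(l)) with P(l) ~ p**l and
# tau(l) ~ exp(c*l) decays approximately as t**(ln p / c) at long times.
import numpy as np

p, c = 0.25, 1.0
lengths = np.arange(1, 200)
prob = (1 - p) * p ** lengths             # probability of a rare region of size l
tau = np.exp(c * lengths)                 # its exponentially long relaxation time

times = np.logspace(0, 6, 60)
A = np.array([(prob * np.exp(-t / tau)).sum() for t in times])

slope = np.polyfit(np.log(times[20:]), np.log(A[20:]), 1)[0]
print(slope, np.log(p) / c)               # fitted slope is close to -1/z = ln(p)/c
```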
In classical systems, Griffiths phases usually consist in essential singularities, too weak to be observed numerically, apart with some long-range correlated disorder [@ChatelainAT; @ChatelainATb].\ The quantum Ising chain in a transverse magnetic field has been, by far, the most studied system undergoing a quantum phase transition. The mapping of this model onto a lattice gas of free fermions allowed for exact calculations in the pure case [@Lieb]. In the presence of random couplings, exact results are sparse [@Shankar] but the mapping still allows for an efficient numerical estimate of static, as well as dynamic, quantum averages [@Young]. The critical behavior is governed by an unusual infinite-randomness fixed point (IRFP) which has been extensively studied using a real-space renormalization group approach, the Strong-Disorder Renormalization Group (SDRG), first introduced by Ma and Dasgupta [@MaDasgupta], and later extended to the RTFIM by Fisher [@Fisher; @Fisher2; @Monthus]. The strongest coupling, exchange interaction or transverse field, is decimated by projecting out the Hilbert space onto the ground state of this coupling. Other couplings are then treated using second-order perturbation theory. Nevertheless, the method is believed to become exact as the IRFP is approached because the probability distribution of random couplings becomes broader and broader and therefore, a strong coupling is always surrounded by weaker couplings that can be treated perturbatively. The dynamical exponent $z$ was shown to diverge at the phase transition. The relation (\[Defz\]) is replaced by $\xi\sim (\ln\xi_t)^{1/\psi}$ with $\psi=1/2$. Autocorrelation functions decay as [@Igloi] $$\overline{A(t)}\sim (\ln t)^{-2x_\sigma} \label{AutocorrFP}$$ at the critical point, while correlation functions $C(r)$ display a more usual algebraic decay with the distance $r$. The Ma-Dasgupta renormalization group allows for the exact determination of the magnetization scaling dimension and the correlation length exponent [@Fisher; @Fisher2]: $$2x_\sigma={2\beta/\nu}=2-{1+\sqrt 5\over 2}, \quad \nu=1/\psi=2. \label{ExposantIRFP}$$ The approach has been applied numerically to higher dimensions [@Kovacs; @Kovacs2; @Kovacs3]. The IRFP of the RTFIM is quite robust: in contrast to the classical case, the random quantum $q$-state Potts chain falls also into this universality class for any value of $q$ [@Senthil; @CarlonPotts].\ In this paper, a model with a richer phase diagram is considered. The quantum two-color Ashkin-Teller model can be seen as two coupled Ising chains in a transverse field. The Hamiltonian is [@Kohmoto] $$\begin{aligned} H=&&-\sum_i \big[J_i \sigma_i^z\sigma_{i+1}^z+h_i\sigma_i^x\big] -\sum_i \big[J_i \tau_i^z\tau_{i+1}^z+h_i\tau_i^x\big] \nonumber\\ &&\quad\quad\quad -\sum_i \big[K_i \sigma_i^z\sigma_{i+1}^z\tau_i^z\tau_{i+1}^z +g_i\sigma_i^x\tau_i^x\big] \end{aligned}$$ where $\sigma_i^{x,y,z}$ and $\tau_i^{x,y,z}$ are two sets of Pauli matrices. The model possesses two ${\mathbb Z}_2$-symmetries, corresponding to the invariance of the Hamiltonian under the reversal of all spins $\sigma_i$ (or $\tau_i$) and of both $\sigma_i$ and $\tau_i$. The breaking of these symmetries can be monitored using the two order parameters $$M=\sum_i\langle\sigma_i^z\rangle,\quad P=\sum_i\langle\sigma_i^z\tau_i^z\rangle$$ referred to as magnetization and polarization. In the pure case, i.e. 
$J_i=J$, $K_i=K$, $h_i=h$ and $g_i=g$, the phase diagram involves several critical lines, as does the 2D classical Ashkin-Teller model. When $K<J$, the two ${\mathbb Z}_2$ symmetries are simultaneously broken and the Ashkin-Teller model undergoes a single second-order quantum phase transition with the control parameter $\delta=J/h$. The scaling dimensions of the magnetization, polarization and energy densities vary along the critical line [@Kohmoto]: $$x_\sigma={1\over 8},\ x_{\sigma\tau}={\pi\over 8\arccos(-\epsilon)},\ x_\varepsilon={\pi\over 2\arccos(-\epsilon)}$$ for $\epsilon=K/J\in[-1/\sqrt 2;1]$. For $K>J$, i.e. $\epsilon>1$, the critical line splits into two lines, both belonging to the Ising universality class ($x_\sigma=1/8$, $x_{\sigma\tau}=x_{\sigma^2}=1/16$ and $x_\varepsilon=1$). These lines separate the paramagnetic ($M=P=0$) and Baxter ($M,P\ne 0$) phases from an intermediate mixed phase ($M=0$, $P\ne 0$).\ In the following, the random Ashkin-Teller chain is considered. The four couplings $J_i$, $h_i$, $K_i$ and $g_i$ are random variables, though not independent but constrained by the relation [^1] $${K_i\over J_i}={g_i\over h_i}=\epsilon \label{DefEpsilon}$$ where $\epsilon$ is a site-independent fixed parameter. This model was first studied numerically by means of the Density-Matrix Renormalization Group (DMRG) in the weak-disorder regime $\epsilon<1$ [@Carlon]. As in the pure model, the system undergoes a single quantum phase transition with the control parameter $$\delta=\overline{\ln J}-\overline{\ln h}.$$ SDRG shows that the inter-chain couplings $K_i$ and $g_i$ are irrelevant on the critical line $\delta=0$, i.e. the random Ashkin-Teller model behaves as two uncoupled random Ising chains. The critical behavior is therefore governed by the Fisher infinite-randomness fixed point with the critical exponents $(\ref{ExposantIRFP})$. However, for finite disorder strength, a strong cross-over is observed numerically between the pure fixed point and this infinite-randomness fixed point. The regime $\epsilon>1$ of the random Ashkin-Teller model was only studied more recently, using SDRG. The phase diagram is qualitatively the same as that of the pure Ashkin-Teller model; in particular, the two Ising lines still meet at a tricritical point located at $\delta=0$ and $\epsilon=1$. When this point is approached by varying $\delta$, the scaling dimensions (\[ExposantIRFP\]) of the infinite-randomness Ising fixed point are recovered. However, when approaching this point along the half-line $\delta=0$ and $\epsilon>1$, the critical behavior is governed by different exponents: $$\beta={6-2\sqrt 5\over 1+\sqrt 7},\quad \nu={8\over 1+\sqrt 7}.$$ Note that the ratio $\beta/\nu$ is unchanged, a property sometimes referred to as weak universality. Between the two Ising lines in the regime $\epsilon>1$, SDRG indicates the existence of a double Griffiths phase: magnetization behaves as in the disordered Griffiths phase of the random Ising chain, but polarization as in the ordered Griffiths phase.\ In the rest of the paper, new data for both regimes $\epsilon<1$ and $\epsilon>1$ obtained by DMRG are presented and discussed. While only the critical point was considered in [@Carlon], we are interested here in the off-critical region of the phase diagram and especially in the Griffiths phases for $\epsilon>1$. In the first section, details about the implementation of the model and the parameters used for the numerical computations are presented.
In the second section, the phase boundaries are determined using integrated autocorrelation times, and the disorder fluctuations of magnetization and polarization. They are compared with the behavior of the entanglement entropy of one half of the lattice with the rest of the system. In the third section, the spin-spin and polarization-polarization autocorrelation functions are analyzed more carefully. In particular, we are interested in the algebraic decay (\[AutocorrGrf\]) signaling the existence of a Griffiths phase. Finally, a conclusion follows. Numerical details ================= We have considered a binary distribution of the intra-chain couplings $J_i$: $$\wp(J_i)={1\over 2}\big[\delta(J_i-J_1)+\delta(J_i-J_2)\big]$$ and homogeneous transverse fields $h$ and $g$. Equation (\[DefEpsilon\]) now reads $${K_i\over J_i}={g\over h}=\epsilon. \label{DefEpsilon2}$$ The critical behavior is expected to be unaffected by this choice. Indeed, the probability distributions of $h$ and $g$, initially delta peaks, will become broader and broader under renormalization so that the same IRFP will be eventually reached. This choice was made to minimize the number of disorder configurations. If $L$ is the lattice size, the number of $J_i$ couplings is $L-1$ with open boundary conditions so the total number of disorder configurations is $2^{L-1}$. For small lattice sizes, up to $L=16$, the average over disorder can be performed exactly and the possibly disastrous consequences of an under-sampling of rare events can be avoided [@Derrida]. This strategy is motivated by the fact that we are mainly interested in Griffiths phases, where the dominant behavior is due to rare disorder configurations. The drawback is that a precise determination of critical exponents is more difficult, in contrast to [@Carlon] where the sampling was limited to 10,000 disorder configurations, allowing for larger lattice sizes up to $L=32$. We also made additional calculations for a lattice size $L=20$ but with an average over only 50,000 disorder configurations, randomly chosen among the 524,288 ones. As we will see, this under-sampling leads to observable deviations.\ For simplicity, we have moreover restricted ourselves to the case $$J_2=1/J_1\ \Leftrightarrow\ \overline{\ln J_i}=0.$$ and we have chosen a strong disorder by setting $J_1=4$ and $J_2=1/4$. The quantum control parameter is now $$\delta=-\ln h.$$\ The model was studied using the time-dependent Density-Matrix Renormalization Group algorithm [@White; @White2; @Schollwoeck]. A rough estimate of the ground state is first obtained with the so-called Infinite-Size DMRG algorithm. Because the couplings are inhomogeneous, the system was grown by adding single spins to one boundary rather than inserting them between the two blocks. After this initial Infinite-Size step, the accuracy of the ground state is improved by performing four sweeps of the Finite-Size algorithm. Since disorder fluctuations dominate at the IRFP, quantum fluctuations are expected to be much weaker than in the pure Ashkin-Teller model. For the latter, the expected critical exponents were recovered by keeping of the order of $m=192$ states when truncating the Hilbert space of a left or right block in the DMRG algorithm. For the random Ashkin-Teller model, we fixed the upper limit of this parameter to $m=64$. 
The actual number of states was determined dynamically by imposing a maximal truncation error: $10^{-5}$ during the initial Infinite-Size step, $10^{-6}$, $10^{-7}$, $10^{-8}$, and $10^{-9}$ during the four Finite-Size sweeps. Using these parameters, we were able to make calculations for a large number of quantum control parameters $\delta$ for lattice sizes up to $L=20$ for $\epsilon>1$. Unfortunately, the Arpack library, used to determine the ground-state in the truncated Hilbert space, sometimes failed for some particular disorder configurations. In these cases, the point, and not simply this disorder configuration, is discarded. For $\epsilon\le 1$, many calculations failed for $L=16$. Only 27 values of the control quantum parameter could be completed for $\epsilon=1$, mostly far from the critical point. Moreover, when successful, the computation takes a time which increases very fast for $\epsilon<1$. Since the two Ising chains are uncoupled at the fixed point, the Hilbert space becomes closer to a tensor product of the spaces of two Ising chains. Therefore, the number of states to be kept during the truncation process of the DMRG algorithm should be of the order of the square of the number of states necessary for a single Ising chain. For this reason, the largest lattice size considered in the regime $\epsilon<1$ is only $L=12$.\ Average magnetization and polarization densities $$\overline{\langle m\rangle} =\overline{\bra 0\sigma_{L/2}^z\ket 0},\quad \overline{\langle p\rangle} =\overline{\bra 0\sigma_{L/2}^z\tau_{L/2}^z\ket 0},\quad$$ were measured at the center of the chain. $\ket 0$ denotes the ground state and the over-line bar stands for the average over disorder. In order to measure non-vanishing averages, longitudinal magnetic and electric fields were coupled to the two boundary spins of the chain with the Hamiltonian $$H_1=B\sigma_1^z+E\sigma_1^z\tau_1^z +B\sigma_L^z+E\sigma_L^z\tau_L^z$$ to break the two ${\mathbb Z}_2$ symmetries. The convergence of the DMRG algorithm is also faster when such boundary fields are imposed. Spin-spin and polarization-polarization connected autocorrelation functions, defined as $$\begin{aligned} \overline{A_\sigma(t)} &=&\overline{\bra 0\sigma_{L/2}^z(t)\sigma_{L/2}^z(0)\ket 0} -\overline{\langle m\rangle^2},\\ \overline{A_{\sigma\tau}(t)}&=&\overline{\bra 0\sigma_{L/2}^z(t)\tau_{L/2}^z(t) \sigma_{L/2}^z(0)\tau_{L/2}^z(0)\ket 0} -\overline{\langle p\rangle^2},\nonumber \label{DefAutoCorr} \end{aligned}$$ were estimated using a discretized imaginary-time evolution operator: $$\overline{A_\sigma(n\Delta t)}=\overline{\left[{\bra 0\sigma_{L/2}^z \big(1-H\Delta t\big)^n\sigma_{L/2}^z\ket 0\over \bra 0 \big(1-H\Delta t\big)^n\ket 0}\right]}-\overline{\langle m\rangle^2}.$$ We have used the value $\Delta t=10^{-3}$ and computed autocorrelation functions up to $t=10$. Phase boundaries ================ As discussed in the introduction, the random quantum Ashkin-Teller model is expected to undergo a single transition when $\epsilon\le 1$ and two transitions when $\epsilon>1$. This is easily observed on the behavior of magnetization and polarization, which are the two order parameters of these two transitions. As seen on figures \[fig10b\], magnetization and polarization display a fast variation but at different values of the transverse field $h$, and therefore of the control parameter $\delta=-\ln h$, when $\epsilon>1$. 
*(Fig. \[fig10b\]: $\overline{\langle m\rangle}$ and $\overline{\langle p\rangle}$ as functions of $h$, for $\epsilon=1/4$, $1/2$, $1$, $2$ and $4$.)* ![image](fig10b.eps){width="8cm"}![image](fig11b.eps){width="8cm"} However, because of the finite size of the system, the magnetization and polarization curves are too smooth to provide accurate estimates of the location of the transitions. Diverging quantities are more convenient and usually preferred in numerical studies. In this section, we discuss three quantities that diverge, or display a pronounced peak, at the transitions of the random Ashkin-Teller model. Integrated autocorrelation time ------------------------------- One of the properties that define criticality is that any characteristic length or time disappears at a second-order phase transition. Away from criticality, the exponential decay of the average spatial correlation functions $C(r)$ and autocorrelation functions $A(t)$ provides respectively a correlation length $\xi$ and an autocorrelation time $\xi_t$. In a pure system, both quantities are expected to diverge as a critical point is approached. In the random case, a divergence of $\xi$ and $\xi_t$ is expected in the whole Griffiths phase. However, in a finite system, these divergences are smoothed and replaced by a finite peak. At large time $t$, connected autocorrelation functions $\overline{A(t)}$ are dominated by an exponential decay in the variable $t/\xi_t$. Consequently, their integrals behave as $$\tau=\int_0^{+\infty} \overline{A(t/\xi_t)}dt =\xi_t\int_0^{+\infty} \overline{A(u)}du \label{DefTau}$$ and, like $\xi_t$, should display a peak. We have computed the integrated autocorrelation time $\tau$ for spin-spin and polarization-polarization autocorrelation functions. The upper bound of the integral (\[DefTau\]) was replaced by the largest time $t=10$ considered. This approximation has no effect on the estimate of the autocorrelation time $\tau$ as long as $\xi_t$ is much smaller than $10$. As will be seen below, this is the case for the lattice sizes that we considered.\ As can be seen in figures \[fig12\] and \[fig13\], the integrated autocorrelation times display two peaks. The first peak occurs at a value of the transverse field $h$ which is of the same order of magnitude as $J_2$. Therefore, this peak is probably associated with the ordering transition of the disorder configurations with a majority of weak couplings $J_2$. However, the height of this peak does not increase significantly with the lattice size, so one can conjecture that this peak will remain finite in the thermodynamic limit and is not associated with any phase transition. The height of the second peak clearly increases with the lattice size. For $\epsilon\le 1$, the location of the peak is roughly the same for spin-spin and polarization-polarization autocorrelation times. For $\epsilon>1$, the data clearly show that the peak occurs at a positive control parameter $\delta$, i.e. a transverse field $h<1$, for spin-spin autocorrelation functions and at a negative one for polarization-polarization functions. This indicates that the system undergoes an electric phase transition followed, at larger control parameter, by a magnetic one. This is consistent with the picture given by the magnetization and polarization curves.
The location of the two transitions was predicted by Hrahsheh [*et al.*]{} to be $\delta_c=\pm\ln {\epsilon\over 2}$ for $\epsilon\gg 1$ [@Vojta1]. For $\epsilon=4$, we observe the two peaks at $\delta_c=-\ln h_c\simeq 0.54$ and $\delta_c\simeq -0.99$ for $L=16$, for instance, still far from $\pm\ln{\epsilon\over 2} \simeq\pm 0.69$. Moreover, our transition lines are not symmetric with respect to the axis $\delta=0$, as required by self-duality. Since the data were produced by DMRG with a relatively large number of states, and since the averages were made over all disorder configurations, the deviation can only be the consequence of the relatively small lattice sizes that could be reached and of the boundary magnetic and electric fields, which favor a Baxter phase and therefore shift the whole phase diagram. (Figures \[fig12\] and \[fig13\]: integrated autocorrelation time $\xi_t$ versus $h$ for $L=8$, $10$, $12$, $16$ and $20$, in panels $\epsilon=1/4$, $1/2$, $1$, $2$ and $4$.) ![image](fig12.eps){width="20cm"} ![image](fig13.eps){width="20cm"} We also considered the first moment $$\int_0^{+\infty} t\ \!\overline{A(t)}dt\ \big/\ \int_0^{+\infty} \overline{A(t)}dt \label{FirstMoment}$$ that should be equal to the autocorrelation time $\xi_t$ if the connected autocorrelation function $\overline{A(t)}$ displays a purely exponential decay $\overline{A(t)}\sim e^{-t/\xi_t}$. Like the autocorrelation time, the first moment was computed for both the spin-spin and polarization-polarization autocorrelation functions. When plotted with respect to the transverse field $h$, two peaks are observed. Even though the shape of these peaks is not strictly identical to that of the autocorrelation time (\[DefTau\]), in particular the second peak is higher and slightly broader, both quantities behave in the same way with the transverse field $h$. Therefore, the same conclusions can be drawn. A reconstructed phase diagram is shown in figure \[fig20c\]. It is qualitatively similar to the one presented in Ref. [@Vojta1]. However, it is not symmetric under the transformation $\delta\leftrightarrow -\delta$. As discussed above, finite-size effects are here strengthened by the boundary magnetic and electric fields that globally shift the phase diagram. ![Phase diagram in the parameter space $(\epsilon,h)$ obtained from the spin-spin (continuous lines) and polarization-polarization (dashed lines) first moment (\[FirstMoment\]).
The different curves correspond to different lattice sizes.[]{data-label="fig20c"}](fig20c.eps "fig:"){width="9cm"} Disorder fluctuations --------------------- In a random system, any thermodynamic average $\overline{\langle X\rangle}$ is the result of a quantum average $$\langle X\rangle=\bra{\psi_0[J_i,K_i]}X\ket{\psi_0[J_i,K_i]}$$ followed by an average over coupling configurations $$\overline{\langle X\rangle}=\int \bra{\psi_0[J_i,K_i]}X\ket{\psi_0[J_i,K_i]} \wp(\{J_i,K_i\})\prod_i dJ_idK_i$$ where $\ket{\psi_0[J_i,K_i]}$ is the ground state of the system for a given coupling configuration $\{J_i,K_i\}$ and $\wp(\{J_i,K_i\})$ is the probability of this configuration. At an IRFP, disorder fluctuations dominate over quantum fluctuations. The strength of the former can be measured by the variance $$V_X=\overline{\langle X\rangle^2}-\overline{\langle X\rangle}^2.$$ We computed this quantity for both the magnetization ($V_\sigma$) and the polarization ($V_{\sigma\tau}$). As can be seen in figures \[fig18\] and \[fig19\], the variances $V_\sigma$ and $V_{\sigma\tau}$ are numerically very stable. They vanish at high and low transverse fields $h$ and display a well-defined single peak. In particular, there is no second peak at $h\sim J_2$. The locations of the maxima of the peaks are accurately determined and are in agreement with the ones estimated from the autocorrelation times. The same conclusions can be drawn: the magnetic and electric transitions occur at very close control parameters $\delta$, probably the same, for $\epsilon\le 1$, while a finite shift is observed for $\epsilon>1$. Even though only a weak dependence of $V_\sigma$ and $V_{\sigma\tau}$ on the lattice size $L$ is observed in figures \[fig18\] and \[fig19\], a systematic finite-size shift is present. For $\epsilon\le 1$, the distance between the two critical lines decreases when the lattice size $L$ increases, in agreement with the prediction of a unique transition. The coincidence of the maxima of the autocorrelation times with those of the disorder fluctuations shows that the phase transition is induced by disorder fluctuations, rather than quantum fluctuations, as expected at an IRFP. (Figures \[fig18\] and \[fig19\]: variances $V_\sigma$ and $V_{\sigma\tau}$ versus $h$ for $L=8$, $10$, $12$, $16$ and $20$, in panels $\epsilon=1/4$, $1/2$, $1$, $2$ and $4$.) ![image](fig18.eps){width="20cm"} ![image](fig19.eps){width="20cm"} As can be noticed in figures \[fig18\] and \[fig19\], the height of the peaks of the variance of the disorder fluctuations increases slightly with the lattice size, at least for $L\le 16$. The data for the lattice size $L=20$ indeed display a smaller peak. This lattice size is the only one for which the average has not been computed over all possible disorder configurations but only over a subset ($\sim 10\%$) of them.
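The averaging bookkeeping behind $V_X$ can be sketched as follows; the function standing in for the quantum average is only a placeholder, since the point of the sketch is the exhaustive enumeration of the binary configurations and the effect of sub-sampling, not the quantum calculation itself.

```python
import itertools
import numpy as np

def central_magnetization(J):
    """Placeholder for the quantum average <psi_0[J_i]| sigma_{L/2}^z |psi_0[J_i]> of one
    coupling configuration; in the actual calculation this would come from DMRG (or exact
    diagonalization). Any deterministic function of the couplings is enough to check the
    disorder-averaging bookkeeping."""
    return float(np.tanh(np.sum(np.log(J))))

L = 12
values = np.array([central_magnetization(np.array(c))
                   for c in itertools.product([1.0, 0.25], repeat=L - 1)])

# exact disorder average over all 2^(L-1) equally probable binary configurations
V_exact = np.mean(values**2) - np.mean(values)**2

# a ~10% subsample, illustrating the under-sampling issue discussed for L = 20
rng = np.random.default_rng(1)
sub = rng.choice(values, size=len(values) // 10, replace=False)
V_sub = np.mean(sub**2) - np.mean(sub)**2
```

Comparing `V_exact` with `V_sub` for several random subsets gives a feeling for the scatter that partial averages can introduce near a peak.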
The smaller peak for $L=20$ is therefore probably due to an under-sampling of the dominant configurations at the critical point. $50,000$ is still, at least for certain quantities, too small a number of disorder configurations. In the following, data for $L=20$ should be taken with more care than for smaller lattice sizes, for which an exact average over disorder was performed. Entanglement entropy -------------------- When the degrees of freedom of the system can be divided into two subsets $A$ and $B$, and therefore when the Hilbert space can be decomposed as a tensor product ${\cal H}={\cal H}_A\otimes{\cal H}_B$, the degree of entanglement of the two sub-blocks is conveniently measured by the von Neumann entanglement entropy of $A$ with the rest of the system [@Amico]: $$S_A=-\trace_{{\cal H}_A} \rho_A\log\rho_A$$ where $\rho_A$ is the reduced density matrix $$\rho_A=\trace_{{\cal H}_B} \rho$$ and $\rho$ is the density matrix of the full system. In the case of a pure state $\ket\psi$, the latter is the projector $\rho=\ket\psi\bra\psi$. In the following, we will consider the subset $A$ made of the $\ell$ spins at the left of the chain. (Figure \[fig21\]: average entanglement entropy $S(L/2)$ versus $h$ for $L=8$, $10$, $12$, $16$ and $20$, in panels $\epsilon=1/4$, $1/2$, $1$, $2$ and $4$.) ![image](fig21.eps){width="20cm"} Entanglement entropy has recently attracted a lot of attention because of Conformal Field Theory (CFT) predictions at pure critical points [@Vidal; @Cardy]. The predicted logarithmic behavior with $\ell$ is also observed in the RTFIM, but with a prefactor that involves an effective central charge $\tilde c={1\over 2}\ln 2$ [@Refael]. The entanglement entropy is also commonly used in the literature to determine phase boundaries [@Amico]. Indeed, it is expected to be larger when quantum correlation functions are long-range. At an IRFP, the entanglement entropy is related to the probability of a strongly correlated cluster across the boundary between the two blocks $A$ and $B$. Numerically, the reduced density matrix $\rho_A$ being computed and diagonalized at each step of the DMRG algorithm, the entanglement entropy is obtained without any additional computational effort.\ The average entanglement entropy $\overline{S(\ell)}$ of the random quantum Ashkin-Teller chain is plotted in figure \[fig21\] for $\ell=L/2$. For $\epsilon=4$, two peaks can be observed and interpreted as the signature of the two phase transitions. As expected, one single peak is present when $\epsilon\le 1$. However, only one peak can be distinguished in the case $\epsilon=2$, while the autocorrelation time indicates the existence of two transitions. Because of the finite size of the system, the expected two peaks are probably merged into a single one. This scenario is compatible with what is observed for $\epsilon=4$: what was only a shoulder on the left of the peak for $L=8$ becomes a second independent peak at $L=20$.\ The phase diagram is qualitatively the same as previously constructed. However, the two peaks are not located at the same positions as those displayed by the integrated autocorrelation times or the variance of disorder fluctuations.
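For a full state vector, the definition above reduces to a singular-value decomposition across the cut; the short sketch below computes $S_A$ this way for the left block of $\ell$ spins. It can be applied, for instance, to the ground state of the toy chain used earlier, and is not a substitute for the DMRG route, where $\rho_A$ is available at every step at no extra cost.

```python
import numpy as np

def entanglement_entropy(psi, L, l):
    """Von Neumann entropy S_A of the left block of l spins, computed from a full
    state vector psi of dimension 2^L via the Schmidt decomposition across the cut."""
    M = psi.reshape(2**l, 2**(L - l))            # split A (left l spins) | B (rest)
    s = np.linalg.svd(M, compute_uv=False)       # Schmidt coefficients
    p = s**2
    p = p[p > 1e-14]                             # eigenvalues of the reduced density matrix rho_A
    return float(-np.sum(p * np.log(p)))

# example: S(L/2) for the ground state psi0 of the toy chain used earlier
# S_half = entanglement_entropy(psi0, L, L // 2)
```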
At $\epsilon=4$, the entropy peaks are instead found at $\delta_c \sim -0.10$ and $\delta_c\sim -1.33$, far from the estimates $\delta_c \simeq 0.54$ and $-0.99$. This large difference is probably due to finite-size effects. Indeed, the magnetization, polarization and autocorrelation functions were computed at the center of the lattice, i.e. at the site $L/2$. In contrast, the entanglement entropy is a global quantity, and is therefore more sensitive to the presence of boundary fields. ![Scaling of the maximum of the entanglement entropy $S(L/2)$ for an equal partition of the system, i.e. $\ell=L/2$, with the logarithm of the lattice size. The different symbols correspond to different values of $\epsilon$ and the straight lines are linear fits of the data.[]{data-label="fig21b"}](fig21b.eps "fig:"){width="9cm"} CFT predicts that the entanglement entropy of a block of size $\ell$ behaves as [@Cardy] $$S(\ell)=c\rho\ln\left[{L\over\pi}\sin\left({\pi\ell\over L}\right)\right] +{\rm Cst.}, \label{EqEntropieCFT}$$ where $c$ is the central charge and $\rho$ is equal to $1/3$ for periodic boundary conditions and $1/6$ for open boundaries. However, this relation was obtained on a finite but continuous manifold and not on a lattice. Therefore, it is only poorly verified by our numerical data, for which strong lattice effects are still present. Nevertheless, the predicted dependence on the lattice size $L$ is well reproduced by the numerical data. For an equal partition of the system, i.e. when plugging $\ell=L/2$ into (\[EqEntropieCFT\]), the entanglement entropy $S(L/2)$ is expected to be a linear function of $\ln L$ with a slope $\rho c$. The numerical data at the maximum of $S(L/2)$ are in good agreement with this prediction, as shown in figure \[fig21b\]. This confirms the divergence of the correlation length with the lattice size and, therefore, the occurrence of a phase transition. Because of the magnetic and electric fields coupled to the boundaries of the system during the numerical computations, the CFT prediction for the slope of $S(L/2)$ with $\ln L$ does not apply. Autocorrelation functions ========================= As discussed in the introduction, the average connected autocorrelation functions $\overline{A(t)}$ display three different behaviors according to the values of the parameters $\delta$ and $\epsilon$, i.e. the position in the phase diagram. On the critical lines, a slow relaxation (\[AutocorrFP\]) depending on the logarithm of $t$ is expected. In the Griffiths phases, rare regions induce an algebraic decay (\[AutocorrGrf\]) of the autocorrelation functions, with an exponent $1/z$. Finally, away from the Griffiths phases, a more usual exponential decay is recovered.
(Figure \[fig16-7\]: spin-spin (left) and polarization-polarization (right) connected autocorrelation functions $\overline{A_\sigma(t)}$ and $\overline{A_{\sigma\tau}(t)}$ versus $t$ at $\epsilon=4$. Legend of the left panel: $h=0.100$ ($1.00$), $0.249$ ($1.00$), $0.402$ ($0.66$), $0.602$ ($0.54$), $0.917$ ($0.67$), $1.059$ ($0.82$), $1.203$ ($1.00$), $1.483$ ($1.00$), $2.032$ (exp.), $3.235$ (exp.), $5.969$ (exp.), $9.960$ (exp.). Legend of the right panel: $h=0.100$ (exp.), $0.249$ (exp.), $0.402$ (exp.), $0.602$ ($1.00$), $0.917$ ($1.00$), $1.059$ ($1.00$), $1.203$ ($0.91$), $1.483$ ($0.82$), $2.032$ ($0.63$), $3.235$ ($0.64$), $5.969$ ($1.00$). The values in parentheses are the fitted $1/z$; "exp." denotes an exponential fit.) ![image](fig16.eps){width="8.85cm"} ![image](fig17.eps){width="8.85cm"} The algebraic decay of the autocorrelation functions in the Griffiths phases has been observed numerically in the case of the random quantum Ising chain by exploiting the mapping onto a gas of free fermions [@Young]. Since the lattice sizes that we were able to reach with DMRG are much smaller, such an algebraic decay of the spin-spin or polarization-polarization autocorrelation functions could not be observed for the random Ashkin-Teller model. A purely algebraic behavior is indeed expected to hold only in the large-time limit $t\gg 1$ and in the thermodynamic limit $L\gg 1$. A transient regime may be observed for small times $t$ while, for large times $t$, the finite size of the system may induce an exponential decay of the autocorrelation functions. Usually, one looks for an intermediate regime in the numerical data where the asymptotic behavior holds. No such intermediate regime could be found for either the spin-spin or the polarization-polarization autocorrelation functions. This is particularly clear when plotting an effective exponent ${d\ln \overline{A(t)}\over d\ln t}$ versus $t$. For values of the transverse field $h$ expected to be in the Griffiths phases, two non-algebraic regimes, where the effective exponent varies with $t$, are observed at short and large times. But in between, no plateau corresponding to a purely algebraic decay could be distinguished.\ To fit our numerical data, we used an extension of the expression proposed by Rieger [*et al.*]{} for autocorrelation functions in a Griffiths phase [@RiegerYoung]. The assumptions are the same: in the paramagnetic phase, the probability of an ordered region of linear size $\ell$ scales as $\wp(\ell)\sim e^{-c\ell}$ and its tunneling time is $\tau(\ell)\sim e^{\sigma'\ell}$.
In a finite system of width $L$, the linear size of the rare regions is bounded by $L$, so the average autocorrelation function reads $$\begin{aligned} \overline{A(t)} =\overline{e^{-t/\tau}} &=&\int_0^L \wp(\ell)e^{-t/\tau(\ell)}d\ell \nonumber\\ &=&{t^{-c/\sigma'}\over\sigma'}\int_{te^{-\sigma'L}}^t v^{c/\sigma'-1}e^{-v}dv \nonumber\\ &=&{t^{-1/z}\over\sigma'}\big[\gamma(1/z,t) -\gamma(1/z,te^{-\sigma'L})\big] \end{aligned}$$ where $v=te^{-\sigma'\ell}$, $\sigma'/c=z$ is the dynamical exponent, and $\gamma(a,x)$ is the incomplete gamma function. In the limit of large time $t$ and lattice size $L$, one recovers the prediction $\overline{A(t)}={\Gamma(1/z)\over\sigma'}t^{-1/z}$ obtained in the saddle-point approximation.\ The numerical estimates of the connected autocorrelation functions were fitted with the 4-parameter non-linear [*ansatz*]{} $$\overline{A(t)}=a_1t^{-a_2}|\gamma(a_2,a_3t)-\gamma(a_2,a_4t)|. \label{AnsatzA}$$ The bounds $0<a_2\le 1$ were imposed during the fitting procedure. The quality of the fit was quantified using the mean-square deviation $\chi^2$. Because the boundaries of the Griffiths phases are not known with good accuracy, the data were also fitted with an exponential $\overline{A(t)}=a_1e^{-a_2t}$. The spin-spin and polarization-polarization autocorrelation functions are plotted in figure \[fig16-7\] for various transverse fields $h$ at $\epsilon=4$. The continuous lines correspond to the best fit, Eq. (\[AnsatzA\]) or exponential, i.e. the one with the smallest mean-square deviation $\chi^2$. The inverse $1/z$ of the dynamical exponent is indicated in the legend when (\[AnsatzA\]) is the best fit, while [*exp*]{} indicates a fit with an exponential. As can be seen in the figures, the data are nicely reproduced by an exponential decay for large and small transverse fields $h$. Close to the transition point, the best fit is obtained with (\[AnsatzA\]), which means that the corresponding range of transverse fields lies in a Griffiths phase. As expected, when $\epsilon\le 1$, these phases are centered around $h=1$ and their boundaries are similar for the spin-spin and polarization-polarization autocorrelation functions. For $\epsilon=4$, the Griffiths phase is shifted to smaller values of the transverse field for the spin-spin autocorrelation functions and to larger ones for the polarization-polarization autocorrelations. For $\epsilon=2$, the shift is seen only for the polarization-polarization autocorrelation functions. This was also the case for the peak of the autocorrelation time (figure \[fig12\]). At the boundaries of the Griffiths phases, the data are not well fitted by either an exponential form or the [*ansatz*]{} (\[AnsatzA\]). When the best fit is obtained with the [*ansatz*]{} (\[AnsatzA\]), the dynamical exponent takes the value $z=1$, i.e. it saturates the imposed bound $a_2=1/z\le 1$. The deviation between the fit and the numerical data is clearly visible in figure \[fig16-7\]. The transverse fields for which such a deviation occurs are probably in a cross-over region where the autocorrelation functions display a more complex behavior.
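A minimal version of this fitting procedure is sketched below with SciPy; the incomplete gamma function is built from the regularized routine, and the arrays `t` and `A` are assumed to hold a sampled connected autocorrelation function (for instance the one produced in the earlier sketch). The initial guesses and the small positive lower bound on $a_2$ are assumptions made for numerical convenience.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma, gammainc

def lower_gamma(a, x):
    """Unregularized lower incomplete gamma function gamma(a, x)."""
    return gamma(a) * gammainc(a, x)

def ansatz(t, a1, a2, a3, a4):
    """Eq. (AnsatzA): a1 * t^(-a2) * |gamma(a2, a3 t) - gamma(a2, a4 t)|, with a2 = 1/z."""
    return a1 * t**(-a2) * np.abs(lower_gamma(a2, a3 * t) - lower_gamma(a2, a4 * t))

def exponential(t, a1, a2):
    return a1 * np.exp(-a2 * t)

mask = t > 0                                      # t^(-a2) is singular at t = 0
popt_g, _ = curve_fit(ansatz, t[mask], A[mask], p0=[1.0, 0.5, 1.0, 0.1],
                      bounds=([0.0, 1e-3, 0.0, 0.0], [np.inf, 1.0, np.inf, np.inf]))
popt_e, _ = curve_fit(exponential, t[mask], A[mask], p0=[1.0, 1.0])
chi2_g = np.mean((A[mask] - ansatz(t[mask], *popt_g))**2)
chi2_e = np.mean((A[mask] - exponential(t[mask], *popt_e))**2)
best = ("ansatz, 1/z = %.2f" % popt_g[1]) if chi2_g < chi2_e else "exponential"
```

Whichever of the two fits gives the smaller mean-square deviation is retained, exactly as described in the text; the fitted $a_2$ is the estimate of $1/z$.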
(Figures \[fig16c\] and \[fig17c\]: inverse dynamical exponent $1/z$ versus $h$ for $L=8$, $10$, $12$, $16$ and $20$, in panels $\epsilon=1/4$, $1/2$, $1$, $2$ and $4$.) ![image](fig16c.eps){width="20cm"} ![image](fig17c.eps){width="20cm"} In figures \[fig16c\] and \[fig17c\], the inverse of the dynamical exponent $z$ is plotted versus the transverse field $h$. As conjectured in Ref. [@Vojta1], the dynamical exponents display a peak centered at the corresponding critical point, i.e. at the magnetic transition for the dynamical exponent of the spin-spin autocorrelation functions and at the electric transition for the polarization-polarization autocorrelations. As already observed for the other peaked quantities, the two transitions occur at the same control parameter for $\epsilon\le 1$ and are separated for $\epsilon>1$. Note that the maxima of the dynamical exponents are found at the locations of those of the autocorrelation times and of the variance of the disorder fluctuations. Between these two transition lines, there is therefore a double Griffiths phase, i.e. a disordered Griffiths phase in the magnetic sector and an ordered one in the electric sector, where both dynamical exponents $z$ are larger than 1. However, as seen in figures \[fig16c\] and \[fig17c\], these Griffiths phases are not infinite but have a finite extension, because of the binary distribution of the couplings $J_i$ and $K_i$. For $\epsilon=4$, it is observed that the magnetic and electric Griffiths phases still overlap. Nevertheless, this will probably no longer be the case for larger values of $\epsilon$.\ In the random Ising chain, the dynamical exponent was shown to behave as $z\simeq 1/(2|\delta|)$ in the Griffiths phase [@Igloi2001]. A similar behavior also seems reasonable in the case of the random Ashkin-Teller chain, as can be seen in figures \[fig16c\] and \[fig17c\]. The boundaries $\delta_+=-\ln h_+$ and $\delta_-=-\ln h_-$ of the Griffiths phase were first estimated, respectively, as the first and last points with a dynamical exponent $z>1$. The critical point is assumed to be located at $\delta_c=(\delta_++\delta_-)/2$. The two dashed lines plotted in figures \[fig16c\] and \[fig17c\] simply correspond to the straight lines $1/z(\delta)=(\delta-\delta_c)/(\delta_+-\delta_c)$ for $\delta\in [\delta_c;\delta_+]$ and $1/z(\delta)=(\delta-\delta_c) /(\delta_--\delta_c)$ for $\delta\in [\delta_-;\delta_c]$. The slope is not equal to two, as in the Ising model, but lies in the range $1$ to $1.5$. As the lattice size is increased, the numerical data seem to accumulate on these straight lines, at least at the boundaries of the Griffiths phase. In the neighborhood of the critical point, much larger lattice sizes would be necessary to test this linear behavior of $1/z$. In the case of $L=20$, the dynamical exponent seems to be over-estimated for $h<h_c$.
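For completeness, a small helper implementing the construction of these reference lines is sketched below; it assumes the control parameters are supplied in increasing order of $\delta$, which is an assumption of this sketch rather than of the analysis.

```python
import numpy as np

def inv_z_reference(delta, z):
    """Dashed reference lines described in the text. `delta` must be sorted in increasing
    order; the Griffiths boundaries are the first and last points with z > 1, the critical
    point is their midpoint, and 1/z is interpolated linearly between 0 at delta_c and 1
    at the two boundaries."""
    delta, z = np.asarray(delta, float), np.asarray(z, float)
    inside = np.where(z > 1.0)[0]
    d_minus, d_plus = delta[inside[0]], delta[inside[-1]]
    d_c = 0.5 * (d_minus + d_plus)
    line = np.where(delta >= d_c,
                    (delta - d_c) / (d_plus - d_c),
                    (delta - d_c) / (d_minus - d_c))
    return d_minus, d_c, d_plus, line
```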
This over-estimation may again be explained by the under-sampling already observed in the disorder fluctuations of the magnetization and polarization. Conclusions =========== The random quantum Ashkin-Teller chain has been studied by means of the time-dependent Density Matrix Renormalization Group. The average over all possible disorder configurations was performed for $L\le 16$. For $L=20$, a partial average is observed to induce an under-sampling of the disorder fluctuations of the magnetization and polarization. Such partial averages are commonly used in the literature in the study of random systems. Our data show that they should be considered with great care, especially in the quantum case.\ The analysis of the integrated autocorrelation times and of the variance of the disorder fluctuations leads to a phase diagram qualitatively in agreement with the one conjectured by Hrahsheh [*et al.*]{} on the basis of SDRG [@Vojta1]. However, finite-size effects are large, especially for the entanglement entropy, and our lattice sizes are too small to allow for an accurate extrapolation to the thermodynamic limit. The coincidence of the maxima of the disorder fluctuations with the critical lines confirms that the phase transition is governed by disorder fluctuations, and not by quantum fluctuations. Nevertheless, the divergence of the entanglement entropy as the logarithm of the lattice size is recovered, as in pure quantum chains. In the regime $\epsilon>1$, the existence of a double Griffiths phase is confirmed. Using an original method to take finite-size effects into account, the two dynamical exponents, associated with the algebraic decay of the spin-spin and polarization-polarization autocorrelation functions respectively, could be computed. They display the expected behavior in a Griffiths phase: a peak centered at the magnetic or electric transition. Furthermore, it seems reasonable to assume that they diverge in the thermodynamic limit as $z(\delta)\sim 1/|\delta|$. It is our pleasure to gratefully thank Cécile Monthus for discussions and for having pointed out some useful references on the topic. [99]{} A.B. Harris (1974) [*J. Phys. C: Solid State Phys.*]{} [**7**]{} 1671. S. Wiseman, and E. Domany (1995) [*Phys. Rev. E*]{} [**51**]{} 3074. J. Cardy, and J.L. Jacobsen (1997) [*Phys. Rev. Lett.*]{} [**79**]{}, 4063. J. L. Jacobsen and J. L. Cardy (1998) [*Nucl. Phys. B*]{} [**515**]{}, 701. B.M. McCoy and T.T. Wu (1968) [*Phys. Rev.*]{} [**176**]{} [631]{}. B.M. McCoy and T.T. Wu (1969) [*Phys. Rev.*]{} [**188**]{} [982]{}. R.B. Griffiths (1969) [*Phys. Rev. Lett.*]{} [**23**]{} 17. T. Vojta (2006) [*J. Phys. A: Math. Gen.*]{} [**39**]{} R143. C. Chatelain (2013) [*Europhys. Lett.*]{} [**102**]{} 66007. C. Chatelain (2014) [*Phys. Rev. E*]{} [**89**]{} 032105. H. Rieger, and A. P. Young (1997) [*Quantum Spin Glasses*]{}. In Complex Behaviour of Glassy Systems, Eds. Miguel Rubí and Conrado Pérez-Vicente, Lecture Notes in Physics 492. Springer Berlin Heidelberg. T.D. Schultz, D.C. Mattis, and E.H. Lieb (1964) [*Rev. Mod. Phys.*]{} [**36**]{} 856. R. Shankar, and G. Murthy (1987) [*Phys. Rev. B*]{} [**36**]{} 536. A.P. Young, and H. Rieger (1996) [*Phys. Rev. B*]{} [**53**]{}, 8486. C. Dasgupta, and S.-K. Ma (1980) [*Phys. Rev. B*]{} [**22**]{} 1305. D.S. Fisher (1992) [*Phys. Rev. Lett.*]{} [**69**]{} 534. D.S. Fisher (1995) [*Phys. Rev. B*]{} [**51**]{} 6411. F. Iglói, and C. Monthus (2005) [*Phys. Rep.*]{} [**412**]{} 277. F. Iglói, and H. Rieger (1998) [*Phys. Rev. B*]{} [**57**]{} 11404. I. Kovács, and F. Iglói (2010) [*Phys. Rev.
B*]{} [**82**]{} 054437. I. Kovács, and F. Iglói (2011) [*Phys. Rev. B*]{} [**83**]{} 174207. I. Kovács, and F. Iglói (2012) [*Eur. Phys. Lett.*]{} [**97**]{} 67009. T. Senthil and S. N. Majumdar (1996) [*Phys. Rev. Lett.*]{} [**76**]{}, 3001. E. Carlon, C. Chatelain, and B. Berche (1999) [*Phys. Rev. B*]{} [**60**]{}, 12974. J. Ashkin, and E. Teller (1943) [*Phys. Rev.*]{} [**64**]{}, 178. C. Fan (1972) [*Phys. Lett.*]{} [**39A**]{}, 136. M. Kohmoto, M. den Nijs, and L.P. Kadanoff (1981) [*Phys. Rev. B*]{} [**24**]{}, 5229. C. Fan (1972) [*Phys. Rev. B*]{} [**6**]{} 902. G. Kamieniarz, P. Koz[ł]{}owski, and R. Dekeyser (1997) [*Phys. Rev. E*]{} [**55**]{}, 3724. E. Carlon, P. Lajkó, and F. Iglói (2001) [*Phys. Rev. Lett.*]{} [**87**]{} 277201. F. Hrahsheh, R. Narayanan, J.A. Hoyos, and T. Vojta (2014) [*Phys. Rev. B*]{} [**89**]{} 014401. F. Hrahsheh, J.A. Hoyos, and T. Vojta (2012) [*Phys. Rev. B*]{} [**86**]{} 214204. B. Derrida, and H. Hilhorst (1981) [*J. Phys. C: Solid State Phys.*]{} [**14**]{} L539. S.R. White (1992) [*Phys. Rev. Lett.*]{} [**69**]{} 2863. S.R. White (1993) [*Phys. Rev. B*]{} [**48**]{} 10345. U. Schollwoeck (2005) [*Rev. Mod. Phys.*]{} [**77**]{} 259. L. Amico, R. Fazio, A. Osterloh, and V. Vedral (2008) [*Rev. Mod. Phys.*]{} [**80**]{}, 517. G. Vidal, J.I. Latorre, E. Rico, and A. Kitaev (2003) [*Phys. Rev. Lett.*]{} [**90**]{}, 227902. P. Calabrese and J. Cardy (2004) [*J. Stat. Mech.: Theory Exp.*]{} P06002. G. Refael and J.E. Moore (2004) [*Phys. Rev. Lett.*]{} [**93**]{}, 260602. F. Iglói, R. Juhász, and P. Lajkó (2001) [*Phys. Rev. Lett.*]{} [**86**]{} [1343]{} [^1]: The case where $K_i/J_i=\epsilon_J$ and $g_i/h_i =\epsilon_h$ are different was considered in [@Vojta1]. At the infinite-randomness fixed point, both quantities are renormalized to the same value, $\epsilon^*=0$ in the weak-coupling regime ($\epsilon_J,\epsilon_h<1$) and $\epsilon\rightarrow +\infty$ in the strong-coupling one. Without loss of generality, one can start with $\epsilon_J=\epsilon_h$. The more general case where $\epsilon_J$ and $\epsilon_h$ are random variables and are allowed to take values both above and below 1 was also considered in [@Vojta1] and leads to a different critical behavior at the multicritical point.
--- abstract: 'We show that the predictability of letters in written English texts depends strongly on their position in the word. The first letters are usually the least easy to predict. This agrees with the intuitive notion that words are well defined subunits in written languages, with much weaker correlations across these units than within them. It implies that the average entropy of a letter deep inside a word is roughly 4-5 times smaller than the entropy of the first letter.' author: - Thomas Schürmann and Peter Grassberger title: The predictability of letters in written English --- Since language is used to transmit information, one of its most quantitative characteristics is the entropy, i.e., the average amount of information (usually measured in bits) per character. Entropy as a measure of information was introduced by Shannon [@SW]. He also performed extensive experiments [@S] using the ability of humans to predict continuations of printed text. This and similar experiments [@CK; @LR] led to estimates of typically $\approx 1-1.5$ bits per character. In contrast, the best computer algorithms, whose predictions are based on sophisticated statistical methods, reach entropies of $\approx 2-2.4$ bits [@BCW]. Even this is better than what commercial text compression packages achieve: starting from texts where each character is represented by one byte, they typically achieve compression ratios $\approx 2$, corresponding to $\approx 4$ bits/character. These differences result from different abilities to take into account long-range correlations, which are present in all texts and whose utilization requires not only a good understanding of language but also substantial computational resources. Formally, the Shannon entropy $h$ of a letter sequence $(...,s_{-1},s_0,s_1,...)$ over an alphabet of $d$ letters is given by $$\begin{aligned} \label{eq2} h=&-&\lim\limits_{n\to\infty}\sum_{s_{-n},...,s_0} p(s_{-n},...,s_0)\\ &\times&\log\, p(s_0|s_{-1},...,s_{-n})\nonumber\\ &=&\lim\limits_{n\to\infty}\langle -\log\, p(s_0|s_{-1},...,s_{-n})\rangle\end{aligned}$$ where $p(s_{-n},...,s_0)$ is the probability for the letters at positions $-n$ to $0$ to be $s_{-n}$ to $s_0$, and $p(s_0|s_{-1},...,s_{-n})=\frac{p(s_{-n},...,s_0)}{p(s_{-n},...,s_{-1})}$. The second line of this equation tells us that $h$ can be considered as an average of the [*information*]{} (or [*bit number*]{}) of individual letters. While Eq. (\[eq2\]) obviously assumed stationarity, we can define the latter also for nonstationary sequences, provided they are distributed according to some probability $p$ which satisfies the Kolmogorov consistency conditions. The information of the $k$th letter, when it follows the string $...,s_{k-2},s_{k-1}$, is thus defined as: $$\begin{aligned} \label{bitnum} \eta_k=\lim\limits_{n\to\infty} \log \frac{1}{p(s_k|s_{k-1},...,s_{k-n})}\end{aligned}$$ Notice that this depends both on the previous letters (or “contexts” [@WRF]) and on $s_k$ itself. If the sequence is only one-sided infinite (as for written texts), we extend it to the left with some arbitrary but fixed sequence, in order to make the limit in Eq. (\[bitnum\]) well defined. When trying to evaluate $\eta_k$, the main problem is the fact that $p(s_k|s_{k-1},...,s_{k-n})$ is not known. The best we can do is to obtain an estimator $\hat{p}(s_k|s_{k-1},...,s_{k-n})$, which then leads to an information estimate $\hat{\eta}_k$ and to $$\begin{aligned} \label{eq4} \hat{h}_N=\frac{1}{N}\sum\limits_{k=1}^N\hat{\eta}_k\end{aligned}$$ for a text of length $N$.
This can also be used for testing the quality of the predictor $\hat{p}(s_k|s_{k-1},...,s_{k-n})$: the best predictor is the one which leads to the smallest $\hat{h}$. This is indeed the main criterion by which $\hat{p}(s_k|s_{k-1},...,s_{k-n})$ is constructed. In this way we not only get an estimate $\hat h$ of $h$, but we can also investigate the predictability of individual letters within specific contexts. The fact that different letters have different predictabilities is of course well known. If no contexts are taken into account at all, then the best predictor is based on the frequencies of letters, making the most frequent ones the easiest to predict. Studies of these frequencies exist for all important languages. Much less effort has gone into the context dependence. Of course, the next natural distributions after the single-letter probabilities are the distributions of pairs and triples, which give contexts of length 1 and 2, and which have also been studied in detail [@BCW]. But these distributions do not directly reflect some of the most prominent features of written languages, namely, that they are composed of subunits (words, phrases) which are put together according to grammatical rules. In the following, we shall study the simplest consequences of this structure. If words are indeed natural units, it should be much easier to predict letters coming late in the word - where we have already seen several letters with which they should be strongly correlated - than letters at the beginnings of words. Surprisingly, this effect has not yet been studied in the literature, maybe due to a lack of efficient estimators of the entropies of individual letters. A similar, but maybe less pronounced, effect is expected with words replaced by phrases. In our investigation, we use an estimator which is based on minimizing $\hat h$. Technically, it builds a rooted tree with contexts represented as paths starting at some inner node and ending at the root. The tree is constructed such that each leaf corresponds to a context which is seen a certain number of times (typically, 2-5), and each internal node has appeared more often as a context. A heuristic rule is used for estimating $\hat p$ for each context length, and the optimal context length is chosen such that it will most likely lead to the smallest $\hat h$. Details of this algorithm (which resembles those discussed in Refs. [@BCW] and [@WRF]) are given in [@SG]. The information needed to predict a letter with this algorithm consists, on the one hand, of the rules entering the algorithm, and on the other, of the structure stored in the tree. In the present application, we first built two trees, each based on $\approx 4\times 10^6$ letters from Shakespeare [@Sh] and from the LOB corpus [@LOB], respectively. We then used these trees to predict an additional $\approx 10^6$ letters from these texts. The average estimated entropies were 2.0 bit/character for both texts, which is slightly better than the best published values [@BCW]. In Figs. 1 and 2, we show the average information per letter as a function of the position in the word [^1]. We see indeed a dramatic decrease, both for Shakespeare and for the LOB corpus. The information for the first letter is $\approx 3.8$ bits, which is close to the estimate of 4.1 bit/letter if no contexts are used at all. Thus there is very little information across words which can be used by the algorithm. Already the second letter can be predicted much more easily, having an uncertainty of $\approx 2$ bits.
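The flavour of such a position-resolved measurement can be conveyed with a much simpler estimator than the adaptive context tree of Ref. [@SG]: the sketch below uses a fixed-order character model with ad hoc add-$1/2$ smoothing and evaluates it in-sample, which makes the numbers optimistic. The corpus file name, the order $k$ and the smoothing are assumptions, and the result is only meant to illustrate the kind of position dependence discussed here.

```python
import re
from collections import Counter, defaultdict
from math import log2

ALPHABET = 27                                        # 26 letters plus the blank

def letters_only(text):
    """Lower-case letters separated by single blanks, cf. the footnote on word boundaries."""
    return " " + " ".join(re.findall(r"[a-z]+", text.lower())) + " "

def info_by_position(text, k=3, max_pos=10):
    """Average information -log2 p(letter | k preceding characters), grouped by the
    letter's position inside its word, from a fixed-order context model."""
    seq = letters_only(text)
    counts = defaultdict(Counter)
    for i in range(k, len(seq)):                     # train the order-k model
        counts[seq[i - k:i]][seq[i]] += 1
    info, pos = defaultdict(list), 0
    for i in range(k, len(seq)):                     # in-sample evaluation (optimistic)
        c = seq[i]
        if c == " ":
            pos = 0
            continue
        pos += 1
        ctx = counts[seq[i - k:i]]
        p = (ctx[c] + 0.5) / (sum(ctx.values()) + 0.5 * ALPHABET)
        if pos <= max_pos:
            info[pos].append(-log2(p))
    return {q: sum(v) / len(v) for q, v in sorted(info.items())}

text = open("shakespeare.txt", encoding="utf-8").read()   # assumed local plain-text corpus
print(info_by_position(text))
```

Repeating the measurement on a word-scrambled copy of the corpus (randomly permuting the word list before joining it back together) gives the surrogate comparison described below.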
The information decreases further with position, until a plateau is reached at the fifth letter, where $\hat{\eta}_5\approx 0.7$. ![Entropy per letter as a function of its position in a word for Shakespeare’s collected works: Original version (“unscrambled”) compared with the surrogate version created by scrambling the words (“scrambled”). Statistics for words longer than 18 letters are too poor to give meaningful estimates.](Fig1.eps){width="8.0cm"} Actually, we have to be careful when concluding that little information across words can be used by our algorithm. It might be that such information is useful for predicting subsequent letters even if it could not be used to predict the first one. To test this, we have created surrogate texts by scrambling the words: all words are permuted randomly, such that any correlation between them is lost while the correlations within words and the frequencies of words are unchanged. This increases the average entropies for both texts by $\approx 0.1$ bit/letter. The changes in the position-dependent entropies are shown in Figs. 1 and 2. We see that the entropies of the leading letter are increased significantly by scrambling, while those at positions $>4$ are hardly changed at all. ![Entropy per letter as a function of its position in a word for mixed texts from newspapers (LOB corpus): Original version (“unscrambled”) compared with the surrogate version created by scrambling the words (“scrambled”). Again, the curves are truncated when the statistical error becomes too large.](Fig2.eps){width="8.0cm"} Finally, we show in Figs. 3 and 4 how the estimated overall entropy depends on the length of the text, with and without scrambling. That these estimates decrease with the length is a simple consequence of the fact that the algorithm has to “learn” (by building the tree) before being able to make good estimates of $p$. The curves for the scrambled texts are smoother since the text has been made homogeneous by scrambling. Thus, all learned features will be useful for the future, while this is not true for the unscrambled texts: each time the subject changes, part of the learned features becomes useless, and new features have to be learned. Thus the convergence of $\hat{h}_N$ for scrambled texts reflects only the learning speed of the algorithm, while that for the unscrambled texts depends also on long-range correlations which can be detected only with higher statistics. Extrapolating $\hat{h}_N$ to $N\to\infty$ for unscrambled texts is thus highly non-trivial, as is suggested also by the very low entropies found in [@S]-[@LR]. In contrast, extrapolation of the curves for scrambled texts is much easier, and suggests that our estimates for $N\approx 4\times 10^6$ are already very close to the asymptotic ones. ![Entropy estimates of Shakespeare’s collected works: Original version (“unscrambled”) compared with a surrogate version created by scrambling the words (“scrambled”).](Fig3.eps){width="8.0cm"} ![Entropy estimates of mixed English texts from newspapers (LOB corpus): Original version (“unscrambled”); surrogate version created by scrambling the words (“scrambled”).](Fig4.eps){width="8.0cm"} In summary, we have shown that there are very strong differences in the predictability of letters, depending on their position within words. Although such dependencies are to be expected qualitatively, we find the size of the effect surprising. If our algorithm were optimal, it would mean that the constraints within words are indeed much stronger than those between words.
But the fact that subjective (human-based) entropy estimates [@S]-[@LR] are typically lower than machine-based ones suggests that our algorithm might not be perfect, even though it compares favorably with other algorithms available at present. Thus, our result might just mean that it is harder for the algorithm to learn grammatical (inter-word) than orthographic (intra-word) rules. But in that case, no algorithm of the type used here or in Refs. [@BCW] and [@WRF] could learn these rules even with much higher computational effort. Thus, our findings indeed represent an inherent feature of written English, as suggested also by the analysis of scrambled texts. Up to now, we have only studied the most primitive grammatical aspects. We should expect similar but less strong differences with the position in a phrase. Other features leading to similar effects could be dependent clauses or direct speech. Obviously, this is a rich field where much remains to be done. Eventually, this could then be used to create more efficient text compression algorithms.\ This work was supported by DFG within the Graduiertenkolleg “Feldtheoretische und numerische Methoden in der Elementarteilchen- und Statistischen Physik”. [10]{} C. E. Shannon and W. Weaver, “The Mathematical Theory of Communication”, (University of Illinois Press, Urbana, IL 1949). C. E. Shannon, “Prediction and entropy of printed English,” [*Bell Syst. Techn. J.*]{} [**30**]{}, 50 (1951). T. M. Cover and R. C. King, “A convergent gambling estimate of the entropy of English,” [*IEEE Trans. Inform. Theory*]{} [**24**]{}, 413 (1978). L. B. Levitin and Z. Reingold, “Evaluation of the entropy of a language by an improved prediction method with application to printed Hebrew,” (Tel Aviv Univ., preprint 1994). T. C. Bell, J. G. Cleary and I. H. Witten, “Text Compression” (Prentice Hall, Englewood Cliffs, N.J., 1990). M. J. Weinberger, J. J. Rissanen and M. Feder, “A universal finite memory source,” [*IEEE Trans. Inform. Theory*]{} [**41**]{}, 643 (1995). T. Schürmann and P. Grassberger, “Entropy estimation of symbol sequences,” CHAOS Vol. 6, No. 3 (1996) 414-427, eprint: http://www.arxiv.org/abs/cond-mat/0203436. W. A. Shakespeare, [*Collected Works*]{} (provided as ASCII-text by Project Gutenberg Etext, Illinois Benedictine College, Lisle). LOB Corpus. A collection of mixed English texts from newspapers (provided as ASCII-text by D. Wolff, Department of Linguistics, University of Wuppertal, Germany). [^1]: Technically, a word is defined as any string of letters following a blank and ending with the next blank. Punctuation marks and special characters were taken out in agreement with Ref. [@S], and all non-blank letters were converted to lower case.
--- author: - 'S.J. Oliver' - 'L. Wang' - 'A.J. Smith' - 'B. Altieri' - 'A. Amblard' - 'V. Arumugam' - 'R. Auld' - 'H. Aussel' - 'T. Babbedge' - 'A. Blain' - 'J. Bock' - 'A. Boselli' - 'V. Buat' - 'D. Burgarella' - 'N. Castro-Rodr[í]{}guez' - 'A. Cava' - 'P. Chanial' - 'D.L. Clements' - 'A. Conley' - 'L. Conversi' - 'A. Cooray' - 'C.D. Dowell' - 'E. Dwek' - 'S. Eales' - 'D. Elbaz' - 'M. Fox' - 'A. Franceschini' - 'W. Gear' - 'J. Glenn' - 'M. Griffin' - 'M. Halpern' - 'E. Hatziminaoglou' - 'E. Ibar' - 'K. Isaak' - 'R.J. Ivison' - 'G. Lagache' - 'L. Levenson' - 'N. Lu' - 'S. Madden' - 'B. Maffei' - 'G. Mainetti' - 'L. Marchetti' - 'K. Mitchell-Wynne' - 'A.M.J. Mortier' - 'H.T. Nguyen' - 'B. O’Halloran' - 'A. Omont' - 'M.J. Page' - 'P. Panuzzo' - 'A. Papageorgiou' - 'C.P. Pearson' - 'I. P[é]{}rez-Fournon' - 'M. Pohlen' - 'J.I. Rawlings' - 'G. Raymond' - 'D. Rigopoulou' - 'D. Rizzo' - 'I.G. Roseboom' - 'M. Rowan-Robinson' - 'M. Sánchez Portal' - 'R. Savage' - 'B. Schulz' - 'Douglas Scott' - 'N. Seymour' - 'D.L. Shupe' - 'J.A. Stevens' - 'M. Symeonidis' - 'M. Trichas' - 'K.E. Tugwell' - 'M. Vaccari' - 'E. Valiante' - 'I. Valtchanov' - 'J.D. Vieira' - 'L. Vigroux' - 'R. Ward' - 'G. Wright' - 'C.K. Xu' - 'M. Zemcov' bibliography: - 'counts.bib' date: 'Received Mar 30, 2010; accepted May 11, 2010' title: 'HerMES: SPIRE galaxy number counts at 250, 350 and 500$\mu$m[^1]' --- [Emission at far-infrared wavelengths makes up a significant fraction of the total light detected from galaxies over the age of the Universe. [Herschel]{} provides an opportunity for studying galaxies at the peak wavelength of their emission. Our aim is to provide a benchmark for models of galaxy population evolution and to test pre-existing models of galaxies. With the [Herschel]{} Multi-tiered Extra-galactic survey, HerMES, we have observed a number of fields of different areas and sensitivity using the SPIRE instrument on [Herschel]{}. We have determined the number counts of galaxies down to $\sim20$ mJy. Our constraints from directly counting galaxies are consistent with, though more precise than, estimates from the BLAST fluctuation analysis. We have found a steep rise in the Euclidean normalised counts below $100$ mJy. We have directly resolved $\sim15$% of the infrared extra-galactic background at the wavelength near where it peaks.]{} Introduction ============ The statistical properties of galaxy populations are important probes for understanding the evolution of galaxies. The most basic statistic of galaxy populations is the number counts, i.e. the number density of galaxies as a function of flux. The first strong evidence for cosmological evolution came through studying number counts of radio galaxies (e.g. @longair66). The number counts at far-infrared and sub-mm wavelengths are well known to exhibit strong evolution, e.g. from [*IRAS*]{} [@Oliver92 and references therein], [*ISO*]{} [@Oliver2002; @her04 and references therein], [*Spitzer*]{} [@Shupe2008; @Frayer2009 and references therein], and ground-based sub-mm surveys [@Maloney2005; @Coppin2006; @Khan2007; @Greve2008; @Weiss2009; @Scott2010 and references therein]. These results are underlined by the discovery of a significant extragalactic infrared background [@puget96; @Fixsen1998; @Lagache1999]. The background measures the flux-weighted integral of the number counts over all redshifts plus any diffuse cosmological component. This indicates that as much energy is received from galaxies after being reprocessed through dust as is received directly.
It is only very recently, using BLAST, that count models have been probed, using fluctuation techniques [@Patanchon2009] or directly [@Bethermin2010], at the wavelength where the background peaks. Far-infrared and sub-mm counts and background measurements have been modelled phenomenologically with strongly evolving populations (see Section \[sec:models\]). Physical models (e.g. so-called semi-analytic models) struggle to explain these counts, and solutions include altering the initial mass function (e.g. @Baugh2005) or exploiting AGN/supernova feedback (e.g. @Granato2004). A primary goal of [*Herschel*]{} [@Pilbratt2010] is to explore the evolution of obscured galaxies. [*Herschel*]{} opens up a huge region of new parameter space of surveys in area, depth and wavelength. The [*Herschel*]{} Multi-tiered Extragalactic Survey (HerMES[^2]; @oliver2010) is the largest project being undertaken by [*Herschel*]{} and consists of a survey of many well-studied extra-galactic fields (totalling $\sim70\, {\rm deg}^2$) at various depths. This letter is the first number count analysis from the HerMES Science Demonstration Phase SPIRE data. Even these preliminary results will be able to eliminate some existing models and provide a benchmark against which future models can be tested. SPIRE Data {#sec:data} ========== Science Demonstration Phase Observations ---------------------------------------- The observations described here were carried out on the [*Herschel*]{} Space Observatory [@Pilbratt2010] using the Spectral and Photometric Imaging Receiver (SPIRE). The SPIRE instrument, its in-orbit performance, and its scientific capabilities are described by [@Griffin2010], and the SPIRE astronomical calibration methods and accuracy are outlined in [@Swinyard2010]. The observations were undertaken as part of the HerMES programme during the Science Demonstration Phase between 12-Sep-2009 and 25-Oct-2009 under the proposal identification `SDP_soliver_3`. The fields and observations are summarised in Table \[tab:sdp\_obs\].

| Name | Size | RA /$^\circ$ | Dec /$^\circ$ | Roll /$^\circ$ | Mode | Scan Rate | Repeats | $t_{\rm AOR}$ /hr | $\langle N_{\rm samp}\rangle$ | $S_{50\%}^{250\mu{\rm m}}$ /mJy | $S_{50\%}^{350\mu{\rm m}}$ /mJy | $S_{50\%}^{500\mu{\rm m}}$ /mJy |
|---------------|--------------------------------|--------|-------|-----|----------|-----------------------|-----|------|------|------|------|------|
| A2218 | $9^\prime\times9^\prime$ | 248.98 | 66.22 | 217 | Lrg. Map | 30$^{\prime\prime}$/s | 100 | 9.2 | 1622 | 13.8 | 16.0 | 15.1 |
| FLS | $155^\prime\times135^\prime$ | 258.97 | 59.39 | 185 | Parallel | 20$^{\prime\prime}$/s | 1 | 16.8 | 30 | 17.5 | 18.9 | 21.4 |
| Lockman-North | $35^\prime\times35^\prime$ | 161.50 | 59.02 | 91 | Lrg. Map | 30$^{\prime\prime}$/s | 7 | 3.9 | 117 | 13.7 | 16.5 | 16.0 |
| Lockman-SWIRE | $218^\prime \times 218^\prime$ | 162.00 | 58.11 | 92 | Lrg. Map | 60$^{\prime\prime}$/s | 2 | 13.4 | 16 | 25.7 | 27.5 | 33.4 |
| GOODS-N | $30^\prime\times30^\prime$ | 189.23 | 62.24 | 132 | Lrg. Map | 30$^{\prime\prime}$/s | 30 | 13.5 | 501 | 12.0 | 13.7 | 12.8 |
SPIRE Catalogue Data Processing ------------------------------- For this paper it is sufficient to note that the same source detection method is applied to the simulations as to the real data, so we sketch the details only briefly. The single-band SPIRE catalogues have been extracted from the maps using a version of the [sussex]{}tractor method [@Savage2007] as implemented in [hipe]{} [@hipe2]. The processing of the SPIRE data is summarised here, with details of the approach given by [@Smith2010]. Calibrated timelines were created using HIPE development version 2.0.905, with a fix applied to the astrometry (included in more recent versions of the pipeline), with newer calibration files (beam-steering mirror calibration version 2, flux conversion version 2.3 and temperature drift correction version 2.3.2) and with a median and linear slope subtracted from each timeline. The default [hipe]{} naïve map-maker was then used to create maps, which were given a zero mean. The shallow fields (Lockman-SWIRE and FLS) were smoothed with a point-source optimised filter (see @Smith2010 for details). Peaks in the map were identified and the flux was estimated based on an assumed (Gaussian) profile for a point source, through a weighted sum of the map pixels close to the centre of the source. This filtering means we underestimate the flux of extended sources. The SPIRE Catalogue (SCAT) processing is assessed by injecting synthetic sources on a grid into the real maps. We then run the SCAT source extraction pipeline on these maps and claim success if the closest detection to the injected source is within a search radius of one [fwhm]{} of the beam and has a flux within a factor of two of the injected flux (see @Smith2010 for more details). The resulting 50% completeness estimated in this way is tabulated in Table \[tab:sdp\_obs\] but is not used to assess the counts. Method {#sec:method} ====== The [*Herschel*]{} beam is broad compared with the number density of sources, i.e. the maps are confused. @Nguyen2010 measure a variance in nominal map pixels due to confused sources, finding $\sigma_{\rm conf}=5.8$, 6.3 and 6.8 mJy/beam at 250, 350 and 500$\mu$m, respectively. This confusion means care has to be taken in the estimation of number counts. Our technique follows the standard approach for sub-mm surveys, correcting for flux boosting and incompleteness. We determined the false detection rate by applying the source extraction to maps obtained from the difference between two independent observations of the same field. These maps are expected to have zero mean and no sources, but similar noise properties to the mean maps. We thus estimate that the reliability for the samples in this paper is better than 97% for all fields and bands. A source we measure to have flux $S_{\rm m}$ and noise $\sigma_{\rm m}$ is more likely to be a dimmer source on top of a positive noise fluctuation than the converse; this is known as flux boosting. We follow the Bayesian method of [@Crawford2009] for estimating the fluxes of individual sources (“de-boosting”). We estimate the posterior probability distribution of the true flux of the source ($S_i$) that contributed the most flux to a given detection.
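A minimal sketch of this kind of de-boosting is given below, assuming a Gaussian likelihood and a simple power-law stand-in for the count-model prior; the slope, the flux grid and the omission of the low-flux suppression term mentioned next are illustrative assumptions, not the HerMES pipeline.

```python
import numpy as np

def deboost_samples(S_m, sigma_m, slope=-3.0, n_samples=10000, rng=None):
    """Posterior samples of the intrinsic flux S_i given a measurement (S_m, sigma_m),
    with a Gaussian likelihood and a power-law stand-in for the dN/dS prior."""
    rng = rng or np.random.default_rng(0)
    S = np.linspace(S_m / 5.0, 5.0 * S_m, 2000)    # sampled flux range, as quoted in the text
    prior = S**slope                                # illustrative prior; a count model would go here
    likelihood = np.exp(-0.5 * ((S_m - S) / sigma_m)**2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return rng.choice(S, size=n_samples, p=posterior)

samples = deboost_samples(S_m=25.0, sigma_m=6.0)    # fluxes in mJy, purely illustrative
print(np.percentile(samples, [16, 50, 84]))
```

Drawing many samples from the posterior of every detection, and binning those samples, is what produces the de-boosted counts and their confidence region.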
This approach is similar to the now-standard flux de-boosting method (e.g. @Coppin2006) but with an additional exponential suppression term at low intrinsic flux. We derive counts by randomly sampling the posterior distribution ten thousand times. The flux de-boosting procedure has some dependency on the choice of prior number count model, so these samples are drawn from distributions produced for the full range of models discussed in Section \[sec:models\]. These samples provide a direct estimate of the confidence region of our counts. For faint sources, the posterior probability function rises beyond the sampled flux range ($S_{\rm m} / 5<S_i<5\,S_{\rm m}$), so that the de-boosted flux is highly uncertain. In those cases we flag the de-boosted flux as “bad”, and the derived number counts at flux levels where de-boosted fluxes are flagged “bad” are unreliable. We exclude count bins in which the fraction of “bad” sources is $>20\%$. We also estimate the uncertainty in this by looking at the variation in derived counts over the range of models. Errors are included in the plots. The flux de-boosting procedure assumes no clustering. Clustering will affect this and will be addressed in a later paper. We estimate the incompleteness in the whole process by running full simulations. We have constructed input maps from various number count models (@pearson09 [@lagache04; @Patanchon2009] and models from @Xu2003 and @Lacey2010). These input maps are then processed by the SPIRE Photometer Simulator ([sps]{}, @Sibthorpe2009) for observational programmes exactly the same as the real data. The timeline output of the [sps]{}, the map-making and the source extraction are then processed in the same way as the real data, including the flux de-boosting. The ratio of input to output counts gives us the completeness, with the standard deviation between models providing an estimate of the error in that estimate. Results {#sec:models} ======= ![Number counts obtained from HerMES source catalogues (the three panels show 250, 350 and 500$\mu$m). Filled circles are the mean number counts averaged over the following fields: GOODS-N & Lockman-North (faintest five bins only) and FLS & Lockman-SWIRE (brightest six bins only), with flux de-boosting, completeness corrections and field-to-field error bars. Model fit to fluctuations of BLAST maps (omitting upper limits, @Patanchon2009): shaded region; BLAST resolved counts [@Bethermin2010]: open triangles; @Khan2007 data point: open circle; asymptote from modelling of IRAS data [@Serjeant2005]: dotted line. Models are discussed in the text. The dashed line indicates the flux at which the integrated number density is (40 beams)$^{-1}$.[]{data-label="fig:realnumbercounts"}](PSWEuclidan_dNds_gaussian.pdf "fig:"){width="3.4in"}
![image](PMWEuclidan_dNds_gaussian.pdf "fig:"){width="3.4in"} ![image](PLWEuclidan_dNds_gaussian.pdf "fig:"){width="3.4in"} ![The integrated background light at 250, 350 and 500 $\mu$m from the HerMES counts determined in Figure \[fig:realnumbercounts\]. Dotted lines are the flux at which the integrated density is (40 beams)$^{-1}$. The hatched regions are measurements of the COBE background [@Lagache1999].[]{data-label="fig:cib"}](cib.pdf){width="3.4in"} The results are presented in Table \[tab:counts\] and in Figure \[fig:realnumbercounts\] with Euclidean normalisation. There are several sources of uncertainty in the number counts: Poisson noise from the raw counts; “sampling variance” due to additional fluctuations from real large-scale structure; additional Poisson noise from the sampling of the posterior flux distribution; and systematic errors from the corrections and from assumptions about priors and the effect of clustering on the de-boosting. We measure the standard deviation of the counts between fields, which includes Poisson errors and some of the other systematic errors. The errors plotted are the field-to-field variations (or the Poisson errors if larger) with the errors from flux de-boosting and completeness corrections added in quadrature. We see approximately flat counts for $S>100$ mJy and then a steep rise, which flattens again towards about 20 mJy. We find very good agreement with the number counts estimated from a $P(D)$ fluctuation analysis of the BLAST maps [@Patanchon2009]. We have also estimated, but do not show, the integral counts. The flux density at which the integral source counts reach 1 source per 40 beams (with beams defined as $3.87\times10^{-5}, 7.28 \times10^{-5}, 1.48 \times 10^{-4}\, {\rm deg}^2$) is $18.7 \pm 1.2, \, 18.4 \pm 1.1$ and $13.2 \pm 1.0$ mJy at 250, 350 and 500 $\mu$m respectively (N.B. these fluxes are slightly below our secure estimation of counts). Likewise, the number density at 100 mJy is $12.8\pm3.5,\, 3.7\pm 0.4$ and $0.8\pm0.1\, {\rm deg}^{-2}$. These last measurements alone will be sufficient to rule out many models. Since the first [*ISO*]{} results, many empirical models have been developed to predict and interpret the numbers and luminosities of IR galaxies as a function of redshift. These empirical models are based on a similar philosophy. The spectral energy distributions of different galaxy populations are fixed and the mid-IR, far-IR and submm data are used to constrain the luminosity function evolution. Current limits come from the mid-IR, far-IR and submm number counts, redshift distributions, luminosity functions, and the cosmic IR background.
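These quoted densities can be reproduced, to within the errors, directly from the binned counts. The sketch below does this for the 250$\mu$m block of Table \[tab:counts\]; the unit of the tabulated $dN/dS$ is not stated in the table as recovered here, so treating it as per steradian per Jy is an assumption (it is the choice that reproduces the quoted values), and the $>100$ mJy density uses only the bins lying entirely above 100 mJy.

```python
import numpy as np

# 250 micron block of Table tab:counts (already de-boosted and completeness corrected)
S_lo = np.array([20.0, 29.0, 51.0, 69.0, 111.0, 289.0]) * 1e-3        # Jy
S_hi = np.array([29.0, 51.0, 69.0, 111.0, 289.0, 511.0]) * 1e-3       # Jy
dNdS = np.array([2.0e8, 6.4e7, 1.2e7, 3.1e6, 2.1e5, 1.7e4])           # assumed sr^-1 Jy^-1

SR_PER_DEG2 = (np.pi / 180.0)**2          # steradians in one square degree
beam_250 = 3.87e-5                        # deg^2, 250 micron beam solid angle quoted in the text

# density of sources in the bins lying entirely above ~100 mJy (bins 5 and 6)
n_above_100 = np.sum(dNdS[4:] * (S_hi[4:] - S_lo[4:])) * SR_PER_DEG2  # deg^-2
print("N(> ~100 mJy) ~ %.1f deg^-2 (quoted: 12.8 +/- 3.5)" % n_above_100)

# source density corresponding to 1 source per 40 beams at 250 micron
print("1/(40 beams) = %.0f deg^-2" % (1.0 / (40.0 * beam_250)))
```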
The models all agree on the general trends, with a very strong evolution of the bright end ($> 10^{11} $L$_\odot$) of the luminosity function, and they yield approximately the same comoving number density of infrared luminous galaxies as a function of redshift. We compare the number counts with eight models, one pre-[*Spitzer*]{} [@Xu2003], two based on the [*ISO*]{}, SCUBA and [*Spitzer*]{} first results [@lagache04; @negrello07] and five more constrained by deep [*Spitzer*]{}, SCUBA, AzTEC, and recent BLAST observations [@LeBorgne2009; @pearson09; @rr09; @Valiante2009; @Franceschini2010]. The differences between the models lie in several details, with different assumptions sometimes leading to equally good fits to the current data. For example, @Valiante2009 conclude that it is necessary to introduce both an evolution in the AGN contribution and an evolution in the luminosity-temperature relation, while @Franceschini2010 reproduce the current data with only 4 galaxy populations and only one template for each population. We also compare with two semi-analytic models, those of @Lacey2010 and @Wilman2010. Comparison with the SPIRE number counts shows that many models cannot fit the bright end ($>100$ mJy). Exceptions are the models of @negrello07, @Valiante2009, @Franceschini2010 and @pearson09. Of these, only @Valiante2009 can fit the rise over $20<S<100$ mJy. [ The @Valiante2009 model has “cooler” spectral energy distributions at higher redshift. However, increasing the number of higher redshift galaxies would have a similar effect on the counts so it would be premature to assume the spectral energy distributions need revision.]{} We have also calculated the contribution of the resolved sources to the background intensity as a function of flux (shown in Figure \[fig:cib\]). [ At the (40 beams)$^{-1}$ depth we resolve $1.73\pm 0.33,\, 0.63\pm 0.18,\, 0.15\pm0.07\; {\rm nW}\,{\rm m}^{-2}\,{\rm sr}^{-1}$, or 15, 10, 6% of]{} the nominal measured values at 250, 350 and 500 $\mu$m [@Lagache1999]. [ Future work will provide more detailed constraints at the fainter limits. This will include a $P(D)$ analysis [@Glenn2010] and counts from catalogues extracted at known [*Spitzer*]{} source positions.]{}

[ccccccccc]{}
 & Bin 1 & Bin 2 & Bin 3 & Bin 4 & Bin 5 & Bin 6\
$S_{\rm min}$& 20 & 29 & 51 & 69 & 111 & 289\
$S_{\rm max}$& 29 & 51 & 69 & 111 & 289 & 511\
$S_{\rm euc}$& 23.8 & 37.5 & 58.9 & 85.9 & 166.2 & 374.1\
\
$\frac{dN}{dS}$& 2.0$\times 10^8$ & 6.4$\times 10^7$ & 1.2$\times 10^7$ & 3.1$\times 10^6$ & 2.1$\times 10^5$ & 1.7$\times 10^4$\
err$_1$& 38 & 22 & 28 & 35 & 23 & 6\
err$_2$& 10 & 6 & 7 & 6 & 4 & 19\
err$_3$& 4 & 3 & 7 & 4 & 7 & 23\
err$_4$& 11 & 19 & 30 & 28 & 30 & 8\
\
$\frac{dN}{dS}$ & 1.1$\times 10^8$ & 3.5$\times 10^7$ & 5.3$\times 10^6$ & 1.1$\times 10^6$ & 6.2$\times 10^4$ & 4.7$\times 10^3$\
err$_1$& 49 & 34 & 44 & 56 & 25 & 129\
err$_2$& 18 & 14 & 17 & 6 & 5 & 12\
err$_3$& 7 & 4 & 10 & 7 & 15 & 43\
err$_4$& 23 & 13 & 33 & 8 & 12 & 64\
\
$\frac{dN}{dS} $ & 3.6$\times 10^7$ & 1.1$\times 10^7$ & 1.6$\times 10^6$ & 2.3$\times 10^5$ & 1.3$\times 10^4$ & 1.3$\times 10^3$\
err$_1$& 83 & 50 & 62 & 56 & 45 & 0\
err$_2$& 31 & 18 & 25 & 15 & 18 & 7\
err$_3$& 10 & 6 & 18 & 14 & 33 & 50\
err$_4$& 5 & 17 & 48 & 27 & 20 & 0\

Conclusions {#sec:conclusions}
===========

[ We present the first SPIRE number count analysis of resolved sources, conservatively within the limit of [*Herschel*]{} confusion. We have measured counts which resolve around 15% of the infrared background at 250 $\mu$m.
We see a very steep rise in the counts from 100 to 20 mJy at 250, 350 and 500 $\mu$m. Few models have quite such a steep rise. Many models fail at the bright counts, $>100$ mJy. This may suggest that models need a wider variety or evolution of the spectral energy distributions or changes in the redshift distributions. Future work is required to accurately constrain the fainter end ($<20$ mJy), where confusion is a serious challenge. ]{}

Oliver, Wang and Smith were supported by UK’s Science and Technology Facilities Council grant ST/F002858/1. SPIRE has been developed by a consortium of institutes led by Cardiff Univ. (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC (UK); and NASA (USA). HIPE is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia. The data presented in this paper will be released through the [*Herschel*]{} Database in Marseille HeDaM ([hedam.oamp.fr/HerMES]{}).

[^1]: [*Herschel*]{} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

[^2]: hermes.sussex.ac.uk
---
abstract: 'The cluster soft excess emission indicates the presence of large amounts of warm gas (T$\sim10^6$ K) in the neighborhood of galaxy clusters. Among the clusters that display this phenomenon is the Coma cluster, the nearest rich galaxy cluster. The excess emission is more prominent at the cluster’s outskirts than at its center. Detailed studies of its large-scale emission – up to $\sim$2.6 Mpc from the cluster’s center – reveal that these warm baryons are as massive as, or possibly more massive than, the well-known hot intra-cluster medium (T$\sim10^8$ K). A possible interpretation of the excess emission from the Coma cluster is radiation from low-density filaments located in the neighborhood of the cluster. In this case, the filaments would extend for much larger distances, or feature higher density, than predicted by current cosmological simulations.'
author:
- 'Massimiliano Bonamente$\,^{1,2}$, Richard Lieu$\,^{1}$ and Jonathan P.D. Mittaz$\,^{1}$'
date: '?? and in revised form ??'
title: 'Warm gas in the outskirts of galaxy clusters – The cluster soft excess phenomenon'
---

Introduction to the soft excess emission
========================================

Clusters of galaxies are strong emitters of X-rays, which originate from a hot and diffuse intra-cluster medium (ICM) at temperatures of a few $\times 10^{7}$ K. The soft X-ray band below $\sim$ 1 keV often features the ‘soft excess’ emission phenomenon, i.e., radiation in excess of that expected from the hot ICM. The excess emission could originate from inverse-Compton scattering of cosmic microwave background (CMB) photons against a population of relativistic electrons in the intra-cluster medium (Hwang 1997, Sarazin and Lieu 1998, Ensslin and Biermann 1998, Lieu et al. 1999). Alternatively, warm gas at T$\sim 10^6$ K could be responsible for the soft emission (e.g., Lieu et al. 1996a,b, Nevalainen et al. 2003, Kaastra et al. 2003, Bonamente, Joy and Lieu 2003). Warm gas may reside inside the cluster, or in very diffuse filamentary structures outside the cluster, as seen in large scale hydrodynamic simulations (e.g., Cen and Ostriker 1999, Davé et al. 2001, Cen et al. 2001). The warm gas scenario appears to be favored by the current X-ray data (e.g., Bonamente, Lieu and Mittaz 2001a, Buote 2001, Kaastra et al. 2003).

Spatial and spectral features of the soft excess emission
=========================================================

The soft excess emission is present in $\sim$ 50% of the clusters. Two surveys by Bonamente et al. (2002) and Kaastra et al. (2003) reported, respectively, detection of soft excess emission in 18/38 clusters and in 7/21 clusters that were investigated. Also, the excess is normally detected when the observations have a high S/N, such as in the nearby Virgo (Bonamente, Lieu and Mittaz 2001a) and Coma clusters (Bonamente, Joy and Lieu 2003). The relative importance of the soft excess component – with respect to the hot ICM component – normally increases with radial distance from the cluster center. This is the case for the Coma cluster (Fig. 1; Bonamente, Joy and Lieu 2003), for A1795 (Mittaz et al. 1998), A2199 (Lieu, Bonamente and Mittaz 1999), Virgo (Bonamente, Lieu and Mittaz 2001a; Lieu et al. 1996a), MKW3s and A2052 (Kaastra et al. 2003). An important exception is AS1101 (Bonamente, Lieu and Mittaz 2001a), where the soft excess is sharply peaked at the cluster’s center.

![The radial distribution of the soft excess emission and of the hot ICM emission of Coma from ROSAT PSPC data.
The data were analyzed in concentric annuli, further divided into quadrants.](bonamente_f1.eps){width="5.5in"}

Thermal modelling of the soft excess component is normally favored by the goodness-of-fit analysis. Among the highest S/N data analyzed to date, the Coma cluster PSPC data favors the thermal interpretation for the majority of its regions (Fig. 2; Bonamente, Joy and Lieu 2003). The detection of OVII emission lines in the soft X-ray spectra of a few clusters, reported by Kaastra et al. (2003) from XMM data, would be the unmistakable signature of the thermal nature of the soft excess. A confirmation of those lines with higher S/N is however required before the detection can be regarded as definitive. The non-thermal interpretation remains a plausible explanation for the excess emission, and it cannot be formally rejected in its entirety. In order to explain the detected excess emission as non-thermal radiation, some clusters require that the cosmic rays be near or above pressure equipartition with the hot gas (Lieu et al. 1999; Bonamente, Lieu and Mittaz 2001a).

![ PSPC spectrum of the 50-70’ north-eastern quadrant of Coma. In green is the hot ICM plus power law model, in red the hot ICM model plus a low-energy thermal component.](bonamente_f2.eps){width="5.5in"}

Thermal interpretation of the soft excess emission
==================================================

If the excess emission is of thermal origin, it is possible to envisage two scenarios:\
(1) The ‘warm’ gas coexists with the ‘hot’ gas. In this case, it is possible to estimate the density of the warm gas from the emission integral $I=\int n^2 dV$; in Coma the warm gas would have a density of 9$\times 10^{-4}$ cm$^{-3}$ to $\sim 8 \times 10^{-5}$ cm$^{-3}$ (Bonamente, Joy and Lieu 2003). If the soft excess emission originates from a warm phase of the intra-cluster medium, the ratio of the emission integrals of the hot ICM and of the warm gas can be used to measure the relative mass of the two phases. The emission integral is readily measured by fitting the X-ray spectrum. For the PSPC data of the Coma cluster, we calculated $M_{warm}/M_{hot}$=0.75 within a radius of 2.6 Mpc (Bonamente, Joy and Lieu 2003).

\(2) The ‘warm’ gas resides in low density filamentary structures outside the clusters, the warm-hot intergalactic medium (WHIM). This scenario follows several cosmological simulations (e.g., Cen and Ostriker 1999) which identify the majority of the low-redshift baryons in a tenuous network of WHIM filaments. Following this interpretation, the gas should feature overdensities of $\delta \sim 3-300$, corresponding to $\sim 10^{-4}-10^{-6}$ cm$^{-3}$ (for $\Omega_b \sim 0.05$), and a median overdensity of $\sim 10-30$ (Davé et al. 2001). We employ a simple model where constant-density filaments are directed towards the observer. In this case, the emission integral of the soft phase becomes $I= n^2 A L$, where $A$ is the projected area of the filaments and $L$ their length along the line of sight. In order to explain the detected $I$, filaments should extend for several Mpc in front of the cluster (see Table 1). Consider the case of filaments with density $n=10^{-5}$ cm$^{-3}$ ($\delta \sim 30$): the filaments should then extend for several hundred megaparsecs! This figure is at odds with typical results from cosmological simulations, and with the fact that the Coma cluster is located at a distance of $\sim$ 95 Mpc from the Galaxy.
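This length estimate can be checked numerically from the emission-integral convention adopted in Table 1. The sketch below is illustrative only: it assumes a single constant-density filament seen over a circular 20 arcmin region, the 0–20 arcmin value of $I$ from Table 1 and a distance of 95 Mpc, and it is not the analysis of Bonamente, Joy and Lieu (2003).

```python
import numpy as np

MPC_CM = 3.086e24                 # cm per Mpc
D = 95.0 * MPC_CM                 # distance to Coma (~95 Mpc), in cm
z = 0.023                         # redshift used in Table 1

# Emission integral of the soft component for the 0-20 arcmin region, using
# the Table 1 convention: EM = I * 1e14 * 4*pi*(1+z)^2*D^2   [cm^-3]
I_soft = 0.057
EM = I_soft * 1e14 * 4.0 * np.pi * (1.0 + z) ** 2 * D ** 2

# Projected area of a filament covering a circle of 20 arcmin radius at D.
theta = (20.0 / 60.0) * np.pi / 180.0     # 20 arcmin in radians
A = np.pi * (D * theta) ** 2              # cm^2

# Required filament length L = EM / (n^2 A) for the two densities considered.
for n in (1e-5, 1e-4):                    # cm^-3
    L = EM / (n ** 2 * A)
    print(f"n = {n:.0e} cm^-3 -> L ~ {L / MPC_CM:,.0f} Mpc")
```

With these inputs the required lengths come out at roughly $2\times10^3$ Mpc and 20 Mpc for $n=10^{-5}$ and $10^{-4}$ cm$^{-3}$, the same order of magnitude as the corresponding entries in Table 1; the residual difference reflects the simplified geometry.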
Filaments of higher density ($\sim 10^{-4}$ cm$^{-3}$) will require shorter $L$ (Table 1); in this case the scenario becomes tenable (Bonamente, Joy and Lieu 2003), although still at odds with the results from cosmological simulations, which predict an average filament density considerably lower than $10^{-4}$ cm$^{-3}$. This result is consistent with a detailed analysis of the soft X-ray emission predicted by the WHIM filaments seen in some recent simulations (e.g., Cen et al. 2001). The analysis reveals that the WHIM filaments predict several times ($\geq 10$) lower soft X-ray fluxes than those of the typical soft excess emission (Mittaz et al. 2004) as, e.g., that of the Coma cluster. Similar results apply to the soft excess of the AS1101 cluster, where a strong centrally-peaked soft excess emission was detected (Bonamente, Lieu and Mittaz 2001a).

[lccc]{}\
Region & $I^{*}$ & $L$ (Mpc) & $L$ (Mpc)\
(arcmin) & & \[$n=10^{-5}$ cm$^{-3}$\] & \[$n=10^{-4}$ cm$^{-3}$\]\
\
0-20 & 0.057 & 1,800 & 18\
20-40 & 0.039 & 350 & 3.5\
40-55 & 0.066 & 610 & 6.1\
55-70 & 0.019 & 250 & 2.5\
70-90 & 0.007 & 220 & 2.2\
\
[(\*) The emission integral $I$ is in units of $10^{14}[ 4 \pi (1+z)^2 D^2]$, as in the XSPEC optically-thin MEKAL code. $D$ is the distance to the Coma cluster in cm and $z$=0.023 the redshift. A description of the PSPC data used to obtain the emission integrals $I$ can be found in Bonamente, Joy and Lieu (2003).]{}

Following this scenario, and for a filament density of $n=10^{-4}$ cm$^{-3}$, the PSPC data of Coma yield the conclusion that $M_{fil}/M_{hot}$=3 within a radius of 2.6 Mpc (Bonamente, Joy and Lieu 2003). The warm gas would therefore be more massive than the hot ICM if it is distributed in low-density filaments. As the total mass of the Coma cluster within 14 Mpc is 1.6$\pm0.4 \times 10^{15} M_{\odot}$ (Geller, Diaferio and Kurtz 1999) and the mass of the hot ICM is $\sim 4.3 \times 10^{14} M_{\odot}$ within 2.6 Mpc (Mohr, Mathiesen and Evrard 1999), the soft excess emission could account for a significant fraction of the low-redshift $\Omega_b$. Kaastra et al. (2003) reported similar results from XMM observations of several clusters including AS1101, MKW3s and A2052.

Conclusions
===========

Detection of soft X-ray excess emission from galaxy clusters is commonplace. The radiation is more prominent at the cluster outskirts, and it indicates that clusters may be a significant reservoir of ‘warm’ baryons. Precise mass estimates for the ‘soft’ component require knowledge of the exact location of the emitter. If the gas is located in filaments outside the clusters, the detection of soft excess in the Coma cluster indicates that the ‘warm’ gas is more massive than the hot ICM. Filaments of considerably higher density than predicted by the current cosmological simulations are needed to explain the detected excess emission from the Coma cluster.

Bonamente, M., Lieu, R., Joy, M.K. and Nevalainen, J.H. 2002, [Astrophys. J.]{}, 576, 688
Bonamente, M., Lieu, R. and Mittaz, J.P.D. 2001a, [Astrophys. J. Letters]{}, 561, 63
Bonamente, M., Lieu, R. and Mittaz, J.P.D. 2001b, [Astrophys. J.]{}, 547, 7
Buote, D.A. 2001, [Astrophys. J.]{}, 548, 652
Cen, R. and Ostriker, J.P. 1999, [Astrophys. J. Letters]{}, 514, L1
Cen, R., Tripp, T.M., Ostriker, J.P. and Jenkins, E.B. 2001, [Astrophys. J. Letters]{}, 559, L5
Davé, R., Cen, R., Ostriker, J.P., Bryan, G.L., Hernquist, L., Katz, N., Weinberg, D.H., et al. 2001, [Astrophys. J.]{}, 552, 473
Ensslin, T. and Biermann, P. 1998, [Astron. and Astrophys.]{}, 330, 90
Geller, M.J., Diaferio, A.
and Kurtz, M.J. 1999, [Astrophys. J.]{}, 517, L26
Hwang, C.-Y. 1997, Science, 278, 1917
Kaastra, J.S., Lieu, R., Tamura, T., Paerels, F.B.S. and den Herder, J.W. 2003, [Astron. and Astrophys.]{}, 397, 445
Lieu, R., Ip, W.-I., Axford, W.I. and Bonamente, M. 1999b, [Astrophys. J. Letters]{}, 510, L25
Lieu, R., Bonamente, M. and Mittaz, J.P.D. 1999, [Astrophys. J. Letters]{}, 517, L91
Lieu, R., Mittaz, J.P.D., Bowyer, S., Lockman, F.J., Hwang, C.-Y., Schmitt, J.H.M.M. 1996a, [Astrophys. J. Letters]{}, 458, L5
Lieu, R., Mittaz, J.P.D., Bowyer, S., Breen, J.O., Lockman, F.J., Murphy, E.M. & Hwang, C.-Y. 1996b, Science, 274, 1335
Mittaz, J.P.D., Lieu, R. and Lockman, F.J. 1998, [Astrophys. J. Letters]{}, 498, L17
Mittaz, J.P.D. et al. 2004, in preparation
Mohr, J.J., Mathiesen, B. and Evrard, A.E. 1999, [Astrophys. J.]{}, 517, 627
Nevalainen, J.H., Lieu, R., Bonamente, M. and Oosterbroeck, T. 2003, [Astrophys. J.]{}, in press
Sarazin, C.L. and Lieu, R. 1998, [Astrophys. J. Letters]{}, 494, L177
---
bibliography:
- 'bib/literature.bib'
---

Introduction {#sec:introduction}
============

Online consumer reviews represent a key source of information for customers considering purchasing a product [[e.g.]{} @Dellarocas.2003]. On modern online retailer platforms, users are typically provided with the opportunity to assign a product a star rating ranging from one star (very negative) to five stars (very positive). These product valuations not only inform other customers about the quality of a product, but also have a significant and positive impact on retail sales [@Chevalier.2006]. Another relevant feature of modern retailer platforms is that customers are typically provided with the opportunity to rate the perceived helpfulness of a review, [i.e.]{} the extent to which it facilitates their decision-making [@Mudambi.2010]. Existing research has demonstrated that reviews that are perceived as more helpful also have a greater influence on retailer sales [@Dhanasobhon.2007]. Since review helpfulness serves as a focal point to study human decision-making, several studies have focused on the question of what makes reviews helpful or unhelpful. For instance, longer and more detailed reviews are perceived as more helpful [@Yin.2016]. Previous literature has, however, produced mixed results regarding the effect of review ratings on helpfulness. While, for example, @Sen.2007 associate greater helpfulness with negative reviews, the results from @Mudambi.2010 point in the opposite direction. Apart from the numeric star ratings, customer reviews typically also contain a substantial amount of unstructured textual data, [i.e.]{} the review texts. This written content encompasses highly customer-relevant information, such as user experiences or customer opinions [@Cao.2011]. Nonetheless, previous works have primarily studied review helpfulness on the basis of structured information (such as star ratings or review length), whereas the textual component has been largely ignored. This is a notable gap, since language offers a rich source of information, with direct effects on human decision-making [@Pennebaker.2003]. In this context, a particularly decisive aspect is the two-sidedness of argumentation, [i.e.]{} that the reviewer illustrates both the positive and negative aspects of a particular product. For example, marketing research suggests that *“two-sided messages generate relatively high levels of attention and motivation to process because they are novel, interesting, and credible”* [@Crowley.1994]. Since online product reviews, similar to marketing campaigns, advocate a certain opinion about a product, one could expect two-sided argumentation in customer reviews to play an important role regarding their helpfulness. In order to address this important research question, this paper examines the effect of two-sided argumentation on the perceived helpfulness of customer reviews. For this purpose, we use a dataset of *192,189* Amazon customer reviews in combination with a novel text analysis method that allows us to study the line of argumentation on the basis of individual sentences. As detailed later in this paper, the method employs *distributed text representations* and *multi-instance learning* to transfer information from the document level to the sentence level. By assigning similar sentences to the same polarity label and differing sentences to opposite polarity labels, we are able to operationalize the two-sidedness of argumentation from a language-based perspective.
A subsequent empirical analysis suggests that two-sided argumentation in reviews significantly increases the helpfulness of the reviews. Moreover, we find this effect to be stronger for positive reviews than for negative reviews, whereas a higher degree of emotional language weakens the effect. This work immediately suggests manifold implications for practitioners and Information Systems research: we present a language-based approach to better understanding the role of two-sided argumentation in the assessment of customer reviews. In a next step, this allows practitioners to enhance their communication strategies with respect to product descriptions, social media content, and advertising. Moreover, our findings have immediate implications for retailer platforms, which can utilize our results to optimize their customer feedback system and present more useful product reviews. Ultimately, this study contributes to IS research by addressing the paramount question of how humans react to information in the form of written text.

The remainder of this work is structured as follows. Section \[sec:background\] establishes the background of our study and derives our research hypotheses. In Section \[sec:methodology\], we introduce our research methodology. Subsequently, Section \[sec:results\] presents our empirical setup and tests our hypotheses. In Section \[sec:discussion\] we discuss implications of our findings. Finally, Section \[sec:conclusion\] concludes and outlines our further research agenda.

Research Hypotheses {#sec:background}
===================

In this study, we aim to understand the role of two-sidedness in the helpfulness of customer reviews. Existing research in this direction has focused on the role of review extremity, [i.e.]{} whether the review rating is positive, negative, or neutral. For example, @Pavlou.2006 found that the extreme ratings of sellers on eBay are more influential than moderate ratings. In contrast, @Mudambi.2010 showed that for electronic devices, extreme reviews are less helpful than moderate reviews. In this paper, we hypothesize that a potential reason for these contradictory findings is that they study two-sidedness solely in the context of review ratings, whereas a review’s actual textual content is ignored. Based on this notion, we derive our research hypotheses, all of which aim at studying two-sidedness in reviews from a language-based perspective.

The line of arguments and their reasoning play a key role regarding the interpretation of information. For example, @Tversky.1974 find that the increased availability of justifications for a decision increases the confidence of the decision-maker. Similarly, @Schwenk.1986 shows that the arguments of managers are more persuasive when they provide more information in support of their position. This preference for diagnostic information can be based on multiple factors. For instance, a person may not yet have made the cognitive effort to identify the reasons for a decision. Similarly, the person might not be motivated to weigh the pros and cons regarding various alternatives. Hence, in the context of reviews, one could expect that an in-depth review from someone who has already expended the effort to assess a product helps other customers make the purchase decision. Reviews provide detailed information when presenting a balanced, two-sided view of both the pros and cons. We thus expect reviews with a higher degree of two-sidedness to exhibit a greater perceived helpfulness as compared to one-sided appeals with a clear-cut positive or negative opinion.
Therefore, our first research hypothesis states:

***Hypothesis 1.*** *A higher degree of two-sidedness increases the helpfulness of a review.*

Existing literature has produced mixed results regarding the question of whether positive or negative reviews are more helpful to customers. A possible reason for the inconsistent findings in previous works is that they ignore the initial beliefs of customers before assessing a product review [@Yin.2016]. Consumers evaluate reviews from other customers in order to help them fulfill their consumption goals [@Zhang.2010]. Positive reviews provide information about satisfactory experiences with the product, and thus represent opportunities to attain positive outcomes. Since positive reviews are more congruent with consumers’ goals, they are likely to be more persuasive than negative ones (positivity bias). Therefore, we may expect that a positive review refuting negative arguments removes the lingering doubts of a customer and provides particularly convincing information in his or her decision process [@Pan.2011]. We thus expect the role of two-sidedness to be stronger for positive reviews as compared to negative reviews. Therefore, H2 states:

***Hypothesis 2.*** *The effect of two-sidedness on review helpfulness is stronger for positive reviews.*

Another important question is how the dispersion of review ratings influences the effect of two-sided argumentation on helpfulness. A high dispersion of ratings indicates low agreement among reviewers, who exhibit a range of diverging opinions about a product. In addition, a higher rating dispersion indicates a higher relevancy of diversity in customers’ tastes or product details [@Clemons.2006]. We thus expect two-sidedness to be particularly informative when the dispersion of ratings is high. Thus, H3 states:

***Hypothesis 3.*** *A higher rating dispersion increases the effect of two-sidedness on review helpfulness.*

Besides a cognitive thinking dimension, human perception is also influenced by an affective feeling dimension [@Sweeny.2010]. Similar to other textual information sources, product reviews can be highly emotionally charged. Such personal content cannot be assumed to be uniformly helpful to the purchase decision. In contrast, customers are more likely to seek objective, factual information that contains information about how the product is used and how it compares to alternatives [@Ghose.2007]. Emotionally charged messages, however, typically strengthen opinions in a one-sided direction and have the potential to distract from relevant factual information [@Prollochs.2016; @Sweeny.2010]. Therefore, we expect a higher degree of emotionality to decrease the effect of two-sidedness on review helpfulness. H4 states:

***Hypothesis 4.*** *A higher degree of emotionality decreases the effect of two-sidedness on review helpfulness.*

Methodology {#sec:methodology}
===========

This section introduces our methodology by which to infer the degree of two-sidedness of argumentation in customer reviews. For this purpose, we employ a two-staged approach: First, the review texts are mapped to a vector-based representation using sentence embeddings. We then combine the vector representations with the review ratings to infer the polarity of individual sentences using multi-instance learning.

Distributed Sentence Representations
------------------------------------

The accuracy of textual analysis depends heavily on the representation of the textual data and the selection of features [@Le.2014; @Prollochs.2018].
To overcome the drawbacks of the frequently employed bag-of-words approach, such as missing context and information loss, we take advantage of recent advances in learning distributed representations for text. For this purpose, we employ the *doc2vec* library developed by Google [@Le.2014]. This library is based on a deep learning model that creates numerical representations of texts, regardless of their length. Specifically, the underlying model allows one to create distributed representations of sentences by mapping the textual data onto a vector space. The resulting sentence vectors have several useful properties. First, more similar sentences are mapped to more similar vectors. Second, the feature vectors also fulfill simple algebraic properties such as, for example, *king* - *man* + *woman* = *queen*. The feature representations created by the *doc2vec* library have been shown to significantly increase the accuracy of text classification [@Le.2014]. For the training of our *doc2vec* model, we initialize the word vectors with the vectors from the pre-trained Google News dataset[^1], which is the predominant choice in the previous literature. Here we use the hyperparameter settings developed by @Lau.2016 during an extensive analysis. Subsequently, we split each review into sentences and generate vector representations for all sentences. These are used in the next section as input data to infer polarity labels for individual sentences using multi-instance learning. Inferring Two-Sidedness of Argumentation Using Multi-Instance Learning ---------------------------------------------------------------------- We are facing a problem in which the observations (reviews) contain groups of instances (sentences) instead of a single feature vector, whereby each review is associated with a rating. Formally, let $X = \{\boldsymbol{x}_i\}, i=1\dots N$ denote the set of all sentences in all reviews, $N$ the number of sentences, $D$ the set of reviews and $K$ the number of reviews. Each review $D_k=(\mathcal{G}_k, l_k)$ consists of a multiset of sentences $\mathcal{G}_k \subseteq X$ and is assigned a label $l_k$ ($0$ for negative and $1$ for positive). The learning task is to train a classifier $y$ with parameters $\boldsymbol{\theta}$ to infer sentence labels $y_{\boldsymbol{\theta}}(\boldsymbol{x}_i)$ given only the review labels. The above problem is a multi-instance learning problem [@Dietterich.1997], which can be solved by constructing a loss function consisting of two components: (a) a term that punishes different labels for similar sentences; (b) a term that punishes misclassifications at the review level. The loss function $L(\boldsymbol{\theta})$ is then minimized as a function of the classifier parameters $\boldsymbol{\theta}$, $$\begin{aligned} L(\boldsymbol{\theta}) &= \frac{1}{N^2} \sum\limits_{i=1}^N \sum\limits_{j=1}^N \mathcal{S}(\boldsymbol{x}_i,\boldsymbol{x}_j) (y_i - y_j)^2 + \frac{\lambda}{K} \sum\limits_{k=1}^K (A(D_k,\boldsymbol{\theta}) - l_k)^2, \label{eq:costgeneral}\end{aligned}$$ where $\lambda$ is a free parameter that denotes the contribution of the review level error to the loss function. In this function, $\mathcal{S}(\boldsymbol{x}_i,\boldsymbol{x}_j)$ measures the similarity between two sentences $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, and $(y_i - y_j)^2$ denotes the square loss on the predictions for sentences $i$ and $j$. In addition, $A(D_k,\boldsymbol{\theta})$ denotes the predicted label for the review $D_k$. 
Hence, the loss function punishes different labels for similar sentences while still accounting for a correct classification of the review label. In order to adapt the loss function to our problem, [i.e.]{} classifying sentences in reviews into positive and negative categories, we specify concrete functions for the placeholders in Equation \[eq:costgeneral\] as follows. First, we use cosine similarity to calculate a similarity measure between two sentence representations, [i.e.]{} $\mathcal{S}(\boldsymbol{x}_i,\boldsymbol{x}_j) = \frac{\boldsymbol{x}_i \cdot \boldsymbol{x}_j}{||\boldsymbol{x}_i|| \cdot ||\boldsymbol{x}_j||}$. Second, we need to specify a classifier to predict $y_i$. Here, we choose a logistic regression model due to its simplicity and reliability. Third, we define $A(D_k,\boldsymbol{\theta})$ as the most frequent label of the sentences in $\mathcal{G}_k$. Altogether, this results in a specific loss function which is to be minimized with respect to the parameters of the logistic regression $\boldsymbol{\theta}$ using stochastic gradient descent.

Ultimately, we use the above model to infer labels of individual sentences as follows. First, a sentence is transformed into its vector representation $\boldsymbol{x}_i$. Second, we calculate $y_{\boldsymbol{\theta}}(\boldsymbol{x}_i)$ via the logistic regression model. If the result of $y_{\boldsymbol{\theta}}(\boldsymbol{x}_i)$ is greater than or equal to 0.5, the model predicts positive (and negative otherwise). It is worth noting that this approach yields accuracy on a manually-labeled, out-of-sample dataset of positive and negative sentences from Amazon reviews, which can be seen as reasonably accurate for our analysis. In contrast to alternative approaches, such as dictionary-based methods or supervised learning models, the method yields superior performance and does not require any kind of manual labeling.

Based on the sentence polarity labels, we then determine the degree of two-sidedness $RTS$ in a review $D_k$ via $$\begin{aligned} RTS = 1- \left|\frac{1}{|\mathcal{G}_k|} \sum_{x_i \in \mathcal{G}_k}y_{\boldsymbol{\theta}}(\boldsymbol{x}_i) -0.5 \right| \cdot 2.\end{aligned}$$ Hence, we map reviews with an equal number of positive and negative sentences to the value $1$ and reviews with either only positive or only negative sentences to the value $0$.[^2]

Empirical Analysis {#sec:results}
==================

Dataset and Empirical Model
---------------------------

For our analysis, we use a frequently-employed corpus of retailer-hosted consumer reviews in the category of cell phones and accessories from Amazon [@He.2016]. This dataset exhibits several advantages as compared to alternative sources: first, all reviews are verified by the retailer, [i.e.]{} the author of a review must have actually purchased the product. Second, the Amazon platform features a particularly active user base, [i.e.]{} a high number of reviews per product [@Gu.2012]. The complete sample consists of consumer reviews containing the following information: (1) the numerical rating assigned to the product ([i.e.]{} the star rating), (2) the review text, (3) the number of helpful votes for the review, (4) the date on which the review was posted. Moreover, we collected the following product-specific information: (i) the price of the product, and (ii) the average star rating. In addition, we determine the two-sidedness for all reviews in our dataset using the methodology described in the previous section.
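The two-sidedness score defined above reduces to a few lines once sentence-level polarity predictions are available. The following sketch is illustrative only: the probability inputs stand in for the output of the multi-instance classifier described in the previous section, and the variable names are hypothetical.

```python
import numpy as np

def review_two_sidedness(sentence_probs):
    """RTS = 1 - |mean(y) - 0.5| * 2, with y the per-sentence polarity labels.

    `sentence_probs` are positive-class probabilities from the (stand-in)
    sentence classifier; labels are obtained by thresholding at 0.5.
    """
    labels = (np.asarray(sentence_probs) >= 0.5).astype(float)
    return 1.0 - abs(labels.mean() - 0.5) * 2.0

# Illustrative examples (hypothetical sentence-level predictions):
print(review_two_sidedness([0.9, 0.8, 0.7, 0.9]))   # one-sided (all positive) -> 0.0
print(review_two_sidedness([0.9, 0.1, 0.8, 0.2]))   # balanced pros and cons   -> 1.0
print(review_two_sidedness([0.9, 0.9, 0.1]))        # mostly positive          -> ~0.67
```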
This measure ranges from 0 (only one-sided arguments) to 1 (an equal number of positive and negative sentences). Out of all reviews, contain both positive and negative sentences. A share of of all documents contain only positive sentences, while consist solely of negative sentences. The mean two-sidedness in our dataset is . The average number of sentences per review is . The average star rating of a product is . Reviews have received helpful votes in a range between and . The mean number of helpful votes is . The mean length of a review is words.

We use a quasi-poisson model to analyze the effect of two-sided argumentation on the perceived helpfulness of customer reviews. This type of model is not only a common choice for the analysis of word-of-mouth variables, but also has the advantage of being able to handle the many count variables in our dataset. The dependent variable in the model is the helpfulness of a review, given by the number of helpful votes from other users $RHVotes$. The key explanatory variable is $RTS$, which measures the degree of two-sidedness in a review. Consistent with the related literature [[e.g.]{} @Yin.2016], we additionally use a fixed set of control variables for each product, namely, a product’s average rating ($PAvg$), the dispersion of ratings ($PDisp$) and the price ($PPrice$). In addition, we incorporate the following control variables at the review level [@Lutz.2017; @Mudambi.2010; @Yin.2016]. We include the review age in years ($RAge$), the review length in increments of 100 words ($RLength$) and the difference between the review rating and the product’s average rating ($RDiff$). In addition, we add a variable $REmo$ that allows us to measure the degree of emotionality of a review. The emotionality measure is calculated based on the fraction of emotional words in a review using the frequently-employed NRC dictionary [@Mohammad.2010]. Following prior research, we also control for the fraction of cognitive words in a review ($RCog$) using the LIWC text analysis software [@Yin.2016]. Altogether, the resulting model with intercept $\beta_0$ and error term $\epsilon$ is $$\begin{aligned} \mathrm{Ln}(RHVotes) &= \beta_0 + \beta_1 PAvg + \beta_2 PDisp + \beta_3 PPrice + \beta_4 RAge + \beta_5 RCog + \beta_6 REmo + \beta_7 RLength\\ &\quad + \beta_8 RDiff + \beta_9 RTS + \beta_{10}\, RTS \times RDiff + \beta_{11}\, RTS \times PDisp + \beta_{12}\, RTS \times REmo + \epsilon. \label{eqn:regression_h3}\end{aligned}$$

Hypotheses Tests
----------------

We now use the above model to test our hypotheses. All regression results are provided in the regression table at the end of this section. We start our analysis with a baseline model in which we only include the independent variables from previous works. The results are shown in column (a) of the table. The analysis of the model indicates a good fit, with a relatively high McFadden’s pseudo $R^2$ value of . As expected, the length and age of a review have a positive impact on review helpfulness. In contrast, a higher fraction of cognitive words, a higher degree of emotional language and a higher difference between rating and average rating have a negative impact. We also see that more expensive products tend to have more helpful reviews. We now test our first hypothesis. For this purpose, we add the variable $RTS$ to our model. The results are shown in column (b) of the table. The coefficient of $RTS$ is significant and positive ($\beta=0.864, p < 0.001$). Hence, more two-sided reviews containing positive as well as negative arguments exhibit a greater helpfulness for other customers. We also note an increase in terms of $R^2$ from for the baseline model to . All other coefficients remain stable. Thus, H1 is supported.
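A quasi-Poisson specification of this kind can be estimated, for example, with `statsmodels`; the sketch below uses synthetic stand-in data and hypothetical column names mirroring the variables defined above, and is not the authors' implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data; column names mirror the variables defined above.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "RHVotes": rng.poisson(2, n),
    "PAvg": rng.uniform(1, 5, n),
    "PDisp": rng.uniform(0, 2, n),
    "PPrice": rng.uniform(0, 3, n),
    "RAge": rng.uniform(0, 5, n),
    "RCog": rng.uniform(0, 0.3, n),
    "REmo": rng.uniform(0, 0.3, n),
    "RLength": rng.uniform(0, 10, n),
    "RDiff": rng.uniform(-2, 2, n),
    "RTS": rng.uniform(0, 1, n),
})

# Quasi-Poisson: Poisson mean structure with the dispersion estimated from the
# Pearson chi-square, so the standard errors are scaled for over-dispersion.
model = smf.glm(
    "RHVotes ~ PAvg + PDisp + PPrice + RAge + RCog + REmo + RLength + RDiff"
    " + RTS + RTS:RDiff + RTS:PDisp + RTS:REmo",
    data=df,
    family=sm.families.Poisson(),
).fit(scale="X2")
print(model.summary())
```

The `scale="X2"` argument estimates the dispersion parameter from the Pearson chi-square statistic, which is the usual way of obtaining quasi-Poisson standard errors on top of a Poisson mean structure.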
To test our second hypothesis, we extend our previous model by additionally adding the interaction term $RTS \times RDiff$. The results for this model are shown in column (c). The coefficient of this interaction is positive and significant ($\beta = 0.185, p < 0.001$). Hence, we find support for H2 stating that the effect of two-sided argumentation is higher for positive reviews, [i.e.]{} for reviews with a rating above a product’s average rating. Next, we test whether a higher degree of rating dispersion increases the effect of two-sided argumentation on helpfulness. For this purpose, we modify the model in column (c) to additionally include the interaction term $RTS \times PDisp$. The results of this model are shown in column (d). The additional term is positive but not statistically significant at any common significance level. Thus, we do not find support for H3. Finally, we test our fourth hypothesis by adding another interaction term $RTS \times REmo$. The coefficient of this interaction term is negative and significant ($\beta = -1.324, p < 0.001$). Thus, we find support for H4 stating that a higher degree of emotionality decreases the effect of two-sided argumentation.

Ultimately, we perform several robustness checks to prove the validity of our analysis. First, we check our models for possible multicollinearity. For this purpose, we calculate the variance inflation factors (VIF) for all variables in our models. The VIFs of all regressors (except the interaction terms) are below the critical threshold of 4. This finding is also supported by the fact that our independent variables show relatively high significance values with comparatively low standard errors. Second, we also validate our model by adding quadratic terms of $RTS$ to the individual models. According to our results, the additional terms are not statistically significant and all models continue to support our hypotheses. Third, we tested the extent to which the emotionality measure based on the NRC emotions dictionary also reflects the subjectivity of a review. For this purpose, we tested an alternative model in which we replaced $REmo$ with a corresponding subjectivity measure based on the MPQA subjectivity lexicon [@Wilson.2005]. The models yield similar results and a significantly negative effect for subjectivity on review helpfulness. Hence, to a certain extent, the emotionality measure also reflects the subjectivity of a review.
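The multicollinearity check mentioned above can be reproduced along the following lines (a sketch only; `df` denotes a hypothetical data frame holding the regressors, not our actual dataset).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df_regressors: pd.DataFrame) -> pd.Series:
    """Variance inflation factors for a set of regressors (constant added)."""
    X = sm.add_constant(df_regressors)
    vifs = {
        col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"
    }
    return pd.Series(vifs).sort_values(ascending=False)

# Usage with the product- and review-level regressors (hypothetical df):
# print(vif_table(df[["PAvg", "PDisp", "PPrice", "RAge", "RCog",
#                     "REmo", "RLength", "RDiff", "RTS"]]))
```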
[|l| D[.]{}[.]{}[4.7]{}| D[.]{}[.]{}[4.7]{}| D[.]{}[.]{}[4.7]{}| D[.]{}[.]{}[4.7]{}| D[.]{}[.]{}[4.7]{}|]{}\ & & & & &\ $Intercept$ & -2.030\^[\*\*\*]{} & -2.551\^[\*\*\*]{} & -2.550\^[\*\*\*]{} & -2.545\^[\*\*\*]{} & -2.630\^[\*\*\*]{}\ & (0.184) & (0.181) & (0.181) & (0.191) & (0.192)\ $PAvg$& 0.042 & 0.073\^[\*]{} & 0.075\^[\*]{} & 0.075\^[\*]{} & 0.076\^[\*]{}\ & (0.032) & (0.031) & (0.031) & (0.031) & (0.031)\ $PDisp$ & 0.00002 & 0.070 & 0.058 & 0.054 & 0.053\ & (0.045) & (0.044) & (0.044) & (0.069) & (0.069)\ $PPrice$ & 0.380\^[\*\*\*]{} & 0.353\^[\*\*\*]{} & 0.355\^[\*\*\*]{} & 0.355\^[\*\*\*]{} & 0.353\^[\*\*\*]{}\ & (0.012) & (0.012) & (0.012) & (0.012) & (0.012)\ $RAge$& 0.389\^[\*\*\*]{} & 0.373\^[\*\*\*]{} & 0.374\^[\*\*\*]{} & 0.373\^[\*\*\*]{} & 0.373\^[\*\*\*]{}\ & (0.005) & (0.005) & (0.005) & (0.005) & (0.005)\ $RCog$ & -1.224\^[\*\*\*]{} & -1.724\^[\*\*\*]{} & -1.844\^[\*\*\*]{} & -1.843\^[\*\*\*]{} & -1.839\^[\*\*\*]{}\ & (0.220) & (0.224) & (0.224) & (0.224) & (0.224)\ $REmo$ & -2.500\^[\*\*\*]{} & -2.449\^[\*\*\*]{} & -2.464\^[\*\*\*]{} & -2.464\^[\*\*\*]{} & -1.784\^[\*\*\*]{}\ & (0.135) & (0.136) & (0.136) & (0.136) & (0.220)\ $RLength$ & 0.122\^[\*\*\*]{} & 0.114\^[\*\*\*]{} & 0.114\^[\*\*\*]{} & 0.114\^[\*\*\*]{} & 0.113\^[\*\*\*]{}\ & (0.001) & (0.001) & (0.001) & (0.001) & (0.001)\ $RDiff$ & -0.052\^[\*\*\*]{} & -0.053\^[\*\*\*]{} & -0.159\^[\*\*\*]{} & -0.159\^[\*\*\*]{} & -0.159\^[\*\*\*]{}\ & (0.009) & (0.009) & (0.016) & (0.016) & (0.016)\ $RTS$ & & 0.864\^[\*\*\*]{} & 0.879\^[\*\*\*]{} & 0.871\^[\*\*\*]{} & 1.031\^[\*\*\*]{}\ & & (0.030) & (0.030) & (0.103) & (0.111)\ $RTS \times RDiff$ & & & 0.185\^[\*\*\*]{} & 0.185\^[\*\*\*]{} & 0.187\^[\*\*\*]{}\ & & & (0.024) & (0.024) & (0.024)\ $RTS \times PDisp$ & & & & 0.007 & 0.009\ & & & & (0.089) & (0.088)\ $RTS \times REmo$ & & & & & -1.324\^[\*\*\*]{}\ & & & & & (0.345)\ Observations & & & & &\ McFadden’s $R^2$ & 0.2086 & 0.2238 & 0.2248 & 0.2248 & 0.2251\ &\ Discussion {#sec:discussion} ========== Our study allows for a deeper understanding of the assessment of consumer reviews on online retailer platforms. In contrast to previous works that study helpfulness on the basis of structured data (such as star ratings or review length), our analysis additionally incorporates the textual dimension of customer reviews. As our main finding, we provide strong evidence that the line of arguments and their reasoning plays a key role in the interpretation of reviews. Specifically, we find that a higher degree of two-sided argumentation increases the helpfulness of a review for other users as compared to one-sided appeals with a clear-cut positive or negative opinion. This is also concordant with marketing research suggesting that two-sided messages generate a higher level of attention [@Crowley.1994], as well as the experimental results from @Jensen.2013, which suggest that highlighting positive and negative aspects of a product increases the credibility of a reviewer. However, our study not only extends these works from a field study perspective, but also sheds additional light on the unresolved question of whether positive or negative reviews are more helpful to customers. In this domain, our analysis reveals an important role of two-sidedness that is stronger for positive reviews than for negative reviews. Ultimately, our analysis also indicates that the effect of two-sidedness depends on the emotional orientation of documents. 
In this respect, we see that customers prefer diagnostic, factual information about the pros and cons of a product when assessing customer reviews.

This study has implications for practitioners in the fields of marketing and public relations. Since the helpfulness of reviews is directly related to the two-sidedness of argumentation, our findings can help companies to enhance their communication strategies with regard to product descriptions, social media content, and advertising. In this context, it should not be assumed that positive or negative reviews are generally perceived as more helpful. Instead, the role of review ratings in relation to perceived helpfulness rather depends on the line of arguments presented in the review. In a next step, our findings can also help retailer platforms to better inform customers who are considering purchasing a product. For instance, retailer platforms might utilize our findings to develop writing guidelines to encourage more useful seller reviews. It is worth noting that a better understanding of why customers perceive a particular review as helpful or unhelpful can also aid in the detection of fake reviews [@Zhang.2016].

Conclusion and Further Research {#sec:conclusion}
===============================

A growing body of literature is attempting to clarify the influence of word-of-mouth on customer purchase decisions. In this paper, we examine the effect of two-sided argumentation on the perceived helpfulness of Amazon customer reviews. In contrast to previous works, our analysis thereby sheds light on the reception of reviews from a language-based perspective. According to our results, two-sided argumentation in reviews significantly increases their helpfulness. We find this effect to be stronger for positive reviews than for negative reviews, whereas a higher degree of emotional language weakens this effect. In a practical sense, our results allow practitioners in the fields of marketing and public relations to enhance their communication strategies. Moreover, we contribute to IS research by addressing the question of how textual information affects customers’ individual behavior and decision-making.

On the road to completing this research in progress, we will expand the study in four directions. First, our dataset is limited to reviews about cell phones and accessories. To analyze the generalizability of our results, we will examine the differential impact of two-sidedness of argumentation in the context of low-involvement and high-involvement products. Second, it might be interesting to analyze the effects of two-sidedness on other recommendation platforms, such as restaurant reviews or social media. Third, it is an intriguing notion to study how two-sidedness and its relevancy depend on the coverage of different aspects and topics in reviews. Fourth, further research is necessary to study the differences in information reception among different target groups. For instance, the subjective interpretation of the same information might vary across different audiences and cultures.

[^1]: Available from the Google code archive at <https://code.google.com/archive/p/word2vec/>.

[^2]: [For reasons of simplicity and reproducibility, we follow previous literature by classifying sentences into positive and negative categories. As a robustness check, we also tested an alternative variant with an additional neutral category. This approach yields a similar distribution for $RTS$ and qualitatively identical results in our later analysis.]{}
--- abstract: 'We present evidence for a strong correlation between the gaseous absorbing column density towards type 2 Seyfert nuclei and the presence of a stellar bar in their host galaxies. Strongly barred Seyfert 2 galaxies have an average N$_H$ that is two orders of magnitude higher than non-barred Sy2s. More than 80% of Compton thick (N$_H>10^{24}cm^{-2}$) Seyfert 2s are barred. This result indicates that stellar bars are effective in driving gas in the vicinity of active nuclei.' author: - 'R. Maiolino' - 'G. Risaliti' - 'M. Salvati' date: 'Received / Accepted ' title: The effect of bars on the obscuration of active nuclei --- Introduction {#intro} ============ Large scale gravitational torques, such as bars and interactions, are thought to transport gas into the central region of galaxies and, specifically, in the vicinity of active galactic nuclei (Shlosman et al. 1990). Seyfert galaxies are the low luminosity subset of AGNs. According to the accreting supermassive black hole paradigm, the accretion rate inferred for Seyfert nuclei is low ($10^{-1}-10^{-2}M_{\odot} yr^{-1}$) and, therefore, much fuelling from the host galaxy is not required. Indeed, McLeod & Rieke (1995), Ho et al. (1997a), Mulchaey & Regan (1997) show that the occurrence of bars in Seyfert galaxies is not higher than in normal galaxies. Yet, there is both theoretical and observational evidence that stellar bars do drive gas into the central region of galaxies (Athanassoula 1992, Tacconi et al. 1997, Laine et al. 1998). The resulting central concentration of gas might not be relevant for the fuelling process, but can play a role in the obscuration of the active nucleus, i.e. stellar bars could contribute to the obscuration that affects Seyfert 2s. This connection would be very important for the unified theories (Antonucci 1993). In this letter we tackle this issue by comparing the degree of obscuration in Sy2s with the strength of the stellar bar in their host galaxy. Dependence of the absorbing N$_H$ on the bar strength {#nh_bar} ===================================================== Hard X-ray spectra can be regarded as the best tool to measure directly the absorbing column density in Seyfert galaxies. Recent surveys have significantly enlarged the sample of Sy2s for which an estimate of N$_H$ is available, and have also reduced the bias against heavily obscured objects that plagued former studies (Maiolino et al. 1998a, Bassani et al. 1998, Risaliti et al. 1998). We restricted our study to obscured Seyferts in the Maiolino & Rieke (1995) sample, completed with 18 additional objects discovered in the Ho et al. (1997b) survey. These two Seyfert samples are much less biased than others in terms of luminosity and obscuration of the active nucleus and, therefore, can be considered representative of the local population of Seyfert galaxies. For AGNs which are thin to Compton scattering (i.e. N$_H < 10^{24}cm^{-2}$) along our line of sight the N$_H$ can be derived from the photoelectric cutoff in the 2–10 keV spectral range, provided that the signal to noise is high enough. If the source is Compton thick then the direct component in the 2–10 keV range is completely suppressed and we can only observe the reflected component, generally little absorbed. As a consequence, Compton thick Sy2s are sometimes misidentified as “low absorption objects” when observed in the 2-10 keV range. 
However, the fact that the absorbing column density is actually higher ($>10^{24}cm^{-2}$) can be inferred from several indicators, such as the equivalent width of the Fe line and the spectral slope, and by comparing the X-ray flux with other isotropic indicators of the intrinsic luminosity. A more detailed discussion of this issue is given in Maiolino et al. (1998a) and Bassani et al. (1998). We just mention that sensitive spectra at higher energies, such as those obtained by BeppoSAX in the 10–200 keV band, can identify column densities in the range $10^{24}-10^{25}cm^{-2}$ (Matt et al. 1998, Cappi et al. 1998) or set a lower limit of $10^{25}cm^{-2}$ (Maiolino et al. 1998a). ------------- ---------------- ----- -------------------- --------- Name Log(N$_H$)$^a$ RC3 Others$^c$ Adopted NGC1068 $>$25 SA bar$^1$ SB NGC1365 23.3 SB – SB NGC1386 $>$24 SB – SB NGC1808 22.5 SAB – SAB NGC2110 22.5 SAB – SAB NGC2273 $>$25 SB double-bar$^2$ SB NGC2639 23.6 SA no-bar$^3$ SA NGC2992 21.8 – no-bar$^5$ SA NGC3031 20.9 SA – SA NGC3079 22.2 SB – SB NGC3081 23.8 SAB double-bar$^{2,7}$ SAB NGC3147 20.6 SA – SA NGC3281 23.9 SAB – SAB NGC3393 $>$25 – bar$^{2,5}$ SAB NGC4258 23.2 SAB – SAB NGC4388 23.6 SA boxy-bulge$^{4,5}$ SAB NGC4507 23.5 SAB bar$^2$ SAB NGC4565 $<$21.8 SA – SA NGC4579 20.6 SAB – SAB NGC4594 21.7 SA – SA NGC4941 23.6 SB – SB NGC4939 $>$25 SA – SA NGC4945 24.6 SB – SB NGC5005 $>$24 SAB – SAB NGC5033 20.9 SA no-bar$^3$ SA NGC5135 $>$24 SB bar$^{2,3}$ SB NGC5194 23.7 SA – SA NGC5347 $>$24 SB bar$^2$ SB NGC5506 22.5 – no-bar$^3$ SA NGC5643 23.3 SAB bar$^2$ SAB NGC5674 22.8 SAB bar$^4$ SAB NGC7172 22.9 – no-bar$^3$ SA NGC7314 22.1 SAB bar$^3$ SAB NGC7319 23.5 SAB – SAB NGC7582 23.1 SB – SB NGC7590 $<$20.9 – no-bar$^3$ SA IC2560 $>$24 SB – SB IC3639 $>$25 SB bar$^2$ SB IC5135 $>$24 – bar$^{2,3}$ SAB IC5063 23.4 SA no-bar$^2$ SA Circinus 24.7 SA – SA IRAS0714 $>$25 SA – SA IRAS1832-59 22.1 SA – SA Mk1066 $>$24 SB – SB ------------- ---------------- ----- -------------------- --------- : Absorbing N$_H$ and bar classification for the Sy2s sample.[]{data-label="tab_obs"} $^a$ In units of cm$^{-2}$; these values are from Bassani et al. (1998) and Risaliti et al. (1998). $^b$ SA = non-barred; SAB = weakly barred; SB = strongly barred. $^c$ References: 1–Thatte et al. (1997); 2–Mulchaey et al. (1997); 3–Hunt et al. (in prep.); 4–McLeod & Rieke (1995); 5–Alonso-Herrero et al. (1998). In Tab. 1 we list all the Seyfert 2, 1.9 and 1.8 galaxies (i.e. those showing indication of obscuration) in the joint Maiolino & Rieke (1995) and Ho et al. (1997b) samples which have an estimate of the absorbing N$_H$ based on X-ray data, and for which a bar classification is also available. With regard to the bar we generally favored the classification based on near–IR images (since these are less affected by extinction) and quantitative identifications of the stellar bars based on ellipticity, position angle, “boxy-shape” and brightness profile arguments as measured from digital (unsaturated) images. Otherwise the optical classification reported in the RC3 was adopted (de Vaucouleurs et al. 1991). However, in almost all cases the RC3 identification of bars was in agreement with the other works. Whenever the RC3 classification was not in conflict with the near-IR and/or “quantitative” classification (i.e. in most cases), we further split barred systems in strongly barred (SB) and weakly barred (SAB) as reported in the RC3. 
Four objects are either not classified or are classified as non-barred in the RC3, while their infrared images show indications of a bar; in these cases we adopt the classification SAB, with the exception of NGC1068, whose bar appears strong in the K-band image (Thatte et al. 1998). The completeness of this Sy2s sample is discussed in detail in Risaliti et al. (1998). We could not find obvious biases or other correlations that could introduce spurious relations between N$_H$ and bar properties. In particular, there is no correlation between N$_H$ and the luminosity of active nuclei (Risaliti et al. 1998).

Fig. 1 shows the distribution of N$_H$ for subsamples of increasing stellar bar strength, ordered from bottom to top. There is a clear tendency for N$_H$ to increase along the sequence. Tab. 2 quantifies this apparent trend. The median of the N$_H$ distribution[^1] increases by more than two orders of magnitude going from unbarred to strongly barred Sy2s. The confidence of the results is given by the Gehan test (Feigelson & Nelson 1985), which also takes censored data into account: strongly barred and unbarred Sy2s have N$_H$ distributions that are different at a confidence level higher than 99%. Another impressive result is that more than 80% of Compton thick (N$_H > 10^{24}cm^{-2}$) Sy2s are barred (13 out of 16), to be compared with $\sim$ 55% of the general population and specifically of early type systems that typically host Seyfert activity (Sellwood & Wilkinson 1993, Ho et al. 1997a). Also, 56% of Compton thick Sy2s are hosted in strongly barred systems, to be compared with $\sim$ 25% of the general population.

--------------------- ------------- ------------ ------------
Parameter             Non–barred    Weakly       Strongly
                                    barred       barred
No. objects           16            16           12
Med. Log(N$_H$)$^a$   22.1          23.5         24.4
Compton thick:
No. (%)               3 (18.7%)     4 (25.0%)    9 (56.2%)
Probability$^b$                     89%          $>$99%
--------------------- ------------- ------------ ------------

: Properties of the N$_H$–bar correlation in Sy2s[]{data-label="tab_res"}

$^a$ Median of Log(N$_H$) in units of cm$^{-2}$.\
$^b$ Probability for the N$_H$ distribution of intermediate and strongly barred Sy2s to be different from the N$_H$ distribution of non-barred Sy2s.

Other indications {#other}
=================

Maiolino et al. (1997) found that Sy2s are characterized by a rate of non-axisymmetric potentials (including interactions and peculiar morphologies) about 20% higher than Sy1s; this difference appears significant. Hunt & Malkan (1998) find the occurrence of bars in Sy2s not significantly higher than in Sy1s within the CfA and the 12$\mu m$ samples. In the samples of Ho et al. (1997a) and Mulchaey & Regan (1997) the occurrence of bars in type 2 Seyferts is 10–20% higher than in type 1 Seyferts. These results indicate that even if bars drive gas into the circumnuclear region, such gas does not reduce much the opening angle of the light cones. Yet, if the correlation between bar strength and amount of circumnuclear gas obtained for Sy2s applies also to Sy1s, we would expect a large amount of gas in the circumnuclear region of barred Sy1s as well. The circumnuclear (cold) gas is expected to Compton-reflect the nuclear X-ray radiation. This Compton-reflected component should flatten the X-ray spectrum in the 10–30 keV spectral range. Therefore, within the bar–circumnuclear gas connection scenario depicted above, we would expect barred Sy1s to have a flatter spectrum in the 10–30 keV band. However, this test is subject to various caveats.
First, variability affects the slope of the observed spectrum because of the time-lag between the primary and reprocessed radiation. Second, a fraction of the “cold” reflection is expected to come from the accretion disk. Third, the effect is expected to be small: the reflected component should contribute no more than $\sim$30% in this spectral region. Fourth, to date spectra in this X-ray band are sparse and with low sensitivity. So far, the only (small) sample of Sy1s observed at energies higher than 10 keV is the one presented in Nandra & Pounds (1994), which uses Ginga data. Their sample contains nine Sy1s whose host galaxies have a bar classification. As shown in Tab. 3 the spread of the photon index measured between 10 and 18 keV is large. Nonetheless, Tab. 3 shows a tendency for the hard X-ray spectra of barred Sy1s to be flatter than those of unbarred Sy1s.

----------------------------------------- --------------- --------------- ---------------
Parameter                                 Non–barred      Weakly          Strongly
                                                          barred          barred
----------------------------------------- --------------- --------------- ---------------
$\langle \Gamma_{10-18\,\rm keV}\rangle$  1.69$\pm$0.24   1.31$\pm$0.10   1.24$\pm$0.47
(No. objects)                             (3)             (3)             (3)
$\langle \log(L_{IR}/L_X)\rangle^a$       0.92$\pm$0.07   1.20$\pm$0.23   1.58$\pm$0.80
(No. objects)                             (5)             (6)             (8)
----------------------------------------- --------------- --------------- ---------------

: Properties of Sy1s as a function of the bar strength[]{data-label="tab_sy1"}

$^a$ $L_{IR}$ = N band ($\sim 10\mu m$) luminosity; $L_X$ = 2–10 keV luminosity.

Large amounts of circumnuclear gas in Sy1s could be detected via dust-reprocessed light in the infrared. More circumnuclear gas would imply more warm (AGN-heated) dust, hence more infrared emission relative to the intrinsic luminosity of the AGN (traced by the hard X-ray luminosity). The mid–IR ($\sim 10\mu m$) is an excellent band to look for this excess, since the AGN IR emission peaks there (Maiolino et al. 1995). Also, by using narrow beam photometry it is possible to isolate the contribution of the AGN from the host galaxy. Within the scenario of the bar–circumnuclear gas connection, barred Sy1s are expected to show a mid-IR to X-ray flux ratio higher than non-barred Sy1s. Yet, several caveats affect this test as well. Both short- and long-term variability plague the reliability of the X-ray flux as a calibrator of the AGN luminosity. Equilibrium temperature arguments indicate that the dust emitting significantly at 10$\mu m$ should be located within the central 1–10 pc; therefore, an excess of circumnuclear gas distributed over the 100 pc scale would not be probed by this indicator. Finally, even the small aperture (5$''$) used in most of the ground-based mid-IR observations might include the contribution from a central compact starburst. Tab. 3 reports the mean of the 10$\mu m$/2–10keV luminosity ratio as a function of the bar strength for a sample of 19 Sy1s. The 10$\mu m$ data are from Maiolino et al. (1995) and from Giuricin et al. (1995); the X-ray data are from Malaguti et al. (1994), where we chose the datum closest in time to the mid-IR observation, to minimize long term variability effects. There is a tendency for barred systems to have a higher L$_{10\mu m}$/L$_X$ ratio, though the spread is large and the statistics are poor.

Discussion {#disc}
==========

The important result of our study is that the absorbing column density of type 2 Seyferts strongly correlates with the presence of a stellar bar in their host galaxies.
As discussed in the Introduction, this result is not completely unexpected: stellar bars are very effective in driving gas into the central region, thus contributing to the obscuration of AGNs. On the other hand this gas should not play a major role in powering low luminosity AGNs, such as Seyfert nuclei, given the lack of correlation between stellar bars and Seyfert activity (McLeod & Rieke 1995, Mulchaey & Regan 1997, Ho et al. 1997a). As discussed in the former section, the fraction of bars in Sy2s is only moderately higher (at most by 20%) than in Sy1s. So, the gas accumulated in the central region by the stellar bar increases the column density outside the light cones, but it does not increase much the covering factor of the obscuring material. A possible explanation is that radiation and wind/jet pressure act to destroy or expel molecular clouds that happen to enter the light cones, while outside the light cones self-shading allows molecular clouds to survive and pile up along our line of sight. Another possibility is that the inflowing gas concentrates at dynamical resonances (eg. Lindblad resonances) forming obscuring tori. The solid angle that such tori subtend to the AGN depends on their inner radius and thickness, that in turn depend on the dynamical and kinematical properties of the nuclear region, but are relatively independent of the amount of gas in the torus (if the gas self–gravity is significant the torus would flatten, thus actually reducing its covering solid angle). On the other hand, the N$_H$ through the torus depends linearly on its mass. Our result has also important implications on the scales over which the obscuring material is distributed. Large scale stellar bars can transport gas into the central few 100 pc, but they exert little influence on the gas dynamics on smaller nuclear scales. As a consequence, a first interpretation of our bar–N$_H$ link is that a large fraction of the obscuration in Sy2s occurs on the 100 pc scale. This is in line with results obtained from HST images, that ascribe the obscuration of several Sy2s nuclei to dust lanes or disks a few 100 pc in size (eg. Malkan et al. 1998). Yet, one of the most interesting results of our study is that stellar bars appear very effective in making the obscuration of Sy2 nuclei so high to be Compton thick: more than 80% of Compton thick Sy2s are barred. As discussed in Risaliti et al. (1998), column densities larger than 10$^{24}cm^{-2}$ are unlikely to be distributed on the 100 pc scale, since the implied nuclear gas mass would exceed the total dynamical mass in the same region for several objects. This consideration generally constrains most of the Compton thick gas to be located within a few 10 pc from the nucleus. On the other hand, on these small scales the gas dynamics is expected to be little affected by the non-axisymmetric potential of a stellar bar in the host galaxy. The connection between large stellar bars and Compton thickness of Sy2s thus requires some mechanism to link the dynamics on these different scales. Shlosman et al. (1989) proposed that if the large scale stellar bar collects in the central region a mass of gas that is a significant fraction of the dynamical mass, then the gaseous disk might become dynamically unstable and form a gaseous bar that could drive gas further into the nuclear region. A nuclear gaseous bar has been recently discovered in the nearby Circinus galaxy (Maiolino et al. 1998b), that hosts a Compton thick Seyfert 2 nucleus, in agreement with expectations. 
Ironically, this galaxy has been classified as non-barred in the RC3 catalog; however, the fact that it is edge-on and located in the Galactic plane (where crowding and patchy extinction confuse the large scale morphology) might have prevented the identification of a large scale stellar bar. Alternatively, nested secondary [*stellar*]{} bars have been observed in several galaxies, including Seyfert galaxies (Mulchaey & Regan 1997), and are thought to be more stable (Friedli & Martinet 1993). An encouraging result in this direction is that the only two Sy2s showing evidence for an inner secondary stellar bar and for which the N$_H$ has been estimated are actually heavily obscured (see Tab. 1). However, the opposite is not true: 5 other Compton thick Sy2s imaged in the near–IR by Mulchaey et al. (1997) do not show evidence for inner bars, though the limited angular resolution might have prevented their detection.

We are grateful to L. Hunt for providing us with information on her data in advance of publication. This work was partially supported by the Italian Space Agency (ASI) through the grant ARS–98–116/22.

Alonso-Herrero A., Simpson C., Ward M.J., Wilson A.S., 1998, ApJ 495, 196
Antonucci R.R.J., 1993, ARA&A 31, 473
Athanassoula E., 1992, MNRAS, 259, 345
Bassani L., Dadina M., Maiolino R., et al. 1998, ApJS, in press
Cappi M., Bassani L., et al. 1998, ApJ, submitted
de Vaucouleurs G., de Vaucouleurs A., Corwin H.G., Buta R.J.Jr., Paturel G., Fouqué P., 1991, Third Reference Catalogue of Bright Galaxies, Springer-Verlag
Feigelson E.D., Nelson P.I., 1985, ApJ 293, 192
Giuricin G., Mardirossian F., Mezzetti M., 1995, ApJ, 446, 550
Friedli D., Martinet L., 1993, A&A, 277, 27
Ho, L., Filippenko, A., Sargent, W., 1997a, ApJ, 487, 522
Ho, L., Filippenko, A., Sargent, W., 1997b, ApJS, 112, 315
Hunt, L., Malkan, M., 1998, ApJ, submitted
Laine S., Kenney J., Yun M., Gottesman S., ApJ in press
Maiolino R., Rieke G.H., 1995, ApJ 454, 95
Maiolino R., Ruiz M., Rieke G., Keller L., 1995, ApJ 446, 561
Maiolino R., Ruiz M., Rieke G.H., Papadopoulos P., 1997, ApJ 485, 522
Maiolino R., Salvati M., Bassani L., et al. 1998a, A&A, 338, 781
Maiolino R., Alonso-Herrero A., Anders S., Quillen A., Rieke G.H., Tacconi-Garman L., 1998b, Adv. Sp. Res., in press
Malaguti G., Bassani L., Caroli E., 1994, ApJS, 94, 517
Malkan M.A., Gorjian V., Tam R., 1998, ApJS, 117, 25
Matt G., Guainazzi M., Maiolino R., et al., 1998, A&A, in press
McLeod K.K., Rieke G.H., 1995, ApJ, 441, 96
Mulchaey, J.S., Regan, M.W., 1997, ApJ, 482, L135
Mulchaey, J.S., Regan, M.W., Arunav, K., 1997, ApJS, 110, 299
Nandra K., Pounds K.A., 1994, MNRAS 268, 405
Risaliti G., Maiolino R., Salvati M., 1998, ApJ, submitted
Sellwood, J.A., Wilkinson, A., 1993, Rep. Prog. Phys., 56, 173
Shlosman, I., Frank J., Begelman, M.C., 1989, Nat., 338, 45
Shlosman, I., Begelman, M.C., Frank, J., 1990, Nat., 345, 679
Tacconi L.J., Gallimore J.F., Genzel R., Schinnerer E., Downes D., 1997, Ap&SS, 248, 59
Thatte, N., Quirrenbach, A., Genzel, R., Maiolino R., Tecza M., 1997, ApJ 490, 238

[^1]: The median is estimated by means of the Kaplan-Meier estimator that takes into account also censored data.
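The footnote above refers to the Kaplan-Meier (product-limit) estimator used for the medians quoted in Tab. 2, where lower limits on N$_H$ enter as censored points. The following Python sketch is a minimal illustration of that idea, not the analysis code actually used (which follows the Feigelson & Nelson 1985 methods): it treats the “$>$” entries as right-censored values and returns the product-limit median. The example `logNH`/`detected` values are an arbitrary hypothetical subset of Tab. 1, and the two “$<$” upper limits in Tab. 1 would require a left-censored treatment that is omitted here; the Gehan two-sample test quoted in Tab. 2 is likewise not reproduced.

```python
import numpy as np

def km_survival(values, detected):
    """Product-limit (Kaplan-Meier) estimate of S(x) = P(X > x) for data with
    lower limits: detected[i] is False when values[i] is only a lower limit
    (right-censored), True when it is an actual measurement."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    d = np.asarray(detected, dtype=bool)[order]
    n = len(v)
    surv, s = [], 1.0
    for i in range(n):
        at_risk = n - i              # objects with value >= v[i]
        if d[i]:                     # detections count as "events"
            s *= 1.0 - 1.0 / at_risk
        surv.append((v[i], s))
    return surv

def km_median(values, detected):
    """Smallest value at which the estimated survivor function drops to <= 0.5."""
    for v, s in km_survival(values, detected):
        if s <= 0.5:
            return v
    return np.nan                    # median not reached (too much censoring)

# Hypothetical subset of log N_H values (">" entries treated as lower limits):
logNH    = [23.3, 25.0, 24.0, 23.1, 24.6, 25.0, 23.7, 22.5]
detected = [True, False, False, True, True, False, True, True]
print("KM median of log N_H:", km_median(logNH, detected))
```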
--- abstract: 'Using Time-Dependent Density Matrix Renormalization Group (TDMRG) we study the collision of one-dimensional atomic clouds confined in a harmonic trap and evolving with the Lieb-Liniger Hamiltonian. It is observed that the motion is essentially periodic with the clouds bouncing elastically, at least on the time scale of the first few oscillations that can be resolved with high accuracy. This is in agreement with the results of the “quantum Newton’s cradle” experiment of Kinoshita *et al.* \[Nature **440**, 900 (2006)\]. We compare the results for the density profile against a hydrodynamic description, or generalized nonlinear Schrödinger equation, with the pressure term taken from the Bethe Ansatz solution of the Lieb-Liniger model. We find that hydrodynamics can describe the breathing mode of a harmonically trapped cloud for arbitrarily long times, while it breaks down almost immediately for the collision of two clouds due to the formation of shock waves (gradient catastrophe). In the case of the clouds’ collision TDMRG alone allows us to extract the oscillation period, which is found to be measurably different from the breathing mode period. Concomitantly with the shock-wave formation we observe a local energy distribution typical of population inversion, *i.e.*, an effective negative temperature. Our results are an important step towards understanding the hydrodynamics of quantum many-body systems out of equilibrium and the role of integrability in their dynamics.'
author:
- Sebastiano Peotta
- Massimiliano Di Ventra
title: Quantum Shock Waves and Population Inversion in Collisions of Ultracold Atomic Clouds
---

Introduction {#sec:intro}
============

The quantum dynamics of closed many-body quantum systems is relatively unexplored [@Polkovnikov:2011] and has become the subject of active research only recently with the advent of highly tunable ultracold atomic gases [@Bloch:2008; @Lewenstein:2007]. In these systems the almost perfect decoupling from the external environment and the long time scales allow one to study details of the quantum dynamics that are not easily accessible, e.g., in solid-state systems. An example of the tunability of ultracold gases is the use of optical lattices [@Bloch:2008] to freeze the transverse motion and confine the gas in one dimension, a regime where quantum fluctuations play a prominent role [@Giamarchi_book]. Interestingly, several 1D Hamiltonians relevant to ultracold gases are known to be integrable, *i.e.*, they possess an infinite number of local conserved quantities [@Sutherland_book; @Korepin_book; @Cazalilla:2011]. The implications of integrability for the time evolution of a quantum system are far from being understood, as shown in the highly debated “quantum Newton’s cradle” experiment of Ref.  which promptly followed the realization of a Tonks-Girardeau gas [@Girardeau:1960; @Girardeau:1965; @Kinoshita:2004; @Tolra:2004; @Paredes:2004; @Kinoshita:2005]. In this latter work a 1D gas of bosons interacting via a contact potential (Lieb-Liniger model [@Lieb:1963a; @Lieb:1963b]) is separated into two symmetric clouds that subsequently collide in a harmonic trap. Interestingly, the clouds bounce off each other several hundred times without noticeable decay of the oscillatory motion. On the other hand, in three dimensions the dynamics are dramatically different, with the clouds merging into a single motionless lump – *i.e.*, a thermal state – after a few bounces [@Kinoshita:2006].
This sharp difference between the behaviors in one and three dimensions has triggered a vast amount of theoretical work [@Rigol:2013; @Collura:2013] aimed at understanding the nature of the asymptotic state (if any) reached when the time evolution is dictated by an integrable Hamiltonian. It is fairly clear that integrability manifests itself only in out of equilibrium dynamics whose accurate description requires eigenstates with an energy substantially larger than the ground state energy [@Kinoshita:2006; @Ronzheimer:2013; @Arzamasovs:2013]. On the contrary, 1D integrable and nonintegrable models behave alike when the dynamics are restricted to the low energy spectrum, *i.e.* in the linear response regime. For instance, a large class of integrable and nonintegrable 1D Hamiltonians are known to fall into the universality class of the Tomonaga-Luttinger liquid, an integrable model [@Giamarchi_book; @Cazalilla:2011; @Gogolin_book; @Stone_book]. As illustrated in Ref. , ultracold gases can be easily driven into different out of equilibrium and nonlinear regimes, while a substantial effort is needed to probe only their linear response [@Bloch:2012]. Unfortunately, not many approaches are available to study the out of equilibrium dynamics of interacting quantum systems. Only in 1D does the Time-Dependent Density Matrix Renormalization Group (TDMRG) [@White:2004; @Vidal:2004; @Daley:2004; @tdmrg] allow the – numerically exact – simulation of the real time dynamics for arbitrary Hamiltonians, with the restriction that the entanglement content of the evolving wavefunction is initially not too large and not growing too rapidly in time [@tdmrg]. Alternatively, one can discard the fine-grained description of a system, such as the full wavefunction (or an approximation thereof), and focus directly on the collective dynamics of the observables of interest, the particle density being the easiest to access in the context of quantum gases. The collective field approach has been very successful for Bose-Einstein condensates, also called “coherent matter waves” [@Dalfovo:1999]. At temperatures much lower than the condensation temperature the only relevant degree of freedom of a gas of weakly interacting bosons is a space-dependent complex order parameter, namely the wavefunction in which a macroscopically large number of particles condense, and interactions can be safely accounted for at the mean-field level [@Castin:2001]. The evolution of the complex order parameter is governed by the celebrated Gross-Pitaevskii equation [@Gross:1961; @Gross:1963; @Pitaevskii:1961], which has proved to be very effective in providing quantitative predictions for the static, dynamic, and thermodynamic properties of trapped Bose gases [@Dalfovo:1999]. The Gross-Pitaevskii equation is equivalent to the standard Euler equations of fluid dynamics for an inviscid fluid, albeit with an additional “quantum pressure” term. The Tomonaga-Luttinger theory of 1D many-body systems is sometimes called “hydrodynamics” [@Arzamasovs:2013] or the “harmonic fluid” approach [@Haldane:1981a; @Haldane:1981b] since the canonical fields in the Hamiltonian are the integrated density $\phi = \int \rho $ and velocity $\theta = \int v$ and represent the relevant collective modes at low energies and long wavelength [@Giamarchi_book]. While the Gross-Pitaevskii equation is in essence classical nonlinear hydrodynamics, the Tomonaga-Luttinger model is a linear – noninteracting – quantum field theory.
Nonlinear extensions of the Tomonaga-Luttinger theory have been discussed in several contexts [@Bettelheim:2006a; @Bettelheim:2006b; @Bettelheim:2007; @Bettelheim:2008; @Schmidt:2009; @Schmidt:2010; @Imambekov:2012], but throughout this work “hydrodynamic description” will stand for a system of nonlinear equations for a classical fluid. Incidentally, this is the same approach used in Time-Dependent Density Functional Theory (TD-DFT) [@Gross:1984], in particular in its orbital-free formulation [@Ligneres:2005], where the density is the sole dynamical variable. For experiments such as the collision between degenerate clouds comprising a large number of interacting atoms, a collective field description is usually the only available option. Various phenomena have been studied in collision experiments, such as the interference of matter waves [@Ketterle:1997], dispersive shock waves in BECs [@Hoefer:2006; @Meppelink:2009], superfluidity, shock wave formation and domain wall propagation in the unitary Fermi gas [@Joseph:2011; @Bulgac:2012; @Salasnich:2012; @Ancilotto:2012], spin transport [@Sommer:2011], and lack of thermalization in quasi-integrable 1D systems [@Kinoshita:2006; @Polkovnikov:2011]. With reference to the experimental setup of Ref. , we study the collision of two clouds of one dimensional bosons for arbitrary interaction strength, by means of a Time-Dependent Density Matrix Renormalization Group (TDMRG) approach [@White:2004; @Vidal:2004; @Daley:2004; @tdmrg] based on a Matrix Product State (MPS) approximation of the full wavefunction. A first important result presented here is that the numerical simulation of the experiment in Ref.  for the first few ($\sim 3$) oscillations is within reach of TDMRG, and we provide details on how this has been accomplished. Moreover, if the time evolution is computed accurately, the entanglement grows slowly in the quenches that we perform, a fact that can possibly allow one to reach times much longer than the ones considered in this work. Assessing the maximal evolution time allowed by TDMRG requires a more accurate analysis of the numerical errors, which is beyond the scope of the present work. Therefore, the important questions of thermalization and of the nature of the asymptotic state are not the focus here. However, we put forward a definition of local temperature that could be useful in this context (see below). A second result presented here is the accurate comparison of the exact quantum dynamics with a generalized Gross-Pitaevskii equation or *generalized nonlinear Schrödinger equation* (GNLSE) [@Korepin_book] which, in hydrodynamic form, contains a pressure term derived from the Bethe Ansatz solution of the Lieb-Liniger model. This is the best available hydrodynamic description for the present problem. While hydrodynamics works for several oscillations for the breathing mode, in the case of the clouds’ collision the formation of shock waves leads to a chaotic behavior which is not reflected in the periodic behavior shown by the TDMRG data. Only from the latter can the oscillation period as a function of the interaction strength be accurately extracted; it is found to be different from the breathing period, an easily testable prediction. This result emphasizes that a better understanding of quantum shock waves is instrumental to a – at least qualitatively – correct hydrodynamic description of 1D quantum gases.
Finally, we further characterize the formation of shock waves by studying the Wigner distribution function, a tool used by other authors in the context of shock wave dynamics of free fermions [@Bettelheim:2012; @Mirlin:2012]. Starting from the Wigner function, we show how it is possible to define a *local energy distribution function* and that at the onset of shock-wave formation the latter shows *population inversion*, *i.e.* higher energy states are more occupied than lower energy ones. Recently, a *negative temperature*, namely a population inversion in the energy distribution of the motional degrees of freedom of an atomic gas, has been realized [@Braun:2013]. Moreover, it was shown in Ref.  that population inversion does not necessarily imply a fast decay to the true thermal equilibrium state, thus showing the quite unique properties these systems possess. We suggest that the small thermalization rate and absence of visible decay of the oscillatory motion in the density profiles observed both in Ref.  and in our simulations are a dynamical manifestation of the same remarkable (meta-)stability of the negative temperature state realized in Ref. . In fact, we employ a possible definition of local temperature out of equilibrium – put forward in Ref.  – and find again negative values in correspondence with the shock-wave formation time.

Model and methods
=================

\[sec:quenches\] Lieb-Liniger model and quenches
------------------------------------------------

The Lieb-Liniger model [@Lieb:1963a; @Lieb:1963b] provides an excellent description of one dimensional ultracold bosonic atoms [@Olshanii:1998; @Dunjko:2001]. In terms of a bosonic field $\hat \Psi(x)$ its Hamiltonian reads $$\label{eq:ll} \mathcal{\hat H}_{\rm LL} = \int dx\, \bigg[\frac{\hbar^2}{2m}|\partial_x \hat \Psi(x)|^2 + \frac{g_B}{2}|\hat \Psi(x)|^4 +V(x)|\hat \Psi(x)|^2\bigg],$$ where $g_B \in \left[0,+\infty\right]$ is a coupling constant and $m$ is the atom mass. In the following we will consider a time-dependent external potential $V(x,t)$ changing abruptly at $t=0$ (*quench*). Hamiltonian (\[eq:ll\]) is integrable for any $g_B$ when $V(x)=0$, while for $V(x)\neq0$ the exact eigenstates and eigenvalues are known for free bosons $g_B=0$ and hard-core bosons $g_B = +\infty$, the latter being equivalent to free fermions according to the Bose-Fermi mapping theorem [@Girardeau:1960; @Girardeau:1965]. We consider two kinds of quench. In the first one, the external potential is a harmonic well with a sudden change in frequency at $t=0$ $$\label{eq:post_pot} V(x,t) = \frac{1}{2}m\omega^2_1(t)x^2,\quad \omega_1(t>0) = \frac{\omega_1(t\leq0)}{\sqrt{3}}\,.$$ This excites the *breathing mode* of a gas initially in the ground state. In the second kind of quench, we prepare the gas in the ground state of the potential $$\label{eq:init_pot} V(x,t \leq 0) = \frac{1}{2}m\omega^2_0\frac{(x^2-D^2)^2}{4D^2}\,,$$ in order to have two clouds of particles separated by a distance $\sim D$, and we let it evolve for $t>0$ in the harmonic potential (\[eq:post\_pot\]) with $\omega_1 < \omega_0$ (*microcanonical picture of transport* [@mybook; @chien]). The values of the frequencies $\omega_0$ and $\omega_1$ depend on the interaction strength and are reported in Table \[tb:frequencies\].

\[sec:tdmrg\]TDMRG simulations
------------------------------

It is possible to access the dynamics of (\[eq:ll\]), (\[eq:post\_pot\]) and (\[eq:init\_pot\]) in an essentially exact fashion using TDMRG.
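As a concrete illustration of the two quench protocols, the following minimal Python sketch evaluates the double-well preparation potential of Eq. (\[eq:init\_pot\]) and the harmonic trap of Eq. (\[eq:post\_pot\]) on a grid, in the lattice units adopted below ($a=1$, $J=\hbar^2/(2ma^2)=1$, hence $\hbar=1$ and $m=1/2$). The numerical values of $\omega_0$, $\omega_1$ and $D$ are those quoted for $g_B=1.0\,Ja$ in Table \[tb:frequencies\] and Sec. \[sec:tdmrg\]; this is only a sketch of the protocol, not the simulation code used in this work.

```python
import numpy as np

# Lattice units used in this work: a = 1 and J = hbar^2/(2 m a^2) = 1,
# so that hbar = 1 and m = 1/2.
hbar, m = 1.0, 0.5

# Example parameters for g_B = 1.0 Ja (see the frequency table) and D = 120 a.
omega0 = 0.0046      # hbar*omega0/J, double-well preparation potential
omega1 = 0.0022      # hbar*omega1/J, harmonic trap
D = 120.0

def V_double_well(x):
    """Preparation potential of Eq. (init_pot): two clouds separated by ~D."""
    return 0.5 * m * omega0**2 * (x**2 - D**2)**2 / (4.0 * D**2)

def V_harmonic(x, omega):
    """Harmonic trap of Eq. (post_pot)."""
    return 0.5 * m * omega**2 * x**2

x = np.linspace(-300.0, 300.0, 601)     # grid matching L = 600 a
# Collision quench: prepare in the double well, evolve in the harmonic trap.
V_pre_coll, V_post_coll = V_double_well(x), V_harmonic(x, omega1)
# Breathing quench: the trap frequency suddenly drops by a factor sqrt(3).
V_pre_br, V_post_br = V_harmonic(x, omega1), V_harmonic(x, omega1 / np.sqrt(3))
```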
TDMRG has been applied mainly to lattice systems for relatively short time scales [@Kollath:2005], but simulations of systems in the continuum limit and for quite long time scales (of the order of several periods) are feasible [@Muth:2010; @Muth:2011; @Peotta:2011; @Peotta:2012; @Knap:2013; @White:2012; @Caux:2012]. Details are provided in Appendix \[appendix:tdmrg\]. In essence, we use a wavefunction in MPS form that explicitly conserves the number of particles and is evolved in time using a sixth order Trotter expansion [@tdmrg; @Peotta:2011; @Muth:2011; @Peotta:2012]. Moreover, the sizes of the MPS matrices are allowed to change dynamically both in space and in time by fixing the discarded weight \[see App. \[appendix:tdmrg\]\]. In our simulations we have employed two different discretizations of (\[eq:ll\]) [@Muth:2010], either as a Bose-Hubbard model (nonintegrable), or as an XXZ spin chain (integrable) using the Bose-Fermi mapping for arbitrary interaction strength $g_B$ [@Muth:2010; @Cheon:1999; @Cheon:1998]. The lattice Hamiltonians are given in Appendix \[appendix:discretization\]. No substantial difference in the results has been observed. Importantly, we find that, for an accurate time evolution of the MPS, the entanglement entropy is bounded or very slowly increasing [@SOM], which translates into a manageable size of the MPS. Thus, in principle, longer times could be explored, but we show only results for the first few half periods $\tau = \pi/\omega_1(t>0)$. In the following, lengths are expressed in units of the lattice spacing $a$ of the discretized model, which is a small but otherwise arbitrary length scale; energy is in units of $J = \hbar^2/(2ma^2)$, and time in units of the post-quench oscillation half period $\tau = \pi/\omega_1(t>0)$. The interaction parameter $g_B$ is given in units of $Ja$. Occasionally we use the Lieb-Liniger parameter $\gamma = mg_B/(\hbar^2\rho)$ with $\rho = 0.05/a$, an indicative value of the density in the inhomogeneous system considered here. We ensured, by separately tuning $\omega_0$ and $\omega_1$ for each value of the coupling $g_B$ \[see Table \[tb:frequencies\]\], that the particle density *per site* never exceeds $\sim0.15$, so that lattice effects are negligible (continuum limit) [@Peotta:2011; @Peotta:2012; @Muth:2011]. In our simulations we consider $N =20$ particles in a lattice of length $L=600a$. The clouds’ distance is fixed at $D = 120a$ \[see (\[eq:init\_pot\])\]. With this choice of distance the two clouds are always partially overlapping, while simulations for well separated clouds are more numerically demanding. In the actual experiment [@Kinoshita:2006] the number of particles varies between $40$ and $250$, figures not far from those used in our simulations. Moreover, we will see that the dynamics can be well described in the local density approximation, namely using the local pressure calculated in the thermodynamic limit. Thus our results are significant for much larger system sizes, a fact that we explicitly verified in the case of free fermions where a scaling in the number of particles can be easily performed.

 $g_B/(Ja)$   $\hbar \omega_0/J$   $\hbar\omega_1/J$   $\tau J/ \hbar$
------------ -------------------- ------------------- ------------------
 0.0          0.0005               0.0003              11107
 0.002        0.0005               0.0003              11107
 0.02         0.0009               0.0006              4967
 0.2          0.0025               0.0012              2618
 0.6          0.0040               0.0017              1756
 1.0          0.0046               0.0022              1433
 1.4          0.0049               0.0024              1328
 2.0          0.0055               0.0025              1258
 $+\infty$    0.0063               0.0033              956

: \[tb:frequencies\] Frequencies $\omega_0$ \[Eq.
(\[eq:init\_pot\])\] and $\omega_1$ \[Eq. (\[eq:post\_pot\])\] and oscillation half period $\tau=\pi/\omega_1(t>0)$ used in the simulations according to the value of $g_B$. The table refers to the collision quench. In the case of the quench exciting the breathing mode the initial trapping frequency $\omega_1(t\leq 0)$ is given in the third column of the table, and the post-quench frequency by $\omega_1(t>0) = \omega_1(t\leq 0)/\sqrt{3}$ \[Eq. (\[eq:post\_pot\])\]. Times are in units of $\hbar/J= ma^2/\hbar$. Changing these parameter with the interaction strength is important in order to keep the on-site density roughly constant when the compressibility of the gas varies. The above choice works well as one can see in Fig. \[fig1\]. ![image](Fig1) \[sec:hydrodynamic\]Hydrodynamic description -------------------------------------------- In this work we compare our TDMRG results with the best possible (to our knowledge) hydrodynamic description for the present case, namely a generalized nonlinear Schrödinger equation (GNLSE) [@Dunjko:2001; @Kim:2003; @Lieb:2003; @Salasnich:2004; @Damski:2004; @Salasnich:2005; @Damski:2006] $$\label{eq:gnlse} i\hbar \partial_t\Psi(x,t) = \left[-\frac{\hbar^2}{2m} \partial_x^2 + \phi(\rho) + V(x,t)\right]\Psi(x,t)\,,$$ where $\Psi(x)$ is a complex field, $V(x,t)$ is specified by (\[eq:post\_pot\]) and (\[eq:init\_pot\]), $\rho(x) = |\Psi(x)|^2$ is the density, and $\phi(\rho)$ the Gibbs free energy per particle, or chemical potential, obtained from the Bethe Ansatz solution of the Lieb-Liniger model. In the Gross-Pitaevskii limit ($g_B\to 0$) $\phi(\rho)= g_B\rho$ while in the hard-core limit ($g_B\to +\infty$) $\phi(\rho)= \frac{\pi^2\hbar^2}{2m}\rho^2$. Accurate numerical values of $\phi(\rho)$ for intermediate interactions are available [@Dunjko:2001]. Eq. (\[eq:gnlse\]) can be written alternatively in a more standard hydrodynamic form by using the [*de Broglie ansatz*]{} $\Psi(x,t) = \sqrt{\rho(x,t)} e^{iS(x,t)/\hbar}$ and separating the real and imaginary part. The result is the quantum Euler equations [@Damski:2006] $$\begin{gathered} \partial_t\rho = -\partial_x\left(\rho v\right)\,,\label{eq:cont}\\ \partial_t v + v\partial_x v = -\frac{1}{m}\partial_x\left(\phi(\rho) -\frac{\hbar^2}{2m}\frac{\partial_x^2\sqrt{\rho}}{\sqrt{\rho}}+V(x,t)\right)\,,\label{eq:mom}\end{gathered}$$ where the velocity field $v(x,t) = \partial_x S(x,t) /m $ has been introduced. Without the *quantum pressure* $-(\hbar^2\partial_x^2\sqrt{\rho})/(2m\sqrt{\rho})$ Eqs. (\[eq:cont\]) and (\[eq:mom\]) amount to a simple Local Density Approximation (LDA), but this term needs to be included in order to reproduce the free bosons limit ($g_B\to 0$). Note that there are no free parameters in Eq. (\[eq:gnlse\]) or equivalently in Eqs. (\[eq:cont\]) and (\[eq:mom\]). The GNLSE Eq. (\[eq:gnlse\]) has been solved numerically using a time-splitting spectral method [@tssm]. We used a fourth-order Trotter expansion to perform imaginary time evolution in the initial potential (\[eq:init\_pot\]), thus providing the initial state $\Psi(x,t = 0)$. A sixth order Trotter expansion was used to evolve the system in the quenched potential (\[eq:post\_pot\]), the same expansion employed for TDMRG. ![\[fig2\] (Color online) Comparison between TDMRG (black line) and hydrodynamic (red line) density profiles at $t=0$ (bottom), after one oscillation period $\tau^*(g_B)$ (middle) and after two periods (top). The density profiles at different times have been shifted in the vertical direction. 
How the renormalized oscillation period has been extracted from the TDMRG data is explained in Sec. \[sec:period\] and Fig. \[fig:three\]. Note that the hydrodynamic simulations match the TDMRG results for several oscillation periods in the case of the breathing mode, while in the collision of clouds the approximation breaks down before a single oscillation is completed due to shock-wave formation (see Fig. \[fig1\]). In both cases the full quantum dynamics exhibit (quasi-)periodicity for any interaction strength (the data shown here refer to $g_B=1.0Ja$).](Fig2)

\[sec:shock\_waves\] Classical and Quantum Hydrodynamics
========================================================

Shock waves
-----------

TDMRG and hydrodynamics are compared in Fig. \[fig1\] both for the breathing mode quench and the clouds’ collision for different values of the interaction strength ($g_B = 0,\dots,+\infty$). In the case of the breathing mode (upper panels) the excellent agreement at all times – even longer than those shown in Fig. \[fig1\] (see Fig. \[fig2\] and below) – and for all $g_B$’s indicates that lattice effects are negligible (continuum limit) [@Peotta:2011; @Peotta:2012] and, quite surprisingly, the hydrodynamic description works well for just $N= 20$ particles. An analogous rapid crossover from a few-particle to a many-particle regime has been observed in Ref. . A feature that hydrodynamics is unable to capture is the small oscillations in the density profile visible in the TDMRG data for any $t \geq 0$ in the strongly interacting limit. These are called *shell effects* [@Gleisberg:2000; @Vignolo:2000; @Wonneberger:2001; @Brack:2001; @Wang:2002; @Mueller:2004] and are a feature of the ground state that persists during the evolution. We stress that the agreement between the results obtained with two completely different methods such as TDMRG and hydrodynamics is a strong check of the accuracy of our simulations. Contrary to the breathing mode, the hydrodynamic description can capture the dynamics of colliding clouds, shown in the lower panels of Fig. \[fig1\], only up to a time $t\sim 0.35\tau$ when oscillations form in the GNLSE solutions, corresponding to the formation of shock waves (*gradient catastrophe* [@Kulkarni:2012; @Whitham:book]). The oscillation amplitude increases with the interaction strength and is maximal in the Tonks-Girardeau limit. These shock waves with oscillatory behaviour are known as *dispersive* and occur in inviscid fluids, e.g., Bose-Einstein condensates [@Hoefer:2006; @Meppelink:2009; @Kulkarni:2012; @Lowman:2013]. Our TDMRG results are very similar to the experimental data reported in Ref. , where viscosity was introduced in the hydrodynamic equations to describe shock waves, while in the TD-DFT calculation in Ref.  a renormalized kinetic term $\lambda\partial_x^2\Psi$ was used for the same reason. This is not justified here since Eq. (\[eq:gnlse\]) has no free parameters and it is an excellent approximation up to the gradient catastrophe for *any* $g_B$. Introducing viscosity would contradict the fact that almost no dissipation is present in our system, as we will show below. It is however unclear what kind of dispersive term should be used in our case to reproduce the exact quantum dynamics, where the oscillations are suppressed with respect to the GNLSE dynamics. A discussion of the dissipative or dispersive nature of shock waves in quantum gases can be found in Refs. . As is nicely illustrated in Fig.
\[fig1\], the dynamics of these quantum shock waves for finite $g_B$ are in fact continuously connected to the Tonks-Girardeau limit (free fermions, $g_B \to +\infty$), a fact anticipated in Ref. . Surprisingly enough, the hydrodynamics of free fermions is still poorly understood and has been the subject of recent works [@Bettelheim:2012; @Mirlin:2012]. ![\[fig:three\] (Color online) Upper plot, deviation $\Delta(t)$ as a function of time \[Eq. (\[eq:delta\])\]. In blue are the data relative to the Bose-Hubbard discretization and in red the ones relative to the XXZ spin chain discretization. The arrows indicate the instants where the density is closest to the initial one (minima of $\Delta(t)$). In the lower plot the renormalized oscillation period $\tau^*(g_B)/\tau$ is shown as a function of the interaction strength $g_B$, extracted from the minima of $\Delta(t)$ (see the upper plot). The red triangles (blue circles) refer to the XXZ (Bose-Hubbard) discretization. The black squares are relative to the breathing period, for which hydrodynamics and TDMRG agree. The red (black) line at the bottom left represents the frequency extracted from the $g_B =0$ data for the collision (breathing) quench. They deviate from the exact result $\tau^*(0) = \tau$ in the continuum limit since lattice effects distort the density profile in time. As can be seen in Fig. \[fig1\], higher densities are explored in the $g_B=0$ case, thus the continuum limit approximation is less accurate.](Fig3)

\[sec:period\]Oscillation frequency shift
-----------------------------------------

Although the system has a strongly nonequilibrium and nonlinear dynamics, we find from the TDMRG simulations that the initial density profile, and thus the initial state, are recovered after a time of order $\tau$ for any $g_B \in [0,+\infty]$, both in the breathing and in the collision quenches. This remarkable recurrence, which would be expected only in the limit of small oscillations, is shown in Fig. \[fig2\] for $g_B = 1.0Ja$. The hydrodynamic results for the breathing quench (right panel of Fig. \[fig2\]) show a remarkable agreement with the quantum dynamics for times at least as long as a few oscillation periods, while they deviate rapidly in the collision quench (left panel). The profiles in Fig. \[fig2\] are shown at times $t = 0,\,\tau^*(g_B),\,2\tau^*(g_B)$, where the renormalized oscillation period $\tau^*(g_B)$ has been extracted as follows. We use the mean square deviation of the density profile at time $t$ from the initial one $$\label{eq:delta} \Delta(t) = \frac{1}{N}\sqrt{\int dx\,(\rho(x,t)-\rho(x,0))^2}\,,$$ shown in the upper panel of Fig. \[fig:three\] for the collision quench. These curves have been obtained from TDMRG data since hydrodynamics is unreliable in this case. Moreover, we have compared results using the two different discretizations employed \[App. \[appendix:discretization\]\] and found no significant differences. $\Delta(t)$ essentially drops to zero at times $t \sim\tau$ and $t \sim 2\tau$, indicating that the system has approximately returned to the initial state. The time at which the first minimum occurs is precisely $\tau^*(g_B)$. The second minimum occurs at $2\tau^*(g_B)$ to a good approximation. The results for the renormalized period are shown in the lower panel of Fig. \[fig:three\]. In the exactly solvable limits $g_B=0$ and $g_B=+\infty$ the period is not renormalized. In between these extrema it has a nonmonotonic behaviour with a maximum in the interval $0.02< g_B /(Ja) <0.2$ ($0.2 <\gamma < 2$).
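To make the procedure concrete, the following Python sketch integrates the GNLSE (\[eq:gnlse\]) with a standard second-order time-splitting (Strang) spectral step – a simpler splitting than the sixth-order expansion used in this work – for the Tonks-Girardeau pressure $\phi(\rho)=\pi^2\hbar^2\rho^2/(2m)$, and accumulates the deviation $\Delta(t)$ of Eq. (\[eq:delta\]), whose deepest minimum gives a crude estimate of $\tau^*$. For intermediate couplings one would instead interpolate the Bethe-Ansatz chemical potential of Ref. [@Dunjko:2001]. The grid, trap frequency and Gaussian initial state below are placeholders, and in the collision quench $\Delta(t)$ must be computed from TDMRG densities, since hydrodynamics is unreliable there.

```python
import numpy as np

hbar, m = 1.0, 0.5                     # lattice units: a = 1, J = hbar^2/(2 m a^2) = 1
L, Ngrid, N = 600.0, 2048, 20          # box length, grid points, particle number
x  = np.linspace(-L/2, L/2, Ngrid, endpoint=False)
k  = 2.0*np.pi*np.fft.fftfreq(Ngrid, d=x[1]-x[0])
dt = 0.5                               # time step in units of hbar/J

def phi_TG(rho):
    """LDA chemical potential in the Tonks-Girardeau limit: pi^2 hbar^2 rho^2/(2m)."""
    return (np.pi*hbar)**2 * rho**2 / (2.0*m)

def V_trap(xx, omega=0.0033):          # placeholder trap frequency (TG row of the table)
    return 0.5*m*omega**2*xx**2

def strang_step(psi, dt):
    """Second-order splitting for i hbar dpsi/dt = [-(hbar^2/2m) d^2/dx^2 + phi + V] psi."""
    psi = psi*np.exp(-0.5j*dt*(phi_TG(np.abs(psi)**2) + V_trap(x))/hbar)
    psi = np.fft.ifft(np.exp(-1j*dt*hbar*k**2/(2.0*m))*np.fft.fft(psi))
    psi = psi*np.exp(-0.5j*dt*(phi_TG(np.abs(psi)**2) + V_trap(x))/hbar)
    return psi

# Placeholder initial state (a broad Gaussian normalized to N particles); in the
# paper the initial state is the interacting ground state in the pre-quench trap.
psi = np.exp(-x**2/(2.0*40.0**2)).astype(complex)
psi *= np.sqrt(N/np.trapz(np.abs(psi)**2, x))
rho0, delta = np.abs(psi)**2, []

for n in range(2000):                  # ~one TG oscillation period (tau ~ 956 hbar/J)
    psi = strang_step(psi, dt)
    delta.append(np.sqrt(np.trapz((np.abs(psi)**2 - rho0)**2, x))/N)

tau_star = dt*(np.argmin(delta) + 1)   # crude estimate: time of the deepest Delta(t) minimum
```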
The almost perfect periodicity observed for any $g_B$ is a strong indication of very small dissipation, in agreement with experimental results [@Kinoshita:2006]. The collision period is found to be measurably larger than the breathing period – for which hydrodynamics is accurate [@Astrakharchik:200] –, a fact that could be easily tested experimentally. While the dynamics obtained from TDMRG are essentially periodic, the ones obtained from Eq. (\[eq:gnlse\]) are rather chaotic in the case of the collision quench. We remind the reader that the static density and the dynamics up to the gradient catastrophe are well captured by the GNLSE (\[eq:gnlse\]) (see the left panels of Fig. \[fig1\]). A simple explanation of this phenomenon is that in the case of the breathing mode the density is slowly varying in space and what counts is just the local pressure in the LDA sense, which is reproduced by Eq. \[eq:gnlse\], or by Eqs. \[eq:cont\] and \[eq:mom\], by definition. However in the case of the collision quench where shock waves are formed, gradient corrections on top of LDA become crucial and the quantum pressure is inadequate since it leads to a qualitatively different evolution. A different point of view is that Eq. \[eq:gnlse\] with the external potential set to zero entails the conservation only of particle number, momentum and energy and produces a chaotic dynamics, while the full quantum dynamics is subject to an infinite number of conservation laws or, in other words, the Lieb-Liniger Hamiltonian is integrable [@Sutherland_book]. It appears that the integrability breaking due to the external potential is small in this case and slightly affecting the quantum dynamics. ![image](Fig4) ![\[fig5\] Energy distribution function $f(x,\varepsilon,t)$ corresponding to first ($t = 0\tau$) and third ($t \sim 0.3\tau$) snapshots in Fig. \[fig4\], panels **b)** and **d)**. The black, grey and light grey lines correspond to $x/a = 25, 40, 60$ for the two upper quadrants ($g_B = 0.2Ja$) \[Fig. \[fig4\] panel **b)**\], and $x/a = 15, 30, 50$ for the two lower quadrants ($g_B = 2.0Ja$) \[Fig. \[fig4\] panel **d)**\], respectively. Note that, neglecting oscillations for large $\varepsilon$ – due to the finite number of particles – the initial distribution decreases monotonically, while a maximum develops for finite $\varepsilon$ after the shock-wave formation at $t\sim 0.3\tau$, *i.e.* a population inversion.](Fig5) Population inversion ==================== In order to study in more detail the shock wave dynamics we use the Wigner function [@Bettelheim:2012; @Mirlin:2012] $$\label{eq:wigner} W(x,p,t) = \frac{1}{\hbar \pi} \int dy\, \rho(x+y,x-y;t) e^{\frac{2ipy}{\hbar}}\,,$$ where $\rho(x',x;t)=\langle \hat \Psi^\dagger\left(x,t\right)\hat \Psi\left(x',t\right)\rangle$ is the one-body density matrix. The one-body density matrix can be easily extracted by contracting the wavefunction in MPS form [@tdmrg]. ![\[fig6\] Inverse information compressibility as a function of time for various interaction strengths $g_{B}/(Ja) = 0.2,\,1.0,\,2.0$. Note the divergence in correspondence to a fully developed shock wave at $t = 0.4\div0.5\tau$. See Fig. \[fig4\], snapshots at $t=0.38\tau$ in panel a) and at time $t=0.32\tau$ in panel c).](Fig6) Neglecting negative values, $W(x,p,t)$ can be thought of as a *local momentum ($p$) distribution* as in the Boltzmann equation. Oscillations and negative values of the Wigner function obviously spoil its interpretation as a local momentum distribution. 
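A schematic recipe for evaluating the Wigner transform (\[eq:wigner\]) from a one-body density matrix $\rho(x',x;t)$ sampled on a uniform grid (for instance, one contracted from the MPS wavefunction) is sketched below. The symmetric-offset FFT used here is one standard discretization; the function and variable names are illustrative rather than those of the actual analysis code, and boundary effects near the edges of the grid are ignored.

```python
import numpy as np

def wigner_from_obdm(rho1, dx, hbar=1.0):
    """Discrete Wigner transform of Eq. (wigner) from a one-body density matrix
    rho1[i, j] = <Psi^dagger(x_j) Psi(x_i)> on a uniform grid of spacing dx.
    For each x_j the correlator rho(x_j + y, x_j - y) is collected along the
    anti-diagonal and Fourier transformed in y (up to the FFT sign convention)."""
    Ngrid = rho1.shape[0]
    W = np.zeros((Ngrid, Ngrid))
    for j in range(Ngrid):
        mmax = min(j, Ngrid - 1 - j)      # offsets keeping x_j +/- y on the grid
        corr = np.zeros(Ngrid, dtype=complex)
        for mm in range(-mmax, mmax + 1):
            corr[mm % Ngrid] = rho1[j + mm, j - mm]
        W[j] = np.real(np.fft.fftshift(np.fft.fft(corr))) * dx / (np.pi * hbar)
    return W    # rows: position index; columns: momentum index (fftshifted)
```

From the resulting $W(x,p)$, shifting to the frame moving with the local velocity and folding $p\to\varepsilon=p^2/(2m)$ yields the local energy distribution discussed next.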
However, we have found in the case of free fermions, where a scaling with the number of particles is possible, that such features (oscillations and negative values) do not preclude a well defined Fermi step with increasing $N$. The local *energy* distribution $f(x,\varepsilon,t)$ is defined with respect to the local co-moving (Lagrangian) reference frame, which moves with the velocity $v(x,t)$ defined by $mv(x,t) =j(x,t)/\rho(x,t)$ [@mybook], with $j(x,t) = \int dp\,pW(x,p,t)$ and $\rho(x,t) = \int dp\, W(x,p,t)$. Thus $\int dp\,\left(p-mv(x)\right) W(x,p,t) = 0$ and the energy distribution reads $$\label{eq:erg_dist} f(x,\varepsilon,t) = 2\pi \hbar \sum_{s = \pm}W(x,s\sqrt{2m\varepsilon}-mv(x))\,.$$ This is the quantity shown in the color plots in Fig. \[fig4\] and, for selected values of the position $x$, in Fig. \[fig5\]. At $t = 0$ \[bottom of Fig. \[fig4\] and left panels of Fig. \[fig5\]\] the distribution $f(x,\varepsilon,t)$ decreases monotonically with $\varepsilon$ (leaving aside oscillations related to the finite particle number), indicating an equilibrium energy distribution. In correspondence with the shock-wave formation ($t \sim 0.3 \tau$) the energy distribution is no longer an equilibrium one: $f(x,\varepsilon,t)$ is larger for values of $\varepsilon$ away from zero \[right panels of Fig. \[fig5\]\], signalling a population inversion, namely an effective negative temperature. Population inversion leads to the breakdown of the LDA and to the deviation from the GNLSE (Eq. \[eq:gnlse\]) solution. The energy distribution function in the usual sense is $\sqrt{m/(2\varepsilon)}f(x,\varepsilon,t)$ and contains the 1D density of states factor $\sqrt{m/(2\varepsilon)}$. However, for our purposes the definition in Eq. (\[eq:erg\_dist\]) is more appropriate since for a classical system at equilibrium $f(x,\varepsilon,t) \propto e^{-\beta \varepsilon}$, and a monotonically increasing behaviour in the distribution directly corresponds to a negative temperature. This would not be immediately apparent if the 1D density of states had been taken into account.

Negative temperature
--------------------

In order to corroborate the presence of a negative temperature out of equilibrium we characterize the state of the system with the *information compressibility* [@DiVentra:2009], which measures the relative change of the number of available microstates of an open system in response to an energy variation. In our case we deal with a closed and finite quantum system. However, it is always possible to trace out part of the system and study the remaining half as an open one. In our case we focus on the reduced density matrix of half of the chain $$\label{DM} \begin{split} \hat \rho_{\rm half} (t) = &\sum_{n_{L/2+1},\ldots,n_{L}}{\left\langle n_1,\ldots,n_{L/2},n_{L/2+1},\ldots, n_L | \Psi \right\rangle} \\ &\times{\left\langle \Psi | n_1,\ldots,n_{L/2},n_{L/2+1},\ldots,n_L \right\rangle}\,. \end{split}$$ The concept of information compressibility has been introduced in Ref.  as a means to characterize out of equilibrium states of open systems. Expectation values can be easily extracted from $\rho_{\rm half}(t)$ if the state ${\left\langle \{n_i\} | \Psi \right\rangle}$ is in MPS form [@tdmrg]. Call $\Omega$ the number of microstates available to the system.
The information compressibility is then defined as the relative variation of the number of microstates with respect to the energy variation at time $t$ [@DiVentra:2009] $$K_I(t) = \left.\frac{1}{\Omega}\frac{\delta \Omega}{\delta E}\right|_{t}\,.$$ Using the microcanonical relation $\Omega = \exp\left(S/k_\text{\tiny B}\right)$ one arrives at the computationally more convenient definition $$\label{eq} K_I(t) = \left.\frac{1}{k_\text{\rm B}} \frac{\partial S}{\partial t}\frac{\partial t}{\partial E}\right|_t\,.$$ Note the similarity of this quantity with the thermodynamic definition of inverse temperature [@DiVentra:2009]. Given a system with density matrix $\hat \rho(t)$ and Hamiltonian $\mathcal{\hat H}$, the energy is $E(t) = \mathrm{Tr}[\hat \rho(t)\mathcal{\hat H}]$ and the thermodynamic entropy $S(t) = -k_\text{\tiny B}\mathrm{Tr}\left[\hat \rho(t)\ln\hat \rho(t)\right]$. We used for $\hat \rho(t)$ the density matrix of half of the system $\hat \rho_{\rm half}(t)$ defined above, and $\mathcal{\hat H} = \mathcal{\hat H}_{\rm internal}$ is the part of the Hamiltonian relative to the *internal energy* of the system, namely the sum of kinetic and interaction energy but excluding the potential energy due to external forces $$\mathcal{\hat H}_{\rm internal} = \int dx\, \bigg[\frac{\hbar^2}{2m}|\partial_x \hat \Psi(x)|^2 + \frac{g_B}{2}|\hat \Psi(x)|^4\bigg]\,.$$ This is consistent with the thermodynamic definition of inverse temperature as the derivative of the entropy with respect to the internal energy. The inverse information compressibility is shown in Fig. \[fig6\] for various interaction strengths. The interesting point is the divergence of $K_I^{-1}(t)$ in all cases for $t/\tau \sim 0.4$, *i.e.*, in correspondence to a fully-developed shock wave, and negative values of this quantity at later times. If we interpret the inverse compressibility as an effective temperature, this behaviour is clearly suggestive of a population inversion, in agreement with our previous results.

Conclusions and perspectives
============================

In this work we have used TDMRG, an essentially exact method, and an approximate hydrodynamic description based on the GNLSE to study numerically the debated “quantum Newton’s cradle” experiment [@Kinoshita:2006]. We find that, when the two clouds of atoms collide, shock waves occur almost immediately after the quench, a fact which has been previously overlooked, suggesting interesting connections with the growing literature on the subject of shock waves in ultracold gases [@Hoefer:2006; @Meppelink:2009; @Joseph:2011; @Bulgac:2012; @Salasnich:2012; @Ancilotto:2012; @Kulkarni:2012; @Lowman:2013]. On the contrary, in a quench where the breathing mode is excited, shock waves are absent. Interestingly, while shock waves greatly affect the GNLSE dynamics by triggering an aperiodic and chaotic behaviour, this does not occur in the full quantum dynamics, where the system is found to return to the initial state after (approximately) half of the harmonic trap period, as in the case of the breathing period. We observe essentially no decay within the time scales that we have been able to explore ($\lesssim 3\tau$), an indication of the extremely small dissipation in the system. This suggests that the shock structure is controlled by a dispersive term which is however rather different from the quantum pressure in Eq.
(\[eq:mom\]), since it leads to a qualitatively different dynamics and to an oscillatory structure different from the usual one [@Whitham:book; @Lowman:2013; @Mirlin:2012; @Bettelheim:2012]. We provide results for the oscillation period as a function of the interaction strength in the case of the collision of clouds, a nonperturbative result that, to our knowledge, can be obtained with no method other than TDMRG. The experiment in Ref.  is important for the problem of thermalization, namely what is the appropriate statistical ensemble that can describe the state asymptotically reached by an integrable system. Although this is not our main focus here, we point out that TDMRG could be useful in the future for this purpose since in the kind of quench that we study the entanglement has very little or no growth, which implies that the computational cost grows linearly with the maximum time reached in a simulation \[see Ref. [@SOM] and App. \[appendix:tdmrg\]\]. In the present case we study only the first few oscillations. Notice, however, that in a three-dimensional collision of two Bose-Einstein condensates the thermalization scale is $\lesssim 2\tau$ [@Kinoshita:2006], well within reach of our method. Importantly, we observed that the dynamics for finite interaction strength $g_B$ is continuously connected [@Bettelheim:2012] to the Tonks-Girardeau case (free fermions, $g_B = +\infty$), for which the dynamics in a harmonic trap is strictly periodic, *i.e.* there is *no decay* towards a stationary asymptotic state. Understanding whether this is the case also for arbitrary finite interactions is an important question, and in the future it may be possible to provide some lower bounds on the decay rate using TDMRG. The results presented here are also relevant to the broad problem of understanding the hydrodynamics of quantum gases, namely to provide an effective description using as dynamical variables only the observables of interest, such as density and velocity fields [@mybook]. Such a description can be of great value since it is computationally more affordable than full quantum simulations such as those provided by TDMRG. This is the same point of view adopted by TD-DFT in its orbital-free formulation [@Ligneres:2005]. In fact the Runge-Gross theorem [@Gross:1984] of TD-DFT guarantees that an exact hydrodynamic description of quantum dynamics exists [@mybook], although the analytical expression of the stress tensor is unknown even for free fermions [@Bettelheim:2012; @Mirlin:2012]. The use of DMRG to study Density Functional Theory in an exact setting has been put forward in Ref.  in the context of ground-state calculations. Here, we have approached the dynamical problem for one of the simplest many-body systems in the same fashion. We emphasise that a better understanding of shock waves even in the Tonks-Girardeau limit is an important step towards the goal of a better hydrodynamic description of ultracold gases. Finally, we have shown that quantum shock waves lead to a population inversion in the local energy distribution, namely to a negative effective temperature, a result confirmed by a possible definition of temperature out of equilibrium put forward in Ref. . Our results suggest that statistical ensembles with negative temperatures for the motional degrees of freedom, as shown in Ref. , are a common feature in collision experiments with ultracold gases [@Kinoshita:2006]. This work has been supported by DOE under Grant No. DE-FG02-05ER46204. We thank C.-C.
Chien for a critical reading of our paper and B. Damski and L. Glazman for useful suggestions. The numerical results presented in this work have been obtained by using an implementation of the TDMRG code with Matrix Product States, developed by the team coordinated by Davide Rossini at the Scuola Normale Superiore, Pisa (Italy).

TDMRG simulations {#appendix:tdmrg}
=================

In the TDMRG simulations a Matrix Product State (MPS) representation [@tdmrg] of the wavefunction has been employed $$\label{eq:mps} {\left\langle n_1,n_2,\ldots,n_{L-1},n_{L} | \Psi \right\rangle} = \bm{A}^{[n_1]}\cdot \bm{A}^{[n_2]}\cdot \ldots\cdot \bm{A}^{[n_{L-1}]}\cdot \bm{A}^{[n_L]}\,,$$ with $\{n_i\}_{i = 1,\ldots,L}$ a given set of occupancies of the lattice with length $L$. The matrix $\bm{A}^{[n_i]}$ for fixed $n_i$ has dimension $m_{i-1}\times m_i$, where $m_i$ is called the *link dimension*, an integer number attached to the link connecting site $i$ and site $i+1$. The link dimension is position dependent and it is the crucial parameter that needs to be tuned to find a balance between accuracy and speed [@tdmrg]. For open boundary conditions $m_0 = m_L = 1$. The dot $\,\cdot\,$ denotes matrix multiplication. A standard trick for increasing the speed of TDMRG is the use of the conservation of the number of particles ($\sum_i n_i = N$). This leads to a block structure for the matrices $\bm{A}^{[n_i]}$. It can be easily checked that large blocks of $\bm{A}^{[n_i]}$ are zero and the size of the MPS is greatly reduced. During the time evolution the link dimension $m_i$ is kept to a low value by performing a singular value decomposition [@tdmrg] (SVD) of a two-site matrix $\bm{M}^{[n_in_{i+1}]}_{\ell_{i-1}\ell_{i+1}} = \sum_{\ell_i}\bm{A}^{[n_i]}_{\ell_{i-1}\ell_i}\bm{A}^{[n_{i+1}]}_{\ell_i\ell_{i+1}}$ and discarding the lowest singular values compatibly with the condition $$\epsilon > \sum_{\rm discarded\; \sigma_{\ell_{i}}}\sigma_{\ell_i}^2\,,$$ where $\sigma_{1} \geq \sigma_{2} \geq \dots \geq \sigma_{m_i-1} \geq \sigma_{m_i}$ are the singular values obtained by SVD and $\epsilon$ is the *discarded weight*, a small parameter that controls the precision. Note that according to this truncation procedure the link dimension $m_i$ adapts automatically in space and time to the evolving wavefunction of an inhomogeneous and out of equilibrium system. The block structure carries over to $\bm{M}^{[n_in_{i+1}]}_{\ell_{i-1}\ell_{i+1}}$ and the SVD can be performed blockwise with considerable speed-up [@tdmrg]. An MPS with a block structure imposed by the conservation of the number of particles and the truncation prescription described above are the two ingredients that enable us to reliably simulate the quench protocol presented in the main text for long enough times to observe several collisions of the clouds. The same techniques have been employed successfully for the Fermi-Hubbard model in Ref.  and for the Bose-Hubbard model with two species in Ref. . Additional details on the structure imposed on the MPS by particle number conservation can be found in Ref. . In our simulations we used a discarded weight $\epsilon = 10^{-10}$ and we employed a sixth order Trotter expansion for the time evolution [@trotter; @Peotta:2011; @Peotta:2012] with time step $\Delta t = 0.1\hbar/J$ for the BH discretization and $\Delta t = 0.05\hbar/J$ for the XXZ discretization. The reason for using a sixth order expansion has been discussed in Ref. . The reliability of our simulations has been controlled in several ways.
First, we checked our results against exactly solvable cases, namely free bosons and free fermions (hard-core limit $g_B\to +\infty$), verifying that the exact diagonalization results are indistinguishable from the TDMRG ones over several oscillation half periods $\tau = \pi/\omega_1(t>0)$, the scale of one collision. Second, the discarded weight has been lowered to $\epsilon = 10^{-11}$ without observing significant differences in the evolved density profile $\rho(x,t)$. Finally, the comparison between hydrodynamic and TDMRG data in the case of the breathing mode quench \[see Fig. \[fig1\] and \[fig2\]\] is by itself an unbiased check, for any value of the interaction strength, of the accuracy of our simulations over several oscillation periods. In the Supplementary Online Materials [@SOM] we provide animations of the density profiles obtained both with TDMRG and GNLSE, alongside the corresponding link dimension $m_i$ and block entropy $S_i$ [@tdmrg], illustrating the important point that in our TDMRG simulation the entanglement growth is not so dramatic, which in principle allows one to reach longer times than those presented here.

Discretization of the Lieb-Liniger Hamiltonian {#appendix:discretization}
==============================================

The Lieb-Liniger model has been discretized in two distinct ways:

- **Bose-Hubbard discretization** [@Kollath:2005; @Peotta:2012; @Knap:2013] $$\label{eq:bose-hubbard} \begin{split} \mathcal{\hat H}_{\rm Bose-Hubbard} = &-J\sum_i(\hat b^\dagger_{i}\hat b_{i+1}+ {\rm H.c.}) \\ &+ \frac{U}{2}\sum_i \hat n^2_{i} + \sum_iV_i\hat n_i\,, \end{split}$$ with $\hat b_i,\,\hat b_i^\dagger$ bosonic annihilation and creation operators on the $i$-th site and ${\hat n}_i = \hat b_i^\dagger \hat b_i$ the corresponding on-site density operator. The maximal occupancy has been truncated to $n_i \leq 6$ in the simulations.

- **XXZ spin chain discretization** [@Muth:2010; @Muth:2010bis] $$\label{eq:XXZ} \begin{split} \mathcal{\hat H}_{\rm XXZ} = &-J\sum_i(\hat c_i^\dagger\hat c_{i+1} + {\rm H.c.}) \\&-\frac{2J}{1+U/(4J)}\sum_i\hat n_i\hat n_{i+1} +\sum_iV_i\hat n_i\,, \end{split}$$ where $\hat c_i$,$\hat c_i^\dagger$ are fermionic annihilation and creation operators and $\hat n_i = \hat c_i^\dagger\hat c_i$ is the on-site density operator. The Hamiltonian (\[eq:XXZ\]) is equivalent to an XXZ spin chain after a Jordan-Wigner transformation. The equivalence between the Lieb-Liniger model and the low density limit of Hamiltonian (\[eq:XXZ\]) is a consequence of the Bose-Fermi mapping in 1D for *arbitrary* $g_B$ discussed in Refs. . The discretization of the Hamiltonian for $p$-wave interacting fermions is carried out in Ref. .

The couplings in the above Hamiltonians are related to the continuum model as $J = \hbar^2/(2ma^2)$, $U = g_B/a$ and $V_i = V(x = ia)$. In our simulations we used the lattice spacing $a$ as unit of length and the hopping energy $J$ as unit of energy. For a system with density $\langle \hat n_i\rangle/a = \rho$ the dimensionless Lieb-Liniger parameter [@Lieb:1963a; @Lieb:1963b] reads $$\label{eq:ll_par} \gamma = \frac{mg_B}{\hbar^2 \rho} = \frac{U}{2J\langle \hat n_i\rangle}\,.$$

[77]{} A. Polkovnikov, K. Sengupta, A. Silva, M. Vengalattore, [**83**, 863 (2011)](http://rmp.aps.org/abstract/RMP/v83/i3/p863_1). I. Bloch, J. Dalibard, W. Zwerger, [**80**, 883 (2008)](http://dx.doi.org/10.1103/RevModPhys.80.885). M. Lewenstein, A. Sanpera, V. Ahufinger, B. Damski, A. Sen(De), U. Sen, [Adv. Phys. **56**, 243 (2007)](http://www.tandfonline.com/doi/abs/10.1080/00018730701223200#.Ukr-C3j0FyA). T.
Giamarchi, *Quantum Physics in One Dimension*, Oxford University Press (2004). B. Sutherland, *Beautiful models*, World Scientific (2004). V. E. Korepin, N. M. Bogliubov, A. G. Izergin, *Quantum Inverse Scattering Method and Correlation Functions*, Cambridge University Press (1993). M. A. Cazalilla, R. Citro, T. Giamarchi, M. Rigol, [**85**, 1405 (2011)](http://dx.doi.org/10.1103/RevModPhys.83.1405). T. Kinoshita, T. Wenger and David S. Weiss, [Nature **440**, 900 (2006)](http://www.nature.com/nature/journal/v440/n7086/full/nature04693.html). M. Girardeau, [J. Math. Phys. **1**, 516 (1960)](http://jmp.aip.org/resource/1/jmapaq/v1/i6/p516_s1). M. Girardeau, [Phys. Rev. **139**, 500 (1965)](http://prola.aps.org/abstract/PR/v139/i2B/pB500_1). T. Kinoshita, T. Wenger, D. S. Weiss, [Science **305**, 1125](http://www.sciencemag.org/content/305/5687/1125.short) (2004). B. Laburthe Tolra, K. M. O’Hara, J. H. Huckans, W. D. Phillips, S. L. Rolston, J. V. Porto, [**92**, 190401 (2004)](http://prl.aps.org/abstract/PRL/v92/i19/e190401). B. Paredes, A. Widera, V. Murg, O. Mandel, S. Fölling, I. Cirac, G. V. Shlyapnikov, T. W. Hänsch, I. Bloch, [Nature **429**, 277 (2004)](http://www.nature.com/nature/journal/v429/n6989/abs/nature02530.html). T. Kinoshita, T. Wenger, D. S. Weiss, [**95**, 190406 (2005)](http://dx.doi.org/10.1103/PhysRevLett.95.190406). E. H. Lieb, W. Liniger, [Phys. Rev. **130**, 1605 (1963)](http://prola.aps.org/abstract/PR/v130/i4/p1605_1). E. H. Lieb, [Phys. Rev. **130**, 1616 (1963)](http://prola.aps.org/abstract/PR/v130/i4/p1616_1). Marcos Rigol, Dynamics and thermalization in correlated one-dimensional lattice systems, in *Finite Temperature and Non-Equilibrium Dynamics* (Vol. 1 Cold Atoms Series) N.P. Proukakis, S.A. Gardiner, M.J. Davis and M.H. Szymanska, eds. Imperial College Press, London (2013). Preprint [arXiv:1008.1930](http://arxiv.org/abs/1008.1930v2). See, e.g., M. Collura, S. Sotiriadis, P. Calabrese, [**110**, 245301 (2013)](http://arxiv.org/abs/1303.3795) and references therein. J. P. Ronzheimer, M. Schreiber, S. Braun, S. S. Hodgman, S. Langer, I. P. McCulloch, F. Heidrich-Meisner, I. Bloch, U. Schneider, [**110**, 205301 (2013)](http://arxiv.org/abs/1301.5329). M. Arzamasovs, F. Bovo, D. M. Gangardt, [arXiv:1309.2647 (2013)](http://arxiv.org/abs/1309.2647). A. O. Gogolin, A. A. Nersesyan, A. M. Tsvelik, *Bosonization and Strongly Correlated Systems*, Cambridge University Press (2004). M. Stone, *Bosonization*, World Scientific (1994). M. Endres, T. Fukuhara, D. Pekker, M. Cheneau, P. Schauß, C. Gross, E. Demler, S. Kuhr, I. Bloch, [Nature **487**, 454 (2012)](http://www.nature.com/nature/journal/v487/n7408/full/nature11255.html). S. R. White, A. E. Feiguin, [**93**, 076401 (2004)](http://prl.aps.org/abstract/PRL/v93/i7/e076401). Guifré Vidal, [**93**, 040502 (2004)](http://prl.aps.org/abstract/PRL/v93/i4/e040502). A. J. Daley, C. Kollath, U. Schollwöck and G. Vidal, [J. Stat. Mech. (2004) P04005](http://iopscience.iop.org/1742-5468/2004/04/P04005). U. Schollwöck, [Ann. Phys. (NY) [**326**]{}, 96 (2011)](http://www.sciencedirect.com/science/article/pii/S0003491610001752). F. Dalfovo, S. Giorgini, L. P. Pitaevskii, S. Stringari [**71**, 463 (1999)](http://rmp.aps.org/abstract/RMP/v71/i3/p463_1). Y. Castin, in *Coherent atomic matter waves*, Lecture Notes of Les Houches Summer School, p.1-136, edited by R. Kaiser, C. Westbrook, and F. David, EDP Sciences and Springer-Verlag (2001). E. P. 
Gross, [Nuovo Cimento **20**, 454 (1961)](http://link.springer.com/article/10.1007/BF02731494). E. P. Gross, [J. Math. Phys. **4**, 195 (1963)](http://jmp.aip.org/resource/1/jmapaq/v4/i2/p195_s1). L. P. Pitaevskii, 1961, Zh. Eksp. Teor. Fiz. **40**, 646 \[Sov. Phys. JETP **13**, 451 (1961)\]. F. D. M. Haldane, [Journal of Physics C: Solid State Physics **14**, 2585 (1981)](http://iopscience.iop.org/0022-3719/14/19/010/). F. D. M. Haldane, [**47**, 1840 (1981)](http://prl.aps.org/abstract/PRL/v47/i25/p1840_1). E. Bettelheim, A. G. Abanov, P. B. Wiegmann, [**97**, 246401 (2006)](http://prl.aps.org/abstract/PRL/v97/i24/e246401). E. Bettelheim, A. G. Abanov, P. Wiegmann, [**97**, 246402 (2006)](http://prl.aps.org/abstract/PRL/v97/i24/e246402). E. Bettelheim, A. G. Abanov, P. B. Wiegmann, [J. Phys. A: Math. Theor. **40** F193 (2007)](http://iopscience.iop.org/1751-8121/40/8/F02?rel=ref&relno=1). E. Bettelheim, A. G. Abanov, P. B. Wiegmann, [J. Phys. A: Math. Theor. **41**, 392003 (2008)](http://iopscience.iop.org/1751-8121/41/39/392003/). T. L. Schmidt, A. Imambekov, L. I. Glazman, [**104**, 116403 (2010)](http://prl.aps.org/abstract/PRL/v104/i11/e116403). T. L. Schmidt, A. Imambekov, L. I. Glazman [**82**, 245104 (2010)](http://prb.aps.org/abstract/PRB/v82/i24/e245104). A. Imambekov, T. L. Schmidt, L. I. Glazman [**84**, 1253 (2012)](http://rmp.aps.org/abstract/RMP/v84/i3/p1253_1). E. Runge, E. K. U. Gross, [**52**, 997 (1984)](http://prl.aps.org/abstract/PRL/v52/i12/p997_1). V. L. Lignères, E. A. Carter, Introduction to Orbital-Free Density Functional Theory, in *Textbook of Materials Modeling* pp. 137-148, S. Yip ed., Springer (2005). M. R. Andrews, C. G. Townsend, H.-J. Miesner, D. S. Durfee, D. M. Kurn, W. Ketterle [Science **275**, 637 (1997)](http://www.sciencemag.org/content/275/5300/637.abstract). M. A. Hoefer, M. J. Ablowitz, I. Coddington, E. A. Cornell, P. Engels, and V. Schweikhard, [**74**, 023623 (2006)](http://pra.aps.org/abstract/PRA/v74/i2/e023623). R. Meppelink, S. B. Koller, J. M. Vogels, and P. van der Straten, E. D. van Ooijen, N. R. Heckenberg, and H. Rubinsztein-Dunlop, S. A. Haine and M. J. Davis, [**80** 043606 (2009)](http://pra.aps.org/abstract/PRA/v80/i4/e043606). J. A. Joseph, J. E. Thomas, M. Kulkarni, A. G. Abanov, [**106**, 150401 (2011)](http://prl.aps.org/abstract/PRL/v106/i15/e150401). Aurel Bulgac, Yuan-Lung Luo, Kenneth J. Roche, [**108**, 150401 (2012)](http://prl.aps.org/abstract/PRL/v108/i15/e150401). L. Salasnich, [Eur. Phys. Lett. **96**, 40007 (2011)](http://iopscience.iop.org/0295-5075/96/4/40007). F. Ancilotto, L. Salasnich, F. Toigo, [**85**, 063612 (2012)](http://pra.aps.org/abstract/PRA/v85/i6/e063612). A. Sommer, M. Ku, G. Roati, M. W. Zwierlein, [Nature **472**, 201–204 (2011)](http://www.nature.com/nature/journal/v472/n7342/full/nature09989.html). E. Bettelheim, L. Glazman, [**109**, 260602 (2012)](http://prl.aps.org/abstract/PRL/v109/i26/e260602). I. V. Protopopov, D. B. Gutman, P. Schmitteckert, A. D. Mirlin, [**87**, 045112 (2013)](http://prb.aps.org/abstract/PRB/v87/i4/e045112). S. Braun, J. P. Ronzheimer, M. Schreiber, S. S. Hodgman, T. Rom, I. Bloch, U. Schneider, [Science **339**, 52 (2013)](http://www.sciencemag.org/content/339/6115/52.full). M. Di Ventra and Y. Dubi, [Europ. Phys. Lett. **85**, 40004 (2009)](http://iopscience.iop.org/0295-5075/85/4/40004/). M. Olshanii, [81, 938 (1998)](http://prl.aps.org/abstract/PRL/v81/i5/p938_1). V. Dunjko, V. Lorent, M. Olshanii, [**86**, 5413 (2001)](http://prl.aps.org/pdf/PRL/v86/i24/p5413_1). 
M. Di Ventra, [*Electrical transport in nanoscale systems*]{}, (Cambridge University Press, 2008). C.-C. Chien, M. Zwolak and M. Di Ventra, [**85** 041601 (2012)](http://pra.aps.org/abstract/PRA/v85/i4/e041601). C. Kollath, U. Schollwöck, J. von Delft, W. Zwerger, [**71**, 053606 (2005)](http://pra.aps.org/pdf/PRA/v71/i5/e053606). D. Muth, M. Fleischhauer, B. Schmidt, [**82**, 013602 (2010)](http://pra.aps.org/abstract/PRA/v82/i1/e013602). D. Muth, [J. Stat. Mech. (2011) P11020](http://iopscience.iop.org/1742-5468/2011/11/P11020/). S. Peotta, D. Rossini, P. Silvi, G. Vignale, R. Fazio, and M. Polini, [**108**, 245302 (2012)](http://prl.aps.org/abstract/PRL/v108/i24/e245302). S. Peotta, D. Rossini, M. Polini, F. Minardi, R. Fazio, [**110**, 015302 (2013)](http://prl.aps.org/abstract/PRL/v110/i1/e015302). M. Knap, C. J. M. Mathy, M. B. Zvonarev, E. Demler, [arXiv:1303.3583](http://arxiv.org/abs/1303.3583). E. M. Stoudenmire, L. O. Wagner, S. R. White, K. Burke, [**109**, 056402 (2012)](http://prl.aps.org/abstract/PRL/v111/i9/e093003). J.-S. Caux, R. M. Konik, [**109**, 175301 (2012)](http://prl.aps.org/abstract/PRL/v109/i17/e175301). T. Cheon, T. Shigehara, [**82** 2536 (1999)](http://prl.aps.org/pdf/PRL/v82/i12/p2536_1). T. Cheon, T. Shigehara, [Phys. Lett. A **243**, 111 (1998)](http://www.sciencedirect.com/science/article/pii/S0375960198001881). See Supplementary Online Material. Y.E. Kim, A.L. Zubarev, [**67**, 015602 (2003)](http://pra.aps.org/pdf/PRA/v67/i1/e015602). E. H. Lieb, R. Seiringer, J. Yngvason, [**91**, 150401 (2003)](http://prl.aps.org/abstract/PRL/v91/i15/e150401). L. Salasnich, A. Parola, L. Reatto, [**70**, 013606 (2004)](http://pra.aps.org/abstract/PRA/v70/i1/e013606). B. Damski, [**69**, 043610 (2004)](http://pra.aps.org/abstract/PRA/v69/i4/e043610). L. Salasnich, A. Parola, L. Reatto, [**72**, 025602 (2005)](http://pra.aps.org/abstract/PRA/v72/i2/e025602). B. Damski, [**73**, 043601 (2006)](http://pra.aps.org/abstract/PRA/v73/i4/e043601). Weizhu Baoa, Shi Jinb, Peter A. Markowich, [J. Comp. Phys. **175**, 487 (2002)](http://www.sciencedirect.com/science/article/pii/S0021999101969566). F. Gleisberg, W. Wonneberger, U. Schloder, C. Zimmermann, [**62**, 063602 (2000)](http://pra.aps.org/abstract/PRA/v62/i6/e063602). P. Vignolo, A. Minguzzi, M. P. Tosi, [**85**, 2850 (2000)](http://prl.aps.org/abstract/PRL/v85/i14/p2850_1). W. Wonneberger, [**63**, 063607 (2001)](http://pra.aps.org/abstract/PRA/v63/i6/e063607). M. Brack, B. P. van Zyl, [**86**, 1574 (2001)](http://prl.aps.org/abstract/PRL/v86/i8/p1574_1). X. Z. Wang, [**65**, 045601 (2002)](http://pra.aps.org/abstract/PRA/v65/i4/e045601). E. J. Mueller, [**93**, 190404 (2004)](http://prl.aps.org/abstract/PRL/v93/i19/e190404). G. B. Whitham, *Linear and nonlinear waves*, Wiley (1974). M. Kulkarni, A. G. Abanov, [**86**, 033614 (2012)](http://pra.aps.org/abstract/PRA/v86/i3/e033614). N. K. Lowman, M. A. Hoefer, [**88**, 013605 (2013)](http://pra.aps.org/abstract/PRA/v88/i1/e013605). G.E. Astrakharchik, J. Boronat, J. Casulleras, S. Giorgini, [**95**, 190407, (2005)](http://prl.aps.org/abstract/PRL/v95/i19/e190407). D. Muth, B. Schmidt, M. Fleischhauer, [ New J. Phys. **12** 083065 (2010)](http://iopscience.iop.org/1367-2630/12/8/083065/). S. Peotta, M. Di Ventra, [arXiv:1307.8416 (2013)](http://arxiv.org/abs/1307.8416). H. Yoshida, [Phys. Lett. A [**150**]{}, 262 (1990)](http://www.sciencedirect.com/science/article/pii/0375960190900923).
--- abstract: 'We show that a null–homologous transverse knot $K$ in the complement of an overtwisted disk in a contact 3–manifold is the boundary of a Legendrian ribbon if and only if it possesses a Seifert surface $S$ such that the self–linking number of $K$ with respect to $S$ satisfies ${\operatorname{sl}}(K,S)=-\chi(S)$. In particular, every null–homologous topological knot type in an overtwisted contact manifold can be represented by the boundary of a Legendrian ribbon. Finally, we show that a contact structure is tight if and only if every Legendrian ribbon minimizes genus in its relative homology class.' address: - 'S. Baader, Dept. Mathematik der ETH Z[ü]{}rich, R[ä]{}mistr. 101, 8092 Z[ü]{}rich, Switzerland' - 'K. Cieliebak, Mathematisches Institut der LMU München, Theresienstr. 39, 80333 München, Germany' - 'T. Vogel, Mathematisches Institut der LMU München, Theresienstr. 39, 80333 München, Germany' author: - 'S. Baader, K. Cieliebak, T. Vogel' title: Legendrian ribbons in overtwisted contact structures --- Introduction ============ In this note, $(M,\xi)$ will always denote a cooriented contact 3–manifold, i.e. $\xi=\ker\lambda$ for a 1–form $\lambda$ on $M$ satisfying $\lambda\wedge d\lambda\neq 0$. An oriented knot $K \subset M$ is called [*(positively) transverse*]{} if $\lambda(\dot\gamma)>0$ for a positive parametrization $\gamma:S^1\to K$. We will always assume that $K$ is null–homologous. Then $K$ possesses a [*Seifert surface*]{}, i.e. an embedded oriented connected surface $\Sigma\subset M$ with boundary ${\partial}\Sigma=K$. Given $K$ and $\Sigma$, we choose a nowhere vanishing section $v$ of $\xi{\big|_{\Sigma}}$ and use $v$ to push $K$ away from itself. The resulting knot is denoted by $K'$. The [*self–linking number*]{} ${\operatorname{sl}}(K,\Sigma)$ is defined as the algebraic intersection number of $K'$ and $\Sigma$. The self–linking number ${\operatorname{sl}}(K,\Sigma)$ depends only on $K$ and $[\Sigma]\in H_2(M,K)$ and is independent of the choice of $v$. Moreover it is always odd (since it has the same parity as the Euler characteristic of $\Sigma$). When it is clear which Seifert surface we use we will simply write ${\operatorname{sl}}(K)$. Obviously, isotopic transverse knots have the same self–linking number (with respect to Seifert surfaces carried along with the isotopy). A contact structure $\xi$ on a manifold $M$ is called *overtwisted* if $M$ contains an overtwisted disc, i.e. an embedded disc whose boundary is a Legendrian unknot with Thurston–Bennequin number zero (see Figure 1). Otherwise the contact structure $\xi$ is called *tight*. ![](overtwisted) The following result was first proved for the standard contact structure on ${\mathbb{ R}}^3$ by D. Bennequin in [@Be] and then generalized to tight contact structures by Y. Eliashberg in [@El]. \[t:bennequin\] If $(M,\xi)$ is a tight contact 3–manifold and $K$ is a transverse knot with Seifert surface $\Sigma$, then $$\label{e:tb ungl} {\operatorname{sl}}(K,\Sigma)\le-\chi(\Sigma).$$ A [*Legendrian graph*]{} in a contact 3–manifold $(M,\xi)$ is a trivalent embedded graph $\Gamma\subset M$ such that all edges are tangent to $\xi$. To a Legendrian graph one can associate a transverse link type as follows. Choose a surface $\Sigma_\Gamma$ containing $\Gamma$ with smooth boundary such that $\Sigma_\Gamma$ is tangent to $\xi$ at every point of $\Gamma$. We use the orientation of the contact structure to orient $\Sigma_\Gamma$. Then for $\Sigma_\Gamma$ sufficiently small (i.e. 
after replacing $\Sigma_\Gamma$ by a sufficiently small neighbourhood of $\Gamma$ in $\Sigma_\Gamma$), its boundary $\partial \Sigma_\Gamma$ is a link all of whose components are positive transverse knots. The isotopy class of the resulting transverse link depends only on $\Gamma$. We refer to $\Sigma_\Gamma$ as a [*Legendrian ribbon*]{}. Legendrian Ribbons and overtwisted discs ======================================== The following theorem is an analogue of a result explained in [@Dy] for overtwisted knots. Before stating it, recall that positive transverse knots are obtained from knots tangent to $\xi$ by choosing an oriented framing of $\xi$ such that one of the components of the framing is tangent to the knot and pushing the knot in the direction opposite to the second component of the framing. As described for example in [@Et], a positive transverse knot $\gamma$ can be stabilized to a knot $\hat{\gamma}$ with ${\operatorname{sl}}(\hat{\gamma})={\operatorname{sl}}(\gamma)-2$. \[t:class\] Let $(M,\xi)$ be a contact 3–manifold and $D\subset M$ an overtwisted disc. If $L,K$ are two null–homologous transverse knots lying in the complement of $D$ such that $K,L$ represent the same topological knot type and ${\operatorname{sl}}(K)={\operatorname{sl}}(L)$ (with respect to Seifert surfaces carried along with the isotopy), then $L$ and $K$ are isotopic as transverse knots. Any two transverse knots $K,L$ representing the same topological knot type become transversely isotopic after sufficiently many stabilizations; this fact can be shown in the same way as the analogous statement for Legendrian knots, cf. [@Et]. (At this point the contact structures are allowed to be tight or overtwisted.) Of course, every stabilization changes the self–linking number of the knots and therefore this procedure by itself does not produce a transverse isotopy between the original knots $K,L$. However, when $K,L$ lie in the complement of a fixed overtwisted disc $D$, then one can neutralize each stabilization by pulling a segment of the knot over $D$. Indeed, if $\partial D$ is oriented in such a way that ${\operatorname{rot}}(\partial D)=1$, then its positive transversal push–off $\partial D^+$ has self–linking number one, according to the general formula $${\operatorname{sl}}(\gamma^+)={\operatorname{tb}}(\gamma)+{\operatorname{rot}}(\gamma),$$ which holds for all null–homologous Legendrian knots $\gamma$ in a 3–manifold (see [@Be]). Here ${\operatorname{tb}}$ and ${\operatorname{rot}}$ denote the Thurston–Bennequin number and the rotation number, respectively. Given two transversal knots $\gamma, \gamma'$ in $M$ which lie in disjoint balls one can define the connected sum such that the self–linking number of the resulting knot $\gamma\#\gamma'$ satisfies $${\operatorname{sl}}(\gamma\#\gamma')={\operatorname{sl}}(\gamma)+{\operatorname{sl}}(\gamma')+1.$$ Let $D_{ot}$ be an overtwisted disc and $\gamma'$ the positive push–off of the boundary $\partial D_{ot}$ with ${\operatorname{sl}}(\gamma')=+1$. If $\gamma$ is a positive transverse knot which is disjoint from $D_{ot}$, then ${\operatorname{sl}}(\gamma\#\gamma')={\operatorname{sl}}(\gamma)+2$ and $\gamma\#\gamma'=\gamma$ as topological knot types. We call $\gamma\#\gamma'$ the destabilization of $\gamma$. A stabilization of $\gamma\#\gamma'$ yields a knot which is isotopic to $\gamma$ as a positive transverse knot. The isotopy is obtained by a push–off of the isotopy constructed in Lemma 4.7 in [@Dy] (where a similar situation for Legendrian knots is considered).
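Combining the two formulas above gives a quick consistency check on this claim: a destabilization followed by a stabilization leaves the self–linking number unchanged, $${\operatorname{sl}}\big(\widehat{\gamma\#\gamma'}\big)={\operatorname{sl}}(\gamma\#\gamma')-2=\big({\operatorname{sl}}(\gamma)+{\operatorname{sl}}(\gamma')+1\big)-2={\operatorname{sl}}(\gamma)+1+1-2={\operatorname{sl}}(\gamma),$$ in agreement with the fact that the stabilization of $\gamma\#\gamma'$ is transversely isotopic to $\gamma$.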
In order to construct a transverse isotopy between $K$ and $L$ we follow a procedure used in [@Dy] in the context of Legendrian knots. We first isotope a segment of $K$ such that it coincides with a segment $\sigma$ of $L$. Then we stabilize both knots sufficiently often on the complements of $\sigma$ and we destabilize $\sigma$ using $D$ in order to undo the stabilizations. If this is done sufficiently many times, then the complements of $\sigma$ become transversely isotopic while $K$ and $L$ still coincide along $\sigma$. Because we have never changed the transverse knot types, this shows that $K,L$ are isotopic transverse knots. The following lemma shows that transverse knots obtained from Legendrian ribbons realize equality in the Thurston–Bennequin inequality \[e:tb ungl\]. Y. Kanda has given examples of topological knot types for which the equality in \[e:tb ungl\] cannot be realized by any transverse representative, cf. [@Ka]. In particular, such knots cannot be boundaries of Legendrian ribbons. \[l:euler\] Let $K=\partial\Sigma_\Gamma$ be the boundary of a Legendrian ribbon $\Sigma_\Gamma$. Then ${\operatorname{sl}}(K,\Sigma_\Gamma)=-\chi(\Sigma_\Gamma)$. Let $w$ be a vector field along $K={\partial}\Sigma_\Gamma$ which is tangent to $\Sigma_\Gamma$ and points outwards. As $\Sigma_\Gamma$ is almost tangent to $\xi$, $w$ projects to a nonvanishing section $v$ of $\xi$ along $K$. The self–linking number of $K$ can be described as the obstruction to the existence of a nowhere vanishing extension over $\Sigma_\Gamma$ of $v$ as a section of $\xi$, or equivalently, of $w$ as a section of $T\Sigma_\Gamma$. But the latter obstruction equals minus the Euler characteristic, hence ${\operatorname{sl}}(K,\Sigma_\Gamma)=-\chi(\Sigma_\Gamma)$. \[t:ribbon\] Let $K$ be a transverse knot in an overtwisted contact 3–manifold $(M,\xi)$ with Seifert surface $\Sigma$. Assume that $M\setminus K$ is still overtwisted and that ${\operatorname{sl}}(K,\Sigma)=-\chi(\Sigma)$. Then there is a Legendrian graph $\Gamma$ with $\partial\Sigma_\Gamma=K$. Moreover, $[\Sigma_\Gamma]=[\Sigma]\in H_2(M,K)$ and $\chi(\Sigma_\Gamma)=\chi(\Sigma)$. We fix an overtwisted disc $D$ in the complement of $K$. Without loss of generality we may assume that $D$ and $\Sigma$ do not intersect. The following constructions can be carried out in the complement of $D$. Let $G\subset \Sigma$ be a trivalent graph such that $G$ is a deformation retract of $\Sigma$. After a $C^0$–small isotopy of $\Sigma$ we may assume that $\Sigma$ is tangent to $\xi$ at the vertices of $G$ such that the orientations of $\Sigma$ and of $\xi$ coincide at the vertices. Moreover, we may assume that the edges of $G$ are tangent to $\xi$ near the vertices. Next, we smoothly isotope each edge to a Legendrian curve contained in a small neighbourhood of the original edge, fixing it near the vertices, and pull along $\Sigma$ with the isotopy. After doing this, $\Sigma$ and $\xi$ need not induce the same framing of the edges. However, the framings can be arranged to agree for each edge by either stabilizing the edge sufficiently often or by sliding the edge sufficiently often over $D$, cf. [@Dy]. These operations correspond to taking a connected sum with a Legendrian unknot with Thurston–Bennequin number $-1$ or $+1$, respectively. The latter only exist in the presence of an overtwisted disc. An example of a Legendrian ribbon whose core curve is a Legendrian unknot with Thurston–Bennequin number $+1$ is shown in Figure 2.
The two ‘horizontal’ parts of the ribbon lie on the boundary of two parallel overtwisted discs there. ![](tbone) We denote the resulting Legendrian graph by $\Gamma$ and the resulting transverse knot by $L={\partial}\Sigma_\Gamma$. By construction, $K$ and $L$ are equivalent as topological knots, ${\operatorname{sl}}(K)=-\chi(\Sigma)=-\chi(\Sigma_\Gamma)={\operatorname{sl}}(L)$ by Lemma \[l:euler\], and the complement of $K\cup L$ is still overtwisted. So by [Theorem \[t:class\]]{} the knots $K$ and $L$ are transversely isotopic. This isotopy can be realized by an ambient contact isotopy (see [@Et]), so after pulling along $\Sigma_\Gamma$ with this isotopy we may assume ${\partial}\Sigma_\Gamma=K$. By construction we have $[\Sigma_\Gamma]=[\Sigma]\in H_2(M,K)$ and $\chi(\Sigma_\Gamma)=\chi(\Sigma)$, which finishes the proof of the theorem. Denote by $g(S)$ the genus of a surface $S$. Let $(M,\xi)$ be an overtwisted contact manifold. Let $K\subset M$ be a null–homologous transverse knot in the complement of an overtwisted disc and $S$ be a Seifert surface for $K$ which minimizes the genus in its relative homology class. Then $K$ is the boundary of a Legendrian ribbon representing the class $[S]\in H_2(M,K)$ if and only if ${\operatorname{sl}}(K,S)\geq 2g(S)-1$. In particular, every null–homologous topological knot type in an overtwisted contact $3$–manifold can be represented by the boundary of a Legendrian ribbon. Suppose first that ${\operatorname{sl}}(K,S)\geq 2g(S)-1$. By taking connected sums with null–homologous tori we can increase the genus of $S$ by any positive integer, without changing its relative homology class. Due to the assumptions, this allows us to find a Seifert surface $\Sigma$ for $K$, homologous to $S$, with ${\operatorname{sl}}(K,\Sigma)=2g(\Sigma)-1=-\chi(\Sigma)$. So by Theorem \[t:ribbon\], $K$ is the boundary of a Legendrian ribbon representing the class $[S]\in H_2(M,K)$. Conversely, if $K={\partial}\Sigma_\Gamma$ for a Legendrian graph $\Gamma$ with $[\Sigma_\Gamma]=[S]\in H_2(M,K)$, then Lemma \[l:euler\] yields ${\operatorname{sl}}(K,S)=-\chi(\Sigma_\Gamma)=2 g(\Sigma_\Gamma)-1\geq 2g(S)-1$ because $S$ minimizes the genus in its relative homology class. The last statement holds because any topological knot type in an overtwisted contact 3–manifold can be realized by a transverse knot $K$ in the complement of an overtwisted disc, and by repeated destabilization we can arrange ${\operatorname{sl}}(K,S)\geq 2g(S)-1$. As shown in [@BaM], the situation in tight contact structures is quite different: A transverse knot bounding a Legendrian ribbon is quasipositive in the sense of Rudolph [@Ru]. Quasipositivity is quite a strong condition, as it implies chirality. For example, the figure–8 knot is not quasipositive since it is achiral, i.e. topologically equivalent to its mirror image. A classification of quasipositive knots up to 10 crossings is given in [@Ba]. To conclude this note, we give a characterization of tightness in terms of Legendrian ribbons. A contact structure $\xi$ on a 3–manifold $M$ is tight if and only if every Legendrian ribbon has minimal genus among all embedded surfaces with the same boundary and in the same relative homology class. Assume first that $\xi$ is tight and $\Sigma_\Gamma$ is a Legendrian ribbon with boundary ${\partial}\Sigma_\Gamma=K$.
By Lemma \[l:euler\] we have ${\operatorname{sl}}(K,\Sigma_\Gamma)=-\chi(\Sigma_\Gamma)$, and by Theorem \[t:bennequin\] every other Seifert surface $\Sigma$ for $K$ homologous to $\Sigma_\Gamma$ satisfies ${\operatorname{sl}}(K,\Sigma)\leq-\chi(\Sigma)$, so $g(\Sigma)\geq g(\Sigma_\Gamma)$. Conversely, if $\xi$ is overtwisted, then we construct a Legendrian ribbon $\Sigma_\Gamma$ of genus one whose boundary $K={\partial}\Sigma_\Gamma$ is the topological unknot, as shown in Figure 3. ![](unknot) Since $\Sigma_\Gamma$ is contained in the neighbourhood of an overtwisted disc, i.e. in a ball, it is homologous rel $K$ to a disc and therefore not genus minimizing. [12345]{} S. Baader, [*Slice and Gordian numbers of track knots*]{}, Osaka J. Math. 42 (2005), no. 1, 257–271. S. Baader, M. Ishikawa, [*Legendrian graphs and quasipositive diagrams*]{}, arXiv:math.GT/0609592. D. Bennequin, [*Entrelacements et [é]{}quations de Pfaff*]{}, Ast[é]{}risque 107–108 (1983), 83–161. K. Dymara, [*Legendrian knots in overtwisted contact structures*]{}, arXiv:math.GT/0410122. Y. Eliashberg, [*Contact $3$–manifolds twenty years since J. Martinet’s work*]{}, Ann. Inst. Fourier 42, 1–2 (1992), 165–192. J. Etnyre, [*Legendrian and transversal knots*]{}, preprint 2003. D. Fuchs, S. Tabachnikov, [*Invariants of Legendrian and transverse knots in the standard contact space*]{}, Topology 36 no. 5 (1997), 1025–1053. Y. Kanda, [*On the Thurston–Bennequin invariant of Legendrian knots and nonexactness of Bennequin’s inequality*]{}, Invent. Math. 133 no. 2 (1998), 227–242. L. Rudolph, [*Constructions of quasipositive knots and links III. A characterization of quasipositive Seifert surfaces*]{}, Topology 31 no. 2 (1992), 231–237.
--- abstract: | Future galaxy surveys hope to realize significantly tighter constraints on various cosmological parameters. The higher number densities achieved by these surveys will allow them to probe the smaller scales affected by non-linear clustering. However, in these regimes, the standard power spectrum can extract only a portion of such surveys’ cosmological information. In contrast, the alternate statistic $A^*$ has the potential to double these surveys’ information return, provided one can predict the $A^*$-power spectrum for a given cosmology. Thus, in this work we provide a prescription for this power spectrum $P_{A^*}(k)$, finding that the prescription is typically accurate to about 5 per cent for near-concordance cosmologies. This prescription will thus allow us to multiply the information gained from surveys such as *Euclid* and WFIRST.\ author: - | Andrew Repp & István Szapudi\ Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA bibliography: - 'PkAstar.bib' title: 'Predicting the Sufficient-Statistics Power Spectrum for Galaxy Surveys: A Recipe for $P_{A^*}(k)$' --- \[firstpage\] cosmology: theory – cosmological parameters – cosmology: miscellaneous Introduction ============ Cosmology – the characterization of the Universe as a whole – seeks a precise determination of the $\Lambda$CDM parameters. In particular, those dealing with dark energy and neutrino mass are not yet well-constrained. Galaxy surveys constitute one of the most promising avenues of approach to this problem since they permit direct comparison between the observed galaxy distribution statistics and those predicted for various cosmological parameter values. The degree of anticipation associated with upcoming surveys such as *Euclid* [@EuclidRedBook] and the Wide Field InfraRed Survey Telescope (WFIRST – [@WFIRST]) reflects the expected value of these surveys. Of the various statistics one could use for this comparison, the power spectrum $P(k)$ of the overdensity $\delta = \rho/\overline{\rho} - 1$ (or, the two-point correlation function $\xi(r)$, which is its Fourier transform) has perhaps received the most attention (e.g. [@Peebles1980; @BaumgartFry1991; @Martinez2009]). One reason for this emphasis is that the two-point statistics of a Gaussian distribution completely characterize the distribution, and thus the power spectrum of the distribution exhausts the information inherent in it. And since the fluctuations in the cosmic microwave background appear (so far) to be consistent with primordial Gaussianity [@Planck2015NG] – and since the matter distribution remains roughly Gaussian on large, “linear” scales ($k \la 0.1h$ Mpc$^{-1}$) – it follows that $P(k)$ is the statistic of choice for analyzing galaxy surveys at these scales. Future surveys, however, promise a galaxy number density sufficient to probe much smaller scales. At these scales, nonlinear gravitational amplification has over time produced an extremely non-Gaussian matter distribution (e.g., [@FryPeebles1978; @Sharp1984; @SSB1992; @Bouchet1993; @Gaztanaga1994]). The long positive tail – and the correspondingly higher stochastic incidence of massive clusters – heavily impacts the power spectrum on these small scales, resulting in large cosmic variance. This variance in turn markedly reduces the cosmological Fisher information captured by the power spectrum $P(k)$ (e.g., [@RimesHamilton2005; @RimesHamilton2006; @NSR2006]). 
In particular, pushing a survey to smaller scales will not proportionately increase the Fisher information in $P(k)$, due to coupling between large and small Fourier modes [@MeiksinWhite1999; @ScocZH1999], which results in an “information plateau” [@NeyrinckSzapudi2007; @LeePen2008; @Carron2011; @CarronNeyrinck2012; @Wolk2013]. Hence, standard methods of analysis using the power spectrum can miss a large fraction – in some cases, approximately half [@WCS2015a; @WCS2015b; @Repp2015] – of the Fisher information inherent in these surveys. The log transform provides a means of recovering this information [@Neyrinck2009]. Furthermore, using the theory of sufficient statistics (observables which capture all of the field’s information) @CarronSzapudi2013 find that for typical cosmological fields, the log transform yields an alternate statistic $A = \ln(1+\delta)$ which is essentially sufficient: i.e., this transformation counteracts nonlinear evolution to the point where the first two moments of $A$ contain virtually all of the cosmological information in any given survey pixel. It follows that the power spectrum $P_A(k)$ and mean $\langle A\rangle$ of this alternate statistic are the quantities one should study in order to deduce cosmological information from future surveys. To this end, @ReppPAk provide a simple and accurate fit for $P_A(k)$, and @ReppApdf provide a similar prescription for $\langle A\rangle$; they also show that a Generalized Extreme Value (GEV) model fits the one-point distribution of $A$ quite well. However, the statistic $A$ describes only the continuous dark matter distribution; the discreteness of galaxy counts (an empty cell of which would render the log transform problematic) requires modification of $A$. For such fields, @CarronSzapudi2014 provide an analysis of the discrete optimal observable, denoting this observable as $A^*$. Hence, in order to avoid the information loss incurred by application of $P(k)$ to future dense galaxy surveys, one should perform the analysis using the $A^*$ statistic: i.e., one should compare the observed power spectrum $P_{A^*}(k)$ and mean $\langle A^* \rangle$ with the predictions of these quantities for various cosmological parameter values[^1]. To do so, of course, one requires the ability to make said predictions of $P_{A^*}(k)$ and $\langle A^* \rangle$. The aforementioned $A$-probability distribution allows prediction of $\langle A^*\rangle$ [@ReppAstarbias], leaving characterization of the $A^*$-power spectrum the remaining problem. @WCS2015b identify the most salient feature of $P_{A^*}(k)$, namely, that it is biased with respect to the (continuous) log spectrum $P_A(k)$. @ReppAstarbias provide an a priori prescription for this bias in near-concordance cosmologies, with an accuracy better than 3 per cent for *Euclid*-like surveys. In this paper we complete the task begun in @ReppAstarbias by providing a detailed characterization of the $A^*$ power spectrum, including its discreteness plateau and the shape change incurred by passing from $A$ to $A^*$. We organize the work as follows: the relevant background appears in Section \[sec:Astar\], which reviews and defines the $A^*$ statistic, and in Section \[sec:bias\], which provides the $A^*$-bias prescription and briefly discusses its limits of applicability. Section \[sec:disc\] analyzes the plateau introduced into $P_{A^*}(k)$ by the discreteness of the galaxy field – analogous to (but not equal to) the $1/\overline{n}$ shot noise plateau in the standard power spectrum. 
Section \[sec:shape\] then characterizes (and provides a prescription for) the shape of $P_{A^*}(k)$. We quantify the accuracy of our prescription in Section \[sec:accuracy\], and we conclude in Section \[sec:concl\]. The Discrete Sufficient Statistic $A^*$ {#sec:Astar} ======================================= @CarronSzapudi2013 demonstrate that the log transform $A = \ln (1+\delta)$ yields a statistic that is essentially “sufficient,” in that it extracts (virtually) all of the Fisher information in a survey.[^2] Because this transformation thus approximately Gaussianizes the overdensity field $\delta$, the power spectrum $P_A(k)$ of the log overdensity extracts substantially more information at small scales than the power spectrum $P(k)$ of the overdensity field itself. In reality, of course, one surveys not the dark matter density but the galaxy distribution, thus introducing shot noise. Under the assumption that light traces mass, galaxy surveys represent a discretization of the underlying dark matter field. Since $A$ is a continuous variable, it requires modification in order to serve as an efficient information-extractor from a discrete field. For this reason, @CarronSzapudi2014 provide the appropriate generalization of the log transform to discrete fields, formulating a statistic which they denote $A^*$, and showing that it is a good approximation to a sufficient statistic for discrete fields. $A^*$ is the Bayesian reconstruction of the underlying dark matter field, given the measured galaxy counts $N$. In particular, to construct $A^*(N)$ one must first know the probability distribution $\mathcal{P}(A)$ of the log density contrast $A$ (or equivalently the distribution $\mathcal{P}(\delta)$ of $\delta$).[^3] One must also assume a discrete sampling scheme $\mathcal{P}(N|A)$, which provides the probability of finding $N$ galaxies given an underlying dark matter log density $A$. Perhaps the simplest such scheme is Poisson sampling, for which $$\mathcal{P}(N|A) = \frac{1}{N!} \left( \overline{N} e^A\right)^N \exp \left( -\overline{N}e^A \right), \label{eq:Poisson}$$ where $\overline{N}$ is the average number of galaxies per survey pixel. Given these two distributions $\mathcal{P}(A)$ and $\mathcal{P}(N|A)$, @CarronSzapudi2014 define $A^*(N)$ as the value of $A$ which maximizes $\mathcal{P}(A)\mathcal{P}(N |A)$; they further show that $A^*(N)$ is also the peak of the Bayesian a posteriori distribution for the dark matter log density in a survey pixel containing $N$ galaxies. In the following two subsections, we provide expressions for $A^*$ under the assumption of Poisson sampling, given two approximations for the distribution of dark matter $\mathcal{P}(A)$. It is straightforward to define $A^*$ for other dark matter probability distributions and sampling schemes. $A^*$ for a Lognormal Matter Distribution ----------------------------------------- A lognormal model for the cosmic matter distribution arises naturally from simple assumptions [@ColesJones; @KTS2001] and is an accurate approximation in the projected, two-dimensional case. It is this model (with Poisson sampling) with which @CarronSzapudi2014 explicitly deal, concluding that $A^*(N)$ is the solution of $$e^{A^*} + \frac{A^*(N)}{\overline{N} \sigma_A^2} = \frac{N-1/2}{\overline{N}}. \label{eq:Astarln}$$ Here $\overline{N}$ is the average number of galaxies per survey pixel and $\sigma^2_A$ is the variance of the log dark matter density contrast. 
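Equation \[eq:Astarln\] defines $A^*(N)$ only implicitly, so in practice it is solved numerically for each $N$. A minimal sketch (assuming scipy is available; the function name and the root-finding bracket are our illustrative choices):

```python
import numpy as np
from scipy.optimize import brentq

def a_star_lognormal(N, Nbar, sigma_A2):
    """Solve e^A + A/(Nbar*sigma_A^2) = (N - 1/2)/Nbar for A = A*(N)
    (lognormal matter prior with Poisson sampling, Eq. eq:Astarln)."""
    f = lambda A: np.exp(A) + A / (Nbar * sigma_A2) - (N - 0.5) / Nbar
    # The left-hand side is strictly increasing in A, so the root is unique;
    # a generous fixed bracket is ample for realistic Nbar and sigma_A^2.
    return brentq(f, -50.0, 50.0)

# Example: Nbar = 1 galaxy per pixel, sigma_A^2 = 1
print([round(a_star_lognormal(N, 1.0, 1.0), 3) for N in range(6)])
```

The same root-finding approach applies, with a different left-hand side, to the GEV analogue given in the next subsection.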
It is through these two parameters that $A^*$ depends on the discrete sampling scheme and on the underlying dark matter distribution, respectively. $A^*$ for a log-GEV Matter Distribution --------------------------------------- If we consider the matter distribution in three dimensions (rather than projecting to two), it departs significantly from the lognormal on translinear scales. We show in @ReppApdf that a Generalized Extreme Value (GEV) distribution provides a better fit to the $A$-distribution (in contrast to a Gaussian distribution for $A$, which would follow from a lognormal distribution for $\delta$). In particular, we show that for redshifts $z=0$ to 2 and for scales down to $2h^{-1}$Mpc, the following distribution is an excellent fit to the Millennium Simulation [@MillSim] results: $$\label{eq:GEV} \mathcal{P}(A) = \frac{1}{\sigma_G} t(A)^{1+\xi_G} e^{-t(A)},$$ where $$\label{eq:GEV_t} t(A) = \left(1 + \frac{A - \mu_G}{\sigma_G}\xi_G\right)^{-1/\xi_G}.$$ Here, $\mu_G$, $\sigma_G$, and $\xi_G$ depend on the mean $\langle A \rangle$, variance $\sigma_A^2$, and skewness $\gamma_1$ of $A$, as follows: $$\gamma_1 = -\frac{\Gamma(1-3\xi_G) - 3\Gamma(1-\xi_G)\Gamma(1-2\xi_G) + 2\Gamma^3(1-\xi_G)}{\left(\Gamma(1-2\xi_G) - \Gamma^2(1-\xi_G)\right)^{3/2}} \label{eq:xiG}$$ $$\sigma_G = \sigma_A \xi_G \cdot \left(\Gamma(1-2\xi_G) - \Gamma^2(1-\xi_G)\right)^{-1/2} \label{eq:sigG}$$ $$\mu_G = \langle A \rangle - \sigma_G \frac{\Gamma(1-\xi_G) - 1}{\xi_G}, \label{eq:muG}$$ where $\Gamma(x)$ is the gamma function. In @ReppAstarbias we show that Poisson sampling of a GEV distribution yields the following equation for $A^*$: $$\begin{gathered} \label{eq:AstarGEV} \frac{1}{\sigma_G} \left( 1 + \frac{A^*(N) - \mu_G}{\sigma_G} \xi_G \right)^{-1-\frac{1}{\xi_G}} + N \\ = \frac{1+\xi_G}{\sigma_G + \left(A^*(N) - \mu_G\right)\xi_G} + \overline{N}e^{A^*(N)}.\end{gathered}$$ Once again, $A^*(N)$ depends on the sampling scheme through the $\overline{N}$ parameter, and it depends on the dark matter distribution through the $\mu_G$, $\sigma_G$, and $\xi_G$ parameters. It is Equation \[eq:AstarGEV\] which we use for calculating $A^*$ throughout the remainder of this paper. The Bias of the $A^*$-Power Spectrum {#sec:bias} ==================================== To a first approximation, the power spectrum of $A^*$ exhibits the same shape as its continuous analog $P_A(k)$, with the exception of a multiplicative bias [@WCS2015b]. @WCS2015b also provide an approximate formula for this bias in the case of a two-dimensional (projected) galaxy survey, assuming a lognormal probability distribution. For conceptual clarity, it is important to note that this bias (denoted $b^2_{A^*}$ below) is unrelated to the more commonly encountered “galaxy bias,” which expresses the fact that galaxies cluster more strongly than dark matter. The latter depends on galaxy formation physics, whereas the former is a statistical effect of the passage from $A$ to $A^*$. Even in the case of identical galaxy- and dark matter-clustering (i.e., a galaxy bias of unity), the power spectrum $P_{A^*}(k)$ would nevertheless exhibit an offset from $P_A(k)$ in the amount of $b^2_{A^*}$. In this work we deal solely with the $A^*$-bias; we do not again mention galaxy bias until the penultimate sentence of Section \[sec:concl\].
To deal with the full three-dimensional data, @ReppAstarbias derive an expression for the bias in terms of the discrete sampling scheme $\mathcal{P}(N|A)$ and the underlying dark matter distribution $\mathcal{P}(A)$: $$b^2_{A^*} = \frac{1}{\sigma_A^4} \left\lbrace \sum_N\int dA\,(A-\overline{A})(A^* - \overline{A^*})\mathcal{P}(N|A)\mathcal{P}(A) \right\rbrace^2. \label{eq:Astarbias}$$ The accuracy of this formula depends on the assumption that at large scales the correlation functions $\xi$ of $A$ and of $A^*$ have the same shape, so that $\xi_{A^*}(r) = b_{A^*}^2 \xi_A(r)$. This assumption is not completely valid, as we mention below (albeit in the context of the power spectra rather than the correlation functions); indeed, when the average number $\overline{N}$ of particles per cell is too low ($\overline{N} \la 0.5$), the shapes are sufficiently different that Equation \[eq:Astarbias\] yields too low a value for $b^2_{A^*}$. However, the practical applicability of galaxy survey results is limited to scales at which $\overline{N} \ga 1$, and in this regime we can use Equation \[eq:Astarbias\] to provide the overall bias and then make the slight shape modifications discussed in the following sections. Note that below (see Section \[sec:shape\]) we refine our understanding of this bias in terms of the decomposition of $A^*$ accomplished in Section \[sec:disc\]. Equation \[eq:Astarbias\] receives similar modification. Discreteness Effects in $P_{A^*}(k)$ {#sec:disc} ==================================== It is well-known (e.g., [@Peebles1980]) that if one Poisson-samples a continuous density contrast field $\delta(\mathbf{r})$ to obtain a discrete density contrast $\delta_d(\mathbf{r})$, then the power spectra of the two fields relate as follows: $$P_d(k) = P(k) + \frac{1}{\overline{n}}, \label{eq:1_over_n}$$ where $P_d(k)$ is the power spectrum of $\delta_d$, $P(k)$ is the power spectrum of $\delta$, and $\overline{n}$ is the number density in units of inverse volume. As @Neyrinck2011 note, the discrete log spectrum exhibits a similar plateau at high values of $k$. Thus, we here derive an expression for this analogous $A^*$-discreteness plateau. We first (Section \[sec:N\]) derive the power spectrum for the field of galaxy number counts $N$ rather than for the density contrast $\delta_d$. Then we derive (Section \[sec:f\]) the power spectrum for an arbitrary function $f$ of $N$, under the assumption that the field $N$ is uncorrelated. We next (Section \[sec:deltaAstar\]) decompose $A^*$ into correlated and uncorrelated components, permitting determination of the $A^*$-discreteness plateau. Discussion of the plateau follows in Section \[sec:discretediscussion\]. The Number Count Field {#sec:N} ---------------------- To derive the discreteness plateau for $A^*$, we first consider the power spectrum $P_N(k)$ for the actual number count field (i.e., the number of galaxies in each cell). Number counts depend on survey cell size in a way that densities do not. In any given survey cell $\mathbf{r}_i$, we have the number count $N_i = \overline{N} ( \delta_d(\mathbf{r}_i) + 1)$, where $\overline{N}$ is the mean number of counts per cell. Thus to obtain $P(N)$ we simply multiply Equation \[eq:1\_over\_n\] by the square of $\overline{N}$: $$P_N(k) = \overline{N}^{\,2} P(k) + \frac{\overline{N}^{\,2}}{\overline{n}} = \overline{N}^{\,2} P(k) + \overline{N}\, \delta V, \label{eq:PN}$$ where $\delta V$ is the size of a survey cell. 
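The discreteness terms in Equations \[eq:1\_over\_n\] and \[eq:PN\] can be reproduced in a one-dimensional toy calculation; the sketch below is purely illustrative (the normalization convention $P(k)=\delta V\,|\delta_k|^2/N_{\rm cells}$ and all numerical values are our choices, not taken from the surveys or simulations discussed here):

```python
import numpy as np

rng = np.random.default_rng(0)
Nbar, ncells, dV = 2.0, 65536, 1.0        # mean counts per cell, number of cells, cell volume
nbar = Nbar / dV                          # number density

# Poisson-sample a uniform underlying field (delta = 0, so the continuous P(k) = 0):
N = rng.poisson(Nbar, size=ncells)
delta_d = N / Nbar - 1.0

# Periodogram with the convention P(k) = dV * |delta_k|^2 / ncells:
Pk = dV * np.abs(np.fft.rfft(delta_d))**2 / ncells
print(Pk[1:].mean(), 1.0 / nbar)              # density contrast: plateau at 1/nbar
print((Nbar**2 * Pk[1:]).mean(), Nbar * dV)   # number counts:   plateau at Nbar * dV
```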
Transforming Uncorrelated Counts {#sec:f} -------------------------------- In practice we smooth (integrate) over finite subcells – the survey pixels – before taking the power spectrum. Since integration is a linear operation, the only effect of doing so is the introduction of pixel window effects into the continuous (correlated) part of Equations \[eq:1\_over\_n\] and \[eq:PN\]; the discreteness term is unaffected. The situation changes with the $A^*$ field. If we begin with a discrete field $n(\mathbf{r})$, we would integrate over a finite subcell to obtain $N_i$ and then determine $A^*(N_i)$. Only after this highly nonlinear $A^*$ transformation do we take the Fourier transform to get $P_{A^*}(k)$. To address this problem of nonlinearity, we begin by considering an uncorrelated field of number counts $N$, and we let $f(N)$ be an arbitrary transformation of this field, subject only to the condition that the transformed field $f(N)$ remain uncorrelated. We first note that the power spectrum of any uncorrelated field is constant. This being the case, let $C$ be the constant value of the power spectrum $P_f(k)$ of $f(N)$, and consider a cubical volume divided into cubical survey cells of side length $(\delta V)^{1/3}$. Then $k_N = (\delta V)^{-1/3}\pi$ is the Nyquist frequency of the survey, and we can obtain the variance $\sigma_f^2$ of the field by integrating the power spectrum over a cube in $k$-space of side length $2k_N$: $$\sigma_f^2 = \int_{-k_N}^{k_N} \frac{dk_1}{2\pi} \int_{-k_N}^{k_N} \frac{dk_2}{2\pi} \int_{-k_N}^{k_N} \frac{dk_3}{2\pi} P_f(k) = C\left( \frac{k_N}{\pi} \right)^3.$$ But $k_N^3 = \pi^3/\delta V$, so $\sigma_f^2 = C/\delta V$. We thus obtain the simple result that $$P_f(k) = \delta V \cdot \sigma_f^2. \label{eq:PAsdisc}$$ We can gain more insight into this result by temporarily imposing the additional assumption that the transformation $f$ is a function $f(N)$ of only the number counts in each cell (rather than, say, depending on the underlying dark matter density). In this case, if we assume that $\overline{N}$ is small enough that $N_i = 1$ or 0 for all cells, then the transformation is linear, being determined by the two points $N = 0 \longmapsto f(0)$ and $N = 1 \longmapsto f(1)$. The transformation from $N$ to $f$ then consists of only a scaling (and an irrelevant amplitude shift). Thus the discreteness correction in Equation \[eq:PN\] transforms to $$P_f(k) = (f(1) - f(0))^2 \,\overline{N} \delta V, \mbox{\hspace{0.5cm}}\overline{N} \ll 1. \label{eq:lowN}$$ We can then deal in turn with arbitrary number densities by recalling that the $f$-variances of two transformed fields of number counts will be the integrals of their (constant) power spectra, and thus the variances will be in the same ratio as the values of the $f$-power spectra: $$\frac{\sigma_{f,\overline{N}_1}^2}{\sigma_{f,\overline{N}_2}^2} = \frac{P_{f,\overline{N}_1}(k)}{P_{f,\overline{N}_2}(k)}. \label{eq:varratios}$$ Let us denote the quantities in the low-$\overline{N}$ limit as $\overline{N}_0$, $\sigma_{f_0}^2$, etc.; in this limit Equation \[eq:lowN\] holds. Then for any $\overline{N}$, Equation \[eq:varratios\] allows us to write $$P_f(k) = \sigma_f^2 \frac{P_{f_0}(k)}{\sigma^2_{f_0}} = \sigma_f^2 \frac{\overline{N}_0 \left(f(1) - f(0)\right)^2 \delta V}{\sigma_{f_0}^2}.
\label{eq:gen_shotnoise}$$ Working to first order in $\overline{N}_0$, we can say that the probability of one particle in the cell is $\mathcal{P}(1) = \overline{N}_0$ and the probability of an empty cell is $\mathcal{P}(0) = 1 - \overline{N}_0$. In this limit, $$\begin{aligned} \sigma_{f_0}^2 = {} & \langle f^2 \rangle_0 - \langle f \rangle^2_0 \\ = {} &(\mathcal{P}(0)\cdot f(0)^2 + \mathcal{P}(1) \cdot f(1)^2 ) \\ & - (\mathcal{P}(0)\cdot f(0) + \mathcal{P}(1) \cdot f(1) )^2\\ = {} & ((1 - \overline{N}_0) f(0)^2 + \overline{N}_0 \cdot f(1)^2 ) \\ & - ((1 - \overline{N}_0)\cdot f(0) + \overline{N}_0 \cdot f(1) )^2\\ = {} & \overline{N}_0 \left(f(1) - f(0)\right)^2,\end{aligned}$$ and Equation \[eq:PAsdisc\] follows. However, our original derivation of Equation \[eq:PAsdisc\] does not require $f$ to be a function of only number counts $N$; rather, it simply requires that the transformed field $f$ be uncorrelated. Hence, the transformation $f$ can depend on the dark matter density, as long as that dependence does not introduce correlations into the transformed field. The Discreteness Plateau for $A^*$ {#sec:deltaAstar} ---------------------------------- To extend Equation \[eq:PAsdisc\] to $A^*$ on correlated fields, we note that there are two sources of variation in $A^*$: the value of $A^*\!(\mathbf{r_1})$ might differ from that of $A^*\!(\mathbf{r_2})$ because the underlying dark matter densities differ (i.e., $A(\mathbf{r_1}) \neq A(\mathbf{r_2})$); or it might differ because of stochasticity during discrete sampling. Equivalently, there are two effects involved in the passage from $A$ to $A^*$. First, the mapping in Equation \[eq:AstarGEV\] (as well as that in Equation \[eq:Astarln\]) is inherently nonlinear in $A$. Second, in addition to this nonlinearity we have the stochastic nature of the discrete sampling process, reflected in the fact that $A^*$ is a function of $N$ rather than of $A$. In order to disentangle these effects, we decompose $A^*\!(\mathbf{r})$ into two components. The first component is the expected value of $A^*$ given an underlying (dark matter) value of $A(\mathbf{r})$; we denote this component $\tilde{A}$: $$\tilde{A}\left(A(\mathbf{r})\right) \equiv \langle A^* \rangle \Big|_A = \sum_{N=0}^\infty \mathcal{P}(N|A) A^*(N). \label{eq:Atilde}$$ $\tilde{A}(A)$ thus encapsulates the nonlinearity of $A^*$ without its stochasticity. Figure \[fig:Atildeplot\] shows $\tilde{A}(A)$ for various values of $\overline{N}$. Inspection of this figure shows that $\tilde{A}$ approaches $A$ for large values of $A$, as expected, since for high dark matter densities the effects of discretization are increasingly irrelevant. Likewise, the higher the value of $\overline{N}$, the less difference there is between $\tilde{A}(A)$ and $A$ itself. On the other hand, we see that $\tilde{A}$ asymptotes to a distinct minimum value (which depends on $\overline{N}$); this minimum corresponds to the value of $A$ at which $\mathcal{P}(N\!=\!0\,|A) \approx 1$, so that $\tilde{A} \approx A^*\!(N\!=\!0)$. Again as expected, this minimum value of $\tilde{A}$ decreases with $\overline{N}$, because with more galaxies, one obtains better resolution in low-density regions. The fact that the minimum $\tilde{A}$ consistently falls below $\langle A \rangle$ – even for very small $\overline{N}$ – is a result of the fact that the most likely value of $A$ is less than the mean of $A$ (because of the positive skewness of the $A$-distribution). 
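Equation \[eq:Atilde\] is straightforward to evaluate numerically; the following sketch does so for the lognormal case of Equation \[eq:Astarln\] with the Poisson sampling of Equation \[eq:Poisson\] (the truncation at a finite $N_{\rm max}$ and the helper function names are our choices):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import poisson

def a_star_ln(N, Nbar, s2):
    """Lognormal A*(N): root of e^A + A/(Nbar*s2) = (N - 1/2)/Nbar (Eq. eq:Astarln)."""
    return brentq(lambda A: np.exp(A) + A / (Nbar * s2) - (N - 0.5) / Nbar, -50.0, 50.0)

def A_tilde(A, Nbar, s2, N_max=400):
    """Eq. (eq:Atilde): the expected A* at fixed A, i.e. sum_N P(N|A) A*(N)."""
    Ns = np.arange(N_max + 1)
    pN = poisson.pmf(Ns, Nbar * np.exp(A))            # Poisson sampling, Eq. (eq:Poisson)
    Astars = np.array([a_star_ln(n, Nbar, s2) for n in Ns])
    return float(np.sum(pN * Astars))

# A_tilde(A) tracks A at high densities and saturates near A*(0) for very negative A:
for A in (-4.0, -1.0, 0.0, 2.0, 4.0):
    print(A, round(A_tilde(A, Nbar=1.0, s2=1.0), 3))
```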
Thus far in this section we have considered the “continuous” component of $A^*$; we now turn to the remaining component, which we denote $\delta\!A^*$: $$\delta\!A^*(\mathbf{r}) \equiv A^*(\mathbf{r}) - \tilde{A}\left(A(\mathbf{r})\right).$$ This component contains the stochasticity induced by discreteness at a particular point in the field. Since $\delta\!A^*$ is the result solely of stochasticity in the Poisson sampling, it is reasonable to assume that the field $\delta\!A^*\!(\mathbf{r})$ is uncorrelated – a fact which we demonstrate rigorously in the Appendix. And since $\delta\!A^*$ depends on number counts, we can use Equation \[eq:PAsdisc\] with the transformation $f:N(\mathbf{r}) \longrightarrow \delta\!A^* (\mathbf{r})$ and obtain the power spectrum of $\delta\!A^*$: $$P_{\delta\!A^*} (k) = \delta V \cdot \sigma^2_{\delta\!A^*}. \label{eq:PkdeltAstar}$$ This (constant) value gives the discreteness plateau of the $A^*$-power spectrum, and we can write $$P_{A^*}(k) = P_{\tilde{A}}(k) + \delta V \cdot \sigma^2_{\delta\!A^*}.$$ In addition, the Appendix also shows that $\sigma^2_{\delta\!A^*} = \sigma^2_{A^*} - \sigma^2_{\tilde{A}}$; thus we conclude $$P_{A^*}(k) = P_{\tilde{A}}(k) + \delta V \cdot \left(\sigma^2_{A^*} - \sigma^2_{\tilde{A}}\right). \label{eq:PAswithPdeltAs}$$ Using the probability distributions to explicitly write the variances from Equation \[eq:PAswithPdeltAs\], we have $$\begin{aligned} \sigma^2_{A^*} & = \sum_N \int dA\,\,\mathcal{P}(A) \mathcal{P}(N|A)\left(A^*(N) - \langle A^* \rangle \right)^2 \label{eq:sig2Astar}\\ \sigma^2_{\tilde{A}} & = \int dA\,\,\mathcal{P}(A) \left(\tilde{A}(A) - \langle \tilde{A} \rangle \right)^2 \label{eq:sig2Atilde}.\end{aligned}$$ Likewise, one can express the means $\langle A^* \rangle$ and $\langle \tilde{A} \rangle$ as moments of the appropriate probability distributions. Note that it is straightforward to verify that we recover the standard $1/\overline{n}$ discreteness plateau by formally setting $A^*(N) = \delta_d = N/\overline{N} - 1$, $\tilde{A}(A) = \delta = e^A - 1$, and $\mathcal{P}(N|A) = \mathrm{Pois}(\overline{N}e^A)$ in Equations \[eq:PAswithPdeltAs\], \[eq:sig2Astar\], and \[eq:sig2Atilde\], where $\mathrm{Pois}(\lambda)$ denotes a Poisson distribution with mean $\lambda$. Discussion {#sec:discretediscussion} ---------- Figure \[fig:PkdeltAstar\] displays typical values for this discreteness plateau $P_{\delta\!A^*}$ for three galaxy number densities and for a variety of pixel side lengths. We display both the results of using our GEV probability distribution and those of applying Equation \[eq:PkdeltAstar\] to discrete realizations of the Millennium Simulation. In Sections \[sec:shape\] and \[sec:accuracy\] we further describe these discrete realizations, and we also discuss the (slight) disparity between the two sets of calculations. At this point, however, we note a few general trends. First, at large scales the discreteness plateau of $P_{A^*}(k)$ approaches the standard $1/\overline{n}$ value for the galaxy spectrum $P_g(k)$. This behavior is not unexpected, since at large scales the density contrast is small, so that $A = \ln(1 + \delta) \approx \delta$; since $A^*$ is the discrete analog of $\delta_g$, it is unsurprising that their behaviors match on these scales. At these scales, the level of the plateau increases as number density decreases. 
Second, it is interesting that the approach to this $1/\overline{n}$ value is not necessarily monotonic: there are scales and number densities at which $P_{\delta\!A^*}$ is slightly higher than $1/\overline{n}$, whereas $P_{A^*}(k)$ is in general lower than $P(k)$ (because of the bias – see Section \[sec:bias\]). However, because $P_{\delta\!A^*}$ exhibits some cosmology-dependence (through the effects of the probability distribution $\mathcal{P}(A)$ on $\sigma_{A^*}^2$ and $\sigma^2_{\tilde{A}}$), it is still possible to extract information on scales at which $P_{\delta\!A^*}$ dominates over $P_{\tilde{A}}(k)$. Finally, we see (left panel of Figure \[fig:PkdeltAstar\]) that at the smallest scales the relationship between number density and $P_{\delta\!A^*}$ is inverted (with respect to the relationship at large scales) – namely, lower number densities imply a lower plateau. At first this reversal appears counterintuitive – why would lower number densities effectively produce *less* shot noise? But this behavior is a direct consequence of the scale-dependent nature of the map $A^*\!(N)$, which depends explicitly on counts per cell $\overline{N}$ (rather than $\overline{n}$, by which we denote counts per unit volume). The $A^*$ map itself thus depends on the smoothing scale, in contrast to the galaxy overdensity $\delta_g(N) = N/\overline{N} -1$, in which the cell volume affects $N$ and $\overline{N}$ equally. As a result, the galaxy power spectrum $P_g(k)$ does not depend on the smoothing scale (except for pixelation effects, etc.), whereas the power spectrum of the nonlinear map $A^*$ does. For this reason, the standard derivation of the shot noise plateau for $P_g(k)$ proceeds by subdivision of cells until each cell contains either $N=0$ or $N=1$, and this procedure is permissible because $P_g(k)$ is independent of the pixel size. The resulting plateau at $1/\overline{n}$ essentially yields the average volume containing a single galaxy, and it is this average volume which determines the level of the plateau. However, when we attempt to apply the same procedure to $P_{A^*}(k)$, we find that the subdivision process changes the $A^*$ map itself; it thus also changes the spectrum and hence the spectrum’s shot-noise plateau. It is for this reason that the curves in Figure \[fig:PkdeltAstar\] are not straight lines (and also for this reason that Equation \[eq:lowN\] required modification to Equation \[eq:gen\_shotnoise\]). It follows that the left-hand panel of Figure \[fig:PkdeltAstar\] displays two entangled effects: first, there is the average volume per galaxy (i.e., the number density $\overline{n}$), which sets the plateau for the standard spectrum and is independent of pixel size. Second, there is the effect of the number of galaxies per cell ($\overline{N}$) on the $A^*$-map – and this $\overline{N}$ depends on both cell size and $\overline{n}$. For instance, in the left-hand panel of Figure \[fig:PkdeltAstar\], cells with sides of length $5h^{-1}$ Mpc correspond to $\overline{N}$-values ranging from 0.17 to 17, depending on $\overline{n}$. To disentangle these two effects, we can plot the discreteness plateau levels as functions of $\overline{N}$ rather than scale (as in the right-hand panel of Figure \[fig:PkdeltAstar\]). Since any given $\overline{N}$ yields the same $A^*$-map[^4], the difference in the three curves is due only to the difference in number densities (or equivalently, due only to the average volume per galaxy). 
In terms of Equation \[eq:PkdeltAstar\], the right-hand panel keeps $\overline{N}$ constant and thus forces $\sigma^2_{\delta \!A^*} = (\sigma^2_{A^*} - \sigma^2_{\tilde{A}})$ to be (relatively) constant, given the (relative) constancy of the $A^*$ map. However, maintaining a constant $\overline{N}$ requires varying pixel volumes $\delta V$ for varying number densities $\overline{n}$, and it is this $\delta V$ which is analogous to the $1/\overline{n}$ value in the standard shot noise plateau. When we compare the $A^*$-plateau levels in this way we see that, as expected, it is the higher number densities which correspond to lower plateaus. On the other hand, if we insist on comparison at a constant spatial scale (as in the left-hand panel), then we force a constant $\delta V$, and the variation in the $A^*$-map for different values of $\overline{N}$ causes the intersecting curves in that panel.

The Shape of the $A^*$ Spectrum {#sec:shape}
===============================

Parametrizing $\tilde{A}$
-------------------------

We now turn to a more accurate characterization of the shape of $P_{A^*}(k)$. In Section \[sec:disc\] we decomposed $A^*$ into the continuous nonlinear map $\tilde{A}(A)$ and the stochastic component $\delta\!A^*$, which in turn allowed us to decompose the power spectrum as in Equation \[eq:PAswithPdeltAs\]. It follows that the $A^*$-bias from Section \[sec:bias\] belongs, strictly speaking, to the continuous component $\tilde{A}$, since the stochastic part of $A^*$ introduces only an additive constant to its power spectrum. Thus, the same procedure used in @ReppAstarbias allows us to write the bias formula from Equation \[eq:Astarbias\] in terms of $\tilde{A}$ rather than $A^*$: $$b^2_{A^*} = \frac{1}{\sigma_A^4} \left\lbrace \int dA\,(A-\overline{A})\left(\tilde{A}(A) - \langle\tilde{A}(A) \rangle\right)\mathcal{P}(A) \right\rbrace^2, \label{eq:Atildebias}$$ so that we now write $$P_{A^*}(k) \approx b^2_{A^*} P_A(k) + \delta V \cdot \left(\sigma^2_{A^*} - \sigma^2_{\tilde{A}}\right). \label{eq:prelimPAs}$$ This representation provides the correct overall bias and discreteness plateau. However, the nonlinear nature of the $\tilde{A}$ transformation also introduces slight but non-negligible changes into the shape of $P_{\tilde{A}}(k)$. As long as the number of particles per cell is not too low ($\overline{N} \ga 0.5$), we find that it will suffice to correct the bias in Equation \[eq:prelimPAs\] with two shape-change terms. (As we mention in Section \[sec:bias\], at lower number densities the shape change is severe enough to render Equation \[eq:Atildebias\] inaccurate.)

At this point we introduce one subtlety that is of practical importance, namely, that the measured power spectrum will reflect the effects of pixelation and aliasing (see [@Jing2005]). In the sequel we must explicitly distinguish between measured and theoretical spectra – and thus we use $P^\mathcal{M}_A(k)$ and $P^\mathcal{M}_{\tilde{A}}(k)$ to denote the *measured* power spectra (which include the pixel window and aliasing effects), and we retain the notation $P_A(k)$ and $P_{\tilde{A}}(k)$ for the theoretical spectra (such as those obtained from <span style="font-variant:small-caps;">Camb</span>), which do not include these effects. The relationships detailed in @Jing2005 show how to account for these effects; in particular, it is fairly straightforward (numerically) to pass from $P_A(k)$ to $P^\mathcal{M}_A(k)$ (and likewise for $P^\mathcal{M}_{\tilde{A}}(k)$) – see Equation \[eq:P\_waa\].
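Both the bias of Equation \[eq:Atildebias\] and the discreteness plateau of Equation \[eq:PkdeltAstar\] are simple moments of $\mathcal{P}(A)$ and $\mathcal{P}(N|A)$, so they can be evaluated numerically in a few lines. The sketch below assumes a tabulated $\mathcal{P}(A)$, a Poisson sampling scheme $\mathcal{P}(N|A) = \mathrm{Pois}(\overline{N}e^A)$, and a precomputed table of $A^*(N)$ values; the function and variable names are ours and purely illustrative.

```python
import numpy as np
from scipy.stats import poisson

def astar_moments(A_grid, P_A, A_star, Nbar, N_max=200):
    """Return Atilde(A), sigma^2_{A*}, sigma^2_{Atilde}, and b^2_{A*}
    (Equations eq:Atilde, eq:sig2Astar, eq:sig2Atilde, eq:Atildebias)."""
    N = np.arange(N_max + 1)
    # assumed sampling scheme: P(N|A) = Pois(Nbar * exp(A))
    P_N_given_A = poisson.pmf(N[None, :], Nbar * np.exp(A_grid)[:, None])

    A_tilde = P_N_given_A @ A_star                       # Equation (eq:Atilde)

    w = P_A * np.gradient(A_grid)                        # integration weights over A
    mean_A = np.sum(w * A_grid)
    mean_At = np.sum(w * A_tilde)                        # equals <A*> (see Appendix)

    var_Astar = np.sum(w[:, None] * P_N_given_A * (A_star[None, :] - mean_At) ** 2)
    var_Atilde = np.sum(w * (A_tilde - mean_At) ** 2)

    sigma2_A = np.sum(w * (A_grid - mean_A) ** 2)
    cov = np.sum(w * (A_grid - mean_A) * (A_tilde - mean_At))
    b2 = cov ** 2 / sigma2_A ** 2                        # Equation (eq:Atildebias)
    return A_tilde, var_Astar, var_Atilde, b2

def plateau(delta_V, var_Astar, var_Atilde):
    """Discreteness plateau delta_V * (sigma^2_{A*} - sigma^2_{Atilde})."""
    return delta_V * (var_Astar - var_Atilde)
```

With these quantities in hand, the preliminary approximation of Equation \[eq:prelimPAs\] is simply the biased log spectrum plus the plateau returned above.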
Since observations typically count galaxies instead of directly measuring dark matter, we will write our expression for $\tilde{A}$ in terms of the measured spectra (which include pixelation and aliasing effects); thus, our task is to fit the difference between $P^\mathcal{M}_{\tilde{A}}(k) / P^\mathcal{M}_A(k)$ and the constant $b^2_{A^*}$ from Equations \[eq:Atildebias\] and \[eq:prelimPAs\]. By comparing with discrete realizations of the Millennium Simulation (described in more detail later in this section), we observe that at small scales (large $k$) this ratio $P^\mathcal{M}_{\tilde{A}}(k) / P^\mathcal{M}_A(k)$ is virtually linear in $k$, and at large scales (small $k$) it matches a decaying exponential. We hence add two terms to $b^2_{A^*}$ and write $$\frac{P^\mathcal{M}_{\tilde{A}}(k)}{P^\mathcal{M}_A(k)} = b^2_{A^*} - B\left(1 - e^{-Ck}\right) + Dk, \label{eq:parametrization}$$ where $B$, $C$, and $D$ are (possibly cosmology-dependent) factors to be determined (see Figure \[fig:Atildecomponents\]). Considering the terms one by one, we see that at the largest scales (smallest $k$) the constant bias $b^2_{A^*}$ is dominant. The second term sets a scale $k \sim C^{-1}$ at which the ratio $P^\mathcal{M}_{\tilde{A}}(k)/P^\mathcal{M}_A(k)$ decreases to $b^2_{A^*} - B$. The final term indicates that at small scales (large $k$) the passage from $A$ to $\tilde{A}$ produces more power than a simple bias would produce, and $D$ (with units of length) parametrizes this increase. Our task is somewhat eased by the fact that the fit is not extremely sensitive to the values of any one of these parameters: experimentation shows enough degeneracy among them that changes in the value of one parameter can often be offset by a change in the value of another, without substantially affecting the overall accuracy of the fit. To obtain reasonable values for these parameters, we note from numerical experiments that the quality of the fit is not particularly sensitive to the value of $C$. Thus, based on typical best-fit values (when fitting all three parameters simultaneously), we note that we can employ a constant value for $$C^{-1} = 0.15h \mbox{ Mpc}^{-1} \label{eq:C}$$ to set the scale for the onset of the shape change caused by the second term of Equation \[eq:parametrization\]. Experimentation with other values of $C$ – including allowing for a redshift-dependence – did not seem to materially affect the accuracy of our fits. This insensitivity is presumably due to the fact that $B$ and $C$ are somewhat degenerate, and, as detailed later, we calculate $B$ from a given value of $C$ (and of other parameters). We next characterize the parameter $D$. To do so, we obtain Millennium Simulation [@MillSim] snapshots[^5] from $z = 0.0$, 1.0, and 2.1; these snapshots utilize cubical survey cells of side length $500h^{-1}\mathrm{Mpc}/256\,$cells $= 1.95h^{-1}$Mpc/cell. We then create discrete realizations (via Poisson sampling) at number densities $\overline{N} = 0.01,$ 0.1, 0.5, 1.0, 3.0, and 10.0 galaxies per cell. We also smooth these realizations by binning them on scales ranging from the original $1.95h^{-1}$Mpc/cell up to $31.25h^{-1}$Mpc/cell. After calculating the power spectrum $P^\mathcal{M}_{\tilde{A}}(k)$ for these realizations, we fit the power spectrum with Equation \[eq:parametrization\] (fixing $C$ to the value in Equation \[eq:C\]). We thus obtain a set of best-fit values (using least-squares optimization) for the parameters $B$ and $D$. 
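A minimal sketch of this least-squares step, assuming the measured ratio $P^\mathcal{M}_{\tilde{A}}(k)/P^\mathcal{M}_A(k)$ is already tabulated, might look as follows; the helper name, the initial guesses, and the direct use of `scipy.optimize.curve_fit` are our own choices rather than a description of any published code.

```python
import numpy as np
from scipy.optimize import curve_fit

C_FIXED = 1.0 / 0.15     # C in h^-1 Mpc, i.e. C^-1 = 0.15 h Mpc^-1 (Equation eq:C)

def fit_shape_parameters(k, ratio, b2):
    """Least-squares fit of B and D in Equation (eq:parametrization),
    with C fixed and the constant bias b^2_{A*} already known.

    k     : wavenumbers in h Mpc^-1
    ratio : measured P^M_Atilde(k) / P^M_A(k)
    b2    : the constant bias b^2_{A*}
    """
    def model(k, B, D):
        return b2 - B * (1.0 - np.exp(-C_FIXED * k)) + D * k

    (B, D), _ = curve_fit(model, k, ratio, p0=[0.01, 0.1])
    return B, D
```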
It is these best-fit values for $D$ which appear in Figure \[fig:D\_plot\]. The parameter $D$ has units of length, and the most obvious length scale is the pixel size $s$. It is also reasonable to suppose that $D$ depends on the constant bias term $b^2_{A^*}$. We note that if we normalize $D$ by $s b^2_{A^*}$, we obtain a power-law relationship between $D/sb^2_{A^*}$ and $\overline{N}$ (the average number of galaxies per cell, after smoothing) shown in Figure \[fig:D\_plot\]. We thus propose the following fitting formula for $D$: $$\label{eq:D} D = \frac{s \, b^2_{A^*}}{4\overline{N}^{\,0.6}},$$ where $s$ is the side length of the (cubical) survey cell and $\overline{N}$ is the average number of galaxies per cell. (This relationship appears as a dashed line on Figure \[fig:D\_plot\].) We note that for extremely low number densities and high redshifts, Equation \[eq:D\] appears to overpredict the best-fit value of $D$; however, the practical utility of the power spectrum is limited to cases in which $\overline{N} \ga 1.$ In any case, the rationale for the additional factor and exponent in Equation \[eq:D\] is purely pragmatic, to be justified by whether or not it ultimately produces a reasonable fit to the $A^*$-power spectra of the Millennium Simulation data (including the rescalings we consider in Section \[sec:accuracy\]). Figure \[fig:B\_plot\] indicates that the best-fit values of the $B$-parameter also (roughly) follow a power law in $\overline{N}$. However, now that we have an analytic approximation for $D$, we need not derive an analogous expression for $B$; rather, we can utilize the fact that the integral of the power spectrum yields the variance. Thus, if $V_k$ is the survey volume in Fourier space, the relationship $$\begin{split} \sigma^2_{\tilde{A}} & = \int_V \frac{d^3k}{(2\pi)^3} P^\mathcal{M}_{\tilde{A}}(k) \\ & = \int_{V_k} \frac{d^3k}{(2\pi)^3} \left( b^2_{A^*} - B\left(1 - e^{-Ck}\right) + Dk\right) P^\mathcal{M}_A(k) \end{split} \label{eq:Bbetter}$$ (together with Equations \[eq:sig2Atilde\], \[eq:C\], and \[eq:D\]) permits calculation of $B$. Using Equation \[eq:Bbetter\] to calculate $B$ (after using Equations \[eq:C\] and \[eq:D\] to determine $C$ and $D$), we obtain the values displayed in Figure \[fig:B\_plot\]. Note that the figure does not display the $B$-values obtained by simultaneously fitting $D$ and $B$, but rather the $B$-values necessary to obtain the correct variance given our approximation for $D$. A Recipe for Calculating $P^\mathcal{M}_{A^*}(k)$ {#sec:recipe} ------------------------------------------------- We now have all the information necessary to calculate $P^\mathcal{M}_{A^*}(k)$. We summarize the process below and present the same information schematically in Figure \[fig:block\_diag\]. For ease of reference, we here reproduce the relevant equations with their original equation numbers. The required inputs for the process (besides survey parameters such as redshift) are (1) the underlying cosmology and (2) the discrete sampling scheme $\mathcal{P}(N|A)$, which is the probability distribution for galaxy counts given an underlying value of $A$. 
[Figure \[fig:block\_diag\] (schematic block diagram of the recipe): from the assumed cosmology, <span style="font-variant:small-caps;">Camb</span> gives $P_\mathrm{lin}(k)$; Equations \[eq:PAk\]–\[eq:N\] give $P_A(k)$, and Equation \[eq:P\_waa\] gives $P_A^\mathcal{M}(k)$; Equations \[eq:GEV\]–\[eq:muG\] and \[eq:sigA\]–\[eq:pns\] give $\mathcal{P}(A)$; together with the sampling scheme $\mathcal{P}(N|A)$, Equations \[eq:xiG\]–\[eq:AstarGEV\], \[eq:Atilde\], \[eq:sig2Astar\], and \[eq:sig2Atilde\] give $A^*$, $\tilde{A}$, $\langle A^* \rangle$, $\langle \tilde{A} \rangle$, $\sigma_{A^*}^2$, and $\sigma_{\tilde{A}}^2$; Equations \[eq:Atildebias\], \[eq:C\], \[eq:D\], and \[eq:Bbetter\] then give $b_{A^*}^2$, $C$, $D$, and $B$; Equation \[eq:parametrization\] gives $P_{\tilde{A}}^\mathcal{M}(k)$, and Equation \[eq:PAswithPdeltAs\] finally gives $P_{A^*}^\mathcal{M}(k)$.]

The first step is to obtain the linear power spectrum using <span style="font-variant:small-caps;">Camb</span> or similar software. From $P_\mathrm{lin}(k)$ one can then derive the log spectrum $P_A(k)$ using the following prescription of @ReppPAk: $$\label{eq:PAk} P_A(k) = NC_\mathrm{corr}(k) \cdot \frac{\mu}{\sigma_\mathrm{lin}^2} \ln \left(1+\frac{\sigma_\mathrm{lin}^2}{\mu}\right) \cdot P_\mathrm{lin}(k),$$ with the best-fit value $\mu=0.73$, and where one calculates the linear variance by $$\label{eq:sig2lin} \sigma_\mathrm{lin}^2 = \int_0^{k_N} \frac{dk\,k^2}{2\pi^2} P_\mathrm{lin}(k).$$ In this equation, $k_N$ is the Nyquist frequency $\pi/\ell$, where $\ell$ is the side length of one pixel of the survey volume. $C_\mathrm{corr}(k)$ in Equation \[eq:PAk\] is a slope correction[^6] with normalization $N$, both of which are given by the following equations: $$\label{eq:slopemod} C_\mathrm{corr}(k)=\left\{ \begin{array}{ll} 1 & \mbox{if $k < 0.15h$ Mpc$^{-1}$} \\ (k/0.15)^\alpha & \mbox{if $k \ge 0.15h$ Mpc$^{-1}$} \end{array} \right.,$$ $$\label{eq:N} N=\frac{\int dk\, k^2 P_\mathrm{lin}(k)}{\int dk\, k^2 C_\mathrm{corr}(k) P_\mathrm{lin}(k)}.$$ Appropriate values of $\alpha$ range from 0.02 at $z=0$ to 0.14 at $z=2.1$ (see table 1 of [@ReppPAk]).

Next, the prescription of @Jing2005 allows us to obtain the “measurable” $P^{\mathcal{M}}_A(k)$, which includes pixel window and alias effects: $$P^\mathcal{M}_{A}(k) = \left\langle\sum_{\mathbf{n} \in \mathbb{Z}^3} P_A(\mathbf{k}+2k_N\mathbf{n}) W(\mathbf{k}+2k_N\mathbf{n})^2\right\rangle_{|\mathbf{k}|=k}; \label{eq:P_waa}$$ here the sum runs over all three-dimensional integer vectors $\mathbf{n}$, though we find it sufficient to consider only $|\mathbf{n}| < 3$.
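A short sketch of this first step, i.e. Equations \[eq:PAk\]–\[eq:N\] applied to a tabulated $P_\mathrm{lin}(k)$, follows; the trapezoidal integration and all names are our own choices, and the pixel-window step of Equation \[eq:P\_waa\] is not included.

```python
import numpy as np

def log_spectrum(k, P_lin, ell, alpha, mu=0.73):
    """P_A(k) from P_lin(k) following Equations (eq:PAk)-(eq:N).

    k, P_lin : linear power spectrum, tabulated at least up to the Nyquist
               frequency k_N = pi / ell, with ell the pixel side length
    alpha    : redshift-dependent slope-correction exponent (e.g. 0.02 at z = 0)
    """
    k_N = np.pi / ell
    mask = k <= k_N
    kk, pp = k[mask], P_lin[mask]

    # linear variance, Equation (eq:sig2lin)
    sigma2_lin = np.trapz(kk ** 2 * pp / (2.0 * np.pi ** 2), kk)

    # slope correction, Equation (eq:slopemod)
    C_corr = np.where(kk < 0.15, 1.0, (kk / 0.15) ** alpha)

    # normalization, Equation (eq:N)
    N = np.trapz(kk ** 2 * pp, kk) / np.trapz(kk ** 2 * C_corr * pp, kk)

    return kk, N * C_corr * (mu / sigma2_lin) * np.log1p(sigma2_lin / mu) * pp
```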
At this point we can obtain the moments of the log probability distribution – and the distribution itself – by using the GEV prescription of @ReppApdf. First, the variance: $$\sigma^2_A = \int_{V_k\setminus\{0\}} \frac{d^3k}{(2\pi)^3} P^\mathcal{M}_{A}(k), \label{eq:sigA}$$ where the region denoted $V_k\setminus\{0\}$ is the set of non-zero $\mathrm{k}$-vectors corresponding to the real-space volume of the survey. Next, the mean: $$\langle A \rangle = -\lambda \ln \left(1+\frac{\sigma^2_\mathrm{lin}(k_N)}{2\lambda}\right), \label{eq:meanAfit}$$ where the best-fit value of $\lambda$ is 0.65. Finally, the skewness: $$\gamma_1 \equiv \frac{\langle\left(A-\langle A \rangle\right)^3\rangle}{\sigma_A^3}$$ $$\gamma_1 = \left(a(n_s+3) + b\right) \left( \sigma_A^2 \right)^{-p(n_s) + 1/2}$$ $$p(n_s) = d + c \ln(n_s+3)\label{eq:pns},$$ where $n_s$ is the slope of the linear no-wiggle power spectrum of @EisensteinHu, and the best values of the parameters are $a=-0.70$, $b=1.25$, $c=-0.26$, and $d=0.06$. One can now calculate the log probability distribution $\mathcal{P}(A)$, as explained previously: $$\tag{\ref{eq:GEV}} \mathcal{P}(A) = \frac{1}{\sigma_G} t(A)^{1+\xi_G} e^{-t(A)}$$ $$\tag{\ref{eq:GEV_t}} t(A) = \left(1 + \frac{A - \mu_G}{\sigma_G}\xi_G\right)^{-1/\xi_G}$$ $$\gamma_1 = -\frac{\Gamma(1-3\xi_G) - 3\Gamma(1-\xi_G)\Gamma(1-2\xi_G) + 2\Gamma^3(1-\xi_G)}{\left(\Gamma(1-2\xi_G) - \Gamma^2(1-\xi_G)\right)^{3/2}} \tag{\ref{eq:xiG}}$$ $$\sigma_G = \sigma_A \xi_G \cdot \left(\Gamma(1-2\xi_G) - \Gamma^2(1-\xi_G)\right)^{-1/2} \tag{\ref{eq:sigG}}$$ $$\mu_G = \langle A \rangle - \sigma_G \frac{\Gamma(1-\xi_G) - 1}{\xi_G}. \tag{\ref{eq:muG}}$$ The next step is to calculate the first two moments of $A^*$ and $\tilde{A}$, which in turn require expressions for these two quantities. The relevant equations (assuming a GEV log matter distribution) are as follows: $$\begin{gathered} \tag{\ref{eq:AstarGEV}} \frac{1}{\sigma_G} \left( 1 + \frac{A^*(N) - \mu_G}{\sigma_G} \xi_G \right)^{-1-\frac{1}{\xi_G}} + N \\ = \frac{1+\xi_G}{\sigma_G + \left(A^*(N) - \mu_G\right)\xi_G} + \overline{N}e^{A^*(N)}\end{gathered}$$ $$\tilde{A}(A) = \sum_N \mathcal{P}(N|A) A^*(N) \tag{\ref{eq:Atilde}}$$ $$\sigma^2_{A^*} = \sum_N \int dA\,\,\mathcal{P}(A) \mathcal{P}(N|A)\left(A^*(N) - \langle A^* \rangle \right)^2 \tag{\ref{eq:sig2Astar}}$$ $$\sigma^2_{\tilde{A}} = \int dA\,\,\mathcal{P}(A) \left(\tilde{A}(A) - \langle \tilde{A} \rangle \right)^2 \tag{\ref{eq:sig2Atilde}}.$$ Recall that $\xi_G$, $\sigma_G$, and $\mu_G$ are the parameters of the distribution $\mathcal{P}(A)$, related to the moments of $A$ by Equations \[eq:xiG\]–\[eq:muG\]. The first moments $\langle A^* \rangle$ and $\langle \tilde{A} \rangle$ are calculated using integrals analogous to those in Equations \[eq:sig2Atilde\] and \[eq:sig2Astar\]. 
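The only non-trivial numerical task in the GEV part of the recipe is inverting Equation \[eq:xiG\] for $\xi_G$; the sketch below uses simple bracketed root finding. The search range, the handling of the removable point $\xi_G = 0$, and all names are assumptions of ours, not part of the original prescription.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma as Gamma

def gev_skew(xi):
    """Skewness of the GEV distribution as a function of its shape parameter,
    i.e. the right-hand side of Equation (eq:xiG)."""
    g1, g2, g3 = Gamma(1 - xi), Gamma(1 - 2 * xi), Gamma(1 - 3 * xi)
    return -(g3 - 3 * g1 * g2 + 2 * g1 ** 3) / (g2 - g1 ** 2) ** 1.5

def gev_parameters(mean_A, sigma_A, gamma1, xi_min=-1.0, xi_max=0.30):
    """Solve Equations (eq:xiG)-(eq:muG) for (xi_G, sigma_G, mu_G).
    The search range is an assumption; it must contain the root and stay
    below 1/3 so that Gamma(1 - 3 xi) remains finite."""
    grid = np.linspace(xi_min, xi_max, 400)
    grid = grid[np.abs(grid) > 1e-3]          # avoid the removable point xi = 0
    vals = np.array([gev_skew(x) - gamma1 for x in grid])
    i = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
    xi = brentq(lambda x: gev_skew(x) - gamma1, grid[i], grid[i + 1])

    sigma_G = sigma_A * xi / np.sqrt(Gamma(1 - 2 * xi) - Gamma(1 - xi) ** 2)
    mu_G = mean_A - sigma_G * (Gamma(1 - xi) - 1) / xi
    return xi, sigma_G, mu_G
```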
From these moments and from $P^{\mathcal{M}}_A(k)$, one can then obtain the parameters $b_{A^*}^2$, $C$, $D$, and $B$ for $P^\mathcal{M}_{\tilde{A}}(k)$: $$b^2_{A^*} = \frac{1}{\sigma_A^4} \left\lbrace \int dA\,(A-\overline{A})\left(\tilde{A}(A) - \langle\tilde{A}(A) \rangle\right)\mathcal{P}(A) \right\rbrace^2 \tag{\ref{eq:Atildebias}}$$ $$C^{-1} = 0.15h \mbox{ Mpc}^{-1} \tag{\ref{eq:C}}$$ $$\tag{\ref{eq:D}} D = \frac{s \, b^2_{A^*}}{4\overline{N}^{\,0.6}}$$ $$\tag{\ref{eq:Bbetter}} \sigma^2_{\tilde{A}} = \int_{V_k} \frac{d^3k}{(2\pi)^3}\left( b^2_{A^*} - B\left(1 - e^{-Ck}\right) + Dk\right) P^\mathcal{M}_A(k)$$ These four parameters – along with $P^{\mathcal{M}}_A(k)$ – then yield the spectrum of $\tilde{A}$: $$P^\mathcal{M}_{\tilde{A}}(k) = \left[b^2_{A^*} - B\left(1 - e^{-Ck}\right) + Dk\right] \cdot P^\mathcal{M}_A(k). \tag{\ref{eq:parametrization}}$$ Finally, one must add the discreteness plateau to obtain the power spectrum of $A^*$ itself: $$P^\mathcal{M}_{A^*}(k) = P^\mathcal{M}_{\tilde{A}}(k) + \delta V \cdot \left(\sigma^2_{A^*} - \sigma^2_{\tilde{A}}\right). \tag{\ref{eq:PAswithPdeltAs}}$$

Accuracy {#sec:accuracy}
========

It remains to evaluate the accuracy of the prescription in Sections \[sec:bias\]–\[sec:shape\]. To do so, we obtain (as described previously in Section \[sec:shape\]) snapshots of the Millennium Simulation at $z=0$, 1.0, and 2.1. These snapshots comprise $256^3$ cubical pixels with side lengths $1.95h^{-1}$Mpc. We then Poisson-sample the dark matter density to obtain discrete realizations of each snapshot for mean number of particles per pixel $\overline{N} = 0.01, 0.1, 0.5, 1.0, 3.0$, and 10.0. For $\overline{N} \ge 0.5$, we generate 10 realizations per redshift; for $\overline{N} = 0.1$ and 0.01, we generate 20. This ensures an overall sampling variance (in each pixel) of at most $0.5(1+\delta)^{-1}$, except for $\overline{N}=0.01$. However, the $\overline{N}=0.01$ case is of little practical importance until we rebin the realizations (see below) on scales of $\sim 4h^{-1}$Mpc (see also extent of thick line in Figures \[fig:PkdeltAstar\] and \[fig:RMSErrors\]), at which point the pixel variance reaches $0.6(1+\delta)^{-1}$, comparable to the other number densities.

Since the $A^*$-power spectrum is scale-dependent, we must investigate multiple smoothing scales. To do so, we take each of our discrete realizations and rebin it to two, four, eight, and sixteen times the original pixel length, reaching a maximum scale of $31.25h^{-1}$Mpc. For each of these rebinnings, we calculate $P^\mathcal{M}_{A^*}(k)$, provided that the mean number of galaxies per cell $\overline{N} < 100$; at higher number densities, $A^*$ differs little from $A$, and the computation of the $A^*$-moments becomes expensive.

The procedure so far allows us to test our prescription for only the original Millennium Simulation cosmology. However, @AnguloWhite outline a method for re-scaling simulations from one cosmology to another by matching linear variances; doing so involves both a re-scaling of survey cell size and a re-mapping of simulation snapshots to redshift. Such rescalings of the Millennium Simulation to the WMAP7 and Planck 2013 cosmologies are publicly available. Therefore we repeat the above procedure ($z=0.0,1.0,2.1$, for $\overline{N}$ at the original pixel scale from 0.01 to 10.0, rebinned to scales from 1 to 16 times the original pixel length) for both of these rescalings.
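As an aside, the machinery for producing such discrete realizations and rebinnings is simple enough to sketch in a few lines; the per-cell Poisson sampling mirrors the description above, and the function names are ours.

```python
import numpy as np

def poisson_realization(delta, Nbar, rng=None):
    """Poisson-sample a density-contrast grid `delta` into galaxy counts,
    with a mean of Nbar galaxies per cell (the sampling scheme used above)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = np.clip(Nbar * (1.0 + delta), 0.0, None)   # guard against round-off below -1
    return rng.poisson(lam)

def rebin(counts, factor):
    """Sum an n^3 grid of counts into cells whose side is `factor` times larger."""
    n = counts.shape[0]
    assert n % factor == 0, "grid size must be divisible by the rebinning factor"
    m = n // factor
    return counts.reshape(m, factor, m, factor, m, factor).sum(axis=(1, 3, 5))
```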
Hence we end up with simulations in three redshifts, with multiple number densities, at scales ranging from $2h^{-1}$ to $32h^{-1}$Mpc, and for three near-concordance cosmologies. These discrete realizations provide us with a standard against which to compare our prescription. We implement our prescription using <span style="font-variant:small-caps;">Camb</span> (Code for Anisotropies in the Microwave Background:[^7] [@CAMB]) to generate the appropriate linear power spectra $P_\mathrm{lin}(k)$, from which we obtain $P_A(k)$ by following the prescription presented in @ReppPAk. We can then use the work summarized at the end of Section \[sec:shape\] to predict the power spectrum $P^\mathcal{M}_{A^*}(k)$. We also wish to compare the accuracy of our prescription to that of @Smith_et_al/@Takahashi2012 (hereafter ST), which is the prescription used in <span style="font-variant:small-caps;">Camb</span> for nonlinear spectra. To do so, we measure the galaxy power spectra $P^\mathcal{M}_g(k)$ of the various realizations; and we obtain the ST prescription by using <span style="font-variant:small-caps;">Camb</span> to calculate $P_g(k) = P(k) + 1/\overline{n}$ and then following the method of @Jing2005 to get the predicted $P^\mathcal{M}_g(k)$. Our metric for comparison is the root-mean-square (RMS) per cent difference between the predicted and measured power spectra (using logarithmically-spaced $k$-values). To mitigate the effect of the increase in cosmic variance at large scale – and the resultant power spectrum stochasticity – due to the limited number of $k$-modes included in the Millennium Simulation volume at such scales, we weight the mean by the inverse cosmic variance at each $k$-value. Thus, $$\mathrm{RMS} = \sqrt{ \sum_k \frac{\left( M(k) - T(k) \right)^2 }{\sigma_{\mathrm{CV}}^2(k)} \left/ \sum_k \frac{1}{\sigma_{\mathrm{CV}}^2(k)} \right. },$$ where $M(k)$ is the power spectrum value measured from our realizations, $T(k)$ is the value predicted by our recipe, and $\sigma_{\mathrm{CV}}(k) = P_{(A^*\mbox{\textrm{ \scriptsize{or}} }g)}(k)/\sqrt{N_k}$ is the cosmic variance (or technically, standard deviation) at a given $k$-mode for a given power spectrum value ($P_{A^*}(k)$ or $P_g(k)$) determined from a given number $N_k$ of modes. We take the sum over logarithmically-spaced $k$-values: the simulation size sets $k_{\mathrm{min}}=2\pi/\ell$, where $\ell$ is the length of one side of the simulation cube ($500h^{-1}$ Mpc for the original Millennium Simulation); the pixel size sets $k_{\mathrm{max}}=k_N\sqrt{3}=\pi\sqrt{3}/s$, where $s$ is the smoothing scale (i.e., the side length of a cubical pixel) – this being the largest $k$ measurable from such pixels. The value of $\sigma_{\mathrm{CV}}$ represents only the cosmic variance inherent in the Millennium Simulation, not the variance from the Poisson sampling process (which we reduce by averaging multiple Poisson realizations). Note that even with this weighting, cosmic variance will dominate the calculated RMS “error” for large smoothing scales. The results appear in Figure \[fig:RMSErrors\]. In this plot, we again use a transition from thick to thin lines to indicate the Poisson shot noise limit, at which $P(k)$ becomes less than $1/\overline{n}$; we restrict our focus to scales above this limit. We find that for small smoothing scales ($\la 4h^{-1}$ Mpc) our recipe performs quite well, with accuracy to a few per cent, comparable to that of ST. 
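For reference, the inverse-cosmic-variance-weighted RMS defined above amounts to the following short computation (a sketch; the array names are ours).

```python
import numpy as np

def weighted_rms(M, T, P_ref, N_modes):
    """Inverse-cosmic-variance-weighted RMS difference between a measured
    spectrum M(k) and a predicted spectrum T(k).

    P_ref   : spectrum used for the cosmic variance, sigma_CV = P_ref / sqrt(N_modes)
    N_modes : number of k-modes contributing at each k-value
    """
    sigma_cv = P_ref / np.sqrt(N_modes)
    w = 1.0 / sigma_cv ** 2
    return np.sqrt(np.sum(w * (M - T) ** 2) / np.sum(w))
```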
At large scales ($\ga 15h^{-1}$ Mpc) the per cent difference (we should not in this case call it “error”) increases greatly because of cosmic variance, both for our prescription and for that of ST; nevertheless, our accuracy is comparable to that of ST at these scales. At intermediate scales ($\sim 8h^{-1}$ Mpc) at low redshifts we find, however, a pronounced increase in the per cent difference with respect to ST. One way to account for the large-scale cosmic variance is to divide each of the per cent differences from our recipe by the corresponding per cent difference from ST, thus obtaining a ratio of the prescriptions’ accuracies. These ratios appear in Figure \[fig:RMSError\_ratios\] and confirm that the worst accuracy of our prescription comes from smoothing on scales $\sim 8h^{-1}$ Mpc, whereas at larger scales the accuracy is virtually indistinguishable from that of ST. At the smallest scales, the accuracy is typically better than ST except at higher redshifts. However, reference to Figure \[fig:RMSErrors\] shows that even in these cases, the error in our prescription is still only a few per cent. Thus in general, we find that the RMS error of our prescription is comparable to that of ST, or on the level of a few per cent. The exception would be at low redshifts ($z \sim 0$) at scales on the order of $8h^{-1}$ Mpc. Even in these cases, the error is typically less than 10 per cent. Furthermore, the low survey volume available at these redshifts gives them less weight in a survey designed for precise constraint of cosmological parameters. We finally focus on nine specific examples of interest (denoted by letters A through I on Figures \[fig:RMSErrors\] and \[fig:RMSError\_ratios\]); we display the $A^*$- and galaxy-power spectra for these examples in Figure \[fig:SelectSpec\]. The first three spectra (letters A–C in Figures \[fig:RMSErrors\]–\[fig:SelectSpec\], and the top row of panels in Figure \[fig:SelectSpec\]) correspond to number densities roughly equivalent to those anticipated for *Euclid* and WFIRST, smoothed on scales $\sim 30h^{-1}$ Mpc. At these scales and densities, the accuracy of our prescription is virtually identical to that of ST. Indeed, reference to Figure \[fig:RMSError\_ratios\] demonstrates that at $z \ga 1$, the accuracy of our prescription for such a Euclid-like survey is virtually indistinguishable from that of ST, given the impact of Poisson noise. The next three spectra (letters D–F in Figures \[fig:RMSErrors\]–\[fig:SelectSpec\], and the middle row of panels in Figure \[fig:SelectSpec\]) investigate the anomalously high per cent differences encountered at intermediate smoothing scales and low redshifts. It appears that this difference is the result of higher-than-predicted power around $k \sim 0.1$–$0.2h$ Mpc$^{-1}$, although a significant amount of cosmic variance appears in this regime as well. The final three spectra (letters G–I in Figures \[fig:RMSErrors\]–\[fig:SelectSpec\], and the bottom row of panels in Figure \[fig:SelectSpec\]) investigate the relatively poor performance of our prescription at $z \sim 2$ on small scales and lower number densities. Note first that the RMS error is in these cases still less than 5 per cent (Figure \[fig:RMSErrors\]), although ST performs much better (1–2 per cent). Note also that at $z \sim 2$, smoothing on this scale puts one at or past the Poisson limit for the specified number density. 
Nevertheless, it is interesting that inspection of the spectra (Figure \[fig:SelectSpec\]) shows the same higher-than-predicted power at intermediate scales, and the low overall per cent error comes from the predominance of the well-fit higher $k$-modes.[^8]

We therefore conclude that our prescription is typically accurate to within 5 per cent and is often comparable to that of the ST prescription for the galaxy power spectrum. Nevertheless, there is potentially room for improvement at scales around $10h^{-1}$ Mpc. In addition, the scale of the Millennium Simulation and the resulting cosmic variance make it difficult to obtain a precise estimate of the accuracy of our prescription on larger scales – although we note that on the largest, linear scales, the distribution is sufficiently Gaussian to obviate the need for sufficient statistics such as $A^*$. Larger-volume high-resolution simulations would however permit a better assessment of the large-scale accuracy of our prescription.

Conclusion {#sec:concl}
==========

As noted in the introduction, the optimal observable for galaxy surveys is not the overdensity $\delta_g = N/\overline{N}-1$, but rather $A^*(N)$ – because the power spectrum of the alternate statistic $A^*$ avoids the information plateau that besets the standard power spectrum $P_g(k)$ at small scales. However, in order to realize the potential of $A^*$, one must have in place a prescription for $P_{A^*}(k)$ with which to compare survey results.

We have shown that $A^*$ decomposes naturally into a continuous part $\tilde{A}$ and a stochastic part $\delta\!A^*$, and that the power spectrum decomposes in a similar manner. The contribution of the stochastic part is a discreteness plateau (Equation \[eq:PkdeltAstar\]) similar to the $1/\overline{n}$ shot noise plateau in the standard power spectrum. For the continuous part, we find that one can obtain the power spectrum of $\tilde{A}$ from the dark matter log spectrum $P_A(k)$ via an amplitude shift (the bias $b_{A^*}^2$) and a shape change parametrized by quantities $D$ and $B$ (as long as we restrict our consideration to scales at which the survey is not shot-noise dominated). We have also provided prescriptions for each of these quantities.

In addition, we have tested our prescription for $P_{A^*}(k)$ using discrete realizations of the Millennium Simulation and its rescalings; we find a typical accuracy around 5 per cent, although the value fluctuates depending on scale, redshift, etc. This accuracy is in most cases better than 5 per cent and is in general comparable to that of the (standard) ST prescription utilized in <span style="font-variant:small-caps;">Camb</span> for the nonlinear power spectrum.

We thus now have a procedure for predicting the power spectrum and mean of the discrete sufficient statistic $A^*$ for near-concordance cosmologies. As we and our collaborators show in previous work, this prediction is necessary in order to make full use of the data to be returned by future surveys. In particular, as @WCS2015a [@WCS2015b] have shown, the use of $A^*$ rather than the standard power spectrum can at a stroke double the information gleaned from such surveys. Our prescription for predicting $P_{A^*}(k)$ is thus a major component of an approach that could result in a non-incremental multiplication of the effectiveness of WFIRST and *Euclid*.
Besides possible improvement of this prescription, remaining work includes analysis of the effects of both redshift space distortions and galaxy bias upon the power spectrum of $A^*$. The resultant information multiplication has the potential to advance our ultimate goal of characterizing the Universe. Acknowledgements {#acknowledgements .unnumbered} ================ The Millennium Simulation data bases used in this Letter and the web application providing online access to them were constructed as part of the activities of the German Astrophysical Virtual Observatory (GAVO). This work was supported by NASA Headquarters under the NASA Earth and Space Science Fellowship program – “Grant 80NSSC18K1081” – and AR gratefully acknowledges the support. IS acknowledges support from National Science Foundation (NSF) award 1616974. Appendix {#appendix .unnumbered} ======== This appendix contains demonstrations of certain results which we quote in the main text. First, we prove that the field $\delta\!A^*(\mathbf{r})$ is uncorrelated, where $$\delta\!A^*(\mathbf{r}) = A^*(\mathbf{r}) - \tilde{A}\left(A(\mathbf{r})\right). \label{eq:defdeltA}$$ We first note that the mean of $\delta\!A^*$ vanishes, since $$\begin{aligned} \langle A^* \rangle & = \sum_{N=0}^\infty A^*(N) \int dA \,\mathcal{P}(A) \mathcal{P}(N|A) \\ & = \int dA \left(\sum_{N=0}^\infty \mathcal{P}(N|A) A^*(N) \right) \mathcal{P}(A) \\ & = \langle \tilde{A} \rangle, \label{eq:eqmeans}\end{aligned}$$ the last equality due to Equation \[eq:Atilde\]. We now let $\xi_f$ denote the two-point correlation function of a field $f$, so that $\xi_f(r) = \langle f(\mathbf{x}) f(\mathbf{x+r}) \rangle - \langle f \rangle^2$; similarly, we let $\xi_{f\!f'}$ denote the cross-correlation of two fields, so that $\xi_{f\!f'}(r) = \langle f(\mathbf{x}) f'(\mathbf{x+r}) \rangle - \langle f \rangle \langle f' \rangle$. From Equations \[eq:defdeltA\] and \[eq:eqmeans\] it follows that $$\xi_{\delta\!A^*}(r) = \xi_{A^*}(r) - 2\xi_{A^*\!\tilde{A}}(r) + \xi_{\tilde{A}}(r).$$ To show that $\delta\!A^*$ is uncorrelated, it thus suffices to demonstrate that $\xi_{A^*\!\tilde{A}}(r) = \xi_{A^*}(r) = \xi_{\tilde{A}}(r)$. For the first equality, $$\begin{aligned} \xi_{A^*\!\tilde{A}}(r) & = \left\langle A^*(\mathbf{x}) \tilde{A}(\mathbf{x+r}) \right\rangle\\ & = \sum_{N_1} A^*(N_1) \int dA_2 \,\mathcal{P}(N_1, A_2) \tilde{A}(A_2).\label{eq:jprobint}\end{aligned}$$ Here subscripts 1, 2 refer (respectively) to the values of the field at a given point $\mathbf{x}$ and at another point $\mathbf{x}+\mathbf{r}$; thus $\mathcal{P}(N_1, A_2)$ is the joint probability of finding a number count $N$ at point $\mathbf{x}$ and a log matter overdensity $A$ at point $\mathbf{x+r}$. Using Equation \[eq:Atilde\] to expand $\tilde{A}$, the integral in Equation \[eq:jprobint\] becomes $$\begin{aligned} \lefteqn{\int dA_2 \sum_{N_2} A^*(N_2) \mathcal{P}(N_1|A_2) \mathcal{P}(N_2|A_2) \mathcal{P}(A_2)}\nonumber\\ & = \sum_{N_2} A^*(N_2)\int dA_2 \,\mathcal{P}(N_1|A_2) \mathcal{P}(A_2|N_2) \mathcal{P}(N_2)\label{eq:useBayes} \\ & = \sum_{N_2} A^*(N_2)\mathcal{P}(N_1|N_2) \mathcal{P}(N_2),\label{eq:useMarkov}\end{aligned}$$ where Equation \[eq:useBayes\] follows from Bayes’ Theorem, and Equation \[eq:useMarkov\] follows from the fact that $N_1$ depends on $N_2$ only through $A_2$ (i.e., the number counts are correlated only because the underlying dark matter is correlated). 
Combining Equations \[eq:jprobint\] and \[eq:useMarkov\] we thus have $$\begin{aligned} \xi_{A^*\!\tilde{A}}(r) & = \sum_{N_1,N_2} \mathcal{P}(N_1, N_2) A^*(N_1) A^*(N_2)\\ & = \left\langle A^*(\mathbf{x}) A^*(\mathbf{x+r}) \right\rangle = \xi_{A^*},\end{aligned}$$ which was to be proved. A similar argument shows that $\xi_{A^*}(r) = \xi_{\tilde{A}}(r)$: $$\begin{aligned} \xi_{\tilde{A}}(r) & = \left\langle \tilde{A}(\mathbf{x}) \tilde{A}(\mathbf{x+r}) \right\rangle\\ & = \int dA_1\,dA_2 \mathcal{P}(A_1,A_2) \tilde{A}(A_1) \tilde{A}(A_2)\\ \begin{split} & = \sum_{N_1, N_2} A^*(N_1) A^*(N_2)\; \times \\ & \hspace{0.7cm}\int dA_1\,dA_2 \mathcal{P}(A_1|A_2) \mathcal{P}(A_2) \mathcal{P}(N_1|A_1)\mathcal{P}(N_2|A_2) \end{split}\\ \begin{split} & = \sum_{N_1, N_2} A^*(N_1) A^*(N_2)\; \times \\ & \hspace{0.7cm} \int dA_1\,dA_2 \mathcal{P}(A_1|A_2) \mathcal{P}(N_1|A_1)\mathcal{P}(A_2|N_2)\mathcal{P}(N_2) \end{split}\\ & = \sum_{N_1, N_2} A^*(N_1) A^*(N_2) \mathcal{P}(N_1|N_2)\mathcal{P}(N_2)\\ & = \left\langle A^*(\mathbf{x}) A^*(\mathbf{x+r}) \right\rangle = \xi_{A^*}.\end{aligned}$$ It follows that the $\delta\!A^*$ field is uncorrelated, so Equation \[eq:PAsdisc\] yields its power spectrum. Second, we note that since $\xi_{A^*}(r) = \xi_{\tilde{A}}(r)$, and since $A^* = \tilde{A} + \delta\!A^*$, we can say that $$P_{A^*}(k) = P_{\tilde{A}}(k) + P_{\delta\!A^*}.$$ Finally, we can obtain an expression for $\sigma^2_{\delta\!A^*}$, by first considering $\left\langle A^* \tilde{A} \right\rangle$: $$\begin{aligned} \left\langle A^* \tilde{A} \right\rangle & = \int dA \sum_N \mathcal{P}(N, A) A^*(N) \,\tilde{A}(A)\\ & = \int dA\, \mathcal{P}(A) \tilde{A}(A)\cdot \sum_N \mathcal{P}(N|A) A^*(N)\\ & = \int dA\, \mathcal{P}(A)\, \tilde{A}(A)^2\\ & = \left\langle \tilde{A}(A)^2 \right \rangle.\end{aligned}$$ Since $\langle \delta\!A^* \rangle$ vanishes by Equation \[eq:eqmeans\], $$\begin{aligned} \sigma^2_{\delta\!A^*} & = \left\langle \left(\delta\!A^*\right)^2 \right\rangle\\ & = \left\langle (A^*)^2 \right\rangle - 2\left\langle A^* \tilde{A} \right\rangle + \left\langle \tilde{A}^2 \right\rangle\\ & = \left\langle (A^*)^2 \right\rangle - \left\langle \tilde{A}^2 \right\rangle\\ & = \sigma^2_{A^*} - \sigma^2_{\tilde{A}}.\end{aligned}$$ \[lastpage\] [^1]: We note that due to the nonlinear nature of the transformations, both $A$ and $A^*$ depend on pixel size in a way that $\delta$ does not. Defining the $A$ and $A^*$ statistics requires specification of the pixel scale, and evaluation of these statistics requires smoothing/binning to that scale. [^2]: Specifically, by this we mean that all of the information in the one-point distribution is contained in the first two moments of the pixel values after application of the transformation. [^3]: Throughout this article we distinguish probability distributions from power spectra by using script and roman letters, respectively: thus $\mathcal{P}(A)$, but $P_A(k)$. [^4]: This statement is only approximately true, since the distribution $\mathcal{P}(A)$ also depends on the smoothing scale. [^5]: http://gavo.mpa-garching.mpg.de/Millennium/ [^6]: Unrelated to the parameter $C$ from Equations \[eq:parametrization\] and \[eq:C\] [^7]: http://camb.info/ [^8]: Note that the “wiggles” in these spectra (due to the finite volume of the Millennium Simulation) remain essentially identical from $z \sim 2$ to $z = 0$, reflecting the fact that linear growth uniformly augments the amplitudes of large-scale simulation modes without rearranging them.
---
abstract: 'This paper addresses the problem of stylized text generation in a multilingual setup. A version of a language model based on a long short-term memory (LSTM) artificial neural network with extended phonetic and semantic embeddings is used for stylized poetry generation. The quality of the resulting poems generated by the network is estimated through bilingual evaluation understudy (BLEU), a survey, and a new cross-entropy based metric proposed for problems of this type. The experiments show that the proposed model consistently outperforms random-sample and vanilla-LSTM baselines; humans also tend to associate the machine-generated texts with the target author.'
bibliography:
- 'slt.bib'
title: 'Guess who?'
---

stylized text generation, poetry generation, artificial neural networks, multilingual models

Introduction
============

The problem of making machine-generated text feel more authentic has a number of industrial and scientific applications, see, for example, [@Livingstone] or [@Dix]. Most modern generative models are trained on huge corpora of texts which include contributions from many different authors. It is no surprise that texts produced with such models are often not perceived as natural and are characterized as flat and non-human, since humans have recognizable writing and communication styles. One possible way to approach this problem is to propose a model that generates texts resembling the style of a particular author within the training data set. In this paper we quantify this stylistic similarity, propose a generative model that captures it, and show that it outperforms a standard long short-term memory (LSTM) model used for text generation.

We strongly believe that the proposed model is also applicable to prose or dialogue settings, but we carry out our experiments using poetry for a number of reasons. First of all, it is harder to train a model on poetic texts, since the absolute size of a training corpus for poetry is inevitably smaller than that of a prose corpus including a comparable number of authors. On the other hand, from a stylistic perspective, poetry is often believed to be more expressive than prose, so one can better see whether the generated output is indeed stylized. This factor significantly affects any kind of qualitative test that involves subjective human judgement.

The contribution of this paper is four-fold: (1) we formalize the problem of stylized poetry generation; (2) we suggest a [*sample cross-entropy*]{} metric to measure the quality of author stylization; (3) we propose an LSTM with extended phonetic and semantic embeddings and quantify the quality of the obtained stylized poems both subjectively through a survey and objectively with sample cross-entropy and BLEU metrics; (4) we demonstrate that the proposed approach works in a multilingual setting, providing examples in English and in Russian.

Related work
============

The idea that computers can generate poetry algorithmically dates back more than half a century, see [@Wheatley]. A detailed taxonomy of generative poetry techniques can be found in [@Lamb]. In this paper, we specifically focus on RNN-based generative models, so let us briefly mention several contributions relevant to the further discussion.
Recently [@Lipton], [@Kiddon], [@Lebret], [@Radford], [@Tang], and [@Hu] have developed RNN-based generative or generative adversarial models for controlled text generation that were focused on the [*content*]{} and [*semantics*]{} of the output, yet did not take the stylistic aspects of the generated texts into consideration. In [@Li2016APN] the authors came up with persona-based models for handling the issue of speaker consistency in neural response generation. They focused on speaker consistency in a dialogue setup and demonstrated that the model could show better results than baseline sequence-to-sequence models. In [@Sutskever] the authors demonstrated that a character-based recurrent neural network with gated connections can successfully generate texts that resemble news or Wikipedia articles. In [@Graves] it was shown that comparable prosaic texts can be generated with LSTM networks as well.

There are a number of works specifically focused on Chinese classical poetry generation, for example [@Hezhou], [@Yan1], [@Yan2], [@Yi] or [@ZhangJ]; however, interesting contributions in the area of generative poetry in languages other than Chinese or in a multilingual setting are relatively rare. One could mention the paper by [@Ghazvininejad], where an algorithm generates a poem in line with a topic given by the user, and the paper by [@Potash], in which the authors generate stylized rap lyrics with an LSTM trained on a rap poetry corpus.

A literary style is actually not an obvious notion. There are a number of style transfer papers that deal with different aspects of literary style. These could be the sentiment of a text (see [@Shen] or [@li]), its politeness [@Sennrich], or the so-called style of the time (see [@Hughes]). The style-of-the-time aspect is specifically addressed by [@Jhamtani] and by [@Carlson]. A paper by [@Fu] generalizes these ideas, measuring the success of a particular style aspect with a specifically trained classifier. However, the problem of style transfer differs significantly from stylized text generation since, as shown in [@guu], an existing human-written source used to control the saliency of the output can significantly improve the quality of the resulting texts. A generative model has no such input and generates stylized texts from scratch; in this sense our problem set-up is similar to [@Ficler], but differs in the area of application and the definition of style. Specifically, we believe that the style of a text should be implicitly defined by the corpus rather than by a set of binary, human-defined characteristics [@wrong].

Generation of stylized texts {#formulation}
============================

Let us consider a corpus $C = \{ T_i \}^{M}_{i = 0}$ of $M$ literary texts written in one natural language. Every text of length $l$ is a sequence $T_i = (w_j)^{l}_{j = 0}$ where words (denoted here as $w_j$) are drawn from a vocabulary set $V = \{ w_j \}^{L}_{j=1}$, where $L$ is the size of a given vocabulary. In a generative context, the standard language model predicts the next word $w_k$ using a conditional probability $P(w_k | (w_i)^{k-1}_{i=0})$. Neural networks have been widely considered the most promising technique for language modeling since [@Bengio], see also [@Morin] and [@Mnih].
One of the key advantages of neural networks is that they help to avoid the curse of dimensionality [@Mikolov] of a classical language model by obtaining an effective mapping $Y : (C, \mathbb{R}^{m}, F) \rightarrow \mathbb{R}^{d}$ and then training a model such that $G(C) : \mathbb{R}^{d} \rightarrow \{T^{G}_i \}$. In the majority of works on text generation, one uses additional observable information to improve the general performance of the model [@Shi]. That is, if authors define a certain performance metric $D$ (such as BLEU, F1, etc.), one usually tries to optimize $D(\{ T_i \}, \{T^{G}_i \})$, where $\{ T_i \}$ is usually a randomized sample of $C$. We, on the other hand, suggest looking for a [*stylization model*]{} $G(C|S)$ that takes into consideration a subset $S$ of continuous and categorical variables out of $(\mathbb{R}^{m}, F)$ and a metric $D$, so that $$G(C|S): \begin{cases} \label{problem} ( C, \mathbb{R}^{m}, F) \rightarrow \{T^{G}_i \} \\ \{ T^{G}_i | S \} \sim \{ T_i | S \} \hspace{2pt} \text{w.r.t.} \hspace{2pt} D \end{cases}$$ A distinct difference of this approach is that we train our model on all information available to us, i.e. $( C, \mathbb{R}^{m}, F)$, and yet we are not interested in its overall performance, but rather test it on a certain domain $S$. The motivation here is in some sense similar to one-shot learning, see [@fei06], more generally to transfer learning, see [@pratt12], and to author-attribution methods, see [@Bagnall]. The model uses information on the structure of the broader domain of data. Such information is formally exogenous to the problem in its narrow formulation, but it can improve the performance of the model.

A stylization model has a number of interesting benefits in contrast to a language model. First of all, it naturally implies customization. If we want to control certain parameters of the model, we include them in $S$ and can expect that the output $\{ T^{G}_i | S \}$ will resemble the original texts $\{ T_i | S \}$ that satisfy the conditions $S$. This makes such an approach easily applicable to, say, personalized interfaces. On the other hand, one would expect that, due to its umbrella structure in which $G(C|S)$ learns from the whole corpus $(C, \mathbb{R}^{m}, F)$, such a model would outperform a set of smaller models obtained from different subsamples of $C$. Artificial neural networks are known to generalize very well, which lets one speculate that a system trained on the whole corpus $C$ would generally outperform a system that uses less information for training.

Further in this paper, we describe an artificial neural network that uses the name of the author of a poetic text as the condition $S$. We show that this model can generate lyrics that resemble the texts written by a given author both objectively (in terms of BLEU and the sample cross-entropy that we define further) and subjectively (based on a survey of respondents). The model has been trained on English and Russian, and we do not see obstacles to its application to corpora in other languages.

Model
=====

We use an LSTM-based language model that predicts the word $w_{n+1}$ based on the previous inputs $w_1, \dots , w_n$ and some other parameters of the modeled sequence. One of the most widespread approaches for passing the needed parameter to the network is to write it into the network's initial state. A general weakness of this approach is that the network 'forgets' the general parameters of the document as the generated sequence gets longer.
Since we want to develop a model in line with the formulation given in (\[problem\]), we support our model at every step with the embeddings of the document that is currently being analyzed. This idea differentiates our approach from a classical word-based LSTM and was, for example, used in [@TiYa] to facilitate stylized music generation. A schematic picture of the model is shown in Figure \[fig:mod\]; document information projections are highlighted with blue and white arrows. We used an LSTM with a 1152-dimensional input and a 512-dimensional state.

![The scheme of the language model used. Document information projections are highlighted with blue and white arrows. The projection onto a state space of the corresponding dimension is achieved with a simple matrix multiplication of the document embeddings.[]{data-label="fig:mod"}](neuronamap.png){width="\linewidth"}

Another key feature of the proposed model is a concatenated word representation shown schematically in Figure \[fig:emb\]. Information about the document (a 512-dimensional projection of the concatenated author and document embeddings) is included at every step. The final states of two char-level bidirectional LSTMs, each a 128-dimensional vector, are also concatenated into the word representation. One of these LSTMs works with letters from the character representation of the word, whereas the other uses phonemes of the International Phonetic Alphabet, employing a heuristic to transcribe words into phonemes. A somewhat similar idea, but with convolutional neural networks rather than with LSTMs, was proposed in [@Jozefowicz]; the bidirectional LSTM approach is new to our knowledge.

![Concatenated word representation.[]{data-label="fig:emb"}](neuronamap1.png){width="\linewidth"}

In Section \[ex\] we describe a series of objective and subjective tests that we ran on the generated output $\{ T^{G}_i | S \}$, but first let us briefly describe the datasets used for training.

Datasets
========

We have trained our model on two datasets of English and Russian poetry. The datasets were proprietary ones and were already available. All punctuation was deleted and every character was converted to lower case. No other preprocessing was made. The dataset sizes can be found in Table \[tab:dt\].

|          | N. of documents | Size of vocab. | N. of authors | Size   |
|----------|-----------------|----------------|---------------|--------|
| English  | 110000          | 165000         | 19000         | 150 Mb |
| Russian  | 330000          | 400000         | 1700          | 140 Mb |

: \[tab:dt\] Parameters of the training datasets.

During the training phase we mark the beginning and ending of every text $T_i$ with special tokens, so that in the generation phase the network is initialized with a special 'start' token and is conditioned on the values of the document parameters $S$. In this paper we test the proposed mechanism for stylized text generation with one categorical variable: the name of the author. We trained the model for English (running tests on the lyrics of William Shakespeare, Edgar Allan Poe, Lewis Carroll, Oscar Wilde and Bob Marley, as well as lyrics of the American band Nirvana and the UK band Muse) and Russian (Alexander Pushkin, Sergey Esenin, Joseph Brodsky, Egor Letov and Zemfira Ramazanova). As one can see in Table \[tab:dt\], there were far more authors in the dataset, but we chose more prominent ones who are known for their poetic styles and therefore could be more readily identified by an educated reader who is fluent in the target language.
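To make the architecture described in the Model section concrete, the following PyTorch sketch assembles the concatenated word representation of Figure \[fig:emb\] and feeds it, together with the 512-dimensional projection of the author and document embeddings, to the LSTM of Figure \[fig:mod\]. Only the dimensions stated in the text (1152-dimensional input, 512-dimensional state, 128-dimensional character and phoneme BiLSTM states, 512-dimensional document projection) are taken from the paper; every other hyper-parameter and all names are our own illustrative choices, not a reproduction of the original implementation.

```python
import torch
import torch.nn as nn

class StylizedLM(nn.Module):
    """Word-level LSTM language model whose input at every step concatenates a
    word embedding (384), char-BiLSTM and phoneme-BiLSTM final states (128 each),
    and a 512-d projection of author+document embeddings: 384+128+128+512 = 1152."""

    def __init__(self, n_words, n_chars, n_phonemes, n_authors, n_docs, d_doc=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, 384)
        self.char_emb = nn.Embedding(n_chars, 32)
        self.phon_emb = nn.Embedding(n_phonemes, 32)
        self.char_rnn = nn.LSTM(32, 64, bidirectional=True, batch_first=True)
        self.phon_rnn = nn.LSTM(32, 64, bidirectional=True, batch_first=True)
        self.author_emb = nn.Embedding(n_authors, d_doc)
        self.doc_emb = nn.Embedding(n_docs, d_doc)
        self.doc_proj = nn.Linear(2 * d_doc, 512, bias=False)  # document projection
        self.lm = nn.LSTM(1152, 512, batch_first=True)
        self.out = nn.Linear(512, n_words)

    @staticmethod
    def _final_state(rnn, emb, tokens):
        # tokens: (batch*seq_len, max_token_len) character or phoneme ids
        _, (h, _) = rnn(emb(tokens))
        return torch.cat([h[0], h[1]], dim=-1)      # concat forward/backward states

    def forward(self, words, chars, phonemes, author_id, doc_id):
        B, L = words.shape
        w = self.word_emb(words)                                        # (B, L, 384)
        c = self._final_state(self.char_rnn, self.char_emb,
                              chars.view(B * L, -1)).view(B, L, 128)
        p = self._final_state(self.phon_rnn, self.phon_emb,
                              phonemes.view(B * L, -1)).view(B, L, 128)
        doc = self.doc_proj(torch.cat([self.author_emb(author_id),
                                       self.doc_emb(doc_id)], dim=-1))  # (B, 512)
        doc = doc.unsqueeze(1).expand(B, L, 512)      # repeated at every time step
        h, _ = self.lm(torch.cat([w, c, p, doc], dim=-1))               # (B, L, 512)
        return self.out(h)                            # logits for the next word
```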
We want to emphasize that we do not see any excessive difficulties in implementing the proposed model for other languages for which one can form a training corpus $C$ and provide a phonetically transcribed vocabulary $V_p$. Table \[tab:ex\] shows some examples of generated stylized poetry. The model captures syntactic characteristics of the author (note the double negation in the first and the last line of generated Marley) alongside the vocabulary ('burden', 'darkness', 'fears' could be subjectively associated with the gothic lyrics of Poe, whereas 'sunshine', 'fun', 'fighting every rule' could be associated with positive yet rebellious reggae music).

| Generated-Poe                  | Generated-Marley                  |
|--------------------------------|-----------------------------------|
| her beautiful eyes were bright | don t you know you ain t no fool  |
| this day is a burden of tears  | you r gonna make some fun         |
| the darkness of the night      | but she s fighting every rule     |
| our dreams of hope and fears   | ain t no sunshine when she s gone |

: \[tab:ex\] Examples of generated stylized poetry.

Experiments and evaluation {#ex}
==========================

The most standard approach for a comparison of two generative models would be to measure cross-entropy loss at certain checkpoints. However, as [@Xie] writes: “There can be significant differences in final performance across checkpoints with similar validation losses.” In our case cross entropy calculated in a straightforward manner does not give us any meaningful information. In order to quantitatively estimate our final model $G(C|S)$, we trained a plain vanilla LSTM without word-by-word document information support and with only classic word embeddings. We also trained a model with document information support but without the bidirectional LSTMs for phonemes and characters in the embeddings. All three models showed comparable values of cross-entropy loss after an equal number of epochs, which means that the proposed additional structure is probably not facilitating learning, but is likely not hindering it either.

Sample cross entropy {#ss:ce}
--------------------

Cross entropy is one of the most natural information-theoretic metrics for estimating the similarity of different texts. In order to distinguish this metric from the cross-entropy loss, we call it the [*sample cross entropy*]{} and calculate it as described below. We sample several subsets of the same length (in words) from the original author texts in such a way that we end up with samples that contain a comparable number of unique texts for each author. We split the texts of a given author $A_i$ into two random groups and calculate the pairwise cross entropy between the original texts of author $A_i$ and the texts generated by the model conditioned on that author, $\{ T^{G}_i | A_i\}$. The cross entropy between the sets of texts was calculated with MITLM, see [@Hsu], in the following manner: for every sample of author-written texts described above, we build a standard 3-gram language model with standard MITLM smoothing. We also build a common vocabulary across all samples. Then we calculate the perplexity by applying the language models built on the author-written texts to the generated and original texts. After that, we take the logarithm to get the cross entropy instead of the perplexity, though both values in principle carry similar information. In Table \[tab:ce\] one can see the results of these estimations. Analogous results for Russian can be found in the Appendix in Table \[tab:ruce\].
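To make the metric explicit, a toy version of the computation is sketched below; it uses simple add-one smoothing in place of the MITLM toolkit and its smoothing that are actually used in our experiments, and all names are ours.

```python
import math
from collections import Counter

def train_trigram(texts):
    """Toy stand-in for the 3-gram language model used in the text.
    `texts` is an iterable of tokenized texts (lists of words)."""
    tri, bi, vocab = Counter(), Counter(), set()
    for words in texts:
        w = ["<s>", "<s>"] + words + ["</s>"]
        vocab.update(w)
        for i in range(2, len(w)):
            tri[tuple(w[i - 2:i + 1])] += 1
            bi[tuple(w[i - 2:i])] += 1
    return tri, bi, len(vocab)

def sample_cross_entropy(model, texts):
    """Average negative log2-probability of `texts` under `model` (bits/word),
    with add-one smoothing."""
    tri, bi, V = model
    logp, n = 0.0, 0
    for words in texts:
        w = ["<s>", "<s>"] + words + ["</s>"]
        for i in range(2, len(w)):
            p = (tri[tuple(w[i - 2:i + 1])] + 1) / (bi[tuple(w[i - 2:i])] + V)
            logp += math.log2(p)
            n += 1
    return -logp / n
```

The sample cross entropy between an author's sample and the texts generated for that author would then be `sample_cross_entropy(train_trigram(author_texts), generated_texts)`.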
One can see that, alongside individual styles, the model captures the [*style of the time*]{} mentioned earlier. Generated texts stylized for authors from a similar time period tend to demonstrate lower sample cross entropies with human-written texts written close to that time.

| Model $G(A_i)$ / author | Shakespeare | Poe | Carroll | Wilde | Marley | Nirvana | MUSE |
|---|---|---|---|---|---|---|---|
| Generated-Shakespeare | $19.0^{**}$ | $21.6$ | $18.5^{*}$ | $19.9$ | $21.8$ | $22.0$ | $22.4$ |
| Generated-Poe | $22.0$ | $20.4^{**}$ | $21.2$ | $19.0^{*}$ | $26.0$ | $25.4$ | $26.0$ |
| Generated-Carroll | $22.2$ | $23.6$ | $18.9^{*}$ | $22.5$ | $22.4$ | $21.8^{**}$ | $23.8$ |
| Generated-Wilde | $21.2$ | $20.9$ | $20.5^{**}$ | $18.4^{*}$ | $24.5$ | $24.8$ | $26.4$ |
| Generated-Marley | $24.1$ | $26.5$ | $22.0$ | $27.0$ | $15.5^{*}$ | $15.7^{**}$ | $16.0$ |
| Generated-Nirvana | $23.7$ | $26.2$ | $20.0$ | $26.6$ | $19.3$ | $18.3^{*}$ | $19.1^{**}$ |
| Generated-MUSE | $21.1$ | $23.9$ | $18.5$ | $23.4$ | $17.4$ | $16.0^{**}$ | $14.6^{*}$ |
| Uniform Random | $103.1$ | $103.0$ | $103.0$ | $103.0$ | $103.5$ | $103.3$ | $103.6$ |
| Weighted Random | $68.6$ | $68.8$ | $67.4$ | $68.5$ | $68.5$ | $68.0$ | $68.0$ |
| SELF | $23.4$ | $21.8$ | $25.1$ | $27.3$ | $20.8$ | $17.8$ | $13.3$ |

: \[tab:ce\] Sample cross entropy between texts generated by the model conditioned on a given author (rows) and original texts of each author (columns), English corpus. The smallest value in each row is marked with \* and the second smallest with \*\*.

| Model $G(A_i)$ / author | Pushkin | Esenin | Brodsky | Letov | Zemfira |
|---|---|---|---|---|---|
| Generated-Pushkin | $17.9^{*}$ | $21.8^{**}$ | $23.4$ | $27.0$ | $30.8$ |
| Generated-Esenin | $20.4^{**}$ | $18.8^{*}$ | $21.0$ | $22.7$ | $26.0$ |
| Generated-Brodsky | $23.5$ | $21.1^{**}$ | $17.2^{*}$ | $20.9$ | $23.8$ |
| Generated-Letov | $22.2$ | $20.0^{**}$ | $20.8$ | $19.6^{*}$ | $23.6$ |
| Generated-Zemfira | $19.5$ | $17.1^{**}$ | $18.1$ | $18.2$ | $16.6^{*}$ |
| Uniform Random | $103.0$ | $103.1$ | $103.0$ | $103.0$ | $103.8$ |
| Weighted Random | $40.8$ | $40.2$ | $40.2$ | $42.6$ | $45.6$ |
| SELF | $35.0$ | $33.7$ | $38.0$ | $28.3$ | $12.0$ |

: \[tab:ruce\] Sample cross entropy between texts generated by the model conditioned on a given author (rows) and original texts of each author (columns), Russian corpus. The smallest value in each row is marked with \* and the second smallest with \*\*.

The lower the sample cross entropy between the texts generated by the model and the texts written by a given author, the better the model captures that author's writing style and vocabulary. The cross entropy between random samples from the texts of the same author demonstrates how [*self-similar*]{} the human-written texts are. Since an overwhelming amount of the English text in our training dataset comes from the 20th century, the model 'perceives' the texts of William Shakespeare or Edgar Allan Poe to be closer to the lyrics of Lewis Carroll and Oscar Wilde than to samples of the original texts; however, Shakespeare and Poe are also fairly well approximated by the model (it shows the second-best cross entropy there). To give a baseline, we also provide cross entropies between human-written texts and texts sampled randomly from the vocabulary, as well as texts obtained through a weighted random sampling method.
BLEU ---- Since BLEU is a metric estimating the correspondence between a machine’s output and that of a human it is very natural to use it in order to measure the quality of the proposed model. For the experiments we sampled a random starting line out of the human-written poems and initialized the generative model with this line. Then we calculated BLEU between three actual lines that finished the human-written quatrain starting with a given first line and three lines generated by the model when initialized with the same human-written line. In Section \[formulation\] we stated that one of the contributions of this paper is the idea to train the stylization model $G(C|S)$ on the whole corpus $C$ and then estimate the performance of $G(C|S)$ for different $S$. Table \[tab:bleu\] illustrates this idea. ---------------------------------------------------------------- -- -- -- -- -- -- -- **Model $G(A_i)$ & **Chosen author $S$ & **Validation dataset\ **$G(S)$ & $33.0\%$ & $19.0\%$\ **$G(C|S)$ & $37.3\% (+13\%)$ & $37.6\% (+98\%)$\ ********** ---------------------------------------------------------------- -- -- -- -- -- -- -- : \[tab:bleu\] BLEU for the full model trained on one particular author dataset, $G(S)$, and on the whole dataset, $G(C|S)$, calculated on the chosen author validation dataset and on the validation dataset that includes a variety of authors. The results may vary across authors depending on the relative sizes of $S$ and $C$ but the general picture does not change. Indeed, not only the model $G(S)$ trained on texts of a particular author $S$ demonstrates the results that are worse than $G(C|S)$ when validated on the lyrics of the chosen author, $G(C|S)$ also performs almost two times better than $G(S)$ on the validation dataset containing texts from other authors. Table \[tab:bleu2\] shows BLEU calculated on the validation dataset for the plain vanilla LSTM, LSTM with author information support but without bidirectional LSTMs for phonemes and characters included in the embeddings and the full model. The uniform random and weighted random give baselines to compare the model to. ----------------------------------------------------- -- -- -- -- -- -- -- **Model $G(A_i)$ & **BLEU\ **Uniform Random & $0.35\%$\ **Weighted Random & $24.7\%$\ **Vanilla LSTM & $29.0\%$\ **Author LSTM & $29.3\%$ ($+1\%$ to vanilla LSTM)\ **Full model & $29.5\%$ ($+1.7\%$ to vanilla LSTM)\ ************** ----------------------------------------------------- -- -- -- -- -- -- -- : \[tab:bleu2\] BLEU for uniform and weighted random random sampling, vanilla LSTM, LSTM with author embeddings but without phonetics, and for the full model. Phonetics is estimated to be almost as important for the task of stylization as the information on the target author. Survey data ----------- -- ------------------------------------------------------------------------------------------ -- -- -- -- **Shak.& **Carroll & **Marley & **MUSE & **LSTM\ **G.Shak. & $0.37^{*}$ & $0.04 $ & $0.05 $ & $0.14$ & $0.3^{*}$\ **R.Shak. 
& $0.46^{*}$ & $0.05$ & $0.04$ & $0.07$ & $0.3^{*}$\ **G.Carroll & $0.02$ & $0.07$ & $0.26^{*} $ & $0.18$ & $0.41^{*}$\ **R.Carroll & $0.05$ & $0.2^{*}$ & $0.14$ & $0.11$ & $0.32^{*}$\ **G.Marley & $0.02$ & $0.01$ & $0.47^{*}$& $0.2 $& $0.29^{*}$\ **R.Marley & $0.15$ & $0.05$ & $0.4^{*}$ & $0.1 $& $0.24^{*}$\ **G.MUSE & $0.09$ & $0$ & $0.12$ & $0.34^{*}$ & $0.39^{*} $\ **R.MUSE & $0.03$ & $0.05$ & $0.28^{*}$ & $0.39^{*} $ & $0.2 $************************** -- ------------------------------------------------------------------------------------------ -- -- -- -- : \[tab:te\] Results of a survey with 140 respondents. Shares of each out of 5 different answers given by people when reading an exempt of a poetic text by the stylistic model of an author (prefaced with G. for [*generated*]{}) or by an actual author (prefaced with R. for [*real*]{}). The two biggest values in each row are marked with \* and a bold typeface. -- -------------------------------------------------------------------------------------------- -- -- -- -- **Pushkin & **Esenin & **Letov & **Zemf. & **LSTM\ **G.Pushkin & $0.31^{*}$ & $0.22 $ & $0.02 $ & $0.0$ & $0.44^{*}$\ **R.Pushkin & $0.62^{*}$ & $0.11$ & $0.03$ & $0.01$ & $0.23^{*}$\ **G.Esenin & $0.02$ & $0.61^{*}$ & $0.08 $ & $0.0$ & $0.29^{*}$\ **R.Esenin & $0.06$ & $0.56^{*}$ & $0.07$ & $0.02$ & $0.29^{*}$\ **G.Letov & $0.0$ & $0.02$ & $0.40^{*}$& $0.08 $& $0.51^{*}$\ **R.Letov & $0.0$ & $0.01$ & $0.61^{*}$ & $0.02 $& $0.35^{*}$\ **G.Zemfira & $0.0$ & $0.06$ & $0.13$ & $0.4^{*}$ & $0.41^{*} $\ **R.Zemfira & $0.0$ & $0.02$ & $0.08$ & $0.58^{*} $ & $0.31^{*}$************************** -- -------------------------------------------------------------------------------------------- -- -- -- -- : \[tab:ru\] Results of a survey with 178 respondents. Shares of each out of 5 different answers given by people when reading an exempt of a poetic text by the stylistic model of an author (prefaced with G. for [*generated*]{}) or by an actual author (prefaced with R. for [*real*]{}). The two biggest values in each row are marked with \* and a bold typeface. We randomly sampled 2 quatrains from William Shakespeare, Lewis Carroll, Bob Marley and MUSE band, and 2 quatrains generated by the model conditioned on those four authors respectively. Then 140 fluent English-speakers were asked to read all 16 quatrains in randomized order and choose one option out of five offered for each quatrain, i.e. the author of this verse is William Shakespeare, Lewis Carroll, Bob Marley, MUSE or an Artificial Neural Network. The summary of the obtained results is shown in Table \[tab:te\]. Analogous results but for Russian language could be seen in Appendix in Table \[tab:ru\] alongside with more detailed description of the methodology. It is important to note that the generated pieces for tests were human-filtered for mistakes, such as demonstrated in Table \[tab:ex\], whereas the automated metrics mentioned above were estimated on the whole sample of generated texts without any human-filtering. Looking at Table \[tab:te\] one can see the model has achieved good results in author stylization. Indeed the participants recognized Shakespeare more than 46% of the times (almost 2.5 times more often than compared with a random choice) and did slightly worse in their recognition of Bob Marley (40% of cases) and MUSE (39% of cases, still 2 times higher than a random choice). 
This shows that the human-written quatrains were, indeed, recognizable and that the participants were fluent enough in the target language to attribute the given texts to the correct author. At the same time, people were ’tricked’ into believing that the text generated by the model was actually written by the target author in 37% of cases for generated Shakespeare, 47% for generated Marley, and 34% for generated MUSE, respectively. Lewis Carroll, however, turned out to be less recognizable: his original texts were correctly attributed to him in only 20% of cases, which corresponds to a purely random guess. The subjectively weaker performance of the model on this author can therefore be explained by the difficulty the participants themselves had in determining his authorship.

Conclusion
==========

[In this paper we have defined the problem of stylized text generation and have proposed an LSTM-based method for dealing with such tasks. We have also proposed a cross entropy based method to estimate the quality of stylization. The proposed LSTM is an extension of a language model which is supported by the document meta information at every step and works with large concatenated embeddings that include a word embedding, a phoneme-based bidirectional LSTM final state, and a char-based bidirectional LSTM final state. We have successfully trained this model in Russian and in English. The texts generated by the model tend to be closer to the texts of the target author than the texts generated by a plain vanilla LSTM, both in terms of the sample cross entropy and BLEU. When faced with an author whom the participants of the test recognize approximately two times more frequently than at random, participants mistakenly attribute the output of the proposed generative model to the target author as often as they correctly attribute original texts to the author in question. Such stylization can be of importance for more authentic dialogue interfaces and personalized human-machine interaction.]{}
--- abstract: 'The paper summarises the contributions in a session at GCM 2019 presenting and discussing the use of native and translation-based solutions to common analysis problems for Graph Transformation Systems (GTSs). In addition to a comparison of native and translation-based techniques in this area, we explore design choices for the latter, such as the choice of logic and encoding method, which have a considerable impact on the overall quality and complexity of the analysis. We substantiate our arguments by citing literature on the application of theorem provers, model checkers, and SAT/SMT solvers to GTSs, and conclude with a general discussion from a software engineering perspective, including comments from the workshop participants, and recommendations on how to investigate important design choices in the future.' author: - Reiko Heckel - Leen Lambers - Maryam Ghaffari Saadat title: | Analysis of Graph Transformation Systems:\ Native vs Translation-based Techniques ---

Introduction
============

Analysis of Graph Transformation Systems {#sec:problems}
========================================

Native vs Translation-based Techniques {#sec:comparison}
======================================

In this section, we explore two different approaches to analyse graph transformation systems: native versus translation-based. We start by defining more precisely what we mean by both terms and we illustrate these definitions with some examples from the literature following one or the other approach (cf. ). We derive from these definitions some distinguishing characteristics that can help in guiding the selection of one or the other approach (cf. ). We complement this conceptual comparison with an overview of experimental comparisons of both approaches that we have encountered in the literature for different analysis problems (cf. ). We conclude this section with an evaluation of the question: Is there any empirical evidence backing up the conceptual comparison and how significant is it? Finally, we discuss some challenges and open questions that arise from this evaluation.

Definitions and Examples {#sec:definition}
------------------------

A *native approach* to solving a graph transformation (GT) analysis problem is an approach where this problem serves directly as an input to a GT-specific solver. A *translation-based approach* to solving a graph transformation analysis problem is an approach where this problem is translated to a logic-based specification, also called *target specification*, in some logic-based domain, also called *target domain*, where this problem is then also analysed. The target domain usually does not focus on graphs in particular. A *hybrid approach* uses a mixture of the native as well as the translation-based approach to solve a graph transformation analysis problem. Let us have a closer look at *model checking* for graph transformation as an *example graph transformation analysis problem* in order to illustrate the above definitions. The model checking problem for GT can be formulated as follows: Is a specific liveness or safety property fulfilled in the graph transition system generated by a given start graph and a given set of graph transformation rules? The *input* to the analysis problem consists of a start graph and a set of graph transformation rules as well as a liveness or safety property to be checked. The *output* of the analysis problem consists of the answer yes/no, where in the latter case the answer comes with a counterexample.
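To make the shape of this analysis problem concrete, here is a deliberately simplified sketch of explicit-state safety checking over a graph transition system. It is not how GROOVE or any of the cited tools are implemented: rule matching by subgraph matching, isomorphism reduction of the state space, and termination concerns are all omitted, and the toy rule and property are invented purely for illustration.

```python
from collections import deque

def check_safety(start_graph, rules, is_safe):
    """Breadth-first exploration of the graph transition system induced by
    `rules` from `start_graph`; returns (True, None) if every reachable graph
    satisfies `is_safe`, otherwise (False, path_to_counterexample)."""
    parent = {start_graph: None}
    queue = deque([start_graph])
    while queue:
        g = queue.popleft()
        if not is_safe(g):
            path, node = [], g
            while node is not None:
                path.append(node)
                node = parent[node]
            return False, list(reversed(path))
        for rule in rules:
            for h in rule(g):
                if h not in parent:        # no isomorphism reduction here
                    parent[h] = g
                    queue.append(h)
    return True, None

# Toy instance: graphs are frozensets of directed edges over integer nodes;
# the single rule attaches a fresh node to any existing node, and the
# safety property bounds the number of edges.
def grow(g):
    nodes = {v for e in g for v in e} or {0}
    fresh = max(nodes) + 1
    return [frozenset(g | {(v, fresh)}) for v in nodes]

holds, counterexample = check_safety(frozenset(), [grow], lambda g: len(g) < 3)
print(holds)               # False: a graph with 3 edges is reachable
print(counterexample)      # rule-application path leading to it
```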
An example approach following the *native approach* to solving this problem is GROOVE [@GMRZZ12]. This tool allows for feeding it directly with the above-described problem input and delivers the above-described output. The computations underlying GROOVE for solving the analysis problem are GT-specific. There exist several example approaches described in the literature [@IsenbergSW13; @BoronatHM09; @BaresiS06] following the *translation-based approach* to solving this problem by translating the latter to a target specification in first-order logic, rewriting logic, or relational logic, respectively. Appropriate solvers for these target domains such as the SMT solver Z3 [@z3], Maude [@maude2007], or Alloy [@Jackson06], respectively, are subsequently used to find an answer to the original GT analysis problem. Conceptual Comparison {#sec:conceptual-comparison} --------------------- We can derive the following characteristics from the above definition of the *native approach*: - *No problem translation* necessary avoiding additional effort as well as potential errors due to translation. - *Promoted understanding of specifics* of graph and graph transformation analysis. - *Graph-specific optimizations* usually built-in. - Structured support for *different variants of graphs and graph transformation* promoting reuse of the commonalities of underlying native techniques. On the contrary, we can derive the following complementary characteristics from the above definition of the *translation-based approach*: - *Problem translation necessary*, which might be a source of errors or misunderstandings. [^1] - *Understanding of target domain and related solver(s) necessary* in order to obtain correct and useful target specification. [^2] - *Graph-specific optimizations* usually not built-in. - *Reuse experience and tool support* from target domain. Depending on the use case and based on these characteristics it might make sense to opt for one or the other approach. In addition to these characteristics described conceptually, we study more experimental comparisons of both approaches for some example analysis problems in the following. In particular, we will thereby focus the practical implications of the characteristics described conceptually here. Experimental Comparison {#sec:experimental-comparison} ----------------------- We have found a few *experimental comparisons* in the literature with respect to following a native versus translation-based approach for solving particular graph transformation analysis problems. In particular, we describe the main findings of such an experimental comparison for model checking graph transformation systems [@RSV04], constraint verification applied to pre- and post-condition reasoning [@Pennemann2009], constraint verification applied to invariant checking [@BeckerBGKS06], and constraint verification in the sense of satisfiability solving and automated reasoning [@SemerathNV18; @SchneiderLO17]. We start with a generic description of the analysis problem at hand as well as giving a few pointers to example approaches solving the analysis problem following the native or translation-based approach. Then we report more in detail on the above-mentioned experimental comparison found in the literature and describe their practical findings with respect to the conceptual characteristics of each of the approaches. 
#### Model checking We have described the model checking problem for graph transformation already in , where we have listed some pointers to representatives of the native vs translation-based approach to solving this problem. Now we report on the *experimental comparison* [@RSV04] of the native approach followed by GROOVE [@GMRZZ12] and the translation-based approach followed by CheckVML [@SV03], exploiting off-the-shelf model checker tools like SPIN [@Holzmann97]. On the one hand, it is reported that GROOVE is able to ”Simulate graph production rules directly and build the state space directly from the resultant graphs and derivations. This avoids the preprocessing phase, and makes additional abstraction techniques available to handle symmetries and dynamic allocation.”, referring to characteristics N1 (No translation) and N3 (Built-in graph-specific optimizations) described in in particular. On the other hand, it is reported that CheckVML is able to “Encode graphs into fixed state vectors and transformation rules into guarded commands that modify these state vectors appropriately to enjoy all the benefits of the years of experience incorporated in existing model checking tools.”, referring to characteristics T1 (Problem translation necessary), T2 (Understanding of target domain and related solver(s) necessary) and T4 (Reuse experience and tool support) in particular. The overall conclusion sounds as follows “CheckVML outperforms GROOVE if the dynamic and/or symmetric nature of the problem under analysis is limited, while GROOVE shows its superiority for inherently dynamic and symmetric problems.”, referring to characteristics N3 (Built-in graph-specific optimizations), T3 (No built-in graph-specific optimiztations) and T4 (Reuse experience and tool support) in particular. #### Pre- and post-condition reasoning The related graph transformation problem can be formulated as follows: Given an input graph satisfying a particular pre-condition, does the output graph generated by the given graph program satisfy the post condition? The *input* to this analysis problem consists of a pre- as well as post-condition (in the form of graph conditions) and a graph program. The *output* consists of the answer yes, no, or unknown, since this problem is undecidable in general. Solving this problem is usually performed with some kind of interactive theorem proving, where the user needs to specify, for example, loop invariants. A first example *native approach* [@HP09] is based on Dijkstra’s approach to program verification and adapted to graph programs, in particular. A second example native approach [@PoskittP12] is based on a Hoare-style proof system for graph programs. Two example *translation-based approaches* [@Strecker18; @BrenasES18] translate the problem to a target domain like Isabelle/HOL [@NipkowPW02] or description logics [@Baader03a], respectively. We report in particular on an *experimental comparison* of a native and translation-based approach to this problem as described in the PhD thesis of Karl-Heinz Pennemann [@Pennemann2009]. The native approach is based on a native theorem prover ProCon and SAT solver SeekSat for graph conditions, whereas the translation-based approach resorts to off-the-shelf first-order logic theorem provers and satisfiability solvers such as e.g. VAMPIRE [@RiazanovV02] and DARWIN [@BaumgartnerFT06], respectively. The author reports that ”ProCon and SeekSat are structure-specific in a constructive way. 
In contrast, theorem prover and satisfiability solver for general first-order logic necessarily consider arbitrary structures and have to be restricted by a set of axioms to a target structure which adds to the complexity of the problem.” This illustrates the characteristics N1 (No problem translation necessary) and T1 (Problem translation necessary) of each approach, respectively. Moreover, he reports on the characteristic N3 (Built-in graph-specific optimizations) of the native solvers in the following way: ”An algorithm on conditions can and should use the fact that conditions make quantifications and statements in bulks, that is, a quantifier may introduce a number of elements. In this sense, conditions may have a lower logical complexity when compared to their translations in first-order logic.” Moreover, experiments on several case studies in the thesis have demonstrated that the native solvers outperform off-the-shelf solvers from the target domain when it comes to efficiency, which can be seen as an illustration of N3 as well as T3 (No built-in graph-specific optimizations). In particular, he reports also on characteristic T2 (necessary understanding of the target domain) as follows: “For formulas, it remains open if the values of variables are equal or distinct, unless it is explicitly stated. If the nodes and edges of a graph condition are not distinct by their labels, inequations have to be introduced during the translation.” Finally note that the native approach illustrates nicely characteristic N4 (support for different variants of graphs and GT), since the underlying theories and tooling are based on the framework of weak adhesive high-level replacement categories [@EhrigEPT06] supporting these different variants. #### Invariant checking We formulate the related graph transformation problem as follows: Does each rule application via a rule of a given set of GT rules on a graph satisfying a particular graph condition lead to a graph satisfying this condition again? The *input* to this problem is a graph condition together with a set of GT rules. The *output* consists of the answer yes, no, or unknown, since in general this is an undecidable problem. An example *native approach* to solving this problem is presented by Becker et al. [@BeckerBGKS06], whereas a translation-based approach is described by König et al. [@KonigE10]. The latter approach is based on an approximation by Petri nets. In particular, Becker et al. [@BeckerBGKS06] describe not only a native approach to the invariant checking problem for GT (called explicit algorithm in the following), but present in addition an *experimental comparison* with a translation-based approach (called symbolic algorithm in the following). The symbolic algorithm resorts to the relational programming language RML as target domain with the related solver CrocoPat [@BeyerNL04]. The authors in particular report on efficiency issues, illustrating evidence for characteristic N3 (Built-in graph-specific optimizations) as well as T4 (Reuse experience and tool support) in the following way, respectively: “For the explicit algorithm, the combinatoric complexity of the rule/invariant pair has the most significant impact on the computation time. 
This explains why the pair goDC2 and noDC is a particularly easy case for the explicit algorithm, in spite of the size of the pair, as the number of possible intersections is constrained by a large number of positive edges.”[^3] and “Speed-up due to the symbolic encoding can be extremely high for certain hard cases with a high number of nodes and edges.”. #### SAT-solving and automated reasoning The *SAT-solving problem* for graph conditions can be formulated as follows: Does there exist a graph satisfying the given graph condition? The *automated reasoning problem* on the other hand can be considered as complementary and can be formulated as follows: Do all graphs satisfy the given graph condition? The *input* to both problems is a graph condition and the *output* consists of the answer yes, no, or unknown, since in general both problems are undecidable. There exist a number of example *native* approaches as well as *translation-based* approaches to both problems. For example, Pennemann [@Pennemann08] presents a native theorem prover, whereas Schneider et al. [@SchneiderLO17] and Semeráth et al. [@SemerathNV18] present native SAT solvers for graph conditions. Example translation-based approaches [@KuhlmannHG11; @GonzalezBCC12; @SemerathVV16] map the SAT solving problem to target domains such as relational logic [@Jackson06] and constraint logic programming. We first report on an *experimental comparison* to SAT solving [@SemerathNV18] of a native approach and translation-based approach using Alloy [@Jackson06] concentrating on scalability of the corresponding solutions. In particular, the authors write the following conclusions from their experimental comparison, illustrating the characteristics N3 (Built-in graph-specific optimizations) as well as T2 (Understanding of target domain and related solver(s) necessary): ”Our graph solver provides a strong platform for generating consistent graph models which are 1-2 orders of magnitude larger (with similar or higher quality) than derived by mapping based approaches using Alloy with an underlying SAT-solver. Such a difference in scalability can only partly be dedicated to our conceptually different approach which combines several advanced graph techniques to improve performance instead of fine-tuning a mapping. However, it likely indicates fundamental shortcomings of existing mapping based approaches. Based on in-depth profiling we suspect that representing each potential edge between a pair of nodes as a separate Boolean variable blows up the state space for sparse graphs with only linear number of edges.”. We conclude with describing another experimental comparison to SAT solving [@SchneiderLO17], again of a native approach implemented in the tool AutoGraph and translation-based approach using Alloy [@Jackson06] focusing efficiency as well as conciseness of the generated solutions. In particular, the authors write “AutoGraph is capable of obtaining minimal, symbolic models, which allow for a straightforward exploration of further models whereas Alloy generates models for a given scope not necessarily determining minimal models. Also AutoGraph allows for the refutation of a given formula, which is not directly given in Alloy where non-existence of models is also bound to scope sizes. Hence, we conclude that AutoGraph computes in this sense stronger results compared to Alloy.” and “We observed for our running example comparable runtimes. 
However, as stated before, AutoGraph already returns stronger results by computing not only a minimally representable model, but a symbolic model.”. This illustrates on the one hand the characteristic N2 (Promoted understanding of specifics of graph and GT analysis) of the native approach and on the other hand the characteristic N3 (Built-in graph-specific optimizations) for the native and T4 (Reuse experience and tool support) for the translation-based approach. Evaluation {#sec:evaluation-comparison} ---------- The experimental comparisons studied in the literature and reviewed in demonstrate that each of the characteristics for the native versus translation-based approach as identified in indeed play a role in practice. Each of the experimental comparisons basically showed which practical implications some of the conceptual characteristics have that can then account for a significant difference between a native versus translation-based approach. Therefore we suggest that consciously investigating the practical implications of each of the characteristics for each new use case, might help in guiding the decision between a native or translation-based approach. Open questions, discussion topics and challenges arising from this evaluation are the following: - Is the list of characteristics from the conceptual comparison in complete enough to be able to guide the choice between a native or translation-based approach for each use case? If not, do we need more, or also more specific characteristics, e.g. parametrized by the type of analysis problem? Will future experiments contradict some of the characteristics such that they would need to be rethought? - Why are some analysis problems currently addressed prevalently by native (or translation-based) approaches? Are there problems for which a native (translation-based or hybrid approach) would be more appropriate? - Which target domains have been used for translation-based approaches and why are they appropriate for the given graph transformation analysis problems? - What can we learn from native versus translation-based approaches and experimental comparisons in the past for core computations such as e.g. the subgraph isomorphism problem? Use of SAT and SMT Solvers ========================== Discussion and Outlook ====================== [^1]: This translation may be automated reducing, in general, the source of errors or misunderstandings considerably. [^2]: If no automated translation is available, then the user needs this understanding, otherwise merely the developer of this automated translation needs it. [^3]: Note that goDC2 is a rule and noDC an invariant.
--- abstract: | In previous work, we associated to any finite simple graph a particular set of derangements of its vertices. These derangements are in bijection with the spheres in the wedge sum describing the homotopy type of the boolean complex for this graph. Here we study the frequency with which a given derangement appears in this set.\ *Keywords.* derangement, finite simple graph, Coxeter system, boolean complex address: - 'Department of Mathematical Sciences, DePaul University, Chicago, Illinois' - 'Department of Mathematical Sciences, DePaul University, Chicago, Illinois' author: - Kári Ragnarsson - Bridget Eileen Tenner title: Derangement Frequency in the Boolean Complex ---

Introduction
============

The boolean elements in the Bruhat order on a finitely generated Coxeter system were studied by the second author in [@tenner] and characterized by pattern avoidance in some cases. The boolean elements form a simplicial poset and hence form the face poset of a regular face complex, called the *boolean complex* of the Coxeter system. In [@ragnarsson-tenner] we studied the homotopy type of this complex, proved that it has the homotopy type of a wedge of spheres of maximal dimension, and gave a recursive formula for computing the number of spheres using edge operations on the underlying, unlabeled Coxeter graph. In [@ragnarsson-tenner-2] we gave combinatorial meaning to these spheres. Taking graphs as a starting point, this involved assigning to any ordered graph a set of derangements of its vertex set, and constructing a homology class for each derangement in this set. The homology classes so constructed form a basis for the homology of the boolean complex, and consequently we get a bijection between the derangement sets and the spheres in the wedge sum describing the homotopy type of the boolean complex. In the present note, we focus on statistical properties of the derangement sets defined in [@ragnarsson-tenner-2]. Given a finite ordered set $V$ and a derangement $w$ of $V$, we determine which graphs with vertex set $V$ admit $w$ in their derangement set. This allows us to calculate the frequency of the derangement $w$. In this note, all graphs are understood to be finite simple graphs. By an *ordered* graph we mean a graph with a total ordering of the vertex set. A *derangement* of a set $V$ is a fixed-point free permutation of the elements of $V$.

The derangement set of an ordered graph {#sec:review}
=======================================

In this section we briefly recall some background material on derangement sets. This material is covered in greater detail in [@ragnarsson-tenner-2], and here we focus only on the key points needed to perform the frequency calculations in Section \[sec:stat\]. In [@ragnarsson-tenner-2 Section 3], we describe an algorithm that for an ordered graph $G$ constructs a set ${\mathcal{D}}(G)$ of derangements of the vertex set of $G$. We refer to ${\mathcal{D}}(G)$ as the *derangement set* of $G$. This algorithm is recursive, using edge operations to reduce to smaller graphs. We also give an explicit, closed-form description of ${\mathcal{D}}(G)$ in [@ragnarsson-tenner-2 Theorem 3.11]. This description is used to compute the frequency of derangements in Section \[sec:stat\], and so we take it as the definition of the derangement set in the current paper. Some notation is necessary before we can state the definition. \[def:Canopy\] Let $G$ be an ordered graph and let $w$ be a permutation of its vertex set.
For a vertex $t$ in $G$ set $$\rho_w(t) = \{t, w(t), \cdots, w^{k-1}(t) \} \, ,$$ where $k$ is the smallest positive integer such that $ w^k(t) \leq t$, and set $$\lambda_w(t) = w^{-\ell}(t) \, ,$$ where $\ell$ is the smallest positive integer such that $w^{-\ell}(t) \leq t$. Write $w$ in standard cycle form. If $t$ is not the smallest element in its cycle, then $\lambda_w(t)$ is the first element appearing to the left of $t$ that is smaller than $t$, and $\rho_w(t)$ is the set of elements obtained by starting at $t$ and moving to the right until reaching an element less than $t$. When $t$ is the smallest element in its cycle, $\rho_w(t)$ is the set of elements in the entire cycle containing $t$, and $\lambda_w(t) = t$.

\[defn:criterion\] Let $G$ be an ordered graph with vertex set $V$. The *derangement set* of $G$, denoted ${\mathcal{D}}(G)$, is the set consisting of permutations $w$ of $V$ such that for every vertex $t$ of $G$ the vertex $\lambda_w(t)$ is adjacent to some vertex in $\rho_w(t)$.

It is not hard to check that every $w \in {\mathcal{D}}(G)$ is indeed a derangement, as the name indicates. Note that ${\mathcal{D}}(G)$ depends on the chosen ordering of the vertex set of $G$.

\[ex:criterion\] Let $G$ be the 7-vertex graph depicted below.

[Figure: the $7$-vertex graph with vertices $1,\ldots,7$ and edges $\{1,2\}$, $\{2,3\}$, $\{3,4\}$, $\{3,5\}$, $\{5,6\}$, $\{6,7\}$.]

It is easy to see that $(1234)(567) \in {\mathcal{D}}(G)$. On the other hand, the derangement $(1234567) \not\in {\mathcal{D}}(G)$ because $\lambda_{(1234567)}(5) = 4$ and $\rho_{(1234567)}(5) = \{5,6,7\}$, but $4$ is not adjacent to $5$, $6$, or $7$ in the graph. Similarly, the derangement $(13472)(56)$ is excluded from ${\mathcal{D}}(G)$ because $\lambda_{(13472)(56)}(3) = 1$ and $\rho_{(13472)(56)}(3) = \{3,4,7\}$, and $1$ is not adjacent to $3$, $4$, or $7$.

Derangement frequency {#sec:stat}
=====================

We now compute the frequency of a derangement, defined as follows. Given a derangement $w$ of a finite, totally ordered set $V$, let $f(w)$ be the number of simple ordered graphs $G$ with vertex set $V$ such that $w \in {\mathcal{D}}(G)$, and let $$r(w) = \frac{f(w)}{2^{\binom{|V|}{2}}} \, .$$ The statistic $f(w)$ is the *frequency* of $w$ and $r(w)$ is the *rate* of $w$. The frequency of a derangement $w$ of the set $V$ describes how often $w$ appears in the derangement sets for graphs with vertex set $V$, while the rate of $w$ calculates the proportion of all graphs on $V$ with $w$ in their derangement sets. Given a derangement $w$ of $V$, Definition \[defn:criterion\] requires that a graph $G$ with $w \in {\mathcal{D}}(G)$ must contain at least one edge in the following sets for each vertex $t$ in $G$.

Given a vertex $t$ in the finite simple ordered graph $G$ and a $w \in {\mathcal{D}}(G)$, let $E_{w,t} = \left\{\{\lambda_w(t), s\} : s \in \rho_w(t)\right\}$.

Note that when $t$ is not minimal in its $w$-cycle, we have $\lambda_w(t) < s$ for all $s \in \rho_w(t)$. On the other hand, when $t$ is minimal in its $w$-cycle, we have $t = \lambda_w(t) < s$ for all $s \in \rho_w(t) \setminus \{t\}$. These cases often have to be treated separately, and the ensuing discussion is simplified by the following definition.

Given a derangement $w$ of an ordered set $V$, let $U(w)$ be the set of elements that are not minimal in their $w$-cycles.
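The membership criterion of Definition \[defn:criterion\] is easy to implement directly; the sketch below does so and reproduces the three checks of Example \[ex:criterion\]. It is an illustration only: the graph is encoded naively as an edge list and the function names are ours.

```python
def rho(w, t):
    """rho_w(t) = {t, w(t), ...} up to (but excluding) the first w^k(t) <= t."""
    out, x = {t}, w[t]
    while x > t:
        out.add(x)
        x = w[x]
    return out

def lam(w, t):
    """lambda_w(t) = w^{-l}(t) for the smallest l >= 1 with w^{-l}(t) <= t."""
    inv = {v: k for k, v in w.items()}
    x = inv[t]
    while x > t:
        x = inv[x]
    return x

def in_derangement_set(w, edges, vertices):
    """Definition [defn:criterion]: lambda_w(t) is adjacent to some vertex of rho_w(t)."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return all(adj[lam(w, t)] & rho(w, t) for t in vertices)

# The 7-vertex graph of Example [ex:criterion]
V = range(1, 8)
E = [(1, 2), (2, 3), (3, 4), (3, 5), (5, 6), (6, 7)]

w1 = {1: 2, 2: 3, 3: 4, 4: 1, 5: 6, 6: 7, 7: 5}   # (1234)(567)
w2 = {i: i % 7 + 1 for i in V}                     # (1234567)
w3 = {1: 3, 3: 4, 4: 7, 7: 2, 2: 1, 5: 6, 6: 5}   # (13472)(56)

print(in_derangement_set(w1, E, V))   # True
print(in_derangement_set(w2, E, V))   # False
print(in_derangement_set(w3, E, V))   # False
```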
Calculating the frequency of a derangement amounts to counting the possible edge sets that contain at least one edge in each $E_{w,t}$. This is greatly simplified by the following lemma. The proof of the lemma uses the observation following Definition \[def:Canopy\].

\[lem:disjoint edge requirements\] Let $w$ be a derangement of a totally ordered set $V$. (a) For distinct $t \in U(w)$, the sets $E_{w,t}$ are disjoint. (b) If $s$ is minimal in its $w$-cycle then $E_{w,w(s)} \subseteq E_{w,s}$.

We consider $w$ written in standard cycle form. (a) Consider $t, t' \in U(w)$. If $t$ and $t'$ are in different $w$-cycles, then certainly $E_{w,t} \cap E_{w,t'} = \emptyset$. Now suppose that $t$ and $t'$ are in the same cycle of $w$, and that $t < t'$. To have $\lambda_w(t) = \lambda_w(t')$, we must have $t'$ appearing to the left of $t$ when written in standard cycle form. But then the set of elements $\rho_w(t')$ is a string of consecutive elements beginning with $t'$, extending to the right, and stopping before reaching $t$. Thus $\rho_w(t') \cap \rho_w(t) = \emptyset$, so again $E_{w,t} \cap E_{w,t'} = \emptyset$. (b) Let $s$ be minimal in its $w$-cycle. Then $\lambda_w(s) = s$ and $\rho_w(s)$ is the entire cycle containing $s$. Certainly, then, $\rho_w(w(s)) \subseteq \rho_w(s)$. Moreover, $\lambda_w(w(s))$ is the first element less than $w(s)$ which appears to the left of $w(s)$. The element immediately to the left of $w(s)$ is necessarily $s$, and $s$ is the minimal element in the cycle so we must have that $s < w(s)$. Therefore $\lambda_w(w(s)) = s = \lambda_w(s)$, so $E_{w,w(s)} \subseteq E_{w,s}$.

Lemma \[lem:disjoint edge requirements\] shows that to construct a graph $G$ with $w \in {\mathcal{D}}(G)$, one can independently choose edges from the sets $E_{w,t}$ with $t \in U(w)$ and in doing so also end up with edges from the sets $E_{w,s}$ with $s \notin U(w)$. From this it is straightforward to derive the following description of the frequency and rate of a derangement.

\[thm:frequency\] Let $w$ be a derangement of a totally ordered set $V$. Then the frequency of $w$ is $$\begin{aligned} f(w) &=& 2^{\binom{|V|}{2} - \sum_{t \in U(w)} |\rho_w(t)|} \cdot \prod_{t \in U(w)} \left(2^{|\rho_w(t)|} - 1\right)\\ &=& 2^{\binom{|V|}{2}} \cdot \prod_{t \in U(w)} \left(1 - \frac{1}{2^{|\rho_w(t)|}}\right),\end{aligned}$$ and the rate of $w$ is $$r(w) = \prod_{t \in U(w)} \left(1 - \frac{1}{2^{|\rho_w(t)|}}\right).$$

There are $\binom{|V|}{2}$ possible edges in a graph with vertex set $V$. Only $\sum_{t \in U(w)}|\rho_w(t)|$ of these are elements of $$\bigcup_{t \in U(w)} E_{w,t}.$$ Thus the remaining edges can be in the graph or not in the graph, yielding the factor $$2^{\binom{|V|}{2} - \sum_{t \in U(w)} |\rho_w(t)|}.$$ For each $t \in U(w)$, a nonempty subset of the edges in $E_{w,t}$ must be present in the graph, yielding the factor $2^{|\rho_w(t)|} - 1$. The rate of $w$ is obtained by dividing the frequency by the total number of graphs with vertex set $V$.

Consider $w = (13472)(56)$. Then $$f(w) = 2^{13} \left(2^1 - 1 \right)\left(2^3 - 1\right)\left(2^2 -1\right)\left(2^1-1\right)\left(2^1 -1 \right) = 2^{13} \cdot 21$$ and $$r(w) = \frac{21}{2^8}.$$ This means that there are $2^{13} \cdot 21$ ordered graphs $G$ for which $w \in {\mathcal{D}}(G)$, and that of all ordered graphs $G$ on $7$ vertices, $21/2^8$ of them have $w \in {\mathcal{D}}(G)$. The frequency and rate functions are, in a certain sense, increasing with respect to the sizes of the sets $\rho_w(t)$.
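The worked example above can be checked mechanically. The following sketch evaluates $f(w)$ and $r(w)$ from Theorem \[thm:frequency\] and, for a small vertex set, cross-checks the closed formula against a brute-force count over all graphs; the encoding of permutations as dictionaries is ours and purely illustrative.

```python
from itertools import combinations
from math import comb, prod
from fractions import Fraction

def cycle_minima(w):
    seen, minima = set(), set()
    for start in w:
        if start not in seen:
            cyc, x = [], start
            while x not in seen:
                seen.add(x)
                cyc.append(x)
                x = w[x]
            minima.add(min(cyc))
    return minima

def rho_sizes(w):
    """|rho_w(t)| for each t in U(w), i.e. for the non-minimal elements."""
    minima, sizes = cycle_minima(w), []
    for t in w:
        if t in minima:
            continue
        size, x = 1, w[t]
        while x > t:
            size += 1
            x = w[x]
        sizes.append(size)
    return sizes

def frequency(w, n):
    sizes = rho_sizes(w)
    return 2 ** (comb(n, 2) - sum(sizes)) * prod(2 ** s - 1 for s in sizes)

def rate(w, n):
    return Fraction(frequency(w, n), 2 ** comb(n, 2))

def brute_force(w, n):
    """Count graphs on {1..n} whose derangement set contains w."""
    inv = {v: k for k, v in w.items()}
    def member(edges):
        adj = {v: set() for v in range(1, n + 1)}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        for t in range(1, n + 1):
            r, x = {t}, w[t]
            while x > t:
                r.add(x)
                x = w[x]
            l = inv[t]
            while l > t:
                l = inv[l]
            if not adj[l] & r:
                return False
        return True
    pairs = list(combinations(range(1, n + 1), 2))
    return sum(
        member([e for i, e in enumerate(pairs) if mask >> i & 1])
        for mask in range(2 ** len(pairs))
    )

w5 = {1: 3, 3: 4, 4: 1, 2: 5, 5: 2}                # (134)(25) on 5 vertices
assert frequency(w5, 5) == brute_force(w5, 5)      # 192 both ways

w7 = {1: 3, 3: 4, 4: 7, 7: 2, 2: 1, 5: 6, 6: 5}    # (13472)(56)
print(frequency(w7, 7), rate(w7, 7))               # 172032  21/256
```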
To make this precise, we define the following objects. Let ${\mathcal{D}}^k(V)$ be the set of derangements of $V$ with $k$ disjoint cycles. For $w \in {\mathcal{D}}^k(V)$ let $\theta(w) \in ({\mathbb{Z}}_+)^{|V|-k}$ be the $(|V|-k)$-tuple consisting of the numbers $\{|\rho_w(t)|: t \in U(w)\}$, written in non-decreasing order. We order the $(|V|-k)$-tuples $\{\theta(w): w \in {\mathcal{D}}^k(V)\}$ via the cartesian order. Using this as a basis for comparing derangements, we obtain the following monotonicity result as a direct consequence of Theorem \[thm:frequency\]. \[cor:FreqIncr\] Let $V$ be an ordered set and $w_1,w_2 \in {\mathcal{D}}^k(V)$ for some $k$. If $\theta(w_1) \leq \theta(w_2) $ then $f(w_1) \leq f(w_2)$ and $r(w_1) \leq r(w_2)$. Let $w = (13472)(56)$, and let $w' = (13427)(56)$. Then $\theta(w) = (1,1,1,2,3)$ and $\theta(w') = (1,1,1,2,2)$, so $\theta(w) > \theta(w')$. Also, $f(w) = 172032$ and $f(w') = 147456$, while $r(w) = .08203125$ and $r(w') = .0703125$. Extremal examples ================= We now give examples of the frequency and rate functions for some extremal cases. In each of these, the vertex set is $V = \{1, \ldots, n\}$. We begin by examining rare derangements; that is, derangements with minimal frequency. \[ex:fixed number of cycles\] For derangements with a fixed number of cycles, $\theta(w)$ is minimized when every vertex that is not minimal in its cycle is followed by a smaller vertex. That is, suppose that $\rho_w(t) = \{t\}$ for each $t \in U(w)$, meaning that each cycle can be written as a decreasing sequence. Then the frequency of $w$ is $$\frac{2^{\binom{n}{2}}}{2^{|U(w)|}}$$ and the rate of $w$ is $$\frac{1}{2^{|U(w)|}}.$$ If we allow the number of cycles to vary, then this value is minimized if there is only one value of $t$ which is minimal in its cycle, meaning that the standard cycle form of $w$ has exactly one cycle. In other words, this is minimized for $w = (1n \cdots 432)$: $$f((1n \cdots 432)) = \frac{2^{\binom{n}{2}}}{2^{n -1}} \text{\ \ \ and\ \ \ } r((1n \cdots 432)) = \frac{1}{2^{n -1}}.$$ On the other hand, the derangements with the highest rate in this situation are those where the standard cycle form of $w$ has as many cycles as possible, namely $\lfloor n/2 \rfloor$ cycles, in which case the rate of $w$ is $$\label{eqn:maximizing} \frac{1}{2^{\lceil n/2 \rceil}}.$$ Complementary to the previous example, let us now consider persistent derangements; that is, derangements with maximal frequency. \[ex:fixed cycle lengths\] Suppose that the cycle lengths in the standard cycle form of $w$ are $c_1, c_2, \ldots$. If, for each non-minimal element in each cycle, $|\rho_w(x)|$ is maximal, meaning that each cycle can be written as an increasing sequence, then the frequency of $w$ is $$2^{\binom{n}{2}} \cdot \prod_i \prod_{k=1}^{c_i-1} \left(1 - \frac{1}{2^k}\right).$$ and the rate of $w$ is $$\prod_i \prod_{k=1}^{c_i-1} \left(1 - \frac{1}{2^k}\right).$$ This value is maximized if there is only one cycle in the standard form of $w$, and hence $c_1 = n$. In other words, this is maximized for $w = (1234\cdots n)$: $$f((1234\cdots n)) = \prod_{k=1}^{n-1} \left(2^k - 1\right) \text{\ \ \ and\ \ \ } r((1234\cdots n)) = \frac{\prod_{k=1}^{n-1} \left(2^k - 1\right)}{2^{\binom{n}{2}}}.$$ On the other hand, the derangements with the lowest rate in this situation are those where the standard cycle form of $w$ has as many cycles as possible, meaning that all cycles are transpositions except possibly for a single $3$-cycle in the case when $n$ is odd. 
If $n$ is even, then the rate of $w$ is $$\label{eqn:minimizing even} \frac{1}{2^{n/2}},$$ whereas if $n$ is odd, then the rate of $w$ is $$\label{eqn:minimizing odd} \frac{3}{2^{(n+1)/2}}.$$ Notice that varying the number of cycles has the opposite effect in Examples \[ex:fixed number of cycles\] and \[ex:fixed cycle lengths\]. Thus we cannot do much to improve Corollary \[cor:FreqIncr\]. Also, observe that expressions and agree (when $n$ is even), as they ought to do. Expressions and disagree (when $n$ is odd), because the $3$-cycle in the former situation can be written in decreasing order, whereas the $3$-cycle in the latter situation can be written in increasing order. Stated another way, the least frequently occurring derangement is $$(1n\cdots 432)$$ and the most frequently occurring derangement is $$(1234\cdots n).$$ [99]{} K. Ragnarsson and B. E. Tenner, Homotopy type of the boolean complex of a Coxeter system, *Adv. Math.* **222** (2009), 409–430. K. Ragnarsson and B. E. Tenner, Homology of the boolean complex, to appear in *J. Algebraic Combin.* B. E. Tenner, Pattern avoidance and the Bruhat order, *J. Combin. Theory, Ser. A* **114** (2007), 888–905.
--- abstract: 'We study the heat transport due to phonons in nanomechanical structures using a phase space representation of non-equilibrium Green’s functions. This representation accounts for the atomic degrees of freedom making it particularly suited for the description of small (molecular) junctions systems. We [rigorously]{} show that for the steady state limit our formalism correctly recovers the heuristic Landauer-like heat conductance for a quantum coherent molecular system coupled to thermal reservoirs. We find general expressions for the non-stationary heat current due to an external periodic drive. In both cases we discuss the quantum thermodynamic properties of the systems. We apply our formalism to the case of a diatomic molecular junction.' author: - 'Marcone I. Sena-Junior' - 'Leandro R. F. Lima' - 'Caio H. Lewenkopf' bibliography: - 'quantum\_thermal\_transport.bib' title: 'Phononic heat transport in nanomechanical structures: steady-state and pumping' --- Introduction {#sec:introduction} ============ Significant progress has been recently achieved on the understanding of phononic heat transfer at the molecular level [@Dubi2011; @Li2012; @Chen2005]. In addition to the investigation of fundamental aspects of the problem [@Dubi2011; @Dhar2008], several authors have realized that phonons, usually regarded as an energy waste, can be manipulated and controlled to carry and process information. Exploring analogies with electrons and photons, theoretical proposals have been put forward aiming the fabrication of devices such as thermal diodes [@Li2004], thermal transistors [@Li2006; @Joulain2016], and thermal logic gates [@Wang-Li2007], some of them already experimentally verified [@Chang2006; @Narayana2012; @Martinez-Perez2015]. These ideas have given rise to the emerging field of phononics [@Li2012; @Madovan2013]. The presence of an external time-dependent drive, such as an external force or time-varying thermal bath temperature, gives another interesting twist to the problem, making possible to explore non-equilibrium phenomena such as directed heat pumping and cooling [@Galperin2009; @Santandrea2011; @Ren2010; @Li2012; @Arrachea2012; @Arrachea2013; @Li2015; @Beraha2016]. [ Early reports on the measurement of quantized thermal conductance in suspended nanostructures [@Tighe1997; @Schwab2000] attracted attention to the field. More recently, ballistc thermal conductance has been experimentally studied in carbon nanotubes [@Yang2002; @Prasher2009; @Marconnet2013], silicon nanowires [@Bourgeois2007; @Maire2017], as well as molecular and atomic contacts [@Wang2007exp; @Cui2017]. The experimental advances in these studies are remarkable and pose important challenges to the quantum theory of thermal conductance [@Marconnet2013; @Cui2017]. ]{} One of the fundamental tools for the theoretical study of non-equilibrium properties of quantum systems is the non-equilibrium Green’s functions (NEGF) theory [@Rammer1986; @Kamenev2011]. This approach, originally developed for fermionic systems [@Caroli1971; @Meir1992], has been nicely adapted to describe the heat transfer in small junctions systems [@Ozpineci2001; @Segal2003; @Yamamoto2006; @Dhar2008; @Wang2007; @Wang2008; @Wang2014]. Despite its success, the implementation of the NEGF to calculate phonon heat currents driven by a temperature difference between source and drain still has some caveats, like the need to symmetrize the heat current [to obtain the standard Landauer-like transmission formula]{} [@Wang2007; @Wang2008; @Wang2014]. 
The relevance of NEGF for phononics calls for a deeper [and careful]{} analysis of the formalism. The purpose of the paper is twofold. First, we present a rigorous method for the description of quantum thermal transport properties due to phonon or atomic degrees of freedom using non-equilibrium Green’s functions in phase space. We show that our formal developments solve the problems of the previous works [@Wang2007; @Wang2008; @Wang2014] and recover the well-known Landauer-like formula for the stationary heat current in the ballistic regime [@Pendry1983; @Rego1998; @Angelescu1998; @Blencowe1999; @Mingo2003; @Chalopin2013]. Second, we extend the formalism to address systems under the influence of a time-dependent drive. As an example, we derive general expressions for the heat current pumped by an external time-dependent periodic potential for a system coupled to two thermal reservoirs at the same temperature. We [show how to apply our method by analyzing]{} the steady-state heat transport properties of a diatomic molecule coupled to thermal reservoirs by semi-infinite linear harmonic chains. Next, we study the heat current pumped through the system due to a time-dependent driving force and discuss its thermodynamic properties. The paper is organized as follows: In Sec. \[sec:GF\] we introduce the phase space representation of the Green’s functions on which our derivations are built. We begin Sec. \[sec:model\] by presenting the model Hamiltonian addressed in this study. We then use the Green’s function formalism to derive expressions for the thermal current due to a source-drain temperature difference and the heat current pumped by an external periodic drive of the system atomic degrees of freedom. In Sec. \[sec:application\], we apply our results to the simple model of a diatomic molecular junction. Finally, we present our conclusions in Sec. \[sec:conclusion\].

Green’s functions in phase space {#sec:GF}
================================

In this section we [use]{} a phase space representation of non-equilibrium Green’s functions [@Dhar2006; @Dhar2012]. We show that this representation is very convenient for a canonical quantization of the displacements $\vec{u}\equiv (u_{1}, \ldots, u_{n})$ and their canonically conjugate momenta $\vec{p}\equiv (p_{1}, \ldots, p_{n})$ in a $2n$-dimensional phase space. Let us consider a quadratic Hamiltonian expressed in terms of the phase space variables $(\vec{u}, \vec{p})$ representing a system of coupled oscillators. The model Hamiltonian reads $$\label{hamil} H(t)=\frac{1}{2}\,\vec{p}^{\;\text{T}}\cdot\vec{p} + \frac{1}{2}\,\vec{u}^{\;\text{T}}\cdot\hat{K}(t)\cdot\vec{u} \equiv \frac{1}{2}\,\mbox{\boldmath$\zeta$}^{\,\text{T}}\cdot\check{\mathcal{M}}(t)\cdot\mbox{\boldmath$\zeta$},$$ where, for the sake of compactness, we assume that the masses are identical and have unit value. $\hat{K}(t)$ is the force constant matrix that represents the couplings of the oscillators network. The dynamic variable $\mbox{\boldmath$\zeta$}$ and the matrix $\check{\mathcal{M}}$ have the symplectic structure $$\mbox{\boldmath$\zeta$}= \begin{pmatrix} \vec{u}\\ \vec{p} \end{pmatrix} \qquad\text{and}\qquad \check{\mathcal{M}}(t)= \begin{pmatrix} \hat{K}(t) & \hat{0}\\ \hat{0} & \hat{I} \end{pmatrix},$$ where $\hat{I}$ is the identity matrix.
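As a quick numerical illustration of the symplectic structure just introduced, the sketch below assembles $\check{\mathcal{M}}$ and $\check{\mathcal{Q}}$ for a toy two-oscillator force-constant matrix (our choice, purely illustrative) and checks that the phase space form of the Hamiltonian coincides with the usual one and that it yields Hamilton's equations, anticipating the equation of motion stated next.

```python
import numpy as np

# Toy two-oscillator force-constant matrix with unit masses (illustrative choice)
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
n = K.shape[0]
I, O = np.eye(n), np.zeros((n, n))

M_check = np.block([[K, O],
                    [O, I]])                    # symplectic matrix of the Hamiltonian
Q_check = np.block([[O, I],
                    [-I, O]])                   # symplectic unit used in the text below

u = np.array([0.3, -0.1])                       # displacements
p = np.array([0.0, 0.2])                        # conjugate momenta
zeta = np.concatenate([u, p])

# H = p.p/2 + u.K.u/2 coincides with the phase-space form zeta.M.zeta/2
assert np.isclose(0.5 * zeta @ M_check @ zeta, 0.5 * p @ p + 0.5 * u @ K @ u)

# Anticipating the equation of motion: d(zeta)/dt = Q.(dH/dzeta) = -K_check.zeta
K_check = -Q_check @ M_check
assert np.allclose(-K_check @ zeta, np.concatenate([p, -K @ u]))   # Hamilton's equations
```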
The equation of motion for ${\bm\zeta}$ reads $$\label{derivada} \frac{{\mathrm{d}}}{{\mathrm{d}}t}{\mbox{\boldmath$\zeta$}}=\check{\mathcal{Q}}\cdot\frac{\partial}{\partial{\mbox{\boldmath$\zeta$}}}H = -\check{\mathcal{K}}(t)\cdot{\mbox{\boldmath$\zeta$}},$$ where $$\label{eq_4b} \check{\mathcal{Q}}= \begin{pmatrix} \hat{0} & \hat{I}\\ -\hat{I} & \hat{0} \end{pmatrix}\quad\text{and}\quad \check{\mathcal{K}}(t)\equiv-\check{\mathcal{Q}}\cdot\check{\mathcal{M}}(t)= \begin{pmatrix} \hat{0} & -\hat{I}\\ \hat{K}(t) & \hat{0} \end{pmatrix}.$$ We define the phase space correlation functions $\hat{C}(\tau,\tau^{\prime})$ on the Keldysh contour [@Rammer1986] as $$\label{corr_keldy} \check{C}(\tau,\tau^{{\prime}})\equiv\frac{1}{{\imath}\,\hbar}\left\langle\mathbb{T}_{\mathcal{C}}\,{\mbox{\boldmath$\zeta$}}(\tau)\otimes{\mbox{\boldmath$\zeta$}}(\tau^{{\prime}})\right\rangle\equiv \begin{pmatrix} \hat{C}^{(uu)} & \hat{C}^{(up)}\\ \hat{C}^{(pu)} & \hat{C}^{(pp)} \end{pmatrix}(\tau,\tau^{{\prime}}),$$ where ${\imath}\hbar\,\hat{C}^{(\alpha\beta)}\equiv\langle\mathbb{T}_{\mathcal{C}}\,\vec{\alpha}(\tau) \otimes\vec{\beta}(\tau^{{\prime}})\rangle$. The correlation functions $\hat{C}^{(\alpha\beta)}(\tau,\tau^{\prime})$ are a straightforward phase space generalization of standard Green’s functions [@Rammer1986; @Kamenev2011], as we discuss below. As standard [@Rammer1986], the *greater*, *lesser*, *time-ordered*, and *anti-time-ordered* correlations functions read \[set\_5\] $$\begin{aligned} \check{C}^{>}(t,t^{{\prime}}) &= ({\imath}\,\hbar\,)^{-1}\big\langle{\mbox{\boldmath$\zeta$}}(t)\otimes{\mbox{\boldmath$\zeta$}}(t^{{\prime}})\big\rangle,\\ \check{C}^{<}(t,t^{{\prime}}) &= \left[\check{C}^{>}(t,t^{{\prime}})\right]^{\text{T}},\\ \check{C}^{\mathbb{T}}(t,t^{{\prime}}) &= \theta(t-t^{{\prime}})\,\check{C}^{>}(t,t^{{\prime}}) + \theta(t^{{\prime}}-t)\,\check{C}^{<}(t,t^{{\prime}}),\\ \check{C}^{\overline{\mathbb{T}}}(t,t^{{\prime}}) &= \theta(t^{{\prime}}-t)\,\check{C}^{>}(t,t^{{\prime}}) + \theta(t-t^{{\prime}})\,\check{C}^{<}(t,t^{{\prime}}),\end{aligned}$$ where $(\check{C}^{\,\mathbb{T}} + \check{C}^{\,\overline{\mathbb{T}}} - \check{C}^{>} - \check{C}^{<}\,)(t,t^{{\prime}}) = 0$. Alternatively, the correlation functions can be represented by their *retarded* $\check{C}^{r}$, *advanced* $\check{C}^{a}$, and *Keldysh* $\check{C}^{K}$ components, namely \[set\_6\] $$\begin{aligned} \check{C}^{r}(t,t^{{\prime}}) & =\frac{1}{2}(\,\check{C}^{\,\mathbb{T}}+\check{C}^{>}-\check{C}^{<}-\check{C}^{\,\overline{\mathbb{T}}}\,)(t,t^{{\prime}})\nonumber\label{eq_6a}\\ & = \theta(t-t^{{\prime}})\,\left(\,\check{C}^{>}-\check{C}^{<}\,\right)(t,t^{{\prime}}),\\ \check{C}^{a}(t,t^{{\prime}}) & =\frac{1}{2}(\,\check{C}^{\,\mathbb{T}}-\check{C}^{>}+\check{C}^{<}-\check{C}^{\,\overline{\mathbb{T}}}\,)(t,t^{{\prime}})\nonumber\label{eq_6b}\\ & = \theta(t^{{\prime}}-t)\,\left(\,\check{C}^{<}-\check{C}^{>}\,\right)(t,t^{{\prime}}),\\ \check{C}^{K}(t,t^{{\prime}}) & =\frac{1}{2}(\,\check{C}^{\,\mathbb{T}}+\check{C}^{>}+\check{C}^{<}+\check{C}^{\,\overline{\mathbb{T}}}\,)(t,t^{{\prime}})\nonumber\\ & = \left(\,\check{C}^{>}+\check{C}^{<}\,\right)(t,t^{{\prime}}). \label{eq_6c}\end{aligned}$$ Using Eqs.  
and we obtain the equations of motion for $\check{C}^{\gtrless}(t,t^{{\prime}})$ and $\check{C}^{\mathbb{T},\overline{\mathbb{T}}}(t,t^{{\prime}})$, namely \[set\_7\] $$\begin{aligned} &\left(\check{\mathcal{I}}\,\frac{\partial}{\partial t} + \check{\mathcal{K}}(t)\right)\cdot\check{C}^{\gtrless}(t,t^{{\prime}})=0,\label{Cgreater}\\ &\left(\check{\mathcal{I}}\,\frac{\partial}{\partial t} + \check{\mathcal{K}}(t)\right)\cdot\check{C}^{\mathbb{T},\overline{\mathbb{T}}}(t,t^{{\prime}})=\pm\delta(t-t^{{\prime}})\,\check{\mathcal{Q}}.\end{aligned}$$ Similarly, using Eqs.  and , we show that $\check{C}^{\text{K}}(t,t^{{\prime}})$ and $\check{C}^{r,a}(t,t^{{\prime}})$ satisfy \[set\_8\] $$\begin{aligned} &\left(\check{\mathcal{I}}\,\frac{\partial}{\partial t}+\check{\mathcal{K}}(t)\right)\cdot \check{C}^{K}(t,t^{{\prime}})=0,\label{keldysh}\\ &\left(\check{\mathcal{I}}\,\frac{\partial}{\partial t} +\check{\mathcal{K}}(t)\right)\cdot \check{C}^{r,a}(t,t^{{\prime}})=\delta(t-t^{{\prime}})\,\check{\mathcal{Q}}\label{eq_green},\end{aligned}$$ where $\check{\mathcal{I}}$ is the $2n\times 2n$ identity matrix. To obtain Eq. , we use the identity $\check{C}^{>}(t,t)-\check{C}^{<}(t,t)=\check{\mathcal{Q}}$, that follows from the canonical commutations relations. To make the notation compact, we write the correlation function in a block structure as (*Keldysh space*)$\,\otimes\,$(*symplectic space*) in its irreducible representation, namely $$\begin{aligned} \label{structure_KS} \breve{\mathcal{C}}(t,t^{{\prime}})=& \begin{pmatrix} \check{C}^{K} & \check{C}^{r}\\ \check{C}^{a} & \check{0} \end{pmatrix}(t,t^{{\prime}}) \nonumber\\ \equiv &\;\sigma_{1}\otimes\check{\mathcal{G}}(t,t^{{\prime}}) + \text{homogeneous solution},\end{aligned}$$ where $\sigma_1$ is the first Pauli matrix. Note that $\check{\mathcal{G}}(t,t^{{\prime}})$ has also a symplectic structure and satisfies (by inspection) the equation of motion $$\begin{aligned} \left(\check{\mathcal{I}}\,\frac{\partial}{\partial t}+\check{\mathcal{K}}(t)\right)\cdot \check{\mathcal{G}}(t,t^{{\prime}}) = \delta(t-t^{{\prime}})\, \check{\mathcal{Q}}, \label{diff}\end{aligned}$$ with a self-adjoint equation $$\label{diff_self} \check{\mathcal{G}}(t,t^{{\prime}})\cdot\left(\check{\mathcal{I}}\,\overleftarrow{\frac{\partial}{\partial t^{{\prime}}}} + \check{\mathcal{K}}^{\text{T}}(t^{{\prime}})\right) =- \delta(t-t^{{\prime}})\,\check{\mathcal{Q}}.$$ Using Eqs.  and we obtain the following identity $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\check{\mathcal{G}}(t,t) &\equiv\left(\frac{\partial}{\partial t}+ \frac{\partial}{\partial t^{{\prime}}}\right)\check{\mathcal{G}}(t,t^{{\prime}})\Bigg\vert_{t=t^{\prime}}\nonumber\\ &=-\check{\mathcal{K}}(t)\cdot \check{\mathcal{G}}(t,t) - \check{\mathcal{G}}(t,t)\cdot\check{\mathcal{K}}^{\text{T}}(t).\end{aligned}$$ Performing the Keldysh rotation [@Kamenev2011] in Eq. , we obtain a reducible representation of the correlation function in terms of the quantities defined in Eq.  as $$\begin{aligned} \breve{\mathcal{P}}\cdot\breve{\mathcal{C}}(t,t^{{\prime}})\cdot\breve{\mathcal{P}}^{\text{T}}=& \begin{pmatrix} \check{C}^{\mathbb{T}} & \check{C}^{<}\\ \check{C}^{>} & \check{C}^{\overline{\mathbb{T}}} \end{pmatrix}(t,t^{{\prime}})\nonumber\\ \equiv &\;\sigma_{3}\otimes\check{\mathcal{G}}(t,t^{{\prime}}) + \text{homog. solution},\end{aligned}$$ where $\breve{\mathcal{P}}=\frac{1}{\sqrt{2}}\left(I_{2} + {\imath}\, \sigma_{2}\right)\otimes\check{\mathcal{I}}$ and $\sigma_{2}$ is the second matrix of Pauli. 
Let us now introduce the frequency representation of the correlation functions. Assuming time translational invariance, [*i.e.*]{}, that the matrix $\check{\mathcal{K}}$ does not depend on time, one defines $\underline{\check{\mathcal{G}}}[\omega]$ in terms of the Fourier transform $$\label{fourier} \underline{\check{\mathcal{G}}}[\omega] = \int_{-\infty}^{\infty}\!{\mathrm{d}}(t-t^\prime) \,\text{e}^{{\imath}\omega (t-t^{{\prime}})}\,\underline{\check{\mathcal{G}}}(t-t^{\prime}),$$ for $\underline{\check{\mathcal{G}}}(t-t^{\prime})=\check{\mathcal{G}}(t,t^{\prime})$. We study the time-dependent problem in Sec. \[sec\_pumping\]. By inserting Eq.  in \[or in \], we write $$\begin{aligned} \label{12} \underline{\check{\mathcal{G}}}[\omega]=&\left(-{\imath}\,\omega\,\check{\mathcal{I}} + \check{\mathcal{K}}\right)^{-1}\cdot\check{\mathcal{Q}} \nonumber\\ =& -\check{\mathcal{Q}}\cdot\left({\imath}\,\omega\,\check{\mathcal{I}} + \check{\mathcal{K}}^{\text{T}}\right)^{-1},\end{aligned}$$ where $$\label{wang} \underline{\check{\mathcal{G}}}[\omega] \equiv \begin{pmatrix} \check{\mathcal{G}}^{(uu)}[\omega] & \check{\mathcal{G}}^{(up)}[\omega] \\ \check{\mathcal{G}}^{(pu)}[\omega] & \check{\mathcal{G}}^{(pp)}[\omega] \end{pmatrix} = \begin{pmatrix} \hat{G}[\omega] & {\imath}\,\omega\,\hat{G}[\omega]\\ -{\imath}\,\omega\,\hat{G}[\omega] & \hat{G}[\omega]\cdot\hat{K} \end{pmatrix},$$ with $$\label{G_wang} \hat{G}[\omega]=(\omega^{2}\,\hat{I} - \hat{K})^{-1}.$$ Equation  has been obtained in Ref.  by directly taking the Fourier transform of the displacement $\lbrace u_{i}\rbrace$ and the canonically conjugate momentum operators $\lbrace p_{i}\rbrace$. We note that despite being very appealing, this straightforward procedure is [formally]{} problematic, since the canonical commutation relations $\left[u_{i}(t), p_{j}(t)\right]$ can not be consistently defined in the frequency domain [(see Appendix \[commutation\] for more details)]{}. This problem can be circumvented [@Wang2014] by performing the Fourier transform of the phase space correlation functions, as described above. The Green’s function $\hat{G}[\omega]$ can be represented as $$\label{xx} \hat{G}[\omega] =\frac{1}{2}\int_{-\infty}^{\infty}\dfrac{{\mathrm{d}}\bar{\omega}}{2\pi}\,\hat{J}(\bar{\omega}) \,\left(\frac{1}{\omega-\bar{\omega}}-\frac{1}{\omega+\bar{\omega}}\right),$$ where the spectral operator $\hat{J}(\bar{\omega})$ is $$\hat{J}(\omega) = 2\pi \sum_{j}\frac{1}{\omega_{j}}\,\delta(\omega-\omega_{j})\, \vert j\rangle\langle j\vert.$$ Here we have used that $\hat{K}$ is a positive-semidefinite matrix [@Bollobas2013], which satisfies $\hat{K}\vert j\rangle = \omega_{j}^2\,\vert j\rangle$ with $\omega_{j}\geqslant 0$ (recall that $\langle j\vert j^{{\prime}}\rangle=\delta_{j, j^{{\prime}}}$ and $\sum_{j}\vert j\rangle\langle j\vert =\hat{I}$). The general expression does not distinguish the retarded, advanced, ordered, and anti-ordered components of $\hat{G}[\omega]$. A proper representation of the components requires a regularization around the poles $\omega =\pm\bar{\omega}$ of Eq. 
, namely \[green\_14\] $$\begin{aligned} \hat{G}^{r,a}[\omega] & = \frac{1}{2}\int\limits_{-\infty}^{\infty}\dfrac{{\mathrm{d}}\bar{\omega}}{2\pi}\, \hat{J}(\bar{\omega})\left(\frac{1}{\omega-\bar{\omega}\pm {\imath}0^{+}}-\frac{1}{\omega+\bar{\omega}\pm {\imath}0^{+}}\right) \nonumber\\ & =\left[(\omega\pm{\imath}0^{+})^{2}\,\hat{I}-\hat{K}\right]^{-1}, \label{15a}\\ \hat{G}^{\mathbb{T},\overline{\mathbb{T}}}[\omega] & = \frac{1}{2}\int\limits_{-\infty}^{\infty}\dfrac{{\mathrm{d}}\bar{\omega}}{2\pi}\,\hat{J}(\bar{\omega})\left(\frac{1}{\omega-\bar{\omega}\pm {\imath}0^{+}}-\frac{1}{\omega+\bar{\omega}\mp {\imath}0^{+}}\right)\nonumber \\ & = \left[\omega^{2}\,\hat{I} - (\sqrt{\hat{K}}\mp{\imath}0^{+}\,\hat{I})^{2}\right]^{-1}. \label{15b}\end{aligned}$$ The Green’s functions $\hat{G}^{r,a}(t,t^{{\prime}})$ and $\hat{G}^{\mathbb{T},\overline{\mathbb{T}}}(t,t^{{\prime}})$ are obtained by the inverse Fourier transform of Eqs.  and are consistent with Eqs.  and , as they should. Substituting Eqs.  and in the inverse Fourier transform Eq. , we write the retarded component of $\underline{\mathcal{G}}(t-t^{{\prime}})$ as $$\begin{gathered} \underline{\check{\mathcal{G}}}^{r}(t-t^{{\prime}})=\theta(t-t^{\prime})\\ \times\begin{pmatrix} -\frac{\sin\left[\sqrt{\hat{K}}(t-t^{{\prime}})\right]}{\sqrt{\hat{K}}} & \cos\left[\sqrt{\hat{K}}\,(t-t^{{\prime}})\right]\\ -\cos\left[\sqrt{\hat{K}}\,(t-t^{{\prime}})\right] & -\sqrt{\hat{K}}\,\sin\left[\sqrt{\hat{K}}(t-t^{{\prime}})\right] \end{pmatrix}\\ +\;\text{solution of homogeneous equation},\end{gathered}$$ and $\underline{\check{\mathcal{G}}}^{a}(t-t^{{\prime}}) = -\,\underline{\check{\mathcal{G}}}^{r}(t^{{\prime}}-t)$, where $\check{\mathcal{G}}^{r,a}(0^{\pm})=\mathcal{\check{Q}}$. Similarly, the ordered and anti-ordered components read $$\begin{gathered} \underline{\check{\mathcal{G}}}^{\mathbb{T},\overline{\mathbb{T}}}(t-t^{{\prime}})=\\ \begin{pmatrix} \frac{1}{2{\imath}\,\sqrt{\hat{K}}}\text{e}^{\mp{\imath}\sqrt{\hat{K}}\,\vert t- t^{{\prime}}\vert} & \pm\frac{1}{2}\text{sgn}(t-t^{\prime})\,\text{e}^{\mp{\imath}\sqrt{\hat{K}}\,\vert t- t^{{\prime}}\vert}\\ \pm\frac{1}{2}\text{sgn}(t^{\prime}-t)\,\text{e}^{\mp{\imath}\sqrt{\hat{K}}\,\vert t- t^{{\prime}}\vert} & \frac{1}{2{\imath}}\sqrt{\hat{K}}\cdot \text{e}^{\mp{\imath}\sqrt{\hat{K}}\,\vert t- t^{{\prime}}\vert} \end{pmatrix} \\ +\;\text{solution of homogeneous equation},\end{gathered}$$ which satisfy $\check{\mathcal{G}}^{\mathbb{T}}(0^{\pm}) - \check{\mathcal{G}}^{\overline{\mathbb{T}}}(0^{\pm})=\pm\check{\mathcal{Q}}$. The Keldysh component of the correlation function is, in general, more demanding to obtain. As standard, the exception is the equilibrium case. In this limit, the fluctuation-dissipation theorem [@Haug2008] relates the Keldysh component of the correlation function of a bosonic system to its retarded and advanced components as $$\begin{aligned} \label{thermal} \hat{G}^{K}_{\rm eq}[\omega] & =\big(\hat{G}^{r}[\omega]-\hat{G}^{a}[\omega]\big)\left(2 f(\omega)+1\right),\end{aligned}$$ where $f(\omega)=\left(\text{e}^{\beta\hbar\omega} -1\right)^{-1}$ is the Bose-Einstein distribution function. 
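As a quick numerical consistency check of the frequency- and mode-space representations above (a sketch with an arbitrary three-atom force-constant matrix, not a calculation from the text), the retarded Green’s function obtained by direct inversion can be compared with its expansion over the normal modes $\hat{K}\vert j\rangle=\omega_j^2\vert j\rangle$:

```python
# Illustrative check: G^r[w] = [(w + i0^+)^2 I - K]^{-1} from direct inversion
# agrees with its normal-mode (spectral) representation.
import numpy as np

k = 1.0
K = np.array([[2*k, -k, 0.0],
              [-k, 2*k, -k],
              [0.0, -k, 2*k]])        # toy force-constant matrix
eta = 1e-6                             # small positive regulator standing in for 0^+
w = 1.3                                # arbitrary probe frequency

Gr_direct = np.linalg.inv((w + 1j*eta)**2 * np.eye(3) - K)

w2, V = np.linalg.eigh(K)              # K |j> = w_j^2 |j>
Gr_modes = sum(np.outer(V[:, j], V[:, j]) / ((w + 1j*eta)**2 - w2[j])
               for j in range(3))

print(np.max(np.abs(Gr_direct - Gr_modes)))   # ~ numerical round-off
```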
One can also write $$\label{20_0} \hat{G}^{>}_{\rm eq}[\omega] + \sigma\,\hat{G}^{<}_{\rm eq}[\omega] = {\imath}\,\hat{A}[\omega]\,\big( 2 f(\omega)\,\delta_{\sigma,+} + 1\big),$$ where ${\sigma}=\pm 1$ and $${\imath}\,\hat{A}(\omega)=\hat{G}^{r}[{\omega}] - \hat{G}^{a}[{\omega}]=\frac{1}{2{\imath}}\left[\hat{J}({\omega})-\hat{J}(-{\omega})\right].$$ As a result, the equilibrium lesser and greater Green’s functions are given by \[21\] $$\begin{aligned} & \hat{G}^{<}_{\rm eq}[\omega] = {\imath}\,\hat{A}(\omega) \,f(\omega),\\ & \hat{G}^{>}_{\rm eq}[\omega] = {\imath}\,\hat{A}(\omega) \,\left(f(\omega)+1\right).\end{aligned}$$

Model Hamiltonian {#sec:model}
=================

In this section, we describe the heat transport properties of a molecular junction modeled as a central region $C$, representing a nanostructure, coupled to multiple leads connected to reservoirs in thermal equilibrium [@Wang2007; @Wang2008]. We recall that we only consider thermal transport due to vibrational degrees of freedom, which is the dominant mechanism in insulating systems. This partition scheme allows one to write the general Hamiltonian of Eq.  as $$\label{eq:general_Hamiltonian} H(t)=\sum_{\alpha}H_{\alpha}(t)\; + H_{C}(t) + H_{T}(t),$$ where \[set\_eq27\] $$\begin{aligned} &H_{\alpha}(t)= H_{\alpha}^{0} + U_{\alpha\alpha}(t),\\ &H_{C}(t) = H_{C}^{0} + U_{CC}(t),\\ &H_{T}(t) = \sum_{\alpha}\Big[\,U_{C\alpha}(t) + U_{\alpha C}(t)\,\Big], \label{4bb}\end{aligned}$$ correspond to the Hamiltonians of the $\alpha$-lead, the central region, and the tunneling, respectively. We define the decoupled Hamiltonian $H_{a}^{0}$ corresponding to the $a$-partition as $$H_{a}^{0}\equiv\frac{1}{2}\,\vec{p}^{\,\,\text{T}}_{a}\cdot\vec{p}_{a}\,+\,\frac{1}{2}\,\vec{u}^{\,\text{T}}_{a} \cdot K_{a a}^{0}\cdot\vec{u}_{a}$$ and the coupling Hamiltonian $U_{ab}(t)$ between the $a$ and $b$-partitions as $$U_{ab}(t)\equiv\frac{1}{2}\,\vec{u}^{\,\text{T}}_{a}\cdot V_{a b}(t)\cdot\vec{u}_{b}.$$ The force constant matrix in Eq.  is decomposed as $\hat{K}(t)=\hat{K}^{0}+\hat{V}(t)$, where $\hat{K}^{0}$ gives the dynamical matrix of the decoupled partitions $$\begin{aligned} \hat{K}^{0}&=\left[\bigoplus_{\alpha} K_{\alpha}^{0}\right]\oplus K^{0}_{C},\label{24a}\end{aligned}$$ and $\hat{V}(t)$ corresponds to the coupling between different partitions, namely $$\begin{aligned} \hat{V}(t)&=\left[\bigoplus_{\alpha} V_{\alpha\alpha}(t)\right]\oplus V_{CC}(t)\;\; +\;\;\hat{V}_{\text{mixed}}(t).\label{24b}\end{aligned}$$ These definitions allow us to write the tunneling Hamiltonian $H_{T}(t)$ as $$\begin{aligned} H_{T}(t) =\frac{1}{2}\, \vec{u}^{\,\text{T}}\cdot\hat{V}_{\text{mixed}}(t)\cdot\vec{u},\label{4b}\end{aligned}$$ where $\vec{u}\equiv\left[\bigoplus_{\alpha}\vec{u}_{\alpha}\right]\oplus\vec{u}_{C}$. Note that $\hat{V}=\hat{V}^{\text{T}}$ and therefore $V_{\alpha C}=V_{C\alpha}^{\text{T}}$ for all $\alpha$ terminals. The model Hamiltonian in Eq.  includes $V_{aa}$ ($a=\alpha, C$) terms that have not been explicitly accounted for by previous works [@Wang2007; @Wang2008; @Wang2014]. [Neglecting $V_{aa}$ can be problematic for the consistency of NEGF. This can be seen using the adiabatic switch-on picture, the standard implementation of NEGF in the steady-state regime (a discussion of different implementation schemes can be found, for instance, in Ref. [@Odashima2017]).]{} The absorption of $V_{aa}$ into $K^{0}_{\alpha\alpha}$ modifies the free Green’s functions, making their calculation troublesome. 
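To make the decomposition concrete, the sketch below assembles $\hat{K}=\hat{K}^{0}+\hat{V}$ for a toy four-atom chain (one lead atom on each side of a two-atom central region, all springs equal to $k$); the finite geometry and the values are illustrative only. Note that the intra-partition blocks $V_{LL}$, $V_{CC}$, and $V_{RR}$ — the $V_{aa}$ terms just mentioned — are nonzero even in this uniform case.

```python
# Illustrative sketch (not from the paper): K = K0 + V for a toy 4-atom chain,
# partitioned as (1 lead atom | 2 central atoms | 1 lead atom), all springs k.
import numpy as np

k = 1.0
# decoupled partitions K^0: isolated lead atoms and the bare central dimer
K0 = np.block([
    [np.zeros((1, 1)), np.zeros((1, 2)), np.zeros((1, 1))],
    [np.zeros((2, 1)), np.array([[k, -k], [-k, k]]), np.zeros((2, 1))],
    [np.zeros((1, 1)), np.zeros((1, 2)), np.zeros((1, 1))],
])
# intra-partition couplings V_aa on the diagonal, mixed (tunneling) blocks off it
V_diag = np.diag([k, k, k, k])                 # V_LL, V_CC, V_RR
V_mixed = np.zeros((4, 4))
V_mixed[0, 1] = V_mixed[1, 0] = -k             # V_LC / V_CL
V_mixed[2, 3] = V_mixed[3, 2] = -k             # V_CR / V_RC
K = K0 + V_diag + V_mixed

# full chain built directly, for comparison
K_full = np.array([[ k,  -k,   0,  0],
                   [-k, 2*k,  -k,  0],
                   [ 0,  -k, 2*k, -k],
                   [ 0,   0,  -k,  k]])
assert np.allclose(K, K_full)
```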
This issue becomes clear in the formal development below as well as in the applications [discussed]{} in Sec. \[sec:application\]. To discuss the thermodynamic properties of the system, it is convenient to describe the molecular junction as formed by reservoirs coupled to an extended central region, which we refer to as “molecule". Accordingly, we write Eq.  as $$H(t) = \sum_{\alpha}H_{\alpha}(t) + H_{M}(t),$$ where the molecule Hamiltonian reads $$H_{M}(t)\equiv H_{C}(t) + H_{T}(t).$$ The energy of the extended molecule is defined as $E_{M}(t)\equiv\left\langle H_{M}(t)\right\rangle$, namely $$\begin{aligned} \label{EM} &E_{M}(t)=\frac{{\imath}\hbar}{2}\,{\rm Tr} \Big\lbrace C_{CC}^{<(pp)}(t,t) + K_{CC}(t)\cdot C_{CC}^{<(uu)}(t,t) \nonumber\\ &+\sum_{\alpha}\left[V_{C\alpha}(t)\cdot C_{\alpha C}^{<(uu)}(t,t) \, + C_{C\alpha}^{<(uu)}(t,t)\cdot V_{\alpha C}(t)\right]\Big\rbrace,\end{aligned}$$ where the components of the lesser functions are explicitly given by $$\begin{aligned} {\imath}\hbar\,\big[C^{<(pp)}_{a b}(t,t^{\prime})\big]_{k,k^{\prime}} &= \big\langle \left[\vec{p}_{b}(t^{\prime})\right]_{k^{\prime}}\; \left[\vec{p}_{a}(t)\right]_{k} \big\rangle;\\ {\imath}\hbar\,\big[C^{<(uu)}_{a b}(t,t^{\prime})\big]_{n,n^{{\prime}}} &= \big\langle \left[\vec{u}_{b}(t^{\prime})\right]_{n^{{\prime}}}\; \left[\vec{u}_{a}(t)\right]_{n} \big\rangle ; \\ {\imath}\hbar\,\big[C^{<(up)}_{a b}(t,t^{\prime})\big]_{n,k} &= \big\langle \left[\vec{p}_{b}(t^{\prime})\right]_{k}\; \left[\vec{u}_{a}(t)\right]_{n} \big\rangle;\\ {\imath}\hbar\,\big[C^{<(pu)}_{a b}(t,t^{\prime})\big]_{k,n} &= \big\langle \left[\vec{u}_{b}(t^{\prime})\right]_{n}\; \left[\vec{p}_{a}(t)\right]_{k} \big\rangle,\end{aligned}$$ with $a, b = \lbrace C, \alpha\rbrace$, in line with Eq. . One can define the thermal current flowing through an open molecule connected to multiple reservoirs by comparing its energy variation $$\begin{aligned} \label{displacement_I} \frac{{\mathrm{d}}E_{M}(t)}{{\mathrm{d}}t}=& \left\langle\frac{{\mathrm{d}}H_{M}(t)}{{\mathrm{d}}t}\right\rangle \nonumber \\ =& \frac{{\imath}}{\hbar} \left\langle\left[H(t), H_{M}(t)\right]\right\rangle + \left\langle\frac{\partial H_{M}(t)}{\partial t}\right\rangle\end{aligned}$$ with the energy continuity equation, expressed as $$\label{displacement_II} \frac{{\mathrm{d}}E_{M}(t)}{{\mathrm{d}}t} = \sum_{\alpha} J_{\alpha}(t) + \Phi(t),$$ where one associates $J_{\alpha}(t)$ with the thermal current from the $\alpha$-reservoir into the molecule and $\Phi(t)$ with the power developed by the ac sources (or drives) in the molecule. Hence, by inspection one infers that $$\begin{aligned} J_{\alpha}(t) = -\frac{i}{\hbar}\left\langle\left[H(t), H_{\alpha}(t)\right]\right\rangle \label{IL}\end{aligned}$$ and $$\begin{aligned} \label{PHII} & \Phi(t) = \left\langle\frac{\partial H_{M}(t)}{\partial t}\right\rangle. \end{aligned}$$ [Using the equation-of-motion method [@Haug2008]]{}, we write the thermal current from the $\alpha$-reservoir into the molecule in terms of the correlation functions as $$\begin{aligned} J_{\alpha}(t) =&\,\text{Re}\left[\text{Tr}\left\lbrace V_{C\alpha}(t)\cdot{\imath}\hbar\,C^{<(pu)}_{\alpha C}(t,t)\right\rbrace\right],\label{eq_ILL}\end{aligned}$$ while the power developed by the external time-dependent drives reads $$\begin{aligned} \Phi(t) = & \text{Re}\left[\text{Tr}\left\lbrace \frac{1}{2}\,\dot{V}_{CC}(t)\cdot {\imath}\hbar\,C_{CC}^{<(uu)}(t,t)\right.\right.\nonumber\\ &\left.\left. 
\qquad\qquad + \sum_{\alpha}\dot{V}_{C\alpha}(t)\cdot {\imath}\hbar\,C_{\alpha C}^{<(uu)}(t,t) \right\rbrace\right].\label{eq_Phii}\end{aligned}$$ In the following subsections we study separately the steady-state transport ($\dot{V}_{ab}=0$) and the heat transport due to pumping by an external drive ($\dot{V}_{ab}\neq 0$) for $a,b = \lbrace \alpha, C\rbrace$.

Steady-state transport {#Sec_Sst}
----------------------

Let us now calculate the steady-state thermal current flowing from the $\alpha$-lead due to a temperature difference in the reservoirs. Here, we consider the heat current expression for a time-independent coupling matrix $\hat{V}$. Since the Hamiltonian does not explicitly depend on time, it is convenient to work in the frequency representation. The Fourier transform of $C^{<(up)}_{C\alpha}(t,t^{{\prime}})$ is $$\begin{aligned} \label{eq_29} C_{C\alpha}^{<(up)}(t,t^{{\prime}}) &=\int_{-\infty}^{\infty} \frac{{\mathrm{d}}\omega}{2\pi}\,\text{e}^{-{\imath}\omega (t-t^{{\prime}})}\,C_{C\alpha}^{<(up)}[\omega],\end{aligned}$$ where $C_{C\alpha}^{<(up)}[\omega]={\imath}\,\omega\,G^{<}_{C\alpha}[\omega]$. Substituting Eq.  into Eq. , we cast the steady-state heat current as $$\label{40} J_{\alpha}^{(S)}=\int_{-\infty}^{\infty}\,\frac{{\mathrm{d}}\omega}{4\pi}\,\hbar\omega\, {\rm Tr}\!\left\{ V_{C\alpha}\cdot G^{<}_{\alpha C}[\omega] - G^{<}_{C\alpha}[\omega]\cdot V_{\alpha C} \right\} .$$ The system Green’s function $\hat{G}[\omega]=\big(\omega^2\,\hat{I} - \hat{K}\big)^{-1}$ satisfies the Dyson equation $$\begin{aligned} \label{dyson_I} \hat{G}[\omega] & = \hat{g}[\omega]+\hat{g}[\omega]\cdot\hat{V}\cdot\hat{G}[\omega] \nonumber \\ & = \hat{g}[\omega]+\hat{G}[\omega]\cdot\hat{V}\cdot\hat{g}[\omega],\end{aligned}$$ where $\hat{K}=\hat{K}^{0}+\hat{V}$ and $\hat{g}[\omega]=\big(\omega^2\,\hat{I} - \hat{K}^{0}\big)^{-1}$. Note that the free Green’s function $ \hat{g}[\omega]$ is block diagonal in the partitions. From Eq.  we obtain \[43\] $$\begin{aligned} & G_{C\alpha}[{\omega}] =G_{CC}[{\omega}]\cdot V_{C\alpha}\cdot \tilde{g}_{\alpha}[{\omega}],\label{34a}\\ & G_{\alpha C}[{\omega}] =\tilde{g}_{\alpha}[{\omega}]\cdot V_{\alpha C}\cdot G_{CC}[{\omega}],\label{34b}\\ & G_{CC}[{\omega}] = \left(\tilde{g}_{C}[{\omega}]^{-1}-\tilde{\Sigma}[{\omega}]\right)^{-1},\label{34c}\\ & G_{\alpha\beta}[{\omega}] = \tilde{g}_{\alpha}[{\omega}]\cdot V_{\alpha C}\cdot G_{CC}[{\omega}]\cdot V_{C\beta}\cdot\tilde{g}_{\beta}[{\omega}]\nonumber\\ &\qquad\quad\quad + \delta_{\alpha\beta}\;\tilde{g}_{\alpha}[{\omega}],\label{34d}\end{aligned}$$ where, for notational convenience, we introduce an *effective embedding self-energy* $$\label{eq:self-energy_VgV} \tilde{\Sigma}[{\omega}]=\sum_{\alpha}\tilde{\Sigma}_{\alpha}[{\omega}] = \sum_\alpha V_{C\alpha}\cdot\tilde{g}_{\alpha}[{\omega}]\cdot V_{\alpha C},$$ and an *effective free Green’s function* $$\begin{aligned} \label{set_32} &\tilde{g}_{a}[{\omega}]^{-1}=g_{a}[{\omega}]^{-1} - V_{aa}\quad\text{with}\quad a=\lbrace \alpha,C\rbrace,\end{aligned}$$ where $g_{a}[{\omega}]=({\omega}^2 I_{a}- K^{0}_{a})^{-1}$. [ In Sec. \[sec\_application\_ss\] and in Appendix \[sec:freegf\] we discuss the importance of including $V_{aa}$ in the surface Green’s function. ]{} For $a=\alpha$, it corresponds to the Green’s function in thermal equilibrium with the $\alpha$-reservoir at a temperature $T_{\alpha}$. Hence, using Eqs.  
and , we obtain \[set\_33\] $$\begin{aligned} & g^{<}_{\alpha}[\omega] ={\imath}A_{\alpha}(\omega)\,f_{\alpha}({\omega}),\\ & g^{>}_{\alpha}[\omega] = {\imath}A_{\alpha}(\omega)\,\Big(1+f_{\alpha}({\omega})\Big),\\ & g^{r,a}_{\alpha}[\omega] =\left[\left(\omega \pm {\imath}0^{+}\right)^{2}\,I_{\alpha}-K^{0}_{\alpha}\right]^{-1},\label{set_33_c}\end{aligned}$$ where ${\imath}A_{\alpha}(\omega) \equiv g^{r}_{\alpha}[{\omega}]-g^{a}_{\alpha}[{\omega}]$ is the *$\alpha$-lead “free" spectral function* and $f_{\alpha}({\omega}) = \left(\text{e}^{\beta_{{\alpha}}\hbar{\omega}}-1\right)^{-1}$ with $\beta_{\alpha}=1/k_{B}T_{\alpha}$. [ In general, the retarded and advanced surface Green’s functions ${g}_{\alpha}^{r,a}[{\omega}]$ are computed by decimation techniques [@Sancho1985; @Wang2007]. ]{} The lesser components of $G_{\alpha C}$ and $G_{C\alpha}$ are obtained by applying the Langreth rules [@Rammer1986; @Haug2008] to Eq. . By inserting the result in Eq. , we obtain $$\begin{aligned} \label{20} J_{\alpha}^{(S)} = \int_{-\infty}^{\infty} &\frac{{\mathrm{d}}{\omega}}{4\pi}\,\hbar\omega\,\text{Tr} \left\lbrace G^{<}_{C C}[{\omega}]\cdot\big(\tilde{\Sigma}_{\alpha}^{r}[{\omega}]-\tilde{\Sigma}_{\alpha}^{a}[{\omega}]\big)\right. \nonumber \\ &\ \ \ \ \ \ \ \left. -\big(G^{r}_{C C}[{\omega}] - G^{a}_{C C}[{\omega}]\big)\cdot\tilde{\Sigma}_{\alpha}^{<}[{\omega}] \right\rbrace.\end{aligned}$$ The self-energies are given in terms of $$\begin{aligned} & \tilde{g}^{<}_{\alpha}[\omega] ={\imath}\tilde{A}_{\alpha}(\omega)\,f_{\alpha}({\omega}),\label{37a}\\ & \tilde{g}^{>}_{\alpha}[\omega] = {\imath}\, \tilde{A}_{\alpha}(\omega)\,\bigl(1+f_{\alpha}({\omega})\bigr),\label{37b}\\ & \tilde{g}^{r,a}_{\alpha}[\omega] =\left[\left(\omega \pm {\imath}0^{+}\right)^{2}\,I_{\alpha}-K_{\alpha}\right]^{-1},\label{37c}\end{aligned}$$ where $K_{\alpha}=K_{\alpha}^{0}\,+\,V_{\alpha\alpha}$ and ${\imath}\,\tilde{A}_{\alpha}(\omega)=\tilde{g}^{r}_{\alpha}[{\omega}]-\tilde{g}^{a}_{\alpha}[{\omega}]$. Hence, \[set:gamma\] $$\label{width} \tilde{\Sigma}_{\alpha}^{r}[{\omega}]-\tilde{\Sigma}_{\alpha}^{a}[{\omega}] = {\imath}V_{C\alpha}\cdot\tilde{A}_{\alpha}[{\omega}]\cdot V_{\alpha C} \equiv-{\imath}\,\tilde{\Gamma}_{\alpha}[{\omega}],$$ where $\tilde{\Gamma}_{\alpha}[{\omega}]$ is the $\alpha$-contact line width function. Similarly, $$\label{Eq_48b} \tilde{\Sigma}_{\alpha}^{<}[{\omega}] = V_{C\alpha}\cdot\tilde{g}^{<}_{{\alpha}}[{\omega}]\cdot V_{{\alpha}C} = -{\imath}\, f_{\alpha}({\omega})\,\tilde{\Gamma}_{{\alpha}}[{\omega}].$$ By expressing the self-energies in terms of the line width functions, we write the heat current as $$\begin{gathered} \label{51} J_{\alpha}^{(S)} =\int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{4\pi{\imath}}\,\hbar\omega\,\text{Tr}\left\lbrace\tilde{\Gamma}_{\alpha}[\omega]\cdot\Big[G^{<}_{CC}[\omega]\right. \\ \left. -\, f_{\alpha}(\omega)\Big(G^{r}_{CC}[\omega]-G^{a}_{CC}[\omega]\Big)\Big]\right\rbrace.\end{gathered}$$ Applying the Langreth rules to Eq.  and using Eq. , we obtain \[52\] $$\begin{aligned} & G^{<}_{CC}[{\omega}]=-\sum_{\alpha}G^{r}_{CC}[{\omega}]\cdot{\imath}\,\tilde{\Gamma}_{\alpha}[{\omega}]\cdot G^{a}_{CC}[{\omega}]\;f_{\alpha}({\omega}),\\ & G^{r}_{CC}[{\omega}]-G^{a}_{CC}[{\omega}]=-\sum_{\alpha} G^{r}_{CC}[{\omega}]\cdot{\imath}\,\tilde{\Gamma}_{\alpha}[{\omega}]\cdot G^{a}_{CC}[{\omega}],\end{aligned}$$ which are inserted in Eq.  
to finally arrive at the steady-state heat current $$\label{53} J_{\alpha}^{(S)}=\sum_{\beta} \int_{0}^{\infty}\frac{{\mathrm{d}}\omega}{2\pi}\hbar\omega\, \mathcal{T}_{\alpha\beta}(\omega)\, \Big[f_{\alpha}(\omega)-f_{\beta}(\omega)\Big],$$ where $$\label{transmission} \mathcal{T}_{\alpha\beta}(\omega)\equiv \text{Tr}\left\{\tilde{\Gamma}_{\alpha}[\omega]\cdot G^{r}_{CC}[\omega] \cdot\tilde{\Gamma}_{\beta}[\omega]\cdot G^{a}_{CC}[\omega]\right\},$$ thus rigorously obtaining the Landauer heat conductance that was phenomenologically put forward [@Rego1998] and has been adopted by several authors; see, for instance, Refs. . As a consequence, the numerical implementation of the heat current $J_{\alpha}^{(S)}$ given by Eq.  is the same as the one using the scattering matrix [@Mingo2003; @Zhang2007]. [ The explicitly symmetric tunneling Hamiltonian $H_T(t)$, Eq. , leads to an expression for the heat current $J_\alpha(t)$ with terms depending on both $V_{C\alpha}$ and $V_{\alpha C}$. This ensures that $J_\alpha(t)$ accounts for processes corresponding to the heat flow from the central region $C$ to the $\alpha$-lead as well as from $\alpha$ to $C$. Our result differs from the heat current derived by Wang and collaborators [@Wang2007; @Wang2008; @Wang2014]. These authors derive the heat current using the Hamiltonian without explicitly taking into account processes associated with $V_{L C}$ (corresponding to $\alpha=L$). The resulting expression for the heat current depends only on the hybrid Green’s function $G^{<}_{CL}$. Furthermore, the absence of $V_{\alpha C}$ (or $V_{C\alpha}$) in their Hamiltonian implies that the self-energy $\Sigma_{L}=V_{CL}\cdot g_{L}\cdot V_{LC}$ has to be introduced in a somewhat arbitrary manner. Moreover, Refs. [@Wang2007; @Wang2008; @Wang2014] need the [*ad hoc*]{} symmetrization, $J = (J_L + J_L^* - J_R - J_R^*)/4$, to obtain the well-known Caroli formula for the transmission, since the integrand of Eq. (\[40\]) is not purely real in the absence of $V_{\alpha C}$ (or $V_{C\alpha}$). ]{} The transmission coefficient $\mathcal{T}_{\alpha\beta}({\omega})$ is interpreted as the probability for an energy $\hbar\omega$ to be transmitted from reservoir $\alpha$ to reservoir $\beta$ and has the same structure as the Meir-Wingreen formula [@Meir1992], which describes the electronic conductance of fully coherent systems of non-interacting electrons. It is straightforward to verify that $\mathcal{T}_{\alpha\beta}(\omega)=\mathcal{T}_{\beta\alpha}(\omega)$, which implies that in the steady state $J^{(S)}\equiv J_{L}^{(S)} = -J_{R}^{(S)}$. Hence, ${\mathrm{d}}E_{M}/{\mathrm{d}}t=0$ and, as expected, the molecule energy does not change in time.

Pumping transport {#sec_pumping}
-----------------

Let us now study the heat current in nanoscopic systems due to a time-dependent external drive, as motivated in the introduction. As in the stationary case, we employ the NEGF theory, since more standard approaches, like the Kubo-Greenwood one, are only suitable for bulk systems. The analysis of heat currents in time-dependent systems is far more involved for bosonic degrees of freedom than for the electronic ones. In the latter case, the Fermi energy (and the corresponding Fermi velocity) establishes a characteristic time scale for the electronic dynamics. 
In experiments [@Switkes1999] the external driving is slow with respect to the electronic dynamics, which allows one to approach the problem using the adiabatic approximation [@Buttiker1994; @Brouwer1998; @Vavilov2001; @Mucciolo2007; @Hernandez2009]. In the bosonic case there is no internal characteristic time scale, and analytical progress has to rely on the assumption that the driving force is small, so that perturbation theory can be employed. As an example of time-dependent transport, we study the case of a system that is driven periodically in time. We assume that the coupling between regions depends on time as $\hat{V}(t)=\hat{V} + \varepsilon\,\hat{v}(t)$, where $\varepsilon$ is a dimensionless parameter. [ The initial state is the fully connected molecule-leads system in equilibrium. ]{} Defining an auxiliary matrix $\check{\mathcal{V}}(t)$ as $$\begin{aligned} \label{perturb} \check{\mathcal{V}}(t)=\varepsilon\, \begin{pmatrix} \hat{v}(t) & \hat{0}\\ \hat{0} & \hat{0} \end{pmatrix},\end{aligned}$$ we can write $\check{\mathcal{K}}(t) = \check{\mathcal{K}} - \check{\mathcal{Q}}\cdot\check{\mathcal{V}}(t)$ or, equivalently, $\check{\mathcal{M}}(t)=\check{\mathcal{M}}+\check{\mathcal{V}}(t)$. It follows from Eq.  that the Dyson equation reads $$\label{dyson_II} \check{\mathcal{G}}(t,t^{{\prime}})=\underline{\check{\mathcal{G}}}(t-t^{{\prime}}) + \int{\mathrm{d}}\bar{t}\;\underline{\check{\mathcal{G}}}(t-\bar{t})\cdot\check{\mathcal{V}}(\bar{t})\cdot\check{\mathcal{G}}(\bar{t},t^{{\prime}}),$$ where $\underline{\check{\mathcal{G}}}(t-t^{{\prime}})$ denotes the steady-state Green’s function, given by Eqs.  to . We consider $\varepsilon\ll 1$ and treat the problem using perturbation theory. This is an alternative approach to the Floquet analysis used in Refs. . [ We note that the Floquet method is extremely efficient, irrespective of coupling strength, provided the ratio between the bandwidth and the driving frequency is not large, a condition that keeps the size of the Hilbert space computationally manageable. The opposite limit of small $\Omega$ is in general computationally prohibitive for this method. For electronic systems, however, it has been argued that if the characteristic single-particle dwell time $\tau_d$ (evaluated at the Fermi energy) in the scattering region is much smaller than $1/\Omega$, only a few harmonics of the perturbation are coupled. This allows for an effective truncation of the Hilbert space. The dwell time $\tau_d$ depends on the spectral density and on the strength of its coupling to the leads [@Lewenkopf2004]. Since these quantities typically show a strong energy dependence, one has to verify whether $\tau_d\Omega \ll 1$ is indeed fulfilled. In general, the latter condition rules out the application of the Floquet approach for small $\Omega$ to a potentially large number of systems. 
]{} The Green’s function deviation from the steady state, $\delta \check{\mathcal{G}}(t,t^{{\prime}}) \equiv \check{\mathcal{G}}(t,t^{{\prime}}) - \underline{\check{\mathcal{G}}}(t-t^{{\prime}})$, is conveniently represented by $$\begin{aligned} \delta \check{\mathcal{G}}(t,t^{{\prime}}) & = \iint\frac{{\mathrm{d}}\omega\,{\mathrm{d}}\omega^{{\prime}}}{(2\pi)^2}\,\text{e}^{-{\imath}(\omega t - \omega^{{\prime}} t^{{\prime}})}\;\delta\check{\mathcal{G}}[\omega,\omega^{{\prime}}],\end{aligned}$$ where $$\begin{aligned} &\delta\check{\mathcal{G}}[\omega,\omega^{{\prime}}]= \begin{pmatrix} 1 & {\imath}\,\omega^{{\prime}}\\ -{\imath}\,\omega & {\omega}\,{\omega}^{{\prime}} \end{pmatrix}\otimes\sum_{n\geqslant 1}\varepsilon^{n}\,\hat{\Lambda}_{n}[{\omega},{\omega}^{{\prime}}],\end{aligned}$$ and the set $\left\lbrace\hat{\Lambda}_{n}[{\omega},{\omega}^{{\prime}}]\right\rbrace$ is defined by the recurrence relation \[eqs\_recurrs\] $$\label{eq_recurr} \hat{\Lambda}_{n}[{\omega},{\omega}^{{\prime}}]=\hat{G}[{\omega}]\cdot\int\limits_{-\infty}^{\infty}\frac{{\mathrm{d}}\nu}{2\pi}\; \hat{v}[{\omega}-\nu]\cdot\hat{\Lambda}_{n-1}[\nu,\omega^{{\prime}}],$$ with $$\begin{aligned} \label{set_49} \hat{\Lambda}_{1}[\omega,\omega^{{\prime}}]=\hat{G}[\omega]\cdot\hat{v}[{\omega}-{\omega}^{{\prime}}]\cdot\hat{G}[\omega^{{\prime}}],\end{aligned}$$ where [$\hat{G}[\omega]=\big(\omega^2\,\hat{I}-\hat{K}\big)^{-1}$]{} has been discussed in the previous section and $$\label{fourier_v} \hat{v}[{\omega}]=\int_{-\infty}^{\infty}{\mathrm{d}}t\;\hat{v}(t)\;\text{e}^{{\imath}\omega t}.$$ We model the coupling terms as \[set\_55\] $$\begin{aligned} &v_{\alpha C}(t) =\phi_{\alpha}(t)\,V_{\alpha C},\\ &v_{C \alpha}(t) =\phi_{\alpha}(t)\,V_{C\alpha},\\ &v_{\alpha\alpha}(t) =\phi_{\alpha}(t)\,V_{\alpha\alpha},\\ &v_{CC}(t) = \sum_{\alpha}\phi_{\alpha}(t)\,V_{CC}^{(\alpha)}\label{59d},\end{aligned}$$ where $\phi_{\alpha}(t)$ is a dimensionless function that describes the pumping time-dependence of the $\alpha$-lead. For a periodic pumping, [*i.e.*]{}, $\phi_{\alpha}(t+\tau)=\phi_{\alpha}(t)$, the pumping function can be expressed by a Fourier series in harmonic form as $$\label{pumping} \phi_{\alpha}(t)=\sum_{n=1}^{\infty}2 a_{n}^{(\alpha)}\cos(\Omega_{n}\,t+\varphi_{n}^{(\alpha)})\;\;\text{for}\;\;\Omega_{n}=n\,\frac{2\pi}{\tau}.$$ By construction [$\big\langle\phi_{\alpha}(t)\big\rangle_{\tau}=0$]{}, where [$\langle\cdots\rangle_{\tau} \equiv\frac{1}{\tau}\int_{0}^{\tau}{\mathrm{d}}t\,(\ldots)$]{} stands for the time average over a period. We assume that $\vert\phi_{\alpha}(t)\vert_{\text{max}}=1$. Expanding the Dyson equation, Eq. , in a power series in $\varepsilon$, we write the energy $E_{M}(t)$ of the extended molecule as $$\label{energy_perturbation} E_{M}(t) = E_{M}^{(0)} + \varepsilon \,E_{M}^{(1)}(t) + \varepsilon^2 \,E_{M}^{(2)}(t) + \cdots .$$ The explicit expressions for $E_{M}^{(n)}(t)$ are rather lengthy and are given in Appendix \[sec:pertubative\]. For a periodic pumping, we show that $E^{(0)}_{M}$ does not depend on time and $E_{M}^{(n)}(t)=E_{M}^{(n)}(t+\tau)$ for $n=1, 2, \ldots$ (see Appendix \[sec:pertubative\]). We express the variation of the extended molecule energy between $t$ and $t+\Delta t$ in the form of a first law of thermodynamics, namely, $\Delta E_{M}^{(\Delta t)} \equiv \sum_{\alpha}Q_{\alpha}^{(\Delta t)} + W^{(\Delta t)}$. 
Note that $-Q_{\alpha}^{(\Delta t)}$ corresponds to the heat transferred from the molecule to the $\alpha$-reservoir, while $W^{(\Delta t)}$ is the energy transferred to the molecule that does not come from the reservoirs, namely, $$\begin{aligned} \label{set_65} Q_{\alpha}^{(\Delta t)}=\int_{t}^{t+\Delta t}{\mathrm{d}}\bar{t}\;J_{\alpha}(\bar{t})\quad\text{and}\quad W^{(\Delta t)} = \int_{t}^{t+\Delta t}{\mathrm{d}}\bar{t}\;\Phi(\bar{t}),\end{aligned}$$ where $J_{\alpha}(t)$ and $\Phi(t)$ are, respectively, the thermal current flowing from the $\alpha$-reservoir into the molecule and the power developed by the ac sources. For a periodic process, after a cycle of period $\Delta t=\tau$ we find that $\Delta E_{M}^{(\tau)} = 0$, so that $$\label{eqs_Q+W} \sum_{\alpha}Q_{\alpha}^{(\tau)}+ W^{(\tau)}=0,$$ where we define $$\begin{aligned} \label{eqs_Q_W} Q_{\alpha}^{(\tau)} = \tau\,\left\langle J_{\alpha}(t)\right\rangle_{\tau} \qquad\text{and}\qquad W^{(\tau)} = \tau\,\left\langle \Phi(t)\right\rangle_{\tau}. \end{aligned}$$ $\left\langle J_{\alpha}(t)\right\rangle_{\tau}$ and $\left\langle \Phi(t)\right\rangle_{\tau}$ can be evaluated by using a perturbative expansion \[cycles\] $$\begin{aligned} \left\langle J_{\alpha}(t)\right\rangle_{\tau} &= J_{\alpha}^{(S)} + \varepsilon^{2}\,J_{\alpha}^{(P)} + \mathcal{O}(\varepsilon^3),\label{J_cycle}\\ \left\langle \Phi(t)\right\rangle_{\tau} &= \varepsilon^{2}\,\Phi^{(P)}\; + \;\mathcal{O}(\varepsilon^4)\label{Phi_cycle},\end{aligned}$$ where $J_{\alpha}^{(P)}$ and $\Phi^{(P)}$ are discussed in Appendix \[sec:pertubative\] and can be cast as \[set\_69\] $$\begin{aligned} J^{(P)}_{\alpha}=&\sum_{n=1}^{\infty}\sum_{\beta\gamma}a_{n}^{(\beta)}a_{n}^{(\gamma)}\bigg[\cos\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,A_{\beta\gamma}^{\alpha}(n)\nonumber\\ &-\sin\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,B_{\beta\gamma}^{\alpha}(n)\bigg],\\ \Phi^{(P)}=&\sum_{n=1}^{\infty}\sum_{\beta\gamma}a_{n}^{(\beta)}a_{n}^{(\gamma)}\bigg[\cos\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,D_{\beta\gamma}(n)\nonumber\\ &-\sin\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,E_{\beta\gamma}(n)\bigg],\end{aligned}$$ where the quantities $A_{\beta\gamma}^{\alpha}(n)$, $B_{\beta\gamma}^{\alpha}(n)$, $D_{\beta\gamma}(n)$, and $E_{\beta\gamma}(n)$ are given by intricate expressions involving combinations of equilibrium Green’s functions. The latter are explicitly given by Eq. . Note that in Eq. (\[cycles\]) the first-order contributions in $\varepsilon$ vanish. The second-order terms $J_{\alpha}^{(P)}$ and $\Phi^{(P)}$ depend explicitly on the periodic profile $\phi_\alpha(t)$. The perturbative approach we put forward allows us to write the pumping currents and power order by order in terms of products and sums of steady-state Green’s functions, which are represented by square matrices of the order of the number of degrees of freedom of the system. Hence, here the numerical bottleneck for addressing realistic systems is the same as in the steady state, namely, the calculation of the equilibrium Green’s functions as a function of the frequency. Having obtained these objects by any standard method, one needs only to insert the corresponding quantities in the expressions given in App. \[sec:pertubative\]. We note that the non-perturbative regime requires a calculation of the system Green’s functions by directly solving the corresponding differential equations, which is in general a very challenging task. 
Application: Molecular junction {#sec:application}
===============================

We investigate the consequences of our findings using the molecular junction model presented in Sec. \[sec:model\]. We consider a one-dimensional system where a central region with $N$ atoms is attached to two semi-infinite linear chains acting as leads, as depicted in Fig. \[fig:cadeia\_linear\]. For the sake of clarity, we consider the simplest nontrivial case of a diatomic molecule, namely, $N=2$. The force constant between neighboring atoms in the leads is $k$. The force constant between the atoms in the central region is $k_C$, while the left (right) atom connects to the left (right) lead through a coupling $k_L$ ($k_R$). ![(Color online) Sketch of the model system. Balls represent the chain sites while springs represent the coupling potential. The central region, formed by 2 atoms $A$ and $B$ coupled by a spring with force constant $k_C$, is connected to left and right semi-infinite leads through couplings $k_L$ and $k_R$, respectively. The leads have a constant coupling $k$.[]{data-label="fig:cadeia_linear"}](fig1_system.pdf){width="1.00\columnwidth"} In this model, the inter-partition and central-region coupling matrices are \[set\_coupling\] [$$\begin{aligned} V_{LL}& = \begin{pmatrix} k_{L} \end{pmatrix},& V_{LC}& = \begin{pmatrix} -\,k_{L} & 0 \end{pmatrix},& V_{LR}& = \begin{pmatrix} 0 \end{pmatrix}&\nonumber\\ V_{CL}& = \begin{pmatrix} -k_{L} \\ 0 \end{pmatrix},& V_{CC}& = \begin{pmatrix} k_{L} & 0 \\ 0 & k_{R} \end{pmatrix},& V_{CR}& = \begin{pmatrix} 0 \\ -k_{R} \end{pmatrix},&\nonumber\\ V_{RL}& = \begin{pmatrix} 0 \end{pmatrix}, & V_{RC}& = \begin{pmatrix} 0 & -k_{R} \end{pmatrix},& V_{RR}& = \begin{pmatrix} k_{R} \end{pmatrix},\label{vcllc}\end{aligned}$$]{} and [$$\begin{aligned} K_{CC}^{0}=\begin{pmatrix} k_{C} & -k_{C}\\ -k_{C} & k_{C} \end{pmatrix}.\end{aligned}$$]{} Here the matrices $V_{CC}^{(L)}$ and $V_{CC}^{(R)}$ introduced in Eq.  read $$\begin{aligned} &V_{CC}^{(L)}=\begin{pmatrix} k_{L} & 0\\ 0 & 0 \end{pmatrix}, & V_{CC}^{(R)}=\begin{pmatrix} 0 & 0\\ 0 & k_{R} \end{pmatrix}, \end{aligned}$$ and satisfy $V_{CC} = V_{CC}^{(L)} + V_{CC}^{(R)}$. The retarded and advanced components of the modified Green’s functions are $$\tilde{g}_{\alpha}^{r,a}[{\omega}]= \begin{cases} \dfrac{1}{2}\,\dfrac{{\omega}^2-2k_{\alpha}\mp{\imath}\,{\omega}\sqrt{4k-\omega^2}}{(k-k_{\alpha})\,{\omega}^2 + k_{\alpha}^{2}},& \vert\omega\vert\leqslant\sqrt{4k}\\ \dfrac{1}{2}\,\dfrac{{\omega}^2-2k_{\alpha} -\sqrt{{\omega}^{2}\left(\omega^2-4k\right)}}{(k-k_{\alpha})\,{\omega}^2 + k_{\alpha}^{2}},& \vert\omega\vert >\sqrt{4k}, \end{cases}\label{gfree}$$ for $\alpha=L,R$. Note that the property $\tilde{g}_{\alpha}^{r}[-{\omega}]=\tilde{g}_{\alpha}^{a}[{\omega}]$ is satisfied, in agreement with Eq. . The derivation of Eqs.  is presented in App. \[sec:freegf\].

Steady-state {#sec_application_ss}
------------

Equations  and  allow us to calculate the retarded and advanced self-energies $\tilde{\Sigma}_{L(R)}^{r,a}[{\omega}]$ defined in Eq. , the level-width functions $\tilde{\Gamma}_{L(R)}[{\omega}]$ given by Eqs.  and , and the central region Green’s functions $G_{CC}^{r,a}[{\omega}]$. 
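Before discussing the results, we note that all the steady-state ingredients can be assembled in a few lines. The sketch below (Python; a minimal illustration rather than the code used for the figures, with weak-coupling parameter values of the order of those quoted in Fig. \[fig:tklkr\]) implements Eq. (\[gfree\]), builds the embedding self-energies and level-width functions, and evaluates the transmission of Eq. (\[transmission\]) for the two-atom molecule.

```python
# Illustrative sketch: surface Green's functions, embedding self-energies,
# level-width functions and transmission T(w) = Tr{Gamma_L G^r Gamma_R G^a}
# for the diatomic molecule. Parameter values are assumptions for illustration.
import numpy as np

k, kL, kR, kC = 1.0, 0.05, 0.05, 0.4          # force constants (weak coupling)

def g_surface_r(w, ka):
    """Retarded surface Green's function of a semi-infinite chain, Eq. (gfree)."""
    den = 2.0 * ((k - ka) * w**2 + ka**2)
    if abs(w) <= np.sqrt(4*k):
        return (w**2 - 2*ka - 1j*w*np.sqrt(4*k - w**2)) / den
    return (w**2 - 2*ka - np.sqrt(w**2 * (w**2 - 4*k))) / den

V_CL = np.array([[-kL], [0.0]]);  V_LC = V_CL.T
V_CR = np.array([[0.0], [-kR]]);  V_RC = V_CR.T
K_CC = np.array([[kC + kL, -kC], [-kC, kC + kR]])   # K_CC^0 + V_CC

def transmission(w):
    Sig_L = V_CL @ V_LC * g_surface_r(w, kL)        # embedding self-energies
    Sig_R = V_CR @ V_RC * g_surface_r(w, kR)
    Gr = np.linalg.inv(w**2 * np.eye(2) - K_CC - Sig_L - Sig_R)   # Eq. (34c)
    Gam_L = 1j * (Sig_L - Sig_L.conj().T)           # level-width functions
    Gam_R = 1j * (Sig_R - Sig_R.conj().T)
    return np.trace(Gam_L @ Gr @ Gam_R @ Gr.conj().T).real

ws = np.linspace(1e-4, 2.2, 800) * np.sqrt(k)
T = [transmission(w) for w in ws]
print(max(T), transmission(1e-4))   # resonance peaks reach ~1; T(w -> 0) -> 1
```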
The local density of states (LDOS) at the site $j=A,B$ in the central region reads $$\begin{aligned} \label{DOS} \text{DOS}_{j}(\omega) = -\frac{2\omega}{\pi}\,\text{Im}\Big[ G^{r}_{CC}[{\omega}]\Big] _{jj}.\end{aligned}$$ The factor $2\omega$ is present to convert the value coming directly from the imaginary part of $G^r_{CC}[{\omega}]$ into the DOS per unit of $\omega$, ensuring that $\int\text{DOS}(\omega)\,{\mathrm{d}}\omega$ equals the number of propagating channels in the system. For the case of equal force constants we can calculate the LDOS and the transmission analytically, namely $$\begin{aligned} & \text{DOS}_{j}({\omega})=\frac{2}{\pi\sqrt{4k-{\omega}^2}}\;\Theta(4k-{\omega}^2),\quad\forall\,j\\ & \mathcal{T}({\omega})=\Theta(4k-{\omega}^2).\end{aligned}$$ Figure \[fig:TDOS\] shows the DOS at one of the sites in the central region for $k_L=k_R=k_C=k$. Our formalism recovers the standard DOS for a linear chain. The singularity at $\omega=\sqrt{4k}$ agrees with the frequency at which the dispersion relation of a linear chain $\omega = \sqrt{4k}\sin(k_xa/2)$ becomes flat, [*i.e.*]{}, at the edge of the first Brillouin zone. Here $k_x$ is the longitudinal momentum and $a$ is the lattice parameter. Also, the transmission coefficient $\mathcal{T}(\omega)$ corresponds to a perfect transmission inside the frequency band of the leads $\vert\omega\vert<\sqrt{4k}$ and is zero otherwise. ![(Color online) DOS and transmission ${\cal T}$ as functions of the frequency $\omega$ in units of $\sqrt{k}$ for $k_L=k_C=k_R=k$.[]{data-label="fig:TDOS"}](fig2_transmission_DOS.pdf){width="1.00\columnwidth"} In the limit of small temperatures and small temperature differences, namely, $T_{L/R} = T \pm \Delta T/2$ with $\Delta T\ll T$, the steady-state thermal current can be written as $ J^{(S)}_{L,R}= \pm\,\sigma(T)\;\Delta T$, where $\sigma(T)$ corresponds to the thermal conductance defined by [ $$\sigma(T)= \frac{2k_B^2T}{h}\int_{0}^{\frac{\hbar\omega_c}{2k_BT}} {\mathrm{d}}x\,\frac{x^2}{\sinh^{2}x}\, \mathcal{T}\!\left(\frac{2k_BT}{\hbar}\, x\right),$$ where $\omega_{c}\equiv \sqrt{4 k}$. From Eq.  it is possible to verify that $\mathcal{T}(\omega\rightarrow 0)=1$. ]{} The low-temperature limit of $\sigma(T)$ is $$\label{quantum_conductance} \sigma_0 = \frac{\pi^2 k_{B}^{2}}{3\,h}\,T$$ as theoretically predicted [@Pendry1983; @Rego1998] and experimentally observed [@Schwab2000]. [ Thus, at low temperatures the thermal conductance $\sigma(T) \propto T$ ]{} vanishes for $T\to 0$, as required by the third law of thermodynamics. ![(Color online) Transmission ${\cal T}$ as a function of the frequency $\omega$ in units of $\sqrt{k}$ in the weak coupling regime. The values of $k_L$, $k_R$ and $k_C$ are indicated in the picture in units of $k$. The vertical dashed lines are the frequencies given by Eq. (\[classicalomega\]).[]{data-label="fig:tsmallkl"}](fig3_transmission_kL_kC_kR.pdf){width="1.00\columnwidth"} Let us now study situations where the force constants are different. In the weak coupling limit, $k_R,k_L \ll k,k_C$, the central region is nearly disconnected from the outside world, having only one resonant level at $\omega_C=\sqrt{2k_C}$. Thus, the conductance is only expected to be significant in the vicinity of $\omega_C$. Instead, Fig. \[fig:tsmallkl\] shows one peak at $\omega \approx \omega_C$ and two additional strong peaks: one at zero frequency and an intermediate one at $0<\omega<\omega_C$. 
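This three-peak structure can be reproduced directly with the `transmission` routine sketched above (again an illustrative check, reusing those weak-coupling values):

```python
# Illustrative check, reusing transmission(w) and k from the previous sketch
# (kL = kR = 0.05 k, kC = 0.4 k): locate the finite-frequency transmission maxima.
import numpy as np
ws = np.linspace(1e-3, 2.0, 4000) * np.sqrt(k)
T = np.array([transmission(w) for w in ws])
peaks = [ws[i] for i in range(1, len(ws) - 1) if T[i] > T[i - 1] and T[i] > T[i + 1]]
print(np.round(peaks, 3))   # two sharp resonances below the band edge; T -> 1 as w -> 0
```

The scan recovers the zero-frequency peak together with the two finite-frequency resonances seen in Fig. \[fig:tsmallkl\].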
The first peak at $\omega=0$ corresponds to the acoustic mode, which has an infinitely long wavelength, so that the short-ranged “defects" introduced by $k_L, k_R, k_C\neq k$ do not affect the transport across the system. This picture is reinforced by noticing that the zero-frequency peak is robust against changes in the values of $k_L$, $k_C$, and $k_R$ in the weak coupling regime, see Fig. \[fig:tklkr\]. ![(Color online) Transmission ${\cal T}$ as a function of the frequency $\omega$ in units of $\sqrt{k}$ for different values of $k_R$ (indicated in the figure) with $k_C=0.4$ and $k_L=0.05$. All constants are in units of $k$. The maximum transmission occurs when $k_L=k_R$. The vertical dashed lines are the frequencies given by Eq. (\[classicalomega\]).[]{data-label="fig:tklkr"}](fig4_transmission_kL_diff_kR.pdf){width="1.00\columnwidth"} When the diatomic molecule is coupled to the leads, the resonance level at $\omega_{C}$ is shifted and broadened, as described by the self-energy $\tilde{\Sigma}^{r}[{\omega}]$. Hence the peak near $\omega_C$ is very sensitive to variations in $k_L$ and $k_R$. These features are illustrated in Fig. \[fig:tsmallkl\] by a set of transmission curves where we keep $k_C$ constant and increase $k_L=k_R$. On the other hand, the remaining peak at $0<\omega<\omega_C$ depends only on the values of $k_L$ and $k_R$. In the weak coupling regime, a semi-classical picture explains this additional transmission peak. The natural interface frequencies $\omega_\alpha \propto \sqrt{k_\alpha}$, with $\alpha=L,R$, are much smaller than $\omega_{C}$. The large separation in frequencies suggests that the resonance close to $\omega_{C}$ is dominated by the isolated-molecule mode, while the other corresponds to an oscillation of the central region moving as a rigid (frozen) unit. The Green’s functions of such a system give resonances at the frequencies $$\begin{aligned} \omega_{1,2} = \sqrt{k_C + \left(\frac{k_L+k_R}{2}\right) \pm \sqrt{ \left(\frac{k_L-k_R}{2}\right)^2 + k_C^2} } . \label{classicalomega}\end{aligned}$$ For $k_L=k_R$, $\omega_{1} = \sqrt{2k_C+k_L}$ and $\omega_{2} = \sqrt{k_L}$, which are plotted in Fig. \[fig:tsmallkl\] as vertical dotted lines matching the peak positions. For $k_{L}\neq k_{R}$, the symmetry is broken and the transmission at the peaks, except for the one at zero frequency, is no longer perfect. [ We note that our results are qualitatively similar to those in Ref. [@Wang2007], which analyzes the steady-state transport through a benzene ring. There is an important difference though: Taking into account $V_{aa}$ in $\tilde g_\alpha$ guarantees that ${\cal T} (\omega \rightarrow 0) \rightarrow 1$, which is a necessary condition to obtain the quantum of thermal conductance for $T\rightarrow 0$. In contrast, by using $g_\alpha$ as the surface Green’s function, as done in Refs. [@Wang2007; @Wang2008; @Wang2014], one obtains ${\cal T} (\omega \rightarrow 0) \rightarrow 0$. ]{}

Pumping
-------

For simplicity, let us analyze a pumping process between reservoirs at the same temperature. In this case, the steady-state current from the $\alpha$-reservoir is $J_{\alpha}^{(S)}=0$. Hence, $J_\alpha^{(P)}$ gives the leading contribution to the heat flow. We consider the case of pumping functions with a phase difference $\varphi$, namely, $\phi_{L}(t)=\phi_{R}(t-\varphi/\Omega)$, which implies that $\varphi_{n}^{(L)}-\varphi_{n}^{(R)}= n\,\varphi$ and $a_{n}^{(L)}=a_{n}^{(R)}\equiv a_{n}$ for $n\geqslant 1$. According to Eq. 
, we can express the pumped thermal current $J^{(P)}_{\alpha}$ as $$\begin{aligned} J^{(P)}_{\alpha}(\Omega)=&\sum_{n=1}^{\infty}a_{n}^2\bigg[\mathcal{A}_{\text{homo}}^{\alpha}(n\Omega) + \cos(n\varphi)\,\mathcal{A}_{\text{hete}}^{\alpha}(n\Omega) \nonumber\\ &-\sin\left(n\varphi\right)\,\mathcal{B}^{\alpha}(n\Omega)\bigg],\label{JP}\end{aligned}$$ where $\mathcal{A}^{\alpha}_{\text{homo}}(n\Omega)\equiv A^{\alpha}_{LL}(n)+A^{\alpha}_{RR}(n)$, $\mathcal{A}^{\alpha}_{\text{hete}}(n\Omega)\equiv A^{\alpha}_{LR}(n)+A^{\alpha}_{RL}(n)$, and $\mathcal{B}^{\alpha}(n\Omega)\equiv B^{\alpha}_{LR}(n)-B^{\alpha}_{RL}(n)$ for $\alpha=L,R$. For the symmetric coupling case, [*i.e.*]{}, $k_{L}=k_{R}\neq k_{C}$, we can show that $\mathcal{A}^{L}_{\text{homo/hete}}(n\Omega)=\mathcal{A}^{R}_{\text{homo/hete}}(n\Omega)$ and $\mathcal{B}^{L}(n\Omega)=-\mathcal{B}^{R}(n\Omega)$. Similarly, the pumped power reads $$\begin{aligned} \Phi^{(P)}(\Omega)=&\sum_{n=1}^{\infty}a_{n}^2\bigg[\mathcal{D}_{\text{homo}}(n\Omega) + \cos\left(n\varphi\right)\,\mathcal{D}_{\text{hete}}(n\Omega)\nonumber\\ &-\sin\left(n\varphi\right)\,\mathcal{E}(n\Omega)\bigg]\label{Phi},\end{aligned}$$ where $\mathcal{D}_{\text{homo}}(n\Omega)\equiv D_{LL}(n)+D_{RR}(n)$, $\mathcal{D}_{\text{hete}}(n\Omega)\equiv D_{LR}(n)+D_{RL}(n)$, and $\mathcal{E}(n\Omega)\equiv E_{LR}(n)-E_{RL}(n)$, all defined for $0< n\,\Omega<4\sqrt{k}$ and zero otherwise. For further details, see Appendix \[sec:pertubative\]. For a symmetric setup (i.e. $k_{L}=k_{R}\neq k_{C}$), we can show that $\mathcal{E}(n\Omega)\equiv 0$. Note that $\Phi^{(P)}$ satisfies the condition $\Phi^{(P)}>0$, [ as exemplified for the diatomic molecule in Fig. \[figs\_pumping\], which corresponds to a positive rate of work performed on the system. Therefore, for reservoirs at the same temperature, the entropy production per cycle, cast as $(\Delta S)_{\text{cycle}}/\tau=\varepsilon^2\, \dot{\mathcal{S}}^{(P)}+\mathcal{O}(\varepsilon^{4})$, satisfies $$\dot{\mathcal{S}}^{(P)} = \frac{-J_{L}^{(P)}}{T} + \frac{-J_{R}^{(P)}}{T} = \frac{\Phi^{(P)}}{T} > 0,$$ as expected from the second law of thermodynamics. Note that the overall partition scheme, the definitions of the heat currents, and the pumped power are consistent with general thermodynamic properties. ]{} The pumping function is determined by the choice of the parameter set $\lbrace a_{n}\rbrace$ and the phase difference $\varphi$. We study four examples of pumping functions: *single-mode* represented by $a_{n}=\delta_{n,1}/2$; *square* oscillation represented by $a_{n}=2/(n\pi)$ for $n$ odd (and zero otherwise); *triangle* oscillation by $a_{n}=\frac{4}{n^{2}\pi^{2}}(-1)^{\frac{n-1}{2}}$ for $n$ odd (and zero otherwise); *sawtooth* oscillation by $a_{n}=-1/(\pi n)$ for all $n$. The thermal current absorbed by the $\alpha$-reservoir, $-J_{\alpha}^{(P)}>0$, as a function of the pumping frequency $\Omega$ for different pumping profiles and phase differences is shown in Fig. \[figs\_pumping\]. We find a suppression of the thermal current according to the type of pumping, in the following order: *square*, *single-mode*, *triangle*, and *sawtooth*. The main pumping peak occurs in the frequency window $ 2\sqrt{k}<\Omega<3\sqrt{k}$, with a sub-peak within $0<\Omega<\sqrt{k}$, accompanied by a weak suppression in the domain $\sqrt{k} <\Omega <2\sqrt{k}$ and a strong suppression for $\Omega\gtrsim 4\sqrt{k}$. As the phase difference is increased, the suppression in $\sqrt{k} <\Omega <2\sqrt{k}$ is intensified. 
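For reference, the quoted Fourier coefficients indeed generate the intended waveforms. The sketch below reconstructs the four profiles and checks the normalization $\vert\phi_{\alpha}(t)\vert_{\text{max}}=1$ and $\langle\phi_{\alpha}\rangle_{\tau}=0$; a sine phase convention is assumed for the harmonics (an illustrative choice — only the relative phase between the two leads enters Eq. (\[JP\])).

```python
# Illustrative sketch: reconstruct the four pumping profiles from their Fourier
# coefficients a_n and check <phi>_tau = 0 and |phi|_max ~ 1. The truncated series
# for the discontinuous profiles (square, sawtooth) shows the usual Gibbs overshoot.
import numpy as np

def coeffs(profile, nmax=2001):
    a = np.zeros(nmax + 1)
    n = np.arange(1, nmax + 1)
    if profile == "single":
        a[1] = 0.5
    elif profile == "square":
        a[1::2] = 2.0 / (np.pi * n[0::2])                       # odd n only
    elif profile == "triangle":
        odd = n[0::2]
        a[1::2] = 4.0 / (np.pi**2 * odd**2) * (-1.0)**((odd - 1) // 2)
    elif profile == "sawtooth":
        a[1:] = -1.0 / (np.pi * n)
    return a

t = np.linspace(0.0, 1.0, 20000, endpoint=False)                # one period, tau = 1
for profile in ["single", "square", "triangle", "sawtooth"]:
    a = coeffs(profile)
    phi = sum(2 * a[n] * np.sin(2 * np.pi * n * t) for n in range(1, len(a)))
    print(profile, round(np.max(np.abs(phi)), 3), round(float(phi.mean()), 6))
```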
Note that the proposed unperturbed setup is symmetric ($k_{L}=k_{R}\neq k_{C}$). The phase shift $\varphi$ is responsible for the difference between $J_{L}^{(P)}$ and $J_{R}^{(P)}$; see Fig. \[figs\_pumping\]. The effective heat flux between the two reservoirs $\Delta J^{(P)}(\Omega)\equiv J^{(P)}_{R}(\Omega)-J^{(P)}_{L}(\Omega)$ for the symmetric coupling case (*i.e.* $k_{L}=k_{R}\neq k_{C}$) reads $$\label{DJ} \Delta J^{(P)}(\Omega)= \sum_{n=1}^{\infty}2a_{n}^{2}\,\sin(n\varphi)\,\mathcal{B}^{L}(n\Omega),$$ where $\Delta J^{(P)}>0$ corresponds to the heat flux from the *right* reservoir to the *left* reservoir and $\Delta J^{(P)}<0$ to the reverse direction. Note that for $\varphi = 0, \pm 2\pi, \pm 4\pi, \pm 6\pi, \ldots$, we obtain $\Delta J^{(P)} =0$, as expected. For the *single-mode* case we verify that the maximum value of $\vert\Delta J^{(P)}\vert $ occurs for $\varphi = \pm\pi/2, \pm 3\pi/2, \pm 5\pi/2, \ldots$. $\Delta J^{(P)}$ versus the pumping frequency $\Omega$ is represented in Figs. \[figs\_pumping\](g)-(i). The time-dependent drive breaks the system symmetry. Hence, one can engineer configurations of $\Omega$ and $\phi_\alpha(t)$ that direct the pumped heat either to the left or to the right. Equation (\[DJ\]) shows that by replacing $\varphi \rightarrow -\varphi$ the directionality is reversed, namely, $\Delta J^{(P)}(\varphi) = -\Delta J^{(P)}(-\varphi)$. For all considered pumping profiles and phase differences, we find $\Delta J^{(P)}=0$ at $\Omega=\Omega_{1} \approx 1.371\sqrt{k}$ and $\Omega=\Omega_{2}\approx 2.029\sqrt{k}$, see Fig. \[figs\_pumping\]. The largest negative peak of $\Delta J^{(P)}$ occurs between $\Omega_{1}$ and $\Omega_{2}$. Two other positive peaks occur for $\Omega<\Omega_{1}$ and $\Omega>\Omega_{2}$. The external drives contribute to the energy transfer by exciting the original unperturbed propagating energy $\hbar\omega$ to $\hbar(\omega + n\Omega)$, where $n=1,2,3,\ldots$. In our model, the energy transfer between leads is only possible if the injected energy $\hbar\omega$ and the excited energy $\hbar(\omega + n\Omega)$ satisfy the conditions $\vert\omega\vert \le \omega_c$ and $|\omega + n\Omega| \le \omega_c$, respectively, where $\omega_c \equiv 2\sqrt{k}$. Outside this frequency window the lead linewidths vanish and no energy transfer is allowed. Thus, only modes with $n \le 2\omega_c/\Omega$ will contribute to the energy transfer between reservoirs. As $\Omega$ increases, a smaller number of modes contributes and the overall energy transport decreases, as we see in Fig. \[figs\_pumping\]. For $\Omega > 2\omega_c = 4\sqrt{k}$ the external drives do not induce energy transport to the reservoirs, since no positive integer can satisfy $n \le 2\omega_c/\Omega < 1$. For small $\Omega$, many modes $n$ with $n \le 2\omega_c/\Omega$ compete and contribute to the energy transfer. From Eqs.  and , we define the (cooling) efficiency $\kappa$ of the heat pump that operates between two reservoirs *left* ($L$) and *right* ($R$) at equal temperatures, considering a full period, as $$\begin{aligned} \label{eqq_kappa} \kappa \equiv\frac{Q_{R}^{(\tau)}-Q_{L}^{(\tau)}}{|Q_{R}^{(\tau)}|+|Q_{L}^{(\tau)}|} = \frac{J_{R}^{(P)}-J_{L}^{(P)}}{|J_{R}^{(P)}|+|J_{L}^{(P)}|} + \mathcal{O}(\varepsilon).\end{aligned}$$ Here $\vert\kappa \vert$ is the ratio between the net current and the total heat current driven by the external ac source per cycle. Figure \[kappa\] shows that $\kappa$ can be positive or negative depending on $\Omega$ and $\phi_{L,R}(t)$. 
The values $\kappa=\pm 1$ correspond to situations where the two heat currents have opposite signs, independently of their magnitudes. For $\Omega \gtrsim 0.5\sqrt{k}$, the thermal energy is always absorbed by the reservoirs for all the studied pumping profiles (see Fig. \[figs\_pumping\]). Thus, the denominator of Eq. , $\vert Q_{R}^{(\tau)} \vert + \vert Q_{L}^{(\tau)}\vert$, corresponds to the work performed, $W>0$. In contrast, for $\Omega \lesssim 0.5\sqrt{k}$ and for the single-mode and triangle profiles, we find a plateau $\kappa=-1$, indicated in Fig. \[kappa\]. In these cases, the external drive pumps heat from the left reservoir ($-Q_{L}^{(\tau)}<0$) to the right one ($-Q_{R}^{(\tau)}>0$), and $\kappa=-1$ indicates that all the thermal energy extracted from the left reservoir by the external drive is transferred to the right one. Unfortunately, the corresponding heat current is rather small. Outside this regime, we find that in order to optimize the pumped heat $\Delta J^{(P)}$ one should: (i) tune the pumping frequency to $\Omega \approx \sqrt{k}$ or $\Omega \approx 1.75\sqrt{k}$, producing positive ($\Delta J^{(P)}>0$) or negative ($\Delta J^{(P)}<0$) heat transfer, respectively (see Fig. \[figs\_pumping\]); and (ii) tune the phase shift to $\pi/2$ (see Eq. (\[DJ\])). ![ (Color online) Efficiency $\kappa$ of the system as a function of the pumping frequency $\Omega$ (in units of $\sqrt{k}$) using $k_L = k_R = 0.5 k$ and $k_{C}=0.25 k$ for equal temperatures with $k_{B}T_{L}=k_{B}T_{R}=1$ (in units of $\hbar\sqrt{k}$) and for distinct pumping functions. []{data-label="kappa"}](fig6a_efficiency_1pi4.pdf "fig:"){width="0.85\columnwidth"} ![](fig6b_efficiency_1pi2.pdf "fig:"){width="0.85\columnwidth"} ![](fig6c_efficiency_3pi4.pdf "fig:"){width="0.85\columnwidth"} The high-frequency ($\Omega\gg\sqrt{k}$) asymptotic behavior of $\kappa$ (see Fig. \[kappa\]) is trivial, since fewer modes $n$ contribute, as discussed before. We also note a surprisingly similar behavior of the efficiency curves for the *single-mode* and *triangle* pumping profiles. This can be explained by inspecting Eq.  and recalling the rapid decrease of $a_{n}^2 \propto 1/n^4$ with $n$ for the latter case. In other words, the pumped current, Eq. (\[JP\]), for the *triangle* profile is dominated by the first mode $n=1$, resulting in a frequency dependence similar to the *single-mode* profile.

Conclusions {#sec:conclusion}
===========

In this paper we have presented a rigorous description of quantum thermal transport properties due to phonons in molecular and nanomechanical systems using the non-equilibrium Green’s function theory. We approached the problem using a phase-space representation of the quantum correlation functions on the Keldysh contour, a convenient generalization of the standard Green’s function technique [@Rammer1986; @Kamenev2011]. 
We have shown that in the stationary regime our approach recovers a Landauer-like transmission formula, as expected. Our derivation resolves some inconsistencies of previous theoretical works based on NEGF [@Wang2007; @Wang2008; @Wang2014]. For instance, the use of phase-space correlation functions avoids the need to take the Fourier transform of the $\vec{u}$ and canonically conjugate $\vec{p}$ operators, which is troublesome for the commutation relations $[\vec{u},\vec{p}]$. The partition scheme we put forward in Sec. \[sec:model\] avoids conceptual difficulties with the adiabatic switch-on picture on which the formalism is based. [ Finally, starting with a symmetrized Hamiltonian, our formalism avoids the necessity of imposing the *ad hoc* symmetrization $J=\left(J_{L}+J^{\ast}_{L}-J_{R}-J^{\ast}_{R}\right)/4$ used in Refs. [@Wang2007; @Wang2008; @Wang2014]. ]{} [We]{} extend the formalism to study the heat transport in systems subjected to a time-dependent external drive, which opens the possibility of addressing situations of interest for applications in phononics. In contrast to the electronic case, where the Fermi velocity and the system size set a characteristic time scale for the dynamics, the absence of such a time scale in bosonic systems leads us to develop a [new]{} perturbation theory [scheme]{}, assuming that the external drive is weak. We apply our results to a model of a diatomic molecule coupled to semi-infinite linear chains in equilibrium with thermal reservoirs. The simplicity of the model allows for an amenable computation and for an understanding of its main physical features using simple analytical considerations. This gives us confidence in the method, and [we expect it to be used]{} to treat more realistic systems. This work is supported by the Brazilian funding agencies CAPES, CNPq, FAPERJ and FAPEAL. The authors thank the hospitality of the International Institute of Physics (IIP) in Natal (Brazil), where this work was concluded.

Canonical commutation relations and Fourier transform in frequency space {#commutation}
========================================================================

The canonical quantization of a classical Hamiltonian expressed in terms of the independent displacements $\lbrace u_{i}\rbrace$ and the canonically conjugate momenta $\lbrace p_{j}\rbrace$ renders the commutation relations $\left[u_i(t), p_{j}(t)\right]={\imath}\hbar\,\delta_{i,j}$ and $\left[u_i(t), u_{j}(t)\right]=0=\left[p_i(t), p_{j}(t)\right]$. The standard approach [@Wang2007] is not consistent with the above relations, as we show below. Let us consider the Fourier transform of $u_i(t)$ and $p_{i}(t)\equiv \dot{u}_i(t)$ as [@Wang2007; @Wang2008; @Wang2014] $$\begin{aligned} u_{i} (t) &= \int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,e^{-{\imath}{\omega}t}\,u_{i}[{\omega}], \\ p_i (t) &= \dot{u}_{i}(t) = \int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,e^{-{\imath}{\omega}t}\,\left(-{\imath}\,{\omega}\,u_{i}[{\omega}]\right), \label{transformation} \end{aligned}$$ where the condition $(u_{i}[-{\omega}])^{\dagger}=u_{i}[{\omega}]$ must be satisfied as a result of $\left(u_{i}(t)\right)^{\dagger}=u_{i}(t)$ (and reciprocally $\left(p_{i}(t)\right)^{\dagger}=p_{i}(t)$). 
Hence, the canonical commutation relations $\left[u_i(t), p_{j}(t)\right]={\imath}\hbar\,\delta_{i,j}$ and $\left[u_i(t), u_{j}(t)\right]=0$ can be written as $$\begin{aligned} &\int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,\int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}^{\prime}}{2\pi}\,e^{-{\imath}({\omega}+ {\omega}^\prime) t} (-{\imath}\,{\omega}^{\prime})\,\Big[u_{i}[{\omega}], \,u_j[{\omega}^{\prime}]\Big] = \nonumber\\ & \hspace{6cm}={\imath}\hbar\,\delta_{ij} \label{eq2a}\\ &\int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}^{\prime}}{2\pi}\,e^{-{\imath}({\omega}+ {\omega}^\prime) t} \Big[u_{i}[{\omega}], u_j[{\omega}^{\prime}]\Big] = 0 \label{eq2b} \end{aligned}$$ Note that it is not possible to obtain a consistent result for $\left[u_{i}[{\omega}], u_j[{\omega}^{\prime}]\right]$ satisfying Eqs. (\[eq2a\]) and (\[eq2b\]) simultaneously. This results from the fact that the canonical commutation relations involve operators at equal times, which is not compatible with the transformation (\[transformation\]). Thus, the frequency Fourier transform (\[transformation\]) is not a canonical transformation. In our construction we obtain the equations of motion for the NEGFs, and the Fourier transform to frequency space is performed on the arguments of the Green’s functions rather than on the displacement and momentum operators, as is done in the standard approach; potential problems with the commutation relations are thereby circumvented.

Surface Green’s functions for semi-infinite lattices {#sec:freegf}
====================================================

In this Appendix we present a novel direct analytical calculation of $\tilde{g}_{\alpha}^{r,a}[{\omega}]$ for the non-ideal coupling case, namely $k_\alpha\neq k$. (The results of Refs. [@Wang2007; @Wang2008] are recovered by taking $k_{\alpha}=k$.) Next, we discuss the importance of the term $V_{aa}$, which gives rise to the difference between ${g}_{\alpha}^{r,a}[{\omega}]$ and $\tilde{g}_{\alpha}^{r,a}[{\omega}]$.

According to and , we write $$\label{hessian-ring} \tilde{g}_{\alpha}^{r,a}[{\omega}]= \langle e_{1}\vert\left(\begin{array}{cccccc} {\omega}^{2}_{\pm}-k_{\alpha}-k & \;\; k & \\ \cline{2-4} k &{\multicolumn{1}{|c}}{} & & \\ &{\multicolumn{1}{|c}}{} & \mathcal{D}_{n}^{\pm} & \\ &{\multicolumn{1}{|c}}{} & & \\ \end{array}\right)^{-1}\vert e_{1}\rangle ,$$ where $\vert e_{1}\rangle = (1, 0, \cdots, 0)^{\text{T}}$ represents the surface site and $$\mathcal{D}_{n}^{\pm}=\left(\begin{array}{ccccc} {\omega}_{\pm}^2-2k & k & & &\\ k & {\omega}_{\pm}^2-2k & k & &\\ & k & {\omega}_{\pm}^2-2k & k &\\ & \qquad\ddots & \qquad\ddots & \qquad\ddots & \end{array}\right)_{n\times n},$$ with ${\omega}_{\pm}\equiv{\omega}\pm{\imath}\,0^{+}$ and $n\to\infty$. Applying the method of cofactors, we write $$\label{16} \tilde{g}_{\alpha}^{r,a}[{\omega}]=\left(\,k-k_{\alpha} - \lim_{n\to\infty}\frac{d^{\pm}_{n+1}}{d^{\pm}_{n}}\,\right)^{-1},$$ where $d_{n}^{\pm}\equiv (-1)^{n}\det\mathcal{D}_{n}^{\pm}$. Laplace expansion of the determinant gives the following recurrence relation $$\label{17a} d^{\pm}_{n+1}+ ({\omega}^{2}_{\pm}-2k)\, d^{\pm}_{n} + k^2\,d^{\pm}_{n-1} =0.$$ The discriminant $\Delta = {\omega}_{\pm}^2\,\left({\omega}_{\pm}^2 -4k\right)$ of the associated characteristic equation has nontrivial roots at $\vert{\omega}\vert =\sqrt{4k}$. Hence, we split the solution of the recurrence relation into two frequency domains: (*i*) $\vert{\omega}\vert\leqslant\sqrt{4k}\,$ and (*ii*) $\vert{\omega}\vert > \sqrt{4k}$.
(*i*) For $\vert{\omega}\vert\leqslant\sqrt{4k}$ we introduce the parametrization ${\omega}_{\pm}\equiv{\omega}\pm{\imath}\,0^{+}=\sqrt{4k}\,\sin\left(\theta_{\pm}/2\right)$ with $\theta_{\pm}=\theta \pm {\imath}\eta$ and $\eta=0^{+}$ for $\theta\in [-\pi,\pi]$. Substituting the latter in , we find $$d_{n}^{\pm}=k^{n}\,\frac{\sin[(n+1)\,\theta_{\pm}]}{\sin\theta_{\pm}}.$$ Since $\tan\left[m(\theta\pm{\imath}\,\eta)\right]\sim\pm{\imath}$ for $m\gg 1$ and $\eta>0$, $$\label{A6} \lim_{\eta=0^{+}}\lim_{n\to\infty}\frac{d^{\pm}_{n+1}}{d^{\pm}_{n}}= k \,\text{e}^{\mp{\imath}\,\theta},$$ which, with the help of the identities $2k\cos\theta=2k-{\omega}^2$ and $2k\sin\theta={\omega}\sqrt{4k-{\omega}^2}$, leads to $$\label{A7} \tilde{g}^{r,a}_{\alpha}[{\omega}]=\frac{1}{2}\,\frac{{\omega}^2-2k_{\alpha}\mp{\imath}\,{\omega}\sqrt{4k-\omega^2}}{(k-k_{\alpha})\,{\omega}^2 + k_{\alpha}^{2}}.$$ Note that Eq. (\[A7\]) satisfies the property $\tilde{g}_{\alpha}^{r}[-{\omega}]=\tilde{g}_{\alpha}^{a}[{\omega}]$, in line with .

(*ii*) For $\vert{\omega}\vert>\sqrt{4k}$, we parametrize ${\omega}_{\pm}\equiv{\omega}\pm{\imath}\,0^{+}=\sqrt{4k}\,\cosh\left(\theta_{\pm}/2\right)\,\text{sgn}(\theta)$ with $\theta_{\pm}=\theta\pm{\imath}\eta$ and $\eta=0^{+}$ for $\theta\in\mathbb{R}^{\ast}$, where $\text{sgn}(\theta)$ is the sign function of $\theta$. Substituting this parametrization, we find $$d_{n}^{\pm}=(-k)^{n}\,\frac{\sinh[(n+1)\,\theta_{\pm}]}{\sinh\theta_{\pm}}.$$ Since $\coth\left[m(\theta\pm{\imath}\,\eta)\right]\sim\text{sgn}(\theta)$ for $m\gg 1$ and $\eta>0$, $$\label{A6_1} \lim_{\eta=0^{+}}\lim_{n\to\infty}\frac{d^{\pm}_{n+1}}{d^{\pm}_{n}} =- k \,\text{e}^{\vert\theta\vert}.$$ Using $2k\,\cosh\theta = {\omega}^2 - 2k$ and $2k\,\text{sgn}(\theta)\,\sinh\theta=\sqrt{{\omega}^{2}({\omega}^2-4k)}$, we obtain $$\label{A11} \tilde{g}^{r,a}_{\alpha}[{\omega}]=\frac{1}{2}\,\frac{{\omega}^2-2k_{\alpha} -\sqrt{{\omega}^{2}\left(\omega^2-4k\right)}}{(k-k_{\alpha})\,{\omega}^2 + k_{\alpha}^{2}}.$$ Note that $\tilde{g}^{r,a}_{\alpha}[{\omega}]\sim 1/{\omega}^{2}\to 0$ for $\vert{\omega}\vert\gg\sqrt{4 k}$, which guarantees convergence of the integrations.

We can write $\tilde{g}^{r,a}_{\alpha}[{\omega}]$ in a convenient form as $$\label{eq_B11} \tilde{g}^{r,a}_{\alpha}[{\omega}]=\frac{1}{2}\left(\tilde{\mu}_{\alpha}[{\omega}]\mp{\imath}\,\tilde{\gamma}_{\alpha}[{\omega}]\right),$$ where the real auxiliary functions $\tilde{\gamma}_{\alpha}[{\omega}]$ and $\tilde{\mu}_{\alpha}[{\omega}]$ are $$\begin{aligned} \tilde{\gamma}_{\alpha}[{\omega}] &= \frac{{\omega}\sqrt{4k-{\omega}^2}}{(k-k_{\alpha})\,{\omega}^{2}+k_{\alpha}^{2}}\,\Theta(4k-{\omega}^{2}),\label{gamma_alpha}\\ \tilde{\mu}_{\alpha}[{\omega}] &= \frac{{\omega}^2-2k_{\alpha}-\sqrt{{\omega}^2({\omega}^2-4k)}\,\Theta({\omega}^2-4k)}{(k-k_{\alpha})\,{\omega}^{2}+k_{\alpha}^{2}}\label{mu_alpha}.\end{aligned}$$ \[eq\_B12\] It is straightforward to verify that $\tilde{g}_{\alpha}^{r}[{\omega}]$ satisfies the Kramers-Kronig relations, as it should [@Tuovinen2016; @Tuovinen2016PhD]. The $\alpha$-contact line width function, $\tilde{\Gamma}_{\alpha}[{\omega}]$, Eq. , becomes $$\label{width_app} \tilde{\Gamma}_{\alpha}[{\omega}] = V_{C\alpha}\cdot\tilde{\gamma}_{\alpha}[{\omega}]\cdot V_{\alpha C}.$$ Let us now analyze the role of the term $V_{aa}$ in the transmission $\mathcal{T}({\omega}\rightarrow 0)$, given by Eq. (\[transmission\]). We consider a system whose central region is composed of a dimer, as shown in Fig. \[fig:cadeia\_linear\]. According to Eqs.
(\[eq:self-energy\_VgV\]), (\[set\_coupling\]) and (\[A7\]), the low-frequency limits of $\tilde{\Gamma}_{\alpha}[{\omega}]$ and $\tilde{\Sigma}^{r,a}[{\omega}]$ are $$\begin{aligned} \tilde{\Gamma}_{\alpha}[{\omega}] &\approx 2\,\sqrt{k}\,{\omega}\,P_{\alpha},\label{Gamma}\\ \tilde{\Sigma}^{r,a}[{\omega}] &\approx -V_{CC} \mp{\imath}\sqrt{k}\,{\omega}\,(P_{L}+P_{R}),\label{embed}\end{aligned}$$ where $P_{L}=\begin{pmatrix} 1 \; &\; 0\\ 0 \; &\; 0 \end{pmatrix}$, $P_{R}=\begin{pmatrix} 0 \; &\; 0\\ 0 \; &\; 1 \end{pmatrix}$, $V_{CC} = \sum_{\alpha}k_{\alpha}\,P_{\alpha}$ and $\alpha=L,R$. The central region Green’s function is $$\begin{aligned} G^{r,a}_{CC}[\omega]=\left( \omega_{\pm} ^2\,\text{I}_{2} - K_{CC}^0 - V_{CC} - \tilde{\Sigma}^{r,a}[{\omega}] \right)^{-1}, \label{gcentral}\end{aligned}$$ where $I_2$ is the $2\times 2$ identity matrix. Using Eq.  in Eq. (\[gcentral\]), we obtain $G^{r,a}_{CC}[{\omega}]=[-K_{CC}^{0}\,\pm\,{\imath}\,\sqrt{k}\,{\omega}\,(P_{L}+P_{R}) + \mathcal{O}({\omega}^2)]^{-1}$, where the spring-constant matrix of the decoupled central region, $K_{CC}^{0}$, is singular and gives rise to the expansion $$\begin{aligned} \label{GGr} G_{CC}^{r,a}[{\omega}] = \frac{\mp{\imath}}{2\sqrt{k}\,{\omega}} \begin{pmatrix} 1 \;\, & \,\; 1\\ 1 \;\, & \,\; 1 \end{pmatrix} + \mathcal{O}({\omega}^{0}).\end{aligned}$$ Substituting Eqs.  and  into Eq. , we obtain $\mathcal{T}({\omega})=1+\mathcal{O}({\omega})$, that is, $\mathcal{T}(\omega\rightarrow 0)=1$. In summary, the term $V_{aa}$ in Eq. (\[set\_coupling\]) leads to $\text{Re}\lbrace\tilde{\Sigma}^{r,a} [0]\rbrace=-V_{CC}$. The latter cancels out the term $-V_{CC}$ in Eq. (\[gcentral\]), leading to Eq. (\[GGr\]), which results in unit transmission for $\omega\rightarrow 0$. We conclude that, for the general non-ideal coupling case, the use of $\tilde{g}^{r,a}_{\alpha}[{\omega}]$ is key to obtaining the correct low-frequency behavior of the transmission.

Perturbative weak pumping regime {#sec:pertubative}
=================================

In this Appendix, we derive the perturbation expansion for the energy $E_{M}(t)$ of the extended molecule and analyze the periodic behavior of the pump-induced heat transport. Next, we obtain the perturbation expansion in $\varepsilon$ for the thermal current $J_{\alpha}(t)$ and the power $\Phi(t)$.

Energy $E_{M}(t)$
-----------------

We expand the Dyson equation, Eq.
, in a power series in $\check{\mathcal{V}}(t)$, to obtain (after a lengthy but straightforward calculation) the energy $E_{M}(t)$ as $$\begin{aligned} E_{M}^{(0)}=&\,\frac{{\imath}\hbar}{2}\,\int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,\text{Tr}\Big\lbrace\left(G^{<}[{\omega}]\cdot\underline{K}\right)_{CC} + \underline{K}_{CC}\cdot G^{<}_{CC}[{\omega}]+ \sum_{\alpha}\left(V_{C\alpha}\cdot G_{\alpha C}^{<}[{\omega}] + G_{C\alpha}^{<}[{\omega}]\cdot V_{\alpha C}\right)\Big\rbrace,\label{EM_0}\\ E_{M}^{(1)}(t)=&\,\frac{{\imath}\hbar}{2}\sum_{\alpha}\phi_{\alpha}(t)\int_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,\text{Tr}\Big\lbrace V_{CC}^{(\alpha)}\cdot G_{CC}^{<}[{\omega}]+ V_{C\alpha}\cdot G_{\alpha C}^{<}[{\omega}] + G_{C\alpha}^{<}[{\omega}]\cdot V_{\alpha C}\Big\rbrace +\frac{{\imath}\hbar}{2}\sum_{\beta}\iint\frac{{\mathrm{d}}{\omega}\,{\mathrm{d}}{\omega}^{{\prime}}}{(2\pi)^2}\,\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\nonumber\\ &\times\,\phi_{\beta}[{\omega}-{\omega}^{{\prime}}]\,\text{Tr}\Big\lbrace \left(I_{C}\,{\omega}{\omega}^{{\prime}}+\underline{K}_{CC}\right)\cdot\Xi_{CC,\beta}^{<}[{\omega},{\omega}^{{\prime}}] +\sum_{\alpha}\left(V_{C\alpha}\cdot\Xi_{\alpha C,\beta}^{<}[{\omega},{\omega}^{{\prime}}]+ \Xi_{C\alpha,\beta}^{<}[{\omega},{\omega}^{{\prime}}]\cdot V_{\alpha C} \right)\Big\rbrace,\label{EM_1}\end{aligned}$$ $$\begin{aligned} E_{M}^{(2)}(t) =&\,\frac{{\imath}\hbar}{2}\,\sum_{\alpha,\beta}\phi_{\alpha}(t)\,\iint\frac{{\mathrm{d}}{\omega}\,{\mathrm{d}}{\omega}^{{\prime}}}{(2\pi)^{2}}\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\,\phi_{\beta}[{\omega}-{\omega}^{{\prime}}]\,\text{Tr}\Big\lbrace V_{C\alpha}^{(\alpha)}\cdot\Xi_{\alpha C,\beta}^{<}[{\omega},{\omega}^{{\prime}}] + \Xi_{C\alpha,\beta}^{<}[{\omega},{\omega}^{{\prime}}]\cdot V_{\alpha C} \nonumber\\ &+ V_{CC}^{(\alpha)}\cdot\Xi_{CC,\beta}^{<}[{\omega},{\omega}^{{\prime}}] \Big\rbrace\; +\;\frac{{\imath}\hbar}{2}\sum_{\beta,\gamma}\iiint\frac{{\mathrm{d}}{\omega}\,{\mathrm{d}}\nu\,{\mathrm{d}}{\omega}^{{\prime}}}{(2\pi)^3}\,\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\,\phi_{\beta}[{\omega}-\nu]\,\phi_{\gamma}[\nu-{\omega}^{{\prime}}]\,\nonumber\\ &\times \text{Tr}\Big\lbrace{\left({\omega}\,{\omega}^{{\prime}} I_{C} + \underline{K}_{CC}\right)}\cdot\Xi_{CC,\beta\gamma}^{<}[{\omega},\nu,{\omega}^{{\prime}}]\;+\;\sum_{\alpha}\left(V_{C\alpha}\cdot\Xi^{<}_{\alpha C,\beta\gamma}[{\omega},{\omega}^{{\prime}}] + \Xi_{C\alpha,\beta\gamma}^{<}[{\omega},{\omega}^{{\prime}}]\cdot V_{\alpha C}\right) \Big\rbrace,\label{EM_2}\end{aligned}$$ where $\underline{\hat{K}}\equiv \hat{K}^{0} + \hat{V}$ and $\big(\hat{G}[{\omega}]\cdot\underline{\hat{K}}\big)_{CC}=\sum_{\alpha}\big( G_{C\alpha}^{<}[{\omega}]\cdot V_{\alpha C} + G_{CC}^{<}[{\omega}]\cdot V_{CC}^{(\alpha)}\big)$. Here, $\phi_{\alpha}[{\omega}]$ is the Fourier’s transform of the pumping function, Eqs.  
and , given by $$\begin{aligned} \label{fourier_phi} \phi_{\alpha}[{\omega}]=& \int_{-\infty}^{\infty}{\mathrm{d}}t\,\text{e}^{{\imath}{\omega}t}\,\phi_{\alpha}(t) \nonumber \\ = & \sum_{n=1}^{\infty}\sum_{\sigma=\pm 1} a_{n}^{(\alpha)} 2\pi\,\delta(\omega+\sigma\Omega_{n})\,\text{e}^{{\imath}\sigma\varphi_{n}^{(\alpha)}}.\end{aligned}$$ $\Xi_{a b,\beta}^{<}[\omega,\omega^{{\prime}}]$ and $\Xi_{ab,\beta\gamma}^{<}[{\omega},\nu,{\omega}^{\prime}]$ are *lesser* components of \[set\_Xi\] $$\begin{aligned} &\Xi_{a b,\beta}[\omega,\omega^{{\prime}}] = G_{a\beta}[\omega]\cdot V_{\beta\beta}\cdot G_{\beta b}[\omega^{{\prime}}]\nonumber\\ & + G_{a C}[\omega]\cdot V_{C C}^{(\beta)}\cdot G_{C b}[\omega^{{\prime}}] + G_{a\beta}[\omega]\cdot V_{\beta C}\cdot G_{C b}[\omega^{{\prime}}]\nonumber\\ & + G_{a C}[\omega]\cdot V_{C \beta}\cdot G_{\beta b}[\omega^{{\prime}}],\end{aligned}$$ and $$\begin{aligned} &\Xi_{ab,\beta\gamma}[{\omega},\nu,{\omega}^{{\prime}}]= G_{a\beta}[{\omega}]\cdot V_{\beta\beta}\cdot \Xi_{\beta b,\gamma}[\nu,{\omega}^{\prime}]\nonumber\\ & + G_{aC}[{\omega}]\cdot V_{CC}^{(\beta)}\cdot\Xi_{Cb,\gamma}[\nu,{\omega}^{\prime}] + G_{a\beta}[{\omega}]\cdot V_{\beta C}\cdot\Xi_{Cb,\gamma}[\nu,{\omega}^{\prime}]\nonumber\\ & + G_{aC}[{\omega}]\cdot V_{C\beta}\cdot \Xi_{\beta b, \gamma}[\nu,{\omega}^{\prime}],\end{aligned}$$ respectively, with latin letters corresponding to reservoirs or $C$ and greek letters corresponding to reservoirs only. Note that $\Lambda_{1}^{<}$ and $\Lambda_{2}^{<}$ of Eq.  are related to $\Xi_{a b,\beta}^{<}$ and $\Xi_{ab,\beta\gamma}^{<}$ by $$\begin{aligned} \left(\Lambda_{1}^{<}[\omega,\omega^{{\prime}}]\right)_{ab} &= \sum_{\beta} \Xi_{a b,\beta}^{<}[\omega,\omega^{{\prime}}]\;\phi_{\beta}[\omega-\omega^{{\prime}}],\\ \left(\Lambda_{2}^{<}[{\omega},{\omega}^{{\prime}}]\right)_{ab} &= \sum_{\beta\gamma}\int\frac{{\mathrm{d}}\nu}{2\pi}\;\Xi_{ab,\beta\gamma}^{<}[{\omega},\nu,{\omega}^{\prime}]\nonumber\\ &\qquad\qquad\times\phi_{\beta}[{\omega}-\nu]\,\phi_{\gamma}[\nu-{\omega}^{\prime}].\end{aligned}$$ Note that $E_{M}^{(0)}$ is constant. Substituting in Eqs. and , we can see that $E_{M}^{(n)}(t)=E_{M}^{(n)}(t+\tau)$ for $n=1, 2, \ldots$. Current $J_{\alpha}(t)$ and power developed by the ac sources $\Phi(t)$ ----------------------------------------------------------------------- Substituting the results of Eqs. - into -, we obtain the current $J_{\alpha}(t)$ from $\alpha$-lead and the power developed by the ac sources $\Phi(t)$ in the form of a perturbative series in $\varepsilon$ as \[series\_app\] $$\begin{aligned} & J_{\alpha}(t)=J_{\alpha}^{(S)}+\varepsilon\,J^{(1)}_{\alpha}(t) +\varepsilon^{2}\,J^{(2)}_{\alpha}(t) + \cdots\\ & \Phi(t)=\varepsilon\;\Phi^{(1)}(t) +\varepsilon^{2}\;\Phi^{(2)}(t) + \cdots\end{aligned}$$ where $J^{(n)}_{\alpha}(t)$ and $\Phi^{(n)}(t)$ are $n$-order contribution of the series of $J_{\alpha}(t)$ and $\Phi(t)$, respectively. Below we give the explicit expressions for the first and second-order contributions. 
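Before listing the explicit first- and second-order terms, the following toy sketch (our addition; it is not the NEGF calculation, and the harmonic normalization $\phi_\alpha(t)=\sum_n 2a_n^{(\alpha)}\cos(\Omega_n t+\varphi_n^{(\alpha)})$ implied by Eq. (\[fourier\_phi\]) is assumed) illustrates the effect of the period averages used below: a single factor of $\phi_\alpha$ averages to zero, while bilinear combinations survive only for equal harmonics and depend on the phases through cosines and sines of the phase differences, which is precisely the structure that reappears in $J^{(P)}_{\alpha}$ and $\Phi^{(P)}$.

```python
import numpy as np

# Period averages of a two-terminal harmonic drive (toy illustration).
Omega, tau = 1.0, 2*np.pi
t = np.linspace(0.0, tau, 200000, endpoint=False)
n = np.arange(1, 3)[:, None]                       # two harmonics, n = 1, 2
aL, pL = np.array([[0.8], [0.3]]), np.array([[0.0], [0.4]])      # assumed amplitudes/phases
aR, pR = np.array([[0.5], [0.2]]), np.array([[np.pi/2], [1.1]])

phi  = lambda a, p: (2*a*np.cos(n*Omega*t + p)).sum(axis=0)
dphi = lambda a, p: (-2*a*n*Omega*np.sin(n*Omega*t + p)).sum(axis=0)
avg  = lambda x: x.mean()                          # (1/tau) * integral over one period

phiL, phiR = phi(aL, pL), phi(aR, pR)
print(avg(phiL))                                   # ~0: first-order terms average out
print(avg(phiL*phiR), 2*np.sum(aL*aR*np.cos(pL - pR)))                  # cos(phase difference)
print(avg(phiL*dphi(aR, pR)), 2*np.sum(n*Omega*aL*aR*np.sin(pL - pR)))  # sin(phase difference)
```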
### First-order contribution The coefficients $J_{\alpha}^{(1)}(t)$ and $\Phi^{(1)}(t)$ read $$\begin{aligned} & J^{(1)}_{\alpha}(t)=\,\text{Re}\bigg[\phi_{\alpha}(t)\,\int\limits_{-\infty}^{\infty}\frac{{\mathrm{d}}\omega}{2\pi}\,\hbar\omega\,\text{Tr}\left\lbrace V_{C\alpha}\cdot G^{<}_{\alpha C}[\omega]\right\rbrace\nonumber\\ &\;+ \sum_{\beta}\iint \frac{{\mathrm{d}}\omega\,{\mathrm{d}}\omega^{{\prime}}}{(2\pi)^2}\,\text{e}^{-{\imath}\,(\omega-\omega^{{\prime}})t}\,\phi_{\beta}[{\omega}-{\omega}^{{\prime}}]\nonumber\\ &\;\times \text{Tr}\left\lbrace\hbar\omega\,V_{C\alpha}\cdot \Xi_{\alpha C,\beta}^{<}[\omega,\omega^{{\prime}}]\right\rbrace\bigg],\\ &\text{and}\nonumber\\ &\Phi^{(1)}(t)=\text{Re}\bigg[\sum_{\alpha}{\imath}\hbar\,\dot{\phi}_{\alpha}(t)\int\limits_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,\text{Tr}\bigg\lbrace\frac{1}{2}\,V_{CC}^{(\alpha)}\cdot G_{CC}^{<}[{\omega}]\nonumber\\ &\; + V_{C\alpha}\cdot G_{\alpha C}^{<}[{\omega}]\bigg\rbrace\bigg].\end{aligned}$$ Using Eqs. (\[fourier\_v\]) and (\[fourier\_phi\]), we find $$\begin{aligned} &(i)\quad & &\big\langle\phi_{\alpha}(t)\big\rangle_{\tau}=0,\\ &(ii)\quad & &\big\langle\dot{\phi}_{\alpha}(t)\big\rangle_{\tau}=0,\\ &(iii)\quad & &\big\langle\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\,\phi_{\beta}[{\omega}-{\omega}^{{\prime}}]\big\rangle_{\tau} =0,\end{aligned}$$ where $\langle\cdots\rangle_{\tau}=\frac{1}{\tau}\int_{0}^{\tau}{\mathrm{d}}t\,(\cdots)$. Thus, $$\big\langle J_{\alpha}^{(1)}(t)\big\rangle_{\tau} = 0 = \big\langle \Phi^{(1)}(t)\big\rangle_{\tau}.$$ ### Second-order contribution The coefficients $J_{\alpha}^{(2)}(t)$ and $\Phi^{(2)}(t)$ read \[set\_B9\] $$\begin{aligned} &J^{(2)}_{\alpha}(t)=\text{Re}\bigg[\sum_{\beta\gamma}\iiint\frac{{\mathrm{d}}{\omega}\,{\mathrm{d}}\nu\,{\mathrm{d}}{\omega}^{{\prime}}}{(2\pi)^3}\,\hbar\omega\,\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\nonumber\\ &\times\phi_{\beta}[{\omega}-\nu]\,\phi_{\gamma}[\nu-{\omega}^{{\prime}}]\,\text{Tr}\left\lbrace V_{C\alpha}\cdot\Xi_{\alpha C,\beta\gamma}^{<}[{\omega},\nu,{\omega}^{{\prime}}] \right\rbrace\nonumber\\ & + \sum_{\beta}\iint\frac{{\mathrm{d}}{\omega}\,{\mathrm{d}}{\omega}^{{\prime}}}{(2\pi)^{2}}\,\hbar{\omega}\,\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\,\phi_{\alpha}(t)\,\phi_{\beta}[{\omega}-{\omega}^{{\prime}}]\nonumber\\ &\times \text{Tr}\left\lbrace V_{C\alpha}\cdot\Xi^{<}_{\alpha C,\beta}[{\omega},{\omega}^{{\prime}}]\right\rbrace\bigg],\\ &\text{and}\nonumber\\ &\Phi^{(2)}(t)=\text{Re}\bigg[\sum_{\alpha,\beta}\iint\frac{{\mathrm{d}}{\omega}\,{\mathrm{d}}{\omega}^{{\prime}}}{(2\pi)^{2}}\,\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\,{\imath}\hbar\,\dot{\phi}_{\alpha}(t)\nonumber\\ &\times\phi_{\beta}[{\omega}-{\omega}^{{\prime}}]\;\text{Tr}\bigg\lbrace\frac{1}{2}\,V_{CC}^{(\alpha)}\cdot\Xi_{CC,\beta}^{<}[{\omega},{\omega}^{{\prime}}]\nonumber\\ & + V_{C\alpha}\cdot\Xi_{\alpha C,\beta}^{<}[{\omega},{\omega}^{{\prime}}]\bigg\rbrace\bigg].\end{aligned}$$ Using the Eqs. 
and , we obtain \[set\_B10\] $$\begin{aligned} &\emph{(i)}\quad\big\langle\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\big\rangle_{\tau}\;\phi_{\beta}[{\omega}-\nu]\,\phi_{\gamma}[\nu-{\omega}^{{\prime}}] =\nonumber\\ &\quad = 2\pi\,\delta({\omega}-{\omega}^{\prime})\sum_{n=1}^{\infty} a_{n}^{(\beta)}a_{n}^{(\gamma)}\nonumber\\ &\quad\times\sum_{\sigma=\pm 1} 2\pi\,\delta({\omega}^{{\prime}}-\nu+\sigma\,\Omega_{n})\,\text{e}^{{\imath}\sigma\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)},\\ &\emph{(ii)}\quad\big\langle\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\,\phi_{\alpha}(t)\big\rangle_{\tau}\;\phi_{\beta}[{\omega}-{\omega}^{{\prime}}] =\sum_{n=1}^{\infty}a_{n}^{(\alpha)}\,a_{n}^{(\beta)}\nonumber\\ &\quad \times\sum_{\sigma=\pm 1}2\pi\,\delta(\omega-\omega^{{\prime}}+\sigma\,\Omega_{n})\,\text{e}^{{\imath}\sigma \left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\alpha)}\right)},\\ &\emph{(iii)}\quad\big\langle\text{e}^{-{\imath}({\omega}-{\omega}^{{\prime}})t}\,\dot{\phi}_{\alpha}(t)\big\rangle_{\tau}\,\phi_{\beta}[{\omega}-{\omega}^{{\prime}}] =-\sum_{n=1}^{\infty} a_{n}^{(\alpha)}\,a_{n}^{(\beta)}\nonumber\\ &\times\sum_{\sigma=\pm 1}2\pi\Omega_{n}\,{\imath}\sigma\,\delta(\omega-\omega^{{\prime}}+\sigma\Omega_{n})\,\text{e}^{{\imath}\sigma \left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\alpha)}\right)}.\end{aligned}$$ Hence, $$\begin{gathered} \big\langle J_{\alpha}^{(2)}(t)\big\rangle_{\tau}=\sum_{n=1}^{\infty}\sum_{\beta,\gamma} a_{n}^{(\beta)}\,a_{n}^{(\gamma)}\\ \times\sum_{\sigma=\pm 1}\,\text{Re}\left[\text{e}^{{\imath}\sigma\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)}\,\mathcal{J}_{\beta\gamma}^{(\alpha)}(\sigma\Omega_{n})\right]\end{gathered}$$ and $$\begin{gathered} \big\langle \Phi^{(2)}(t)\big\rangle_{\tau}=\sum_{n=1}^{\infty}\sum_{\beta,\gamma} a_{n}^{(\beta)}\,a_{n}^{(\gamma)}\\ \times\sum_{\sigma=\pm 1}\,\text{Re}\left[\text{e}^{{\imath}\sigma\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)}\,\mathcal{F}_{\beta\gamma}(\sigma\Omega_{n})\right],\end{gathered}$$ where we introduced the following integrals \[set\_12\] $$\begin{aligned} \mathcal{J}_{\beta\gamma}^{(\alpha)}(\sigma\Omega_{n})&=\int\limits_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,\hbar{\omega}\,\text{Tr}\bigg\lbrace V_{C\alpha}\cdot\Xi_{\alpha C,\beta\gamma}^{<}[{\omega},{\omega}+\sigma\Omega_{n},{\omega}]\nonumber\\ &+\delta_{\alpha\gamma}\,V_{C\alpha}\cdot\Xi_{\alpha C,\beta}^{<}[{\omega},{\omega}+\sigma\Omega_{n}]\bigg\rbrace,\\ \mathcal{F}_{\beta\gamma}(\sigma\Omega_{n})&=\int\limits_{-\infty}^{\infty}\frac{{\mathrm{d}}{\omega}}{2\pi}\,\hbar\sigma\Omega_{n}\,\text{Tr}\bigg\lbrace \frac{1}{2}\, V_{CC}^{(\gamma)}\cdot\Xi_{CC,\beta}^{<}[{\omega},{\omega}+\sigma\Omega_{n}]\nonumber\\ &+\,V_{C\gamma}\cdot\Xi_{\gamma C,\beta}^{<}[{\omega},{\omega}+\sigma\Omega_{n}]\bigg\rbrace.\end{aligned}$$ Defining $J^{(P)}_{\alpha}\equiv\big\langle J_{\alpha}^{(2)}(t)\big\rangle_{\tau}$ and $\Phi^{(P)}\equiv\big\langle\Phi^{(2)}(t)\big\rangle_{T}$, we get \[set\_B13\] $$\begin{aligned} J^{(P)}_{\alpha}=&\sum_{n=1}^{\infty}\sum_{\beta\gamma}a_{n}^{(\beta)}a_{n}^{(\gamma)}\bigg[\cos\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,A_{\beta\gamma}^{\alpha}(n)\nonumber\\ &-\sin\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,B_{\beta\gamma}^{\alpha}(n)\bigg],\\ \Phi^{(P)}=&\sum_{n=1}^{\infty}\sum_{\beta\gamma}a_{n}^{(\beta)}a_{n}^{(\gamma)}\bigg[\cos\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,D_{\beta\gamma}(n)\nonumber\\ 
&-\sin\left(\varphi_{n}^{(\beta)}-\varphi_{n}^{(\gamma)}\right)\,E_{\beta\gamma}(n)\bigg],\end{aligned}$$ where \[set\_B14\] $$\begin{aligned} & A_{\beta\gamma}^{\alpha}(n)=\text{Re}\left[\sum_{\sigma=\pm 1}\mathcal{J}_{\beta\gamma}^{(\alpha)}(\sigma\Omega_{n})\right],\\ & B_{\beta\gamma}^{\alpha}(n)=\text{Im}\left[\sum_{\sigma=\pm 1}\sigma\mathcal{J}_{\beta\gamma}^{(\alpha)}(\sigma\Omega_{n})\right],\\ & D_{\beta\gamma}(n)=\text{Re}\left[\sum_{\sigma=\pm 1}\mathcal{F}_{\beta\gamma}(\sigma\Omega_{n})\right],\\ & E_{\beta\gamma}(n)=\text{Im}\left[\sum_{\sigma=\pm 1}\sigma\mathcal{F}_{\beta\gamma}(\sigma\Omega_{n})\right].\end{aligned}$$ Equations and and the energy conservation $\sum_{\alpha}J_{\alpha}^{(P)} + \Phi^{(P)}=0$ (according Eqs. (\[eqs\_Q+W\])-(\[cycles\])), lead to the following conditions $$\begin{aligned} &\sum_{\alpha}A_{(\beta\gamma)}^{\alpha}(n) + D_{(\beta\gamma)}(n) = 0,\\ &\sum_{\alpha}B_{[\beta\gamma]}^{\alpha}(n) + E_{[\beta\gamma]}(n) = 0,\end{aligned}$$ where we introduced symmetrization $O_{(\beta\gamma)}\equiv\frac{1}{2}\left(O_{\beta\gamma}+O_{\gamma\beta}\right)$ and anti-symmetrization $O_{[\beta\gamma]}\equiv\frac{1}{2}\left(O_{\beta\gamma}-O_{\gamma\beta}\right)$ shorthand notations. For the calculation of $\mathcal{J}_{\beta\gamma}^{(\alpha)}(\sigma\Omega_{n})$ and $\mathcal{F}_{\alpha\beta}(\sigma\Omega_{n})$, we use $V_{C\alpha}\cdot\Xi_{\alpha C,\beta}[{\omega},{\omega}^{{\prime}}]$, $\Xi_{C C,\beta}[{\omega},{\omega}^{{\prime}}]$ and $V_{C\alpha}\cdot\Xi_{\alpha C,\beta\gamma}[{\omega},{\omega}^{{\prime}},{\omega}]$ of and , as \[set\_B16\] $$\begin{aligned} &V_{C\alpha}\cdot\Xi_{\alpha C,\beta}[{\omega},{\omega}^{{\prime}}] = \,\delta_{\alpha\beta}\,\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\cdot G_{CC}[{\omega}^{{\prime}}]\nonumber\\ &\;\; + \tilde{\Sigma}_{\alpha}[{\omega}]\cdot G_{CC}[{\omega}]\cdot \Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\cdot G_{CC}[{\omega}^{{\prime}}],\\ &\nonumber\\ &\Xi_{C C,\beta}[{\omega},{\omega}^{{\prime}}] = G_{CC}[{\omega}]\cdot\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\cdot G_{CC}[{\omega}^{{\prime}}],\end{aligned}$$ $$\begin{aligned} &V_{C\alpha}\cdot\Xi_{\alpha C,\beta\gamma}[{\omega},{\omega}^{{\prime}},{\omega}] =\, \delta_{\alpha\beta}\,\delta_{\alpha\gamma}\,\Pi_{\alpha}^{(3)}[{\omega},{\omega}^{{\prime}}]\cdot G_{CC}[{\omega}]\nonumber\\ &\;\; +\delta_{\alpha\beta}\,\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\cdot G_{CC}[{\omega}^{{\prime}}]\cdot\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\cdot G_{CC}[{\omega}]\nonumber\\ &\;\; + \delta_{\beta\gamma}\,\tilde{\Sigma}_{\alpha}[{\omega}]\cdot G_{CC}[{\omega}]\cdot\Pi_{\beta}^{(4)}[{\omega},{\omega}^{{\prime}}]\cdot G_{CC}[{\omega}]\nonumber\\ &\;\; +\tilde{\Sigma}_{\alpha}[{\omega}]\cdot G_{CC}[{\omega}]\cdot \Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\cdot G_{CC}[{\omega}^{{\prime}}]\cdot\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\nonumber\\ &\;\;\cdot G_{CC}[{\omega}],\end{aligned}$$ where we define \[set\_Pi\] $$\begin{aligned} &\Pi_{\theta}^{(1)}[{\omega},{\omega}^{{\prime}}]\equiv\tilde{\Sigma}_{\theta}[{\omega},{\omega}^{{\prime}}]+\tilde{\Sigma}_{\theta}[{\omega}],\\ &\Pi_{\theta}^{(2)}[{\omega},{\omega}^{{\prime}}]\equiv\tilde{\Sigma}_{\theta}[{\omega},{\omega}^{{\prime}}]+\tilde{\Sigma}_{\theta}[{\omega}]+\tilde{\Sigma}_{\theta}[{\omega}^{{\prime}}]+V_{CC}^{(\theta)},\\ 
&\Pi_{\theta}^{(3)}[{\omega},{\omega}^{{\prime}}]\equiv\tilde{\Sigma}_{\theta}[{\omega},{\omega}^{{\prime}},{\omega}]+\tilde{\Sigma}_{\theta}[{\omega},{\omega}^{{\prime}}],\\ &\Pi_{\theta}^{(4)}[{\omega},{\omega}^{{\prime}}]\equiv\Pi_{\theta}^{(3)}[{\omega},{\omega}^{{\prime}}]+\Pi_{\theta}^{(1)}[{\omega}^{{\prime}},{\omega}]\end{aligned}$$ where $$\label{SIGMA} \tilde{\Sigma}_{\theta}[{\omega}_{1}, \ldots, {\omega}_{n}]\equiv V_{C\theta}\cdot\tilde{g}_{\theta}[{\omega}_{1}]\cdot V_{\theta\theta}\cdot\ldots \cdot\tilde{g}_{\theta}[{\omega}_{n}]\cdot V_{\theta C}.$$ Hence, we obtain the *lesser* components of as $$\begin{aligned} & V_{C\alpha}\cdot\Xi_{\alpha C, \beta}^{<}[{\omega},{\omega}^{{\prime}}]=\,\delta_{\alpha\beta}\;\Big[\left(\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G^{<}_{CC}[{\omega}] + \left(\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\cdot G^{a}_{CC}[{\omega}]\;\Big]\nonumber\\ &\;\; +\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G^{r}_{CC}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{<}[{\omega}^{{\prime}}] +\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G^{r}_{CC}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\cdot G_{CC}^{a}[{\omega}^{{\prime}}]\nonumber\\ &\;\; +\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G^{<}_{CC}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{a}\cdot G_{CC}^{a}[{\omega}^{{\prime}}] +\tilde{\Sigma}_{\alpha}^{<}[{\omega}]\cdot G^{a}_{CC}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{a}\cdot G_{CC}^{a}[{\omega}^{{\prime}}],\\ &\nonumber\\ &\Xi_{C C,\beta}^{<}[{\omega},{\omega}^{{\prime}}] = G_{CC}^{r}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{<}[{\omega}^{{\prime}}] + G_{CC}^{r}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\cdot G_{CC}^{a}[{\omega}^{{\prime}}] \nonumber\\ &\;\; + G_{CC}^{<}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{a}\cdot G_{CC}^{a}[{\omega}^{{\prime}}],\end{aligned}$$ $$\begin{aligned} &V_{C\alpha}\cdot\Xi_{\alpha C,\beta\gamma}^{<}[{\omega},{\omega}^{{\prime}},{\omega}] = \delta_{\alpha\beta}\,\delta_{\alpha\gamma}\,\Big[\,\left(\Pi_{\alpha}^{(3)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{<}[{\omega}] +\left(\Pi_{\alpha}^{(3)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\cdot G_{CC}^{a}[{\omega}]\,\Big]\nonumber\\ &\;\;+\delta_{\alpha\beta}\Big[\left(\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{r}[{\omega}^{{\prime}}]\cdot\left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{r}\cdot G_{CC}^{<}[{\omega}]+\left(\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{r}[{\omega}^{{\prime}}]\cdot\left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{<}\cdot G_{CC}^{a}[{\omega}]\nonumber\\ &\;\;+\left(\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{<}[{\omega}^{{\prime}}]\cdot\left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{a}\cdot G_{CC}^{a}[{\omega}]+\left(\Pi_{\alpha}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\cdot G_{CC}^{a}[{\omega}^{{\prime}}]\cdot\left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{a}\cdot G_{CC}^{a}[{\omega}]\,\Big]\nonumber\\ &\;\;+\delta_{\beta\gamma}\,\Big[\,\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G_{CC}^{r}[{\omega}]\cdot\left(\Pi_{\beta}^{(4)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot 
G_{CC}^{<}[{\omega}] +\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G_{CC}^{r}[{\omega}]\cdot\left(\Pi_{\beta}^{(4)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\cdot G_{CC}^{a}[{\omega}]\nonumber\\ &\;\;+\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G_{CC}^{<}[{\omega}]\cdot\left(\Pi_{\beta}^{(4)}[{\omega},{\omega}^{{\prime}}]\right)^{a}\cdot G_{CC}^{a}[{\omega}]+\tilde{\Sigma}_{\alpha}^{<}[{\omega}]\cdot G_{CC}^{a}[{\omega}]\cdot\left(\Pi_{\beta}^{(4)}[{\omega},{\omega}^{{\prime}}]\right)^{a}\cdot G_{CC}^{a}[{\omega}]\,\Big]\nonumber\\ &\;\; +\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G_{CC}^{r}[{\omega}]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{r}[{\omega}^{{\prime}}]\cdot\left[\,\left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{r}\cdot G_{CC}^{<}[{\omega}] + \left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{<}\cdot G_{CC}^{a}[{\omega}]\,\right]\nonumber\\ &\;\; + \tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G_{CC}^{r}[{\omega}]\cdot\left[\,\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{r}\cdot G_{CC}^{<}[{\omega}^{{\prime}}] + \left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\cdot G_{CC}^{a}[{\omega}^{{\prime}}]\,\right]\cdot \left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{a}\cdot G_{CC}^{a}[{\omega}]\nonumber\\ &\;\; + \left[\,\tilde{\Sigma}_{\alpha}^{r}[{\omega}]\cdot G_{CC}^{<}[{\omega}] + \tilde{\Sigma}_{\alpha}^{<}[{\omega}]\cdot G_{CC}^{a}[{\omega}]\,\right]\cdot\left(\Pi_{\beta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{a}\cdot G_{CC}^{a}[{\omega}^{{\prime}}]\cdot \left(\Pi_{\gamma}^{(2)}[{\omega}^{{\prime}},{\omega}]\right)^{a}\cdot G_{CC}^{a}[{\omega}],\end{aligned}$$ where we define the *lesser* component of the set as $$\begin{aligned} \left(\Pi_{\theta}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{<}&\equiv \;\tilde{\Sigma}_{\theta}^{r,<}[{\omega},{\omega}^{{\prime}}]+\tilde{\Sigma}_{\theta}^{<,a}[{\omega},{\omega}^{{\prime}}]\nonumber\\ &\;\;\;+\tilde{\Sigma}_{\theta}^{<}[{\omega}],\\ \left(\Pi_{\theta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{<}&\equiv \;\tilde{\Sigma}_{\theta}^{r,<}[{\omega},{\omega}^{{\prime}}]+\tilde{\Sigma}_{\theta}^{<,a}[{\omega},{\omega}^{{\prime}}]\nonumber\\ &\;\;\;+\tilde{\Sigma}_{\theta}^{<}[{\omega}]+\tilde{\Sigma}_{\theta}^{<}[{\omega}^{{\prime}}], $$ $$\begin{aligned} &\left(\Pi_{\theta}^{(3)}[{\omega},{\omega}^{{\prime}}]\right)^{<}\equiv \;\tilde{\Sigma}_{\theta}^{r,r,<}[{\omega},{\omega}^{{\prime}},{\omega}]+\tilde{\Sigma}_{\theta}^{r,<,a}[{\omega},{\omega}^{{\prime}},{\omega}]\nonumber\\ &\;\;\;+\tilde{\Sigma}_{\theta}^{<,a,a}[{\omega},{\omega}^{{\prime}},{\omega}]+\tilde{\Sigma}_{\theta}^{r,<}[{\omega},{\omega}^{{\prime}}]+\tilde{\Sigma}_{\theta}^{<,a}[{\omega},{\omega}^{{\prime}}],\end{aligned}$$ and the *retarded*-($r$) and *advanced*-($a$) component of as $$\begin{aligned} \left(\Pi_{\theta}^{(1)}[{\omega},{\omega}^{{\prime}}]\right)^{x}\equiv &\;\tilde{\Sigma}_{\theta}^{x,x}[{\omega},{\omega}^{{\prime}}]+\tilde{\Sigma}_{\theta}^{x}[{\omega}],\\ \left(\Pi_{\theta}^{(2)}[{\omega},{\omega}^{{\prime}}]\right)^{x}\equiv &\;\tilde{\Sigma}_{\theta}^{x,x}[{\omega},{\omega}^{{\prime}}]+\tilde{\Sigma}_{\theta}^{x}[{\omega}]\nonumber\\ &+\tilde{\Sigma}_{\theta}^{x}[{\omega}^{{\prime}}]+V_{CC}^{(\theta)},\\ \left(\Pi_{\theta}^{(3)}[{\omega},{\omega}^{{\prime}}]\right)^{x}\equiv &\;\tilde{\Sigma}_{\theta}^{x,x,x}[{\omega},{\omega}^{{\prime}},{\omega}]+\tilde{\Sigma}_{\theta}^{x,x}[{\omega},{\omega}^{{\prime}}],\end{aligned}$$ with $x= r,a$ and 
where we denote the components of the generalized function as $$\begin{gathered} \label{74d} \tilde{\Sigma}_{\alpha}^{x_{1},\ldots , x_{n}}[{\omega}_1, \ldots{\omega}_{n}] \equiv\\ V_{C\alpha}\cdot\tilde{g}_{\alpha}^{\,x_{1}}[{\omega}_{1}]\cdot V_{\alpha\alpha}\cdot\ldots \cdot\tilde{g}_{\alpha}^{\,x_{n}}[{\omega}_{n}]\cdot V_{\alpha C}.\end{gathered}$$
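As an extra consistency check on Appendix \[sec:freegf\], the following minimal numerical sketch (our addition; it assumes a standard Caroli-type transmission formula $\mathcal{T}=\mathrm{Tr}\{\tilde{\Gamma}_{L}\,G^{r}_{CC}\,\tilde{\Gamma}_{R}\,G^{a}_{CC}\}$ and the decoupled dimer spring matrix $K^{0}_{CC}=k_{C}\bigl(\begin{smallmatrix}1&-1\\-1&1\end{smallmatrix}\bigr)$, with the parameter values of Fig. \[kappa\]) evaluates the transmission from the closed-form surface Green’s functions, Eqs. (\[gamma\_alpha\]) and (\[mu\_alpha\]), and reproduces the unit transmission $\mathcal{T}(\omega\rightarrow 0)=1$ derived above.

```python
import numpy as np

# Low-frequency transmission of the dimer between two semi-infinite chains,
# built from the closed-form surface Green's functions (sketch, see text above).
k, kL, kR, kC = 1.0, 0.5, 0.5, 0.25               # bulk, contact and dimer springs

def lead(w, ka):
    """Return (g~^r_alpha, Gamma~_alpha) for |w| < 2*sqrt(k)."""
    den = (k - ka)*w**2 + ka**2
    gam = w*np.sqrt(4*k - w**2)/den               # gamma~_alpha
    mu  = (w**2 - 2*ka)/den                       # mu~_alpha inside the band
    return 0.5*(mu - 1j*gam), ka**2*gam           # Gamma~_alpha = k_alpha^2 * gamma~_alpha

def transmission(w):
    gL, GamL = lead(w, kL)
    gR, GamR = lead(w, kR)
    Sigma = np.diag([kL**2*gL, kR**2*gR])         # retarded embedding self-energy
    K0    = kC*np.array([[1.0, -1.0], [-1.0, 1.0]])
    VCC   = np.diag([kL, kR])
    Gr = np.linalg.inv(w**2*np.eye(2) - K0 - VCC - Sigma)
    Ga = Gr.conj().T
    return np.trace(np.diag([GamL, 0.0]) @ Gr @ np.diag([0.0, GamR]) @ Ga).real

for w in (0.5, 0.1, 0.01, 0.001):
    print(f"omega = {w:7}:  T = {transmission(w):.6f}")   # T -> 1 as omega -> 0
```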
---
abstract: |
    The aim of this article is to study the notion of amoeba in the sense of Favorov for finite systems of exponential sums with real frequencies, and to show that, under genericity hypotheses on the frequencies, the complement of the amoeba of a system of $(k+1)$ exponential sums with real frequencies is a $k$-convex subset in the sense of Henriques.

    [**MSC:**]{} Primary 32A60; Secondary 42A75, 55.99
author:
- 'James Silipo'
date: 7 February 2005
title: '**Amibes de Sommes d’Exponentielles**'
---

Introduction and statement of the main result.
==============================================

Let $P\subset\mathbb C[u_1^{\pm 1},\ldots,u_n^{\pm 1}]$ be a finite system of Laurent polynomials in $n$ variables and let $V(P)$ be its zero set in the torus $(\mathbb C^*)^n$; if ${\rm Log}$ is the map from $(\mathbb C^*)^n$ to $\mathbb R^n$ defined by $${\rm Log}(u):=(\log\vert u_1\vert,\ldots,\log\vert u_n\vert)\,,\qquad u\in (\mathbb C^*)^n\,,$$ the [*amoeba*]{} ${\mathcal A}_P$ of $P$ is the image of $V(P)$ under the map ${\rm Log}$, that is, $${\mathcal A}_P:={\rm Log}\,V(P)\,.$$ The notion of amoeba for a single Laurent polynomial was introduced by Gelfand, Kapranov and Zelevinsky in [@GKZ], where its fundamental properties are presented. More refined studies and various generalizations of this notion have been carried out by others, among them Forsberg, Passare, Rullgård and Tsikh (whose works [@For], [@FPT], [@Ru1], [@Ru2], [@PR] study the relations between the amoeba ${\mathcal A}_p$ of a Laurent polynomial $p$, its Newton polytope $\Gamma_p$ and the Laurent expansions of the rational function $1/p$) and Mikhalkin (who gives in [@Mi1] and [@Mi2] applications and generalizations of the notion of amoeba to the geometry of real and tropical curves). In particular, [@GKZ] contains the proof of the following fact:

[**([@GKZ])**]{}\[0-conv\] the complement ${\mathcal A}_p^c$ of the amoeba of a (single) Laurent polynomial $p$ has only a finite number of connected components, and each of these components is convex.

Proposition \[0-conv\] ceases to be true if one passes to a system $P$ of Laurent polynomials. In particular, the connected components of ${\mathcal A}_P^c$ are in general no longer convex sets; however, Henriques [@Hen] observed that ${\mathcal A}_P^c$ satisfies a weaker property, which can be expressed in homological terms as follows.

[**([@Hen])**]{}\[k-conv\] Let $k\in\mathbb N$, let $S\subseteq\mathbb R^n$ be an oriented affine $(k+1)$-subspace and let $Y\subseteq S$ be a subset. A reduced (singular) homology class in $\tilde H_k(Y,\mathbb Z)$ is said to be [non-negative]{} if, for every point $x\in S\setminus Y$, its image (under the morphism induced by the inclusion) in $\tilde H_k(S\setminus\{x\},\mathbb Z)\simeq\mathbb Z$ is non-negative. The subset of non-negative classes of the group $\tilde H_k(Y,\mathbb Z)$ is denoted by $\tilde H^+_k(Y,\mathbb Z)$. A subset $X\subseteq\mathbb R^n$ is said to be $k$-[convex]{} if, for every oriented affine $(k+1)$-subspace $S\subset\mathbb R^n$, the zero class is the only non-negative class of $\tilde H_k(S\cap X,\mathbb Z)$ belonging to the kernel of the morphism $$\tilde H_k(S\cap X,\mathbb Z)\rightarrow \tilde H_k(X,\mathbb Z)$$ induced by the inclusion.
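To make the opening definition concrete, the following small numerical sketch (our own illustration, not taken from the references) samples the amoeba ${\mathcal A}_p={\rm Log}\,V(p)$ of the Laurent polynomial $p(u_1,u_2)=u_1+u_2+1$ by parametrizing its zero set; plotting the resulting point cloud displays the three “tentacles” of this amoeba and the convex components of its complement described in Proposition \[0-conv\].

```python
import numpy as np

# Sample the amoeba A_p = Log V(p) of p(u1, u2) = u1 + u2 + 1.
r   = np.exp(np.linspace(-6.0, 6.0, 600))        # |u1| on a logarithmic grid
phi = np.linspace(0.0, 2*np.pi, 600, endpoint=False)
R, PHI = np.meshgrid(r, phi)
u1 = R*np.exp(1j*PHI)
u2 = -1.0 - u1                                   # enforces p(u1, u2) = 0
mask = np.abs(u2) > 1e-12                        # stay inside the torus (C*)^2
amoeba = np.column_stack([np.log(np.abs(u1[mask])),
                          np.log(np.abs(u2[mask]))])
print(amoeba.shape)                              # points of Log V(p), ready to plot
```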
[**([@Hen])**]{}\[hen\] Let $P\subset\mathbb C[z^{\pm 1},\ldots,z^{\pm 1}]$ be a system of Laurent polynomials such that $V(P)\subset(\mathbb C^*)^n$ has codimension $(k+1)$. Then ${\mathcal A}_P^c$ is a $k$-convex subset.

This statement can be read as a partial injectivity result for the morphism $$\iota_{k,S}:\tilde H_k(S\cap {\mathcal A}_P^c,\mathbb Z)\rightarrow \tilde H_k({\mathcal A}_P^c,\mathbb Z)$$ for every oriented affine $(k+1)$-subspace $S\subset\mathbb R^n$. If $k=0$, the corresponding morphisms $\iota_{0,S}$ are indeed all injective (and in this case Theorem \[hen\] reduces to Proposition \[0-conv\]); on the other hand, as soon as $k>0$, the injectivity of the morphisms $\iota_{k,S}$ is known only conjecturally (see [@Mi2]).

The works of Ronkin and Favorov on amoebas raise new and particularly interesting questions in the study of certain global analytic subsets of $\mathbb C^n$. Indeed, the articles [@Ron] and [@Fav] adapt the notion of amoeba to the setting of almost periodic holomorphic functions defined in domains of $\mathbb C^n$ of the form $T_\Omega:=\mathbb R^n+i\Omega\,$, where $\Omega$ is an open subset of $\mathbb R^n$. This is the class $AP(T_\Omega)$ of functions $g\in{\mathcal O}(T_\Omega)$ such that the set $\{g(z+t)\in{\mathcal O}(T_\Omega)\mid t\in\mathbb R^n\}$ is relatively compact in the topology $\tau(T_\Omega)$ induced on ${\mathcal O}(T_\Omega)$ by uniform convergence on the subdomains of the form $T_D$, with $D\Subset\Omega$.

\[fav\] Let $\Omega\subset\mathbb R^n$ be a non-empty open set. The amoeba of a finite system $G\subset AP(T_\Omega)$ is the subset of $\mathbb R^n$ given by $$A_G:=\overline{\im V(G)}\,,$$ where $V(G)$ denotes the zero set of $G$ in $T_\Omega$ and $\im:T_\Omega\rightarrow\Omega$ is the map taking the imaginary part of each coordinate.

Favorov [@Fav] gives Definition \[fav\] in the case of a system reduced to a single function; in order to avoid any ambiguity between the notations ${\mathcal A}_{P}$ and $A_{G}$, we shall from now on denote amoebas in the sense of Favorov (that is, in the sense of Definition \[fav\]) by the symbol ${\mathcal F}_{G}$.

A very particular (but nonetheless very important[^1]) case of functions of $AP(T_{\mathbb R^n})=AP(\mathbb C^n)$ is that of exponential sums with purely imaginary frequencies, namely the functions of the form $$\begin{aligned} g(z) = \sum_{\lambda\in\Lambda}c_\lambda \,e^{i\langle z,\lambda\rangle} = \sum_{\lambda\in\Lambda}c_\lambda \,e^{\langle z,-i\lambda\rangle}\end{aligned}$$ where $z\in\mathbb C^n$, $\Lambda\subset\mathbb R^n$ is a finite set and $c_\lambda\in\mathbb C^*$ for every $\lambda\in\Lambda$ (the vectors $-i\lambda\in i\mathbb R^n$ being the [*frequencies*]{} of $g$). However, in this article we shall rather work with finite systems of exponential sums with real frequencies, that is, finite systems of functions of the form $$f(z)=g(-iz)\,,$$ where $g$ is of the form $(1)$ above; for such a system we therefore adopt from now on the following definition.

Let $F$ be a system of exponential sums with real frequencies. The amoeba in the sense of Favorov of $F$ is the set $${\mathcal F}_F:=\overline{\re V(F)}\,,$$ where $V(F)$ denotes the zero set of $F$ in $\mathbb C^n$ and $\re:\mathbb C^n\rightarrow\mathbb R^n$ is the map taking the real part of each coordinate.
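As a one-variable illustration of this definition (again our own example), the zeros of the exponential sum $f(z)=1+e^{z}+e^{\sqrt2\,z}$, whose real frequencies are $\{0,1,\sqrt2\}$, can be located numerically by Newton iteration from a grid of starting points; the real parts of the computed zeros sample the set $\re V(f)$, whose closure is the Favorov amoeba ${\mathcal F}_f$, contained in a bounded vertical strip.

```python
import numpy as np

np.seterr(over='ignore', invalid='ignore')       # stray iterates may overflow harmlessly
a  = np.sqrt(2.0)
f  = lambda z: 1.0 + np.exp(z) + np.exp(a*z)     # exponential sum, frequencies {0, 1, sqrt(2)}
df = lambda z: np.exp(z) + a*np.exp(a*z)

x0, y0 = np.meshgrid(np.linspace(-4.0, 2.0, 40), np.linspace(0.0, 40.0, 200))
z = (x0 + 1j*y0).ravel()
for _ in range(60):                              # vectorized Newton iteration
    z = z - f(z)/df(z)
zeros = z[np.abs(f(z)) < 1e-10]
re_parts = np.unique(np.round(zeros.real, 6))
print(len(re_parts), re_parts.min(), re_parts.max())   # Re V(f) stays in a bounded interval
```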
As remarked in [@Ron] and [@Fav], if $g\in AP(T_\Omega)$, then, since $g$ is holomorphic, every connected component of the set ${\mathcal F}_g^c\cap\Omega$ is also convex. Moreover, if $g$ (resp. $f$) is an exponential sum with purely imaginary (resp. real) frequencies, the set $\mathbb R^n\setminus\overline{\im V(g)}$ (resp. $\mathbb R^n\setminus\overline{\re V(f)}$) has only a finite number of convex connected components, so Proposition \[0-conv\] carries over word for word to the setting of amoebas of exponential sums with purely imaginary (resp. real) frequencies.

If one passes to finite systems of functions of $AP(T_\Omega)$, the structure of the amoebas becomes considerably more complicated. However, in the setting of finite systems of exponential sums with real (resp. purely imaginary) frequencies, the theory developed by Kazarnovskiǐ [@Ka1] makes it possible, on the one hand, to understand better the structure of the Favorov amoebas associated with these systems and, on the other hand, to adapt Henriques’ result [@Hen] to the same setting. To state our result we need some notation, which will be detailed in the following sections. If $F$ is a finite system of exponential sums with real frequencies, we associate with $F$ a family $\{F_\chi\}_\chi$ of “perturbed” systems of the system $F$, the index $\chi$ running over a certain group of characters associated with $F$. We thus introduce a new notion of [*amoeba*]{} by setting $${\mathcal Y}_F := \bigcup_{\chi} \,\re V(F_\chi)\,$$ and we obtain the following result (see Theorem 3.1 and Theorem 5.1, respectively).

[**Result.**]{} [*Let $F$ be a system consisting of $(k+1)$ exponential sums with generic real frequencies; then*]{}

$(a)$ : $${\mathcal Y}_F = \mathbb R^n\cap\bigcup_{\chi} V(F_\chi) = \overline{\re V(F)}\,,$$ [*in particular the amoeba ${\mathcal Y}_{F}$ coincides with the amoeba ${\mathcal F}_{F}$ in the sense of Favorov;*]{}

$(b)$ : [*the complement ${\mathcal F}^c_F$ of the amoeba of $F$ is $k$-convex in $\mathbb R^n$.*]{}

Part $(a)$ provides a more concrete expression for the closure of the set $\re V(F)$ and implies, among other things, that $$\overline{\re V(F)} = \overline{\re V(F_\chi)}\,,$$ for every $\chi\,$. Part $(b)$ is the counterpart of Theorem \[hen\] in the exponential setting. The proofs of $(a)$ and $(b)$ use the technique of perturbation by characters introduced long ago by A. Yger in the works [@Y1] and [@Y2] (and later used by C. Berenstein and A. Yger) to show that certain systems of convolution equations enjoy the spectral synthesis property. In this sense, the presentation of the Favorov amoeba given in $(a)$ may prove interesting from the point of view of the small-denominator questions inherent to systems with non-commensurable real frequencies.

Exponential sums: definitions and notation
==========================================

In this section we recall all the notions and results about exponential sums that will be useful in the sequel. Let $n\in \mathbb N^*$ be fixed and let ${\mathcal O}(\mathbb C^n)$ be the $\mathbb C$-algebra of holomorphic functions on $\mathbb C^n$. An [*exponential sum*]{} on $\mathbb C^n$ is an element of the subalgebra ${\mathcal S}_n$ of ${\mathcal O}(\mathbb C^n)$ generated, as a complex vector subspace, by the functions of the form $e^{\langle z,\lambda\rangle}$, where $\lambda\in\mathbb C^n$.
${\mathcal S}_n^*$ denotes the set of non-zero exponential sums. If $f\in{\mathcal S}_n^*$, the [*spectrum*]{} of $f$ is the smallest subset $\Lambda_f$ of $\mathbb C^n$ such that $f$ belongs to the vector subspace of ${\mathcal S}_n$ generated by the set of exponential monomials $\{ e^{\langle z,\lambda\rangle}\mid \lambda\in\Lambda_f\}$ (this set is well defined since the family $\{e^{\langle z,\lambda\rangle}\}_{\lambda\in{\mathbb C}^n}$ is a basis of ${\mathcal S}_n$ over $\mathbb C$); the [*frequencies*]{} of $f$ are the elements of its spectrum $\Lambda_f$. The [*Newton polytope*]{} of $f\in{\mathcal S}_n^*$ is the convex hull $$\Gamma_f:=\conv \Lambda_f\subset\mathbb C^n$$ of its spectrum $\Lambda_f$. To every $f\in{\mathcal S}_n^*$ we associate the real function $k_f$ given, for $z\in\mathbb C^n$, by $$k_f(z) := \sup_{\lambda\in\Lambda_f} e^{\re\langle z,\lambda\rangle}\,;$$ the function $k_f$ is nothing other than the exponential of the support function of the Newton polytope of $f$, computed with respect to the scalar product $\re\langle\,,\rangle$ on $\mathbb C^n$. In this article we shall use subalgebras of ${\mathcal S}_n$, namely the subalgebras of the form ${\mathcal S}_{n,\mathbb G}$ consisting of the exponential sums with frequencies in an additive subgroup $\mathbb G$ of $\mathbb C^n$; $\mathbb G$ will often be $\mathbb Z^n,\mathbb Q^n,\mathbb R^n$ or $i\mathbb R^n$, in which case we shall speak of exponential sums [*with integer, rational, real or purely imaginary frequencies*]{}; we note that for every $\mathbb G$ one has ${\mathcal S}_{n,\mathbb G}^*={\mathcal S}_n^*\cap{\mathcal S}_{n,\mathbb G}$. A [*system of exponential sums*]{} (SSE for short) is a non-empty finite subset $F$ of ${\mathcal S}_n^*$. For such a system $F$ we set $$\Gamma_F:=\sum_{f\in F}\Gamma_f$$ (the sum on the right-hand side being taken in the sense of Minkowski). The [*set of spectra*]{} of $F$ is the set $\{\Lambda_f\mid f\in F\}$ and the [*frequencies*]{} of $F$ are the elements of the union of the spectra of the $f\in F$; $F$ is said to have [*integer, rational, real or purely imaginary frequencies*]{} if each $f\in F$ does. We denote, respectively, by $\Xi_F$, ${\rm vect}_\mathbb Q \Xi_F$ and ${\rm vect}_{\mathbb R}\Xi_F$ the additive subgroup, the $\mathbb Q$-vector subspace and the $\mathbb R$-vector subspace of $\mathbb C^n$ generated by the frequencies of $F$. If $\mathbb G\subset \mathbb C^n$ is an additive subgroup containing the frequencies of $F$, then for every homomorphism $\chi$ of abelian groups from the additive group $\mathbb G$ to the multiplicative group $\mathbb S^1$ of complex numbers of modulus $1$, i.e. $\chi\in\Ch\mathbb G:=\Hom_{\mathbb Z}(\mathbb G,\mathbb S^1)$, we introduce the SSE $$F_\chi:=\{f_\chi\in{\mathcal S}_n^*\mid f\in F\}\,,$$ where, for every $f\in F$, we have set $$f_\chi(z):=\sum_{\lambda\in\Lambda_f}c_\lambda \chi(\lambda)e^{\langle z,\lambda\rangle}\,.$$ We observe that the abelian group $\mathbb S^1$ is divisible, hence it is an injective object in the category of abelian groups (see [@Eis]), which is equivalent to the surjectivity of the restriction homomorphism $$\rho:\Ch \mathbb G \longrightarrow\Ch\Xi_F\,.$$ We deduce that the set $\{F_\chi\subset{\mathcal S}_n^*\mid\chi\in\Ch\mathbb G\}$ depends only on $F$, whatever the additive subgroup $\mathbb G$ of $\mathbb C^n$ containing the frequencies of $F$.
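To fix ideas, here is a small worked example (ours, not taken from [@Ka1]): for the exponential sum $f(z)=1+e^{z_1}+e^{z_1+z_2}$ on $\mathbb C^2$ one has $$\Lambda_f=\{0,\;e_1,\;e_1+e_2\},\qquad \Gamma_f=\conv\Lambda_f,\qquad k_f(z)=\max\big(1,\,e^{\re z_1},\,e^{\re z_1+\re z_2}\big),$$ and, since $\Xi_f=\mathbb Z e_1+\mathbb Z e_2$, a character $\chi\in\Ch\Xi_f$ is determined by the two values $\chi(e_1)=e^{i\theta_1}$ and $\chi(e_2)=e^{i\theta_2}$, giving the perturbed sum $$f_\chi(z)=1+e^{i\theta_1}\,e^{z_1}+e^{i(\theta_1+\theta_2)}\,e^{z_1+z_2}\,.$$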
To every SSE $F$ we associate the bounded real function $K[F]$ given, for every $z\in\mathbb C^n$, by $$K[F](z) := \sum_{f\in F} {\vert f(z)\vert \over k_f(z)}\,$$ and to every face $\Delta=\sum_{f\in F}\Delta_f$ of $\Gamma_F$ we associate the SSE $$F^\Delta:=\{f^\Delta\in{\mathcal S}_n^*\mid f\in F\}\,,$$ called the [*$\Delta$-trace*]{} of $F$, obtained by setting, for $f\in F$, $$f^\Delta(z):= \sum_{\lambda\in \Lambda_f\cap \Delta_f} c_\lambda e^{\langle z,\lambda\rangle} \,.$$ If $\Delta$ is a face of a polytope $\Gamma\subset\mathbb C^n$, we denote by $\aff_{\mathbb C}\Delta$ the complex affine subspace of $\mathbb C^n$ generated by $\Delta$. The notation just introduced allows us to recall some notions introduced by Kazarnovskiǐ [@Ka1].

[**([@Ka1])**]{} An SSE $F$ is said to be [regular]{} if there exists $\varepsilon>0$ such that, for every $\Delta\preccurlyeq\Gamma_F$ with $\dim_{\mathbb C}(\aff_{\mathbb C}\Delta)<\card F$, one has $K[F^\Delta]\geqslant\varepsilon$.

[**([@Ka1])**]{}\[kaza\] Let $F$ be a regular SSE. Then the set $V(F)$ of zeros of $F$ in $\mathbb C^n$ is non-empty if and only if $\dim_{\mathbb C}(\aff_{\mathbb C}\Gamma_F)\geqslant\card F$, and in that case its codimension equals $\card F$.$\square$

[**([@Ka1])**]{}\[sf\] The set of spectra of an SSE $F$ is said to be [closed]{} if, for every face $\Delta=\sum_{f\in F}\Delta_f$ of $\Gamma_F$ such that $\dim_{\mathbb C}(\aff_{\mathbb C}\Delta)<\card F$, there exists $f$ in $F$ for which $\Delta_f$ is reduced to a point.

\[kaza2\][**([@Ka1])**]{} An SSE whose set of spectra is closed is a regular SSE.$\square$

The condition of Definition \[sf\] involves only the spectra of the system (or even just the vertices of the polytopes $\Gamma_f$, $f\in F$); hence, if one fixes a number $r\leqslant n$ of spectra in $\mathbb C^n$ (resp. in $\mathbb R^n$) as well as an $r$-tuple $(\ell_1,\ldots,\ell_r)\in\mathbb N^r$, one can show that the set of spectra of an SSE consisting of $r$ exponential sums whose spectra contain respectively $\ell_1,\ldots,\ell_r$ frequencies is generically closed. It follows that if $F$ is an SSE whose set of spectra is closed, then, for every $\chi\in\Ch\Xi_F$, the same is true of $F_\chi$; in particular, if $V(F)\neq\varnothing$ (that is, if $\dim_{\mathbb C}(\aff_{\mathbb C}\Gamma_F)\geqslant\card F$) then $V(F_\chi)\neq\varnothing$ and $$\codim V(F)=\card F=\card F_\chi=\codim V(F_\chi)\,,$$ for every $\chi\in\Ch\Xi_F$.

Amoebas: definition and first properties.
=========================================

Following an idea of Alain Yger [@Y1], [@Y2], we propose here a new definition of amoeba for finite systems of exponential sums with real frequencies.[^2] We shall see later under which conditions this notion of amoeba coincides with the one due to Favorov.

\[amibe\] Let $F$ be an [SSE]{} with real frequencies and let $\mathbb G\subset\mathbb C^n$ be a subgroup containing the frequencies of $F$. We call [amoeba]{} of $F$ the subset ${\mathcal Y}_F$ of $\mathbb R^n$ defined by $${\mathcal Y}_F:=\bigcup_{\chi\in\Ch \mathbb G}\re V(F_\chi)\,.$$ The amoeba ${\mathcal Y}_F$ is well defined because the set $\{F_\chi\subset{\mathcal S}_{n,\mathbb R^n}^*\mid \chi\in\Ch\mathbb G\}$ used in its definition does not depend on the choice of the additive subgroup $\mathbb G\subset\mathbb C^n$ among those containing the frequencies of $F$.
This allows us, among other things, to represent the amoeba ${\mathcal Y}_F$ using whichever group $\mathbb G$ is most convenient. We also remark that if $\chi\in\Ch\mathbb G$, then $F$ and $F_\chi$ have the same frequencies and hence the same amoebas: ${\mathcal Y}_F={\mathcal Y}_{F_\chi}$.

\[propr\] Let $F$ be an SSE with real frequencies. Then $${\mathcal Y}_F=\mathbb R^n\cap\bigcup_{\chi\in\Ch\Xi_F}V(F_\chi)\,,$$ and ${\mathcal Y}_F$ is a closed subset of $\mathbb R^n$.

[[**Proof. **]{}]{}$(i)$ Let $r$ be the rank[^3] of $\Xi_F$ and $\{\omega_1,\ldots,\omega_r\}$ a system of free generators of $\Xi_F$; then, for every $f\in F$, we have the expression $$f(z) = \sum_{k\in A_f} a_{f,k} \Big(e^{i\langle\omega_1,\im z\rangle}\Big)^{k_1} \!\!\!\!\! \cdots \Big(e^{i\langle\omega_r,\im z\rangle}\Big)^{k_r} e^{\langle k_1\omega_1+\cdots+k_r\omega_r,\re z\rangle}$$ where $A_f\subset\mathbb Z^r$ is a finite subset and $a_{f,k}\in\mathbb C^*$ for every $k\in A_f$. If $\xi\in{\mathcal Y}_F$, there exist a $\chi\in\Ch\Xi_F$ and an $\eta\in\mathbb R^n$ such that $f_\chi(\xi+i\eta)=0$ for every $f\in F$. Consequently, if for every $1\leqslant j\leqslant r$ we denote by $\theta_j$ the principal determination of the argument of $\chi(\omega_j)$, we have $$f_\chi(\xi+i\eta) = \sum_{k\in A_f} a_{f,k} \Big(e^{i(\theta_1+\langle\omega_1,\eta\rangle)}\Big)^{k_1} \!\!\!\!\! \cdots \Big(e^{i(\theta_r+\langle\omega_r,\eta\rangle)}\Big)^{k_r} e^{\langle k_1\omega_1+\cdots+k_r\omega_r,\xi\rangle}$$ for every $f\in F$; hence, if $\chi^\prime$ denotes the character of $\Xi_F$ given, for $1\leqslant j\leqslant r$, by $\chi^\prime(\omega_j)=e^{i\langle \eta,\omega_j\rangle}\,,$ we get $f_{\chi\chi^\prime}(\xi)=f_\chi(\xi+i\eta)=0$ for every $f\in F$, that is, $\xi\in V(F_{\chi\chi^\prime})$. The other inclusion is trivial.

$(ii)$ If $(\xi_q)_{q\in\mathbb N}\subset{\mathcal Y}_F$ is a sequence converging to $\xi\in\mathbb R^n$, then for every index $q\in\mathbb N$ there exists, thanks to $(i)$, a $\chi_q\in\Ch \Xi_F$ such that $f_{\chi_q}(\xi_q)=0\,$ for every $f\in F$. By compactness of $\Ch\Xi_F$, the sequence $(\chi_q)_{q\in\mathbb N}\subset\Ch \Xi_F$ admits a subsequence $(\tilde\chi_{q_m})_{m\in\mathbb N}$ converging to a character $\chi\in\Ch \Xi_F$, hence $$f_\chi(\xi)=\lim_{m\to \infty}f_{\chi_{q_m}}(\xi_{q_m})=0\,,$$ for every $f\in F$, that is, $\xi\in{\mathcal Y}_F$.$\square$

If $F$ is an SSE with real frequencies, we obviously have $${\mathcal Y}_F = \bigcup_{\chi\in\Ch\Xi_F} {\mathcal F}_{F_{_\chi}} \qquad {\rm and} \qquad {\mathcal Y}_F^c = \bigcap_{\chi\in\Ch\Xi_F} {\mathcal F}^c_{F_{_\chi}}\,,$$ as well as the inclusions $\re V(F_\chi)\subseteq {\mathcal F}_{F_{_\chi}}\subseteq {\mathcal Y}_F\,,$ which are in general strict and valid for every $\chi\in\Ch\Xi_F$.

We shall now look in more detail at the relations between the notion of amoeba just defined and the one due to Favorov. Theorem \[proprF\] makes the link between the two notions, but its proof uses a multidimensional version of a so-called [*Kronecker Approximation*]{} theorem (already used by Ronkin [@Ron]) whose proof we prefer to include here. We first prove a lemma.

\[lemmey\] Let $H\subset\mathbb R^r$ be a closed (additive) subgroup. Then either $H=\mathbb R^r$ or there exists an $\mathbb R$-linear form $\psi\not\equiv0$ on $\mathbb R^r$ such that $\psi(H)\subseteq\mathbb Z$.

[[**Proof. 
**]{}]{}If $r=1$, the lemma is a simple consequence of the well-known fact that an additive subgroup of $\mathbb R$ is either dense or discrete. Arguing by induction, we suppose that the lemma holds in $\mathbb R^s$ for every $s<r$. Now, if $H\neq\mathbb R^r$, there exists a linear subspace $S\subset\mathbb R^r$, with $0<\dim S<r$, such that the image of $H$ under the orthogonal projection $\pi_S:\mathbb R^r\to S$ is a discrete subgroup of $S$. Since $\pi_S(H)$ is discrete in $S$, by the induction hypothesis there exists an $\mathbb R$-linear form $\psi_S\not\equiv 0$ on $S$ such that $\psi_S(\pi_S(H))\subseteq \mathbb Z$. The $\mathbb R$-linear form $\psi:=\psi_S\circ\pi_S$ satisfies the lemma for $H\subset\mathbb R^r$.$\square$

\[multikro\] Let $\omega_1,\ldots,\omega_r\in\mathbb R^n$ be vectors linearly independent over $\mathbb Z$. Then the additive subgroup $$G := \{x\in\mathbb R^r\mid x_j=\langle t,\omega_j\rangle+p_j\quad\hbox{\rm where}\quad t\in\mathbb R^n,\;p_j\in\mathbb Z\quad{\rm and}\quad j=1,\ldots,r\}$$ is dense[^4] in $\mathbb R^r$.

[[**Proof. **]{}]{}Let $H$ be the closure of $G$. Then $H$ is also a subgroup, and we are going to show that $H=\mathbb R^r$. If, by contradiction, $H\subsetneq\mathbb R^r$, Lemma \[lemmey\] implies that there exists an $\mathbb R$-linear form $\psi\not\equiv 0$ on $\mathbb R^r$ such that $\psi(x)\in\mathbb Z$ for every $x\in H$, and hence a fortiori for every $x\in G$. Evaluating $\psi$ at a point $p$ of $\mathbb Z^r\subset G$ we find $$\psi(p)=q_1p_1+\cdots+q_rp_r\,,$$ for some $q_1,\ldots,q_r\in\mathbb Z$ not all zero; on the other hand, evaluating it at a point of the form $$x_t = (\langle t,\omega_1\rangle,\ldots,\langle t,\omega_r\rangle)\in G\,,$$ where $t\in\mathbb R^n$ is arbitrary, we obtain $$\psi(x_t) = q_1\langle t,\omega_1\rangle+\cdots+q_r\langle t,\omega_r\rangle = \langle t,q_1\omega_1+\cdots+q_r\omega_r\rangle\in\mathbb Z$$ for every $t\in\mathbb R^n$, which is possible if and only if $$q_1\omega_1+\cdots+q_r\omega_r=0\,,$$ a contradiction.$\square$

The proofs of Lemma \[lemmey\] and Theorem \[multikro\] above were obtained by modifying those found in [@Mey].

\[proprF\] Let $F=\{f_1,\ldots,f_s\}\subset{\mathcal S}^*_{n,\mathbb R^n}\,$, $s\leqslant n$, be such that $V(F_\chi)=\varnothing$ for every $\chi\in\Ch\Xi_F$, or such that $\codim V(F_\chi)=s$ for every $\chi\in\Ch\Xi_F$. Then $${\mathcal Y}_F=\overline{\re V(F)}={\mathcal F}_F\,.$$

[[**Proof. **]{}]{}If $V(F_\chi)$ is empty for every $\chi\in\Ch\Xi_F$, the theorem holds trivially. So let $\codim V(F_\chi)=s$ (in particular $V(F_\chi)\neq\varnothing$) for every $\chi\in\Ch\Xi_F$. The inclusion ${\mathcal Y}_F\supseteq \overline{\re V(F)}$ is obvious, so we must show that $\re V(F_\chi)\subseteq\overline{\re V(F)}$ for every $\chi\in\Ch\Xi_F$. Suppose, by contradiction, that there exist a $\chi\in\Ch\Xi_F$ and a point $x\in{\mathbb R}^n\cap V(F_\chi)$ which is not adherent to $\re V(F)$. This means that there exists an $\varepsilon>0$ such that the strip $$D:=\{z\in\mathbb C^n\mid\,\Vert x-\re z\Vert<\varepsilon\}$$ contains no zeros of $F$.
If $\{\omega_1,\ldots,\omega_r\}$ is a system of free generators of the group $\Xi_F$ and, for $1\leqslant\ell\leqslant r$, $\theta_\ell$ denotes the principal determination of the argument of $\chi(\omega_\ell)$, we have the following expressions for every $1\leqslant j\leqslant s$: $$f_j(z) = \sum_{k\in A_j} a_{j,k} \Big(e^{i\langle\omega_1,\im z\rangle}\Big)^{k_1} \!\!\!\!\! \cdots \Big(e^{i\langle\omega_r,\im z\rangle}\Big)^{k_r} e^{\langle k_1\omega_1+\cdots+k_r\omega_r,\re z\rangle}$$ and $$f_{j,\chi}(z) = \sum_{k\in A_j} a_{j,k} \Big(e^{i(\theta_1+\langle\omega_1,\im z\rangle)}\Big)^{k_1} \!\!\!\!\! \cdots \Big(e^{i(\theta_r+\langle\omega_r,\im z\rangle)}\Big)^{k_r} e^{\langle k_1\omega_1+\cdots+k_r\omega_r,\re z\rangle},$$ where $A_j$ is a finite subset of $\mathbb Z^r$ and $a_{j,k}\in\mathbb C^*$ for every $k\in A_j$. Theorem \[multikro\] implies that there exists a sequence $(t_m)\subset\mathbb R^n$ such that, for every $1\leqslant\ell\leqslant r$, $$\lim_{m\to+\infty} \langle\omega_\ell,t_m\rangle = \theta_\ell \qquad {\rm mod.}\;2\pi\mathbb Z\,,$$ so that, for every $1\leqslant j\leqslant s$ and every $z\in\mathbb C^n$, $$\lim_{m\to+\infty}f_j(z+it_m)=f_{j,\chi}(z)\,.$$ For every $m\in \mathbb N$, every $1\leqslant j\leqslant s$ and every $z\in\mathbb C^n$, we set $$g_{j,m}(z) := f_j(z+it_m)\,,$$ so that $\lim_{m\to+\infty} g_{j,m} = f_{j,\chi}\,,$ which means that the system $G_m=\{g_{1,m},\ldots,g_{s,m}\}$ “tends[^5]” to $F_\chi$ as $m$ goes to infinity. Now, since $V(F)\cap D=\varnothing$, we also have $V(G_m)\cap D=\varnothing$ for every $m\in\mathbb N$; but since, by construction, $\codim V(G_m)=s=\codim V(F_\chi)$ for every $m\in\mathbb N$, the several-variables version of Rouché’s theorem implies that $V(F_\chi)\cap D=\varnothing$ as well. This is absurd, since by hypothesis $x\in V(F_\chi)\cap D$. $\square$

Let $F=\{f_1,\ldots,f_s\}\subset{\mathcal S}^*_{n,\mathbb R^n}\,$, $s\leqslant n$, be such that $V(F_\chi)=\varnothing$ for every $\chi\in\Ch\Xi_F$, or such that $\codim V(F_\chi)=s$ for every $\chi\in\Ch\Xi_F$. Then $$\overline{\re V(F)} = \overline{\re V(F_\chi)}\,,$$ for every $\chi\in\Ch\Xi_F$.

[[**Proof. **]{}]{}For every $\chi\in\Ch\Xi_F$ the system $F_\chi$ satisfies the same hypotheses as $F$, so Theorem \[proprF\] also gives $${\mathcal Y}_{F_\chi}=\overline{\re V(F_\chi)}={\mathcal F}_{F_\chi}\,,$$ while on the other hand ${\mathcal Y}_F={\mathcal Y}_{F_\chi}$, hence $\overline{\re V(F)} = \overline{\re V(F_\chi)}\,.$ $\square$

Let $F\subset{\mathcal S}^*_{n,\mathbb R^n}$ be an SSE whose set of spectra is closed; then $${\mathcal Y}_F=\overline{\re V(F)}={\mathcal F}_F\,.$$

[[**Proof. **]{}]{}It suffices to remark that, by Proposition \[kaza2\], for every character $\chi\in\Ch\Xi_F$ the system $F_\chi$ is regular. By Theorem \[kaza\] there are then only two possibilities: either $V(F_\chi)=\varnothing$ for every $\chi\in\Ch\Xi_F$, or $\codim V(F_\chi)=\card F$ for every $\chi\in\Ch\Xi_F$.$\square$

Let $f\in{\mathcal S}^*_{n,\mathbb R^n}$; then ${\mathcal Y}_f=\overline{\re V(f)}={\mathcal F}_f$.

[[**Proof. 
**]{}]{}The set of spectra of an SSE consisting of a single exponential sum is always closed.$\square$ The next lemma concerns the behavior of amoebas under the action of a $\mathbb C$-linear automorphism of $\mathbb C^n$ preserving $\mathbb R^n$; if $\varphi$ is such an automorphism and $F$ an SSE with real frequencies, we set $$F\circ\varphi:=\{f\circ\varphi\in{\mathcal S}_{n,\mathbb R^n}^*\mid f\in F\}\,.$$ \[cov\] Let $F$ be an SSE with real frequencies and $\varphi:\mathbb C^n\longrightarrow\mathbb C^n$ a $\mathbb C$-linear isomorphism such that $\varphi(\mathbb R^n)=\mathbb R^n$. Then: $$\re \Big[V(F\circ\varphi)\Big]=\re \Big[\varphi^{-1}(V(F))\Big]=\varphi^{-1}(\re V(F))$$ $${\mathcal Y}_F=\varphi^a({\mathcal Y}_{F\circ\varphi^a})\,,$$ where $\varphi^a$ denotes the adjoint of $\varphi$ with respect to the standard Hermitian form on $\mathbb C^n$. [**Proof.**]{} $(i)$ The first equality is obvious. If $x\in\re\varphi^{-1}(V(F))$ and $z=x+iy\in \varphi^{-1}(V(F))$, then $\varphi(z)=\varphi(x)+i\varphi(y)$, whence $\re\varphi(z)=\varphi(x)$, that is $x\in\varphi^{-1}(\re V(F))$. Conversely, if $x\in\varphi^{-1}(\re V(F))$, there exists a point $\zeta\in V(F)$ such that $\re \zeta=\varphi(x)$ and, since $\varphi$ is invertible, $$x=\varphi^{-1}(\re(\zeta))=\re\varphi^{-1}(\zeta)\in\re\varphi^{-1}(V(F))\,.$$ $(ii)$ Let $f\in F$, $f(z):=\sum_{\lambda\in\Lambda_f}c_\lambda e^{\langle z,\lambda\rangle}$; then, for every $\lambda\in\Lambda_f$, $$\langle \varphi^a(z),\lambda\rangle = \overline{\langle\lambda,\varphi^a(z)\rangle} = \overline{\langle\varphi(\lambda),z\rangle} = \langle z,\varphi(\lambda)\rangle\,,$$ whence $$f\circ\varphi^a(z) = \sum_{\varphi(\lambda)\in\varphi(\Lambda_f)} c_{\varphi(\lambda)} e^{\langle z,\varphi(\lambda)\rangle}$$ and $$\Ch\Xi_{F\circ\varphi^a} = \{\chi\circ\varphi^{-1}_{\vert\varphi(\Xi_F)}\mid\chi\in\Ch\Xi_F\}\,.$$ It follows that, for every $f\in F$ and every $\chi\in\Ch\Xi_F$, $$f_\chi\circ\varphi^a = (f\circ\varphi^a)_{\chi\circ\varphi^{-1}}$$ and hence $$\re V\big(F_\chi\circ\varphi^a\big) = \re V\Big((F\circ\varphi^a)_{\chi\circ\varphi^{-1}}\Big)$$ and, thanks to $(i)$, $$\re V(F_\chi) = \varphi^a\Big(\re V\Big((F\circ\varphi^a)_{\chi\circ\varphi^{-1}}\Big)\Big)\,,$$ whence the conclusion follows by taking the union over $\chi\in\Ch\Xi_F$.$\square$ \[cov2\] Let $F$ be an [SSE]{} with real frequencies such that the rank of $\Xi_F$ equals $\dim_{\mathbb R}({\rm vect}_{\mathbb R}\Xi_F)$. Then $${\mathcal Y}_F=\re V(F)\,.$$ [[**Proof. **]{}]{}We first assume that, for every $f\in F$, $\Lambda_f\subset\mathbb Z^n$ and that $\Xi_F$ is of the form $$\Xi_F=\{(m_1,\ldots,m_s,0,\ldots,0)\in\mathbb R^n\mid m_1,\ldots,m_s\in\mathbb Z\}\,,$$ where $s\in\{1,\ldots,n\}$ denotes the rank of $\Xi_F$. In this case, in order to determine the amoeba of $F$ we may use the characters of the group $\mathbb Z^n$; thus, if $\chi\in\Ch \mathbb Z^n$ is the character associated with the $n$-tuple  $ (e^{i\theta_1},\ldots,e^{i\theta_n})\,, $ where $(\theta_1,\ldots,\theta_n)\in\mathbb R^n$, $f\in F$ and $\lambda\in\Lambda_f$, then for every $z\in\mathbb C^n$ $$\begin{aligned} \chi(\lambda)e^{\langle z,\lambda\rangle} &=& e^{i(m_1\theta_1+\cdots+m_s\theta_s)}e^{z_1 m_1+\cdots+ z_s m_s}\\ &=& e^{(z_1+i\theta_1)m_1+\cdots+(z_s+i\theta_s)m_s}\\ &=& e^{\langle z+i\theta,\lambda\rangle}\,,\end{aligned}$$ so that $f_\chi(z)=f(z+i\theta)$. 
It follows that, for every $\chi\in\Ch\mathbb Z^n$, $z\in V(F_\chi)$ if and only if $z+i\theta\in V(F)$, whence $\re V(F_\chi)=\re V(F)$ and, since $\chi\in\Ch \mathbb Z^n$ was arbitrary, we deduce that ${\mathcal Y}_F=\re V(F)$. We now pass to the general case. Suppose that the rank $s$ of $\Xi_F$ equals $\dim_{\mathbb R} ({\rm vect}_{\mathbb R} \Xi_F)$ and let $\{\omega_1,\ldots,\omega_s\}$ be a free system of generators of $\Xi_F$. The elements $\omega_1,\ldots,\omega_s$ are linearly independent over $\mathbb R$, for otherwise the vector subspace of $\mathbb R^n$ they span, namely ${\rm vect}_{\mathbb R}\Xi_F$, would have dimension smaller than $s$. This allows us to complete the system $\{\omega_1,\ldots,\omega_s\}$ to a basis $\{\omega_1,\ldots,\omega_s,\omega_{s+1},\ldots,\omega_n\}$ of $\mathbb R^n$. Let $A$ be the matrix given by $$A:= \pmatrix{\omega_{11}&\cdots&\omega_{n1}\cr \vdots &\ddots&\vdots \cr \omega_{1n}&\cdots&\omega_{nn}\cr} \,,$$ then, if $B$ is the inverse of $A$ and $\varphi$ the $\mathbb C$-linear automorphism of $\mathbb C^n$ represented in the canonical bases by the matrix $B$, one sees that $\varphi(\mathbb R^n)=\mathbb R^n$ and that, up to an odd permutation of the first $s$ columns of $A$, we may assume $\det\varphi>0$. This implies the equality $$\varphi(\Xi_F)=\{(m_1,\ldots,m_s,0,\ldots,0)\in\mathbb R^n\mid m_1,\ldots,m_s\in\mathbb Z\}\,;$$ hence, with the notation of [Lemma \[cov\]]{}, the first part of the proof ensures that $${\mathcal Y}_{F\circ\varphi}=\re V(F\circ\varphi)\,,$$ and an appeal to [Lemma \[cov\]]{} gives $${\mathcal Y}_F = \varphi({\mathcal Y}_{F\circ \varphi}) = \varphi(\re V(F\circ\varphi)) = \varphi(\varphi^{-1}(\re V(F))) = \re V(F)\,,$$ which completes the proof.$\square$ \[cov3\] If $F$ is an SSE with rational frequencies then $${\mathcal Y}_F=\re V(F)\,.$$ [[**Proof. **]{}]{} In view of [Lemma \[cov2\]]{}, it suffices to check that the rank $s$ of $\Xi_F$ equals $ \dim_{\mathbb R}({\rm vect}_{\mathbb R}\Xi_F)\,. $ To this end, let $\{\omega_1,\ldots,\omega_s\}\subset\mathbb Q^n$ be a free system of generators of $\Xi_F$. We have  $s\leqslant n\,$; indeed, if $j\in\{1,\ldots,s\}$ and $$\omega_j=\bigg({p_{j1}\over q_{j1}},\ldots,{p_{jn}\over q_{jn}}\bigg)\,,$$ with $p_{j1},\ldots,p_{jn}\in\mathbb Z$ and $q_{j1},\ldots,q_{jn}\in\mathbb Z^*$, then, setting $$\mu:={\textsc{lcm}}\{q_{jk}\in\mathbb Z\mid j\in\{1,\ldots,s\}\;,k\in\{1,\ldots,n\}\}\,;$$ we see that $\mu\neq 0$, so $\Xi_F$ is isomorphic to $\mu\Xi_F$ and, since $\mu\Xi_F\subseteq\mathbb Z^n$, it follows that $s\leqslant n$. Moreover, $\omega_1,\ldots,\omega_s$ are $\mathbb Q$-linearly independent, because multiplying a possible linear dependence relation over $\mathbb Q$ by the least common multiple of the denominators of its coefficients yields a relation over $\mathbb Z$, contradicting the fact that the elements $\omega_1,\ldots,\omega_s$ form a free family over $\mathbb Z$. Consequently, we may complete $\{\omega_1,\ldots,\omega_s\}$ to a basis $\{\omega_1,\ldots,\omega_s,\omega_{s+1},\ldots,\omega_n\}$ of $\mathbb Q^n$. 
As in the proof of [Lemma \[cov2\]]{}, let $A$ be the matrix given by $$A:= \pmatrix{\omega_{11}&\cdots&\omega_{n1}\cr \vdots &\ddots&\vdots \cr \omega_{1n}&\cdots&\omega_{nn}\cr} \,,$$ then, if $B$ is the inverse of $A$ and $\varphi$ the $\mathbb C$-linear automorphism of $\mathbb C^n$ represented in the canonical bases by the matrix $B$, we have $\varphi(\mathbb Q^n)=\mathbb Q^n$ and, up to an odd permutation of the first $s$ columns of $A$, we may assume $\det\varphi>0$. This implies the equality $$\varphi({\rm vect}_{\mathbb Q}\Xi_F)=\{(m_1,\ldots,m_s,0,\ldots,0)\in\mathbb R^n\mid m_1,\ldots,m_s\in\mathbb Q\}\,,$$ where ${\rm vect}_{\mathbb Q}\Xi_F$ denotes the $\mathbb Q$-vector subspace of $\mathbb Q^n$ spanned by $\Xi_F$, whence $$\dim_{\mathbb R}({\rm vect}_{\mathbb R}\Xi_F) = \dim_{\mathbb R}(\varphi({\rm vect}_{\mathbb R}\Xi_F)) = \dim_{\mathbb Q}(\varphi({\rm vect}_{\mathbb Q}\Xi_F)) = s\,,$$ which concludes the proof.$\square$ [[Corollary \[cov3\]]{} has as a consequence the fact that our notion of amoeba for a system of exponential sums generalizes the classical notion of amoeba. Indeed, if $P=\{p_1,\ldots,p_r\}\subset\mathbb C[x_1^{\pm 1},\ldots,x_n^{\pm 1}]$ is a system of nonzero Laurent polynomials, $V(P)$ its zero set in the torus $(\mathbb C^*)^n$ and ${\mathcal A}_P:={\rm Ln} V(P)$ its amoeba in the classical sense, the substitution $x_j=e^{z_j}$, for $j=1,\ldots,n$, transforms $P$ into the SSE with integer frequencies $F:=\{f_1,\ldots,f_r\} \subset{\mathcal S}_{n,\mathbb Z^n}^*$, where, for $1\leqslant k\leqslant r$ and $z\in\mathbb C^n$, we set $$f_k(z):=p_k(e^{z_1},\ldots,e^{z_n})\,.$$ Since, for every $j=1,\ldots,n$,  $ \ln\vert x_j\vert=\ln\vert e^{z_j}\vert=\re z_j\, $, we deduce that $${\mathcal A}_P={\mathcal Y}_F={\mathcal F}_F\,.$$]{} \[bal1\] [ Let $\gamma\in\mathbb R\setminus\mathbb Q$ and let $f\in{\mathcal S}_{1,\mathbb R}^*$ be given, for $z\in\mathbb C$, by $$\begin{aligned} f(z) &=& \cos(iz)+\sin(i\gamma z)-2\\ &=& {1\over 2}\big(e^{-z}+e^z\big)+{1\over 2i}\big(e^{-\gamma z}-e^{\gamma z}\big)-2\,.\end{aligned}$$ The set $\re V(f)$ is not closed in $\mathbb R$, so $\re V(f)\subsetneq{\mathcal Y}_f$. Indeed, if $z$ is purely imaginary, $f(z)=0$ if and only if $\cos iz=1$ and $\sin (i\gamma z)=1$, that is, if and only if $$iz\in 2\,\pi\mathbb Z \cap \Big((\pi/ 2\gamma)+(2\,\pi/\gamma)\mathbb Z\Big)=\varnothing\,,$$ so in particular $0\notin\re V(f)$. On the other hand, if $\chi\in\Ch\Xi_f$ is such that $\chi(1)=1$ and $\chi(\gamma)=-i$, one has $$f_\chi(z)=\cos(iz)+\cos(i\gamma z)-2\,,$$ whence $f_\chi(0)=0$ and therefore $0\in{\mathcal Y}_f=\overline{\re V(f)}$.[^6] Since the rank of the group $\Xi_f$ equals $2$, we see that [Lemma \[cov2\]]{} is in general false when the rank of $\Xi_F$ is greater than $\dim_{\mathbb R}({\rm vect}_{\mathbb R}\,\Xi_F)$.]{} \[jam1\] [ Let $\gamma\in\mathbb R\setminus\mathbb Q$ and let $f\in{\mathcal S}^*_{1,\mathbb R}$ be given, for $z\in\mathbb C$, by $$f(z) = (e^z-1)(e^{\gamma z}-e^\gamma)\,;$$ then $\re V(f)=\{0,1\}={\mathcal Y}_f$ even though the hypotheses of [Lemma \[cov2\]]{} are not satisfied. 
The condition stated in [Lemma \[cov2\]]{} is therefore sufficient but not necessary for the equality ${\mathcal Y}_f=\re V(f)$ to hold.]{} \[jam2\] [Let $\gamma\in\mathbb R\setminus\mathbb Q$ and $F=\{f,g\}\subset{\mathcal S}_{1,\mathbb R}^*\,$, where $f$ and $g$ are given, for $z\in\mathbb C$, by $$f(z) = \cos(iz)+\sin(i\gamma z)-2 \qquad{\rm and}\qquad g(z) = e^{z}-1\,.$$ This is a system with no solutions (since $f$ has no purely imaginary zeros whereas $g$ has only such zeros), so $\overline{\re V(F)}=\varnothing$. On the other hand ${\mathcal Y}_F\neq\varnothing$, since $0\in V(F_\chi)$, where $\chi$ denotes the character such that $\chi(1)=1$ and $\chi(\gamma)=-i$. There do exist, therefore, systems $F\subset{\mathcal S}^*_{n,\mathbb R^n}$ which have no zeros and whose amoeba ${\mathcal Y}_F$ is nonempty. In such cases the amoeba ${\mathcal Y}_F$ is too large (hence of little interest) and [Theorem \[proprF\]]{} fails.]{} k-convexity in the sense of Henriques. ====================================== In this section we make a few remarks about the notion of $k$-convexity for an open subset of a real affine space, as introduced in [@Hen], to which we refer for all the definitions, the technical details and all the results invoked in what follows, in particular as regards the complex of polyhedral chains. If $\varnothing\neq X\subset\mathbb R^n$ is an open set, we denote by ${}^{\rm pl}C_\bullet(X)$ the complex of polyhedral chains of $X$; it is obtained as the quotient of the complex ${}^\Delta C_\bullet(X)$ of piecewise linear chains of $X$ by the relation $\sim$ of geometric equivalence of such chains. If $\sigma=\sum_{j=1}^m\lambda_j\sigma_j\in{}^{\Delta}C_k(X)$, with $\lambda_j\neq 0$ for every $j$, and $c=[\sigma]_\sim\in{}^{\rm pl}C_k(X)$, we recall that the support $\Supp\sigma$ of $\sigma$ is the union of the images of the chains $\sigma_j$ appearing in the expression of $\sigma$, and that $$\Supp c:=\bigcap_{\tau\sim\sigma} \Supp\tau\,,$$ the latter being well defined by virtue of Lemma 2.4 in [@Hen]. We also recall that the homology of the complex ${}^\Delta C_\bullet(X)$ is isomorphic to the singular homology of $X$ ([@Hen], Lemma 2.2), so that in any question of $k$-convexity for an open subset $X$ of a real affine space we may use the homology of the complex ${}^\Delta C_\bullet(X)$ instead of that of the singular chains of $X$. The term $k$-[*convexity*]{} is not new in mathematics; it already occurs in complex analysis of several variables as well as in functional analysis. Nevertheless, those analytic notions do not resemble the notion presented by Henriques, which seems to me genuinely new. We mention, moreover, that Mikhalkin [@Mi2] has introduced, under the same name of $k$-convexity, a notion stronger than that of Henriques. It should be noted that, for fixed $k\in\mathbb N$, $k$-convexity in $\mathbb R^n$ becomes interesting only for $n\geqslant k+2$; otherwise every subset of $\mathbb R^n$ is $k$-convex. Simple examples are the complement of a finite union of lines in $\mathbb R^3$, which is $1$-convex but not $0$-convex[^7], and, more generally, the complement of a finite union of affine $k$-subspaces in $\mathbb R^{k+2}$, which is $k$-convex but not $\ell$-convex for $\ell<k$. By contrast, the complement of a finite set of points in $\mathbb R^3$ is neither $0$-convex nor $1$-convex (though it is trivially $2$-convex). 
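For the reader's convenience, here is a minimal restatement of the notion as it is used in the proofs below (this is only a working summary, assuming Henriques' conventions for the nonnegative reduced homology classes $\widetilde H^+_k$; see [@Hen] for the precise formulation): a nonempty open set $X\subset\mathbb R^n$ is $k$-convex when, for every oriented affine $(k+1)$-subspace $S$ of $\mathbb R^n$ meeting $X$, the morphism induced by the inclusion, $$\iota_S:\widetilde H_{k}(S\cap X,\mathbb Z)\longrightarrow\widetilde H_{k}(X,\mathbb Z)\,,$$ sends no nonzero class of $\widetilde H^{+}_{k}(S\cap X,\mathbb Z)$ to zero.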
The “weakness” of the notion of $k$-convexity increases with $k$. Let $X\subset\mathbb R^n$ be a nonempty subset and let $k\in\mathbb N$. Then, if $X$ is $k$-convex, it is also $(k+1)$-convex. [[**Proof. **]{}]{}Suppose, by contradiction, that $X$ is $k$-convex but not $(k+1)$-convex. There exist then an oriented affine $(k+2)$-subspace $S$ of $\mathbb R^n$ meeting $X$ and a nonzero class $c$ in $\widetilde H^+_{k+1}(S\cap X,\mathbb Z)$ whose image (under the morphism induced by the inclusion) in $\widetilde H_{k+1}(X,\mathbb Z)$ is zero. Let then $\sigma$ be a nonnegative $(k+1)$-cycle in $S\cap X$ representing the class $c$, and let $S^\prime$ be an oriented affine $(k+1)$-subspace of $S$ such that the intersection $\sigma^\prime:=S^\prime\cap \sigma$ is a nonzero nonnegative $k$-cycle contained in $S^\prime\cap X$ (such a subspace exists, for otherwise $\sigma$ would represent the zero class of $\widetilde H^+_{k+1}(S\cap X,\mathbb Z)$). Since $\sigma$ represents the zero class in $\widetilde H_{k+1}(X,\mathbb Z)$, we deduce that $\sigma^\prime$ represents the zero class in $\widetilde H_{k}(X,\mathbb Z)$, contradicting the $k$-convexity of $X$.$\square$ We close the section with the following lemma. Let $\varphi:\mathbb R^n\longrightarrow\mathbb R^n$ be an orientation-preserving isomorphism of affine spaces. Then if $X\subset \mathbb R^n$ is $k$-convex, so is $\varphi(X)$. [[**Proof. **]{}]{}If $X=\varnothing$ there is nothing to prove. Otherwise, the restriction of $\varphi$ to $X$ induces a homeomorphism of $X$ onto $\varphi(X)$, hence an isomorphism in reduced homology $\varphi_*:{\tilde H}_\bullet(X) \longrightarrow{\tilde H}_\bullet(\varphi(X))$. Moreover, since $\varphi$ preserves orientation, for every oriented affine $(k+1)$-subspace $S$ of $\mathbb R^n$ meeting $X$, $\varphi(S)$ is an affine subspace of $\mathbb R^n$ which is isomorphic to $S$, as an oriented real affine space, and which meets $\varphi(X)$; conversely, every oriented affine $(k+1)$-subspace of $\mathbb R^n$ meeting $\varphi(X)$ is of the form $\varphi(S)$ for a unique $S$. Finally, for every affine $(k+1)$-subspace $S$ of $\mathbb R^n$ meeting $X$ and every $x\in S\setminus X$, the isomorphism $\varphi$ induces an isomorphism $$\varphi_*:\mathbb Z={\tilde H}_k(S\setminus\{x\})\longrightarrow {\tilde H}_k(\varphi(S)\setminus\{\varphi(x)\})=\mathbb Z$$ which, as is easily seen, is nothing but the identity isomorphism. We can now conclude the proof: indeed, for every oriented affine $(k+1)$-subspace $S$ of $\mathbb R^n$ meeting $X$ and every $x\in S\setminus X$, $${\tilde H}_k^+(\varphi(S)\cap\varphi(X))\setminus\{0\} =\varphi_*({\tilde H}^+_k(S\cap X)\setminus\{0\})$$ and moreover the following diagram $$\matrix{ {\widetilde H}_k(\varphi(S)\cap \varphi(X)) & \hfl{}{} & {\widetilde H}_k(\varphi(S)\setminus\{\varphi(x)\})\cr &&&\cr \vfl{\varphi_*^{-1}}{} & & \vfl{}{id}\cr &&&\cr {\widetilde H}_k(S\cap X) & \hfl{}{} & {\widetilde H}_k(S\setminus\{x\})=\mathbb Z\cr }$$ (where the horizontal arrows are induced by the inclusion) is commutative.$\square$ The complement of the amoeba. ============================= In this section we prove a result about the complement ${\mathcal F}_F^c$ of the amoeba of an SSE $F$ with real frequencies, which constitutes the counterpart of [Theorem \[hen\]]{}. 
To this end, we shall need a geometric hypothesis on the frequencies of $F$, namely the hypothesis that the set of spectra of $F$ be closed. Let $F\subset{\mathcal S}_{n,\mathbb R^n}^*$ be an SSE whose set of spectra is closed. If $F$ consists of $(k+1)$ exponential sums, the complement ${\mathcal F}_F^c$ of the amoeba of $F$ is a $k$-convex subset of $\mathbb R^n$. [[**Proof. **]{}]{} The set of spectra of $F$ is closed, so ${\mathcal F}_F={\mathcal Y}_F$ and, for every $\chi\in\Ch \Xi_F$, the SSE $F_\chi$ is regular. If $\dim_{\mathbb C}(\aff_{\mathbb C}\Gamma_F)<(k+1)$, then $V(F_\chi)=\varnothing$ for every $\chi\in\Ch\Xi_F$, so ${\mathcal Y}_F^c=\mathbb R^n$, which is obviously $k$-convex. If, on the contrary, $\dim_{\mathbb C}(\aff_{\mathbb C}\Gamma_F)\geqslant (k+1)$, the analytic set $V(F_\chi)$ is nonempty and of codimension $(k+1)$ in $\mathbb C^n$ for every $\chi\in\Ch\Xi_F$. We carry out the proof in three steps. $(i)$ If $\Lambda_f\subset\mathbb Z^n$ for every $f\in F$, the amoeba ${\mathcal Y}_F$ coincides with the amoeba (in the classical sense) ${\mathcal A}_P$ of a system $P$ of Laurent polynomials in $n$ variables such that the codimension, in $(\mathbb C^*)^n$, of the algebraic set $V(P)$ equals $(k+1)$. Thanks to [Theorem \[hen\]]{}, we may conclude that ${\mathcal Y}_F^c$ is $k$-convex in this case. $(ii)$ We now assume that $\Lambda_f\subset\mathbb Q^n$ for every $f\in F$; as in the proof of [Corollary \[cov2\]]{}, we can find a $\mathbb C$-linear automorphism $\varphi$ of $\mathbb C^n$ such that $\det\varphi>0$, $\varphi(\mathbb R^n)=\mathbb R^n$ and $\varphi(\Xi_F)\subset\mathbb Z^n$. Thus, with the same notation as in [Lemma \[cov\]]{}, we have $${\mathcal Y}_F=\varphi^a({\mathcal Y}_{F\circ\varphi^a}) \qquad {\rm and} \qquad {\mathcal Y}_F^c=\varphi^a({\mathcal Y}_{F\circ\varphi^a}^c)$$ because the adjoint $\varphi^a$ of $\varphi$ is also bijective. Moreover, the fact that $\varphi$ is an isomorphism implies that the set of spectra of the system $F\circ\varphi^a$ is also closed, so $\dim_{\mathbb C}(\aff_{\mathbb C}\Gamma_{F\circ\varphi^a})\ge (k+1)$ and $\codim V(F\circ\varphi^a)=(k+1)$. Now, since  $\Xi_{F\circ\varphi^a}=\varphi(\Xi_F)\subset\mathbb Z^n$, the first part of the proof shows that the set ${\mathcal Y}^c_{F\circ\varphi^a}$ is $k$-convex in $\mathbb R^n$ and, since $\det\varphi^a>0$, an appeal to [Lemma \[cov\]]{} completes the proof in this second case. $(iii)$ We pass now to the general case where $\Lambda_f\subset\mathbb R^n$ for every $f\in F$. If $\{\omega_1,\ldots,\omega_r\}$ is a free system of generators of $\Xi_F$, we have $$f(z)=\sum_{k\in A_f}a_{f,k} e^{k_1\langle z,\omega_1\rangle+\cdots+k_r\langle z,\omega_r\rangle}\,,$$ where $A_f\subset\mathbb Z^r$ is a finite subset and $a_{f,k}\in\mathbb C^*$ for every $k\in A_f$. 
For every $j\in\{1,\ldots,r\}$, let $(\omega_{j,\ell})_{\ell\in\mathbb N}\subset\mathbb Q^n$ be a sequence converging to $\omega_j$ and, for every $\ell\in\mathbb N$, let $F^{[\ell]}:=\{f^{[\ell]}\in{\cal S}_{n,\mathbb R} \mid f\in F\}$, where $f^{[\ell]}$ is the exponential sum given by $$f^{[\ell]}(z):= \sum_{k\in A_f} a_{f,k} e^{k_1\langle z,\omega_{1,\ell}\rangle+\cdots+k_r\langle z,\omega_{r,\ell}\rangle}\,.$$ One sees in this way that, for every $f\in F$, the sequence of polytopes $(\Gamma_{f^{[\ell]}})_{\ell\in\mathbb N}$ converges to the polytope $\Gamma_f$ in the Hausdorff metric; consequently, for $\ell$ large enough, the set of spectra of the system $F^{[\ell]}$ is also closed (so $F^{[\ell]}$ is regular) and $\dim_{\mathbb C}(\aff_{\mathbb C}\Gamma_{F^{[\ell]}})\geqslant (k+1)$. This implies that, for $\ell$ large enough, the analytic set $V(F^{[\ell]})$ is nonempty and of codimension $(k+1)$ in $\mathbb C^n$. On the other hand, for every $f\in F$, the support of $f^{[\ell]}$ is contained in $\mathbb Q^n$, so, by virtue of the second part of the proof, we know that for $\ell$ large enough the set ${\cal Y}_{F^{[\ell]}}^c$ is $k$-convex. In an analogous way, for every $\chi\in\Ch\Xi_F$ and every $\ell\in\mathbb N$, one can define $(F_\chi)^{[\ell]}$ and, since for every $\chi\in\Ch\Xi_F$, every $\ell\in\mathbb N$ and every $f\in F$ one has  $ \Lambda_{f^{[\ell]}}=\Lambda_{(f_\chi)^{[\ell]}}\subset\mathbb Q^n $, one can likewise conclude that, for every character $\chi\in\Ch\Xi_F$ and for $\ell$ large enough, the set ${\cal Y}_{(F_\chi)^{[\ell]}}^c$ is $k$-convex. The hypotheses $F\neq\{0\}$ and ${\mathcal Y}_F\neq\varnothing$ imply the existence of an oriented affine $(k+1)$-subspace $S$ of $\mathbb R^n$ such that $\varnothing\neq S\cap{\cal Y}_F^c\neq S$. Let $S$ be such an affine subspace (with underlying vector space $E_S$) and suppose, by contradiction, that there exists a class $\gamma\in{\widetilde H}^+_k(S\cap{\cal Y}^c_F)\setminus\{0\}$ whose image is zero under the morphism $$\iota:{\widetilde H}_k(S\cap{\cal Y}^c_F)\longrightarrow\widetilde H_k({\cal Y}^c_F)\,$$ induced by the inclusion; we have to show that the existence of such an element leads to a contradiction. To this end we choose a representative $c$ of $\gamma$ in the group ${\cal C}^\Delta_k( S\cap {\cal Y}^c_F)$ (that is, a piecewise affine $k$-cycle in the open subset $S\cap {\cal Y}^c_F$ of the $(k+1)$-dimensional affine space $S$); thanks to Lemma 2.7 of [@Hen], there exists a unique piecewise affine $(k+1)$-chain $C$ in ${\cal C}^\Delta_{k+1} (S)$ (depending on $c$) such that $\partial C=c$, and the hypothesis that the homology class of $c$ in $S\cap {\cal Y}_F^c$ is nonzero is equivalent (by the same Lemma 2.7 of [@Hen]) to the support of $C$ not being contained in ${\cal Y}_F^c$; there exists therefore a character $\chi_o$ of $\Xi_F$ such that the support of $C$ is not contained in $S\cap({\rm Re}\, V(F_{\chi_o}))^c$.\ Moreover, since $\Supp c\subset{\cal Y}^c_F$, we see that the zero class of $\widetilde H_k({\cal Y}^c_F)$ can be represented by the cycle $c$, so there exists an element  $D\in{\cal C}^\Delta_{k+1}({\cal Y}^c_F)$ such that $\partial D=c$ in ${\cal Y}^c_F$. 
We grant for the moment that there exists $L\in\mathbb N$ such that, for every $\ell\geqslant L$, one has $$\Supp c \cup\Supp D\subseteq {\cal Y}_{(F_{\chi})^{[\ell]}}^c\,, \eqno{(*)_\chi^\ell}$$ for every $\chi\in\Ch\Xi_F$. Then, for $\ell\geqslant L$, relation $(*)_{\chi_o}^\ell$ implies that $c$ represents a homology class $\gamma_{\chi_o,\ell}$ of $\widetilde H_k(S\cap {\cal Y}^c_{(F_{\chi_o})^{[\ell]}})$ whose image is zero under the morphism $$\iota_\ell: \widetilde H_k(S\cap {\cal Y}^c_{(F_{\chi_o})^{[\ell]}}) \longrightarrow \widetilde H_k({\cal Y}^c_{(F_{\chi_o})^{[\ell]}})\;,$$ induced by the inclusion. Moreover, the hypothesis $\gamma\in{\widetilde H}^+_k(S\cap{\cal Y}^c_F)$ implies that, for $\ell\geqslant L$, $\gamma_{\chi_o,\ell}\in \widetilde H^+_k(S\cap {\cal Y}^c_{(F_{\chi_o})^{[\ell]}})$. Indeed, if, for $\ell\geqslant L$ and $x$ belonging to $S\setminus{\cal Y}^c_{(F_{\chi_o})^{[\ell]}}$, $\upsilon_x$ denotes the standard generator of the de Rham cohomology group $H^k_{dR}(S\setminus\{x\})$, $$\upsilon_x:= {1\over \varkappa_k} \sum_{j=0}^k (-1)^j{\xi_j-x_j\over\parallel \xi-x\parallel^{k+1}} \,d\xi_{[j]}\,,$$ ($\varkappa_k$ being the $k$-dimensional volume of the $k$-dimensional sphere), one has $$\int_c \upsilon_x >0\quad{\rm when}~x\in\Supp C\,,$$ and $$\int_c \upsilon_x=0\quad {\rm when}~x\notin\Supp C\,.$$ If one changes the representative of $\gamma_{\chi_o,\ell}$, it is easy to see (by Stokes' theorem) that neither of the two integrals above can become negative; hence, thanks to Lemma 3.2 of [@Hen], for $\ell\geqslant L$ the class $\gamma_{\chi_o,\ell}$ is nonnegative in $\widetilde H_k(S\cap {\cal Y}^c_{(F_{\chi_o})^{[\ell]}})$. On the other hand, for $\ell\geqslant L$, the $k$-convexity of ${\cal Y}^c_{(F_{\chi_o})^{[\ell]}}$ implies that $\gamma_{\chi_o,\ell}$ represents the zero class in the group ${\widetilde H}_k(S\cap {\cal Y}^c_{(F_{\chi_o})^{[\ell]}})$, that is, $\Supp C\subseteq{\cal Y}^c_{(F_{\chi_o})^{[\ell]}}$. The expected contradiction will then come from the fact that the support of $C$ is known not to be contained in $S\cap {\rm Re}\, (V(F_{\chi_o}))^c$. Indeed, one can find a point $x\in S\cap {\rm Re}\, (V(F_{\chi_o}))$ which also belongs to the relative interior of $\Supp C$, so there exists a neighborhood $W$ of $x$ such that $W\cap S$ is entirely contained in $\Supp C$. If $y\in\mathbb R^n$ is such that $x+iy\in V(F_{\chi_o})$, the intersection of $V(F_{\chi_o})$ with $$U:=S+i(y+E_S)\,,$$ is a discrete analytic subset of $\mathbb C^n$. Let then $B$ be an open ball in $\mathbb C^n$ centered at $x+iy$ containing no other points of $V(F_{\chi_o})\cap U$. For $\ell$ large enough, the analytic set $V((F_{\chi_o})^{[\ell]})\cap U$ is also discrete and, in that case, the several-variables version of Rouché's theorem ensures that this set has in $B$ the same number of elements as $V(F_{\chi_o})\cap U$ has there, namely a single element, which we denote $x_\ell+iy_\ell$. It is clear that the sequence of points $x_\ell+iy_\ell$ tends to $x+iy$ and, in particular, that the points of the sequence $(x_\ell)$ belong to $W\cap S$ for $\ell$ large enough. But then we have found the expected contradiction, since, for $\ell$ large enough, we have on the one hand $x_\ell\in{\cal Y}_{(F_{\chi_o})^{[\ell]}}$ and on the other hand $x_\ell\in W\cap S\subset\Supp C\subset {\cal Y}_{(F_{\chi_o})^{[\ell]}}^c$. 
To finish the proof, it remains to show that there exists $L\in\mathbb N$ such that, for every $\ell\geqslant L$, relation $(*)_\chi^\ell$ holds for every $\chi\in\Ch\Xi_F$. We begin by remarking that there exists a finite number $m$ of closed balls $\overline B(x_s,\varepsilon_{x_s})$, $1\leqslant s\leqslant m$, such that $${\rm Supp}\, c \cup \Supp D\subset \bigcup\limits_{s=1}^m \overline B(x_s,\varepsilon_{x_s}) \subset {\cal Y}^c_F\,.$$ It suffices to show that, for each $1\leqslant s\leqslant m$, there exists an $l_s\in\mathbb N$ such that, for every integer $\ell\geqslant l_s$, $$\overline B(x_s,\varepsilon_{x_s})\subset{\cal Y}_{(F_\chi)^{[\ell]}}^c\,,$$ for every $\chi \in \Ch\Xi_F$, and then to take $L:=\max\{l_s\mid 1\leqslant s\leqslant m\}$. We prove this by contradiction: we suppose that, for some $s$, $1\leqslant s\leqslant m$, there exist a strictly increasing sequence $(\ell_q)\subseteq\mathbb N$ and a sequence $(\chi_q)\subset\Ch\Xi_F$ such that $$\overline B(x_s,\varepsilon_{x_s})\cap{\cal Y}_{(F_{\chi_q})^{[\ell_q]}}\neq\varnothing\,.$$ Since, for every $q\in\mathbb N$, ${\cal Y}_{(F_{\chi_q})^{[\ell_q]}}=\re V((F_{\chi_q})^{[\ell_q]})\,$, we deduce the existence of a sequence of points $\xi_q$ in $\overline B(x_s,\varepsilon_{x_s})$ and of a sequence of points $\eta_q$ in $\mathbb R^n$ such that, for every $f\in F$ and every $q\in\mathbb N$, $$(f_{\chi_q})^{[\ell_q]}(\xi_q+i\eta_q)=0\,,$$ that is $$(f_{\tilde \chi_q})^{[\ell_q]}(\xi_q)=0\,,$$ where, for every $q\in\mathbb N\,$, $\tilde \chi_q:=\chi_q \kappa_q$,  $\kappa_q$ denoting the character of $\Xi_F$ given, for $1\leqslant j\leqslant r$, by  $ \kappa_q(\omega_j)=e^{i\langle\eta_q,\omega_{j,\ell_q}\rangle}\,. $ By compactness of $\overline B(x_s,\varepsilon_{x_s})$ and of $\Ch\Xi_F$, we extract a subsequence $(\xi_{q_r})$ and a subsequence $(\tilde \chi_{q_r})$ converging, respectively, to a point $\tilde\xi$ of the ball $\overline B(x_s,\varepsilon_{x_s})$ and to a character $\tilde \chi$ of $\Ch\Xi_F$; passing to the limit, we thus obtain, for every $f\in F$, $$0 = \lim_{r\to\infty}(f_{\tilde \chi_{q_r}})^{[\ell_{q_r}]}(\xi_{q_r}) = f_{\tilde \chi}(\tilde \xi)\,,$$ which is absurd, since $\tilde \xi \in \overline B(x_s,\varepsilon_{x_s})\subset{\cal Y}_F^c$.$\square$ I thank my thesis advisor, Alain Yger, for the support he has given me during the preparation of this article, as well as Michel Balazard and Mikael Passare for the examples they provided. [99]{} D. Eisenbud: [*Commutative Algebra with a View Towards Algebraic Geometry,*]{} G.T.M. 150 (1994). S. Favorov: [*Holomorphic almost periodic functions in tube domains and their amoebas,*]{} Comput. Methods Funct. Theory [**1**]{} (2001), vol. 2, 403-415. M. Forsberg: [*Amoebas and Laurent Series,*]{} Doctoral thesis, Royal Institute of Technology, Stockholm, 1998. M. Forsberg, M. Passare, A. Tsikh: [*Laurent determinants and arrangements of hyperplane amoebas,*]{} Adv. in Math. 151 (2000), 45-70. I.M. Gelfand, M.M. Kapranov, A.V. Zelevinsky: [*Discriminants, Resultants and Multidimensional Determinants,*]{} Birkhäuser, Boston, 1994. A. Henriques: [*An analogue of convexity for complements of amoebas of varieties of higher codimension, an answer to a question asked by B. Sturmfels,*]{} Adv. in Geom. Vol. 4 (2004). B.Ja. Kazarnovskiǐ: [*On the zeros of exponential sums,*]{} Soviet Math. Dokl. Vol. 23 (1981), No. 2. Y. Meyer: [*Algebraic Numbers and Harmonic Analysis,*]{} North-Holland, (1972). G. 
Mikhalkin: [*Real algebraic curves, moment map and amoebas,*]{} Ann. of Math. 151 (2000), no. 1, 309-326. G. Mikhalkin: [*Amoebas of Algebraic Varieties and Tropical Geometry,*]{} Preprint, arXiv:math.AG/0403015 v1, (2004). M. Passare, H. Rullg[å]{}rd: [*Amoebas, Monge-Ampère measures and triangulations of the Newton Polytope,*]{} Duke Math. J. 121 (2004), no. 3, 481-507. L. Ronkin: [*On the zeros of almost periodic function generated by holomorphic functions in a multicircular domain,*]{} Complex Analysis in Modern Mathematics, Fazis, Moscow, (2000), 243-256. H. Rullg[å]{}rd: [*Polynomial Amoebas and Convexity,*]{} Preprint, Stockholm University, (2000). H. Rullg[å]{}rd: [*Stratification des espaces de polynômes de Laurent et structure de leurs amibes,*]{} C.R. Acad. Sci. Paris Sér. I Math. 331, no. 5, (2000), 355-358. A. Yger: [*Fonctions définies dans le plan et moyennes en tout point de leurs valeurs aux sommets de deux carrés,*]{} C.R. Acad. Sci. Paris Sér. A [**288**]{} (1979), no. 10, A535 – A538. A. Yger: [*Une généralisation d’un théorème de J. Delsarte,*]{} C.R. Acad. Sci. Paris Sér. A [**288**]{} (1979), no. 9, A497 – A499. James SILIPO\ LaBAG, Institut de Mathématiques\ U.F.R. de Mathématiques et Informatique, Université Bordeaux 1\ 351 cours de la Libération, 33405, Talence Cedex, France\ [*silipo@math.u-bordeaux1.fr*]{} [^1]: A deep result from the theory of holomorphic almost periodic functions (the Bochner-Fejér approximation theorem) ensures that every function $g\in AP(T_\Omega)$ is the limit, in the topology $\tau(T_\Omega)$, of a convergent sequence of exponential sums with purely imaginary frequencies. [^2]: This notion of amoeba could be adapted to the more general case of finite systems of holomorphic almost periodic functions in tube domains of $\mathbb C^n$. [^3]: Since the additive group $\mathbb R^n$ is torsion-free, its finitely generated subgroups are free. [^4]: Observe that diophantine conditions on $\omega_1,\ldots,\omega_r$ “slow down” the rate of approximation of a given point of $\mathbb R^r$ by points of $G$. [^5]: The idea of approximating the $f_{j,\chi}$ by translates of the $f_j$ with the help of [Theorem \[multikro\]]{} had already been exploited by Ronkin in [@Ron]. [^6]: A direct proof of this fact, not using the language of amoebas, was pointed out to me by Michel Balazard. [^7]: Another rather instructive example of such a subset was pointed out to me by Mikael Passare: the complement of an “Eiffel tower” in $\mathbb R^3$.
--- abstract: 'A large number of treatments of the meson spectrum have been tried that consider mesons as quark - anti quark bound states. Recently, we used relativistic quantum constraint mechanics to introduce a fully covariant treatment defined by two coupled Dirac equations. For field-theoretic interactions, this procedure functions as a quantum mechanical transform of Bethe-Salpeter equation. Here, we test its spectral fits against those provided by an assortment of models: Wisconsin model, Iowa State model, Brayshaw model, and the popular semi-relativistic treatment of Godfrey and Isgur. We find that the fit provided by the two-body Dirac model for the entire meson spectrum competes with the best fits to partial spectra provided by the others and does so with the smallest number of interaction functions without additional cutoff parameters necessary to make other approaches numerically tractable.  We discuss the distinguishing features of our model that may account for the relative overall success of its fits. Note especially that in our approach for QCD, the resulting pion mass and associated Goldstone behavior depend sensitively on the preservation of relativistic couplings that are crucial for its success when solved nonperturbatively for the analogous two-body bound-states of QED.  ' address: | 12474 Sunny Glen Drive, Moorpark,\ California, 93021 author: - Horace Crater - Peter Van Alstine title: 'Relativistic Calculation of the Meson Spectrum: a Fully Covariant Treatment Versus Standard Treatments' --- Introduction ============= Over 50 years after the discovery of the first meson and over 25 years after the identification of its underlying quark degrees of freedom, the Strong-Interaction Bound-state problem remains unsolved. Perhaps eventually the full spectrum of mesonic and baryonic states will be calculated directly from Quantum Chromodynamics via lattice gauge theory. This would require use of techniques that were unknown to the founding fathers of QED. For the present though, researchers have had to content themselves with attempts to extend bits and pieces of traditional QED bound-state treatments into the realm of QCD. Unfortunately, for those bound systems whose constituent kinetic or potential energies are comparable to constituent rest masses, nonrelativistic techniques are inadequate from the start. In the QED bound-state problem, weakness of the coupling permitted calculation through perturbation about the nonrelativistic quantum mechanics of the Schrödinger equation. Using the equation adopted by Breit [@Breit1]-[@Breit3](eventually justified by the Bethe-Salpeter equation [@bet57]), one was faced with the fact that a nonperturbative numerical treatment of the Breit equation could not yield spectral results that agree to an appropriate order with a perturbative treatment of the semirelativistic form of that equation[@bet57]-[@cwyw].  This form of the equation contained such terms as contact terms bred by the vector Darwin interaction that could be treated only perturbatively, spoiling the interpretation of the Breit equation as a bona-fide wave equation. Forays into the full relativistic structure defined by the Bethe-Salpeter equation turned up fundamental problems which fortunately could be sidestepped for QED due to the smallness of $\alpha$. In the absence of definitive guidance from QED, in recent years researchers in QCD have felt free to jump off from any point that had proven historically useful in QED. 
Some have chosen to approach the spectrum using time-honored forms from the “relativistic correction structure” of atomic physics. Others have employed truncations of field-theoretic bound-state equations in hopes that the truncations do no violence to the dynamical structures or their relativistic transformation properties. A third set have broken away from QED by choosing to guess at “relativistic wave equations” as though such equations have no connection to field theory. Is there another way to attack this problem? Imagine that we could replace the Schrodinger equation by a many-body relativistic Schrodinger equation or improved Breit equation that could be solved numerically. One would have to establish its validity by connecting it to quantum field theory, and its utility by solving it for QCD. Of course such an approach would apply equally as well to QED and so would have to recapitulate the known results of QED. (These results might reemerge in unfamiliar forms since not originating in the usual expansion about the nonrelativistic limit.) Now, for the two-body bound-state problem, there is such an equation or rather a system of two coupled Dirac equations - for an interacting pair of relativistic spin one-half constituents. It turns out that for the two-body case, use of Dirac’s constrained Hamiltonian mechanics [@di64]-[@drz75] in a form appropriate for two spinning particles [@cra82], [@saz86] (pseudo-classical mechanics using Grassmann degrees of freedom[@brz],[@tei81])leads to a consistent relativistic quantum description. In the two-body case, one may explicitly construct the covariant Center of Momentum rest frame of the interacting system. In fact, the relativistic two-body problem may be written as an effective relativistic one-body problem [@tod76], [@cra84],[@cra87]. The proper formulation of this relativistic scheme requires the successful treatment of the quantum ghost states (due to the presence of the relative time) that first appeared in Nakanishi’s work on the Bethe-Salpeter equation[@nak69]. It might seem that although fully covariant and quantum mechanically legitimate, such an approach would merely give a sophisticated method for guessing relativistic wave equations for systems of bound quarks. However, this method assumes its full power when combined with the field-theoretic machinery of the Bethe-Salpeter equation. When used with the kernel of the Bethe-Salpeter equation for QED, our approach combines weak-potential agreement with QED [@bckr] with the nonperturbative structure of the field-theoretic eikonal approximation[@tod71],[@saz97]. The extra structure is automatically inherited from relativistic classical[@yng], [@cra92] and quantum mechanics[@saz97]. In QED our approach amounts to a quantum-mechanical transform [@saz85],[@saz92]of the Bethe Salpeter equation provided by two coupled Dirac equations whose fully covariant interactions are determined by QED in the Feynman Gauge[@cra88],[@bckr]. These Two-Body Dirac Equations are legitimate quantum wave equations that can be solved directly [@va86],[@bckr](without perturbation theory) whose numerical or analytic solutions automatically agree with results generated by ordinary perturbative treatment. (In our opinion the importance of this agreement cannot be overemphasized. A common fault of most of the models we discuss in this paper is that they lack such agreement. 
But, if a numerical approach to a two-body bound-state formalism when specialized to QED cannot reproduce the results given by its own perturbative treatment, how can one be certain that its application to highly relativistic QCD bound states will not include spurious short range contributions?) Of course there is a fly in the ointment - but one to be expected on fundamental grounds. It turns out that the only separable interacting system as yet explicitly constructed in a canonical relativistic mechanics is the two-body system. In practical terms, this means that we must confine the present treatment to the meson spectrum. So far, even the relativistic treatment of the three-body problem of QED in the constraint approach is unknown. No one has been able to produce three compatible separable Dirac equations which include not only mutual interactions but also necessary three-body forces in closed form [@ror81]. Although still considered unusual or unfamiliar by the bulk of bound-state researchers, the structures appearing in these equations may have been anticipated classically by J. L. Synge; the spin structures were introduced into QED (incorrectly) by Eddington and Gaunt [@edd28],[@va97], and they have appeared in approximate forms appropriate for weak potentials in the works of Krapchev, Aneva, and Rizov [@krp79] and of Pilkuhn [@pilk79]. Of greatest surprise but greatest value (to the authors), their perturbative weak-potential versions were uncovered in QED by J. Schwinger in his virial treatment of the positronium spectrum [@sch73]. The associated relativistic mechanics transforms properly under spin-dependent generalizations of generators found by Pryce [@pry48], Newton and Wigner [@nwt49]. The techniques for quantization go back to those found by Dirac [@di64], and applied by Regge and Hanson to the relativistic top [@rge74], by Nambu to the string, by Galvao and Teitelboim to the single spin one-half particle [@tei81], and by Kalb and Van Alstine [@ka75] and by Todorov [@tod76] to the pair of spinless particles. Their progenitors can be found in the bilocal field theories of Yukawa, Markov, Feynman and Gell-Mann as well as the myriad treatments of the relativistic oscillator beginning with the work of Schrödinger. In this paper, we will compare our latest results for the meson spectrum provided by Two-Body Dirac Equations with the corresponding results given by a representative sample of alternative methods. The present paper is not a detailed account of this method (already presented elsewhere - see Refs. [@bckr], [@cra94] and references contained therein). Neither is it an attempt to conduct an even-handed or thorough review. Rather, its purpose is to show how such an organized scheme fares in the real world of calculation of a relativistic spectrum and to contrast its results with those produced by an array of approaches, each chosen on account of popularity or structural resemblances or differences with our approach. In this paper we consider only approaches like ours that do not restrict themselves to the heavy mesons but attempt fits to the entire spectrum, thus obtaining a more demanding comparison. (We do not consider here the myriad of partial spectral results for either the light or heavy mesons appearing in the recent literature.) Where possible, we shall show how certain distinguishing features of the various approaches are responsible for success or failure of the resulting fits to the meson spectrum. 
Whether our equations ultimately prove correct or not, they have the virtue that they are explicitly numerically solvable without additional revisions, cutoffs, etc., unlike certain other approaches whose spectral consequences depend on ad hoc revisions necessary for numerical solution. All of the treatments we examine attempt to describe the interactions of QCD through the inclusion of spin-dependent interactions that in part first appeared as small corrections in atomic physics. All include relativistic kinematics for the constituents. One contributor to the use of such techniques [@mor90] has even asserted that all of the alternative approaches that include relativistic kinematics are actually equivalent to the nonrelativistic quark model, so that the detailed relativistic structure of the interaction makes no difference to the bound state spectrum. However, as we shall see, in a fully relativistic description with no extraneous parameters the detailed relativistic interaction structure in fact determines the success or failure of a calculation of the full meson spectrum from a single equation. The order of the paper is as follows: First, in Section II, we review enough of the structures of our Two-Body Dirac Equations and their origins in relativistic constraint dynamics to make clear the equations that we are solving and the relativistic significance of the potential structures appearing in them. (Those readers who are already familiar with constraint dynamics might wish to go directly to the QCD applications of Section III.)  In Section III, we detail how we incorporate the interactions of QCD into our equations by constructing the relativistic version of the Adler-Piran static quark potential [@adl] that we use when we apply our equations to meson spectroscopy. In Section IV, we examine the numerical spectral results that are generated by this application of the Two-Body Dirac Equations. The feature of our approach that most distinguishes it from other more traditional two-body formalisms is its use of two coupled constituent equations (instead of one) containing two-body minimal substitution forms and related structures that incorporate the minimal interaction form of the original one-body Dirac equation. In Section V we rewrite this form of the Two-Body Dirac Equations first as an equivalent one that incorporates the interactions through the kernel structures that appear in most older approaches and second as an equivalent form closely related to the Breit equation. In this section we examine how the relativistic interaction structures of the constraint approach lead even for QED to classifications of interaction terms that differ from the designations used in some of the other approaches. In Section VI, we examine an attempt to use the Salpeter Equation to treat the meson spectrum: the Wisconsin Model of Gara, Durand, Durand, and Nickisch [@wisc]. Although these authors try to keep relativistic structures, they ultimately employ weak-potential approximations and structures obtained from perturbative QED in a non-perturbative equation (with no check to see that the procedure even makes nonperturbative sense in QED itself). In Section VII we examine the Iowa State Model of Sommerer, Spence and Vary [@iowa], which uses a new quasipotential approach for which, in contrast to the Wisconsin model, the authors check that the equation makes nonperturbative sense in QED at least for the positronium ground state. 
In Section VIII, we examine the Breit Equation Model of Brayshaw [@bry], which illustrates the sort of successful fit that one can still obtain when one is allowed to introduce confining interactions (into the Breit equation) through terms whose relativistic transformation properties are ambiguous. In Section IX, we look at the most popular treatment - the Semirelativistic Model of Godfrey and Isgur [@isgr]. This model includes a different smearing and momentum dependent factor for each part of the various spin-dependent interactions. Although each interaction is introduced for apparently justifiable physical reasons, this approach breaks up (or spoils) the full relativistic spin structure that is the two-body counterpart of that of the one-body Dirac equation with its *automatic* relations among the various interaction terms. We examine this model to see how well our fully covariant set of two-body Dirac equations, employing only three potential parameters used in two different invariant interaction functions, can do versus Godfrey’s and Isgur’s semirelativistic equation with relativistic kinematics and pieces of relativistic dynamical corrections (introduced in a patchwork manner with ten potential parameters used in six different interaction functions), when required to fit the whole meson spectrum (including the light-quark mesons). Finally, in Section X, we conclude the paper by reviewing some of the features of the constraint approach that played important roles in the relative success of its fit to the meson spectrum. We then use apparent successes of recent fits produced by the ordinary nonrelativistic quark model to point out dangers inherent in judging rival formalisms on the basis of fits to portions of the spectrum. At the end of the paper, we supply sets of tables for spectral comparisons and appendices detailing the radial form of our Two-Body Dirac equations that we use for our spectral calculations, and the numerical procedure that we use to construct meson wave functions. We also include a table summarizing the important features of the various methods that we compare in this paper. The Two-Body Dirac Equations of Constraint Dynamics =================================================== In order to treat a single relativistic spin-one-half particle, Dirac originally constructed a quantum wave equation from a first-order wave operator that is the matrix square-root of the corresponding Klein-Gordon operator [@di28]. Our method extends his construction to the system of two interacting relativistic spin-one-half particles with quantum dynamics governed by a pair of compatible Dirac equations on a single 16-component wave function. For an extensive review of this approach, see Refs.[@cra87; @bckr; @cra94] and works cited therein. For the reader unfamiliar with this approach, we present a brief review. About 27 years ago, the relativistic constraint approach first successfully yielded a covariant yet canonical formulation of the relativistic two-body problem for two interacting spinless classical particles. It accomplished this by covariantly controlling the troublesome relative time and relative energy, thereby reducing the number of degrees of freedom of the relativistic two-body problem to that of the corresponding nonrelativistic problem[@ka75]-[@drz75]. In this method, the reduction takes place through the enforcement of a generalized mass shell constraint for each of the two interacting spinless particles: $p_{i}^{2}+m_{i}^{2}+\Phi_{i}\approx0$. 
Mathematical consistency then requires that the two constraints be compatible in the sense that they be conserved by a covariant system-Hamiltonian. Upon quantization, the quantum version of this compatibility condition becomes the requirement that the quantum versions of the constraints (two separate Klein-Gordon equations on the same wave function for spinless particles) possess a commutator that vanishes when applied to the wave function. In 1982, the authors of this paper used a supersymmetric classical formulation of the single-particle Dirac equation due to Galvao and Teitelboim to successfully extend this construction to the pseudoclassical mechanics of two spin-one-half particles [@tei81; @cra82]. Upon quantization, this scheme produces a consistent relativistic quantum mechanics for a pair of interacting fermions governed by two coupled Dirac equations. When specialized to the case of two relativistic spin-one-half particles interacting through four-vector and scalar potentials, the two compatible 16-component Dirac equations [@cra87; @bckr; @cra94] take the form $$\begin{aligned} \mathcal{S}_{1}\psi & =\gamma_{51}(\gamma_{1}\cdot(p_{1}-A_{1})+m_{1}+S_{1})\psi=0\label{tbdea}\\ \mathcal{S}_{2}\psi & =\gamma_{52}(\gamma_{2}\cdot(p_{2}-A_{2})+m_{2}+S_{2})\psi=0, \label{tbdeb}$$ in terms of $\mathcal{S}_{i}$ operators that in the free-particle limit become operator square roots of the Klein-Gordon operator. The relativistic four-vector potentials $A_{i}^{\mu}$ and scalar potentials $S_{i}$ are effective constituent potentials that in either limit $m_{i}\rightarrow\infty$ go over to the ordinary external vector and scalar potentials of the light-particle’s one-body Dirac equation. Note that the four-vector interactions enter through minimal substitutions inherited (along with the accompanying gauge structure) from the corresponding classical field theory [@cra84; @yng; @cra92]. The covariant spin-dependent terms in the constituent vector and scalar potentials (see Eq.(\[vecp\]) and Eq.(\[scalp\]) below) are recoil terms whose forms are nonperturbative consequences of the compatibility condition $$\lbrack\mathcal{S}_{1},\mathcal{S}_{2}]\psi=0. \label{cmpt}$$ This condition also requires that the potentials depend on the space-like interparticle separation only through the combination $$x_{\perp}^{\mu}=(\eta^{\mu\nu}+\hat{P}^{\mu}\hat{P}^{\nu})(x_{1}-x_{2})_{\nu}$$ with no dependence on the relative time in the c.m. frame. This separation variable is orthogonal to the total four-momentum $$P^{\mu}=p_{1}^{\mu}+p_{2}^{\mu};\ -P^{2}\equiv w^{2}.$$ $\hat{P}$ is the time-like unit vector $$\hat{P}^{\mu}\equiv P^{\mu}/w.$$ The accompanying relative four-momentum canonically conjugate to $x_{\perp}$ is $$p^{\mu}=(\epsilon_{2}p_{1}^{\mu}-\epsilon_{1}p_{2}^{\mu})/w;\ \mathrm{where}\ \epsilon_{1}+\epsilon_{2}=w,\ \epsilon_{1}-\epsilon_{2}=(m_{1}^{2}-m_{2}^{2})/w$$ in which $w$ is the total c.m. energy. The $\epsilon_{i}$’s are the invariant c.m. energies of each of the (interacting) particles [@eps]. The wave operators in Eqs.(\[tbdea\],\[tbdeb\]) operate on a single 16-component spinor which we decompose as $$\psi=\left( \begin{array} [c]{c}\psi_{1}\\ \psi_{2}\\ \psi_{3}\\ \psi_{4}\end{array} \right) \label{spinor}$$ in which the $\psi_{i}$ are four-component spinors. 
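(To see why the interparticle separation can enter only through $x_{\perp}$, it is instructive to recall the spinless analogue; the following is a minimal sketch, assuming equal invariant quasipotentials $\Phi_{1}=\Phi_{2}=\Phi$ and using the classical Poisson bracket in place of the quantum commutator. The generalized mass-shell constraints $\mathcal{H}_{i}=p_{i}^{2}+m_{i}^{2}+\Phi\approx0$ are compatible only if $$\{\mathcal{H}_{1},\mathcal{H}_{2}\}=2P\cdot\partial\Phi\approx0$$ (up to an overall sign set by the bracket convention), and this holds identically when $\Phi$ depends on $x_{1}-x_{2}$ only through $x_{\perp}$, since $P_{\mu}(\eta^{\mu\nu}+\hat{P}^{\mu}\hat{P}^{\nu})=P^{\nu}-P^{\nu}=0$. The explicitly spin-dependent terms displayed below play the analogous role for the coupled Dirac operators $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$.)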
Once we have ensured that the compatibility condition is satisfied, Eqs.(\[tbdea\],\[tbdeb\]) provide a consistent quantum description of the relativistic two-body system incorporating several important properties [@cra87; @bckr; @cra94]. They are manifestly covariant. They reduce to the ordinary one-body Dirac equation in the limit in which either of the particles becomes infinitely heavy. They can be combined to give [@bckr; @long] coupled second-order Schrödinger-like equations (Pauli-forms) for the sixteen component Dirac spinors. In the center of momentum (c.m.) system, for the vector and scalar interactions of Eq.(\[vecp\]) and Eq.(\[scalp\]) below, these equations resemble ordinary Schrödinger equations with interactions that include central-potential, Darwin, spin-orbit, spin-spin, and tensor terms. These customary terms are accompanied by others that provide important additional couplings between the upper-upper ($\psi_{1}$) and lower-lower ($\psi_{4}$) four-component spinor portions of the full sixteen component Dirac spinor. The interactions are completely local but depend explicitly on the total energy $w$ in the c.m. frame. In this paper we use a recently developed rearrangement of these equations [@long] (similar to that first presented in [@saz94]) that provides us with ones simpler to solve but physically equivalent. The resulting local Schrödinger-like equation depending on the four-component spinor $\phi_{+}\equiv\psi_{1}+\psi_{4}$ takes the general c.m. form $$(-\mathbf{\nabla}^{2}+\Phi(\mathbf{r},\boldsymbol{\sigma}_{1}\boldsymbol{,\sigma}_{2},w))\phi_{+}=b^{2}(w)\phi_{+}, \label{clpds}$$ with no coupling to other four-component spinors. The explicit version of the potential $\Phi$ in Eq.(\[clpds\]) that results from the rearrangement has a structure that produces couplings between the spin components of $\phi_{+}$ that are no more complicated than those of its nonrelativistic counterpart, with the customary spin-spin, spin-orbit, non-central tensor, or spin-orbit difference terms appearing. We have checked that both the simpler form Eq.(\[clpds\]) and the equivalent coupled forms give the same numerical spectral results when tested for QED bound states as in [@bckr] and when tested for our new QCD spectral results appearing in this paper. (This provides an important cross-check on our numerical calculation of the meson spectra.) Eq.(\[clpds\]) is accompanied by similar equations for $\phi_{-}\equiv\psi_{1}-\psi_{4}$ and $\chi_{\pm}\equiv\psi_{2}\pm\psi_{3}.$ Once Eq.(\[clpds\]) is solved, one can use Eqs.(\[tbdea\],\[tbdeb\]) to determine $\phi_{-}$ and $\chi_{\pm}$. Because of the decoupling it is not necessary to determine $\phi_{-}$ and $\chi_{\pm}$ to solve the eigenvalue equation (\[clpds\]). However, the detailed form of $\Phi$ for $\phi_{+}$ results from their elimination through the Pauli reduction procedure. In these equations, the usual invariant $$b^{2}(w)\equiv(w^{4}-2w^{2}(m_{1}^{2}+m_{2}^{2})+(m_{1}^{2}-m_{2}^{2})^{2})/4w^{2}$$ plays the role of energy eigenvalue. This invariant is the c.m. value of the square of the relative momentum expressed as a function of the invariant total c.m. energy $w$. Note that in the limit in which one of the particles becomes very heavy, this Schrödinger-like equation turns into the one obtained by eliminating the lower component of the ordinary one-body Dirac equation in terms of the other component. 
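As an elementary check of this last statement (a short calculation added here for orientation, with $E$ denoting the total c.m. energy of the light particle, so that $w=m_{2}+E$), the invariant factors as $$b^{2}(w)=\frac{[w^{2}-(m_{1}+m_{2})^{2}]\,[w^{2}-(m_{1}-m_{2})^{2}]}{4w^{2}}=\frac{(E^{2}-m_{1}^{2})(2m_{2}+E+m_{1})(2m_{2}+E-m_{1})}{4(m_{2}+E)^{2}}\rightarrow E^{2}-m_{1}^{2}$$ as $m_{2}\rightarrow\infty$, which is just the eigenvalue combination that appears when the lower component of the one-body Dirac (or Klein-Gordon) equation for particle 1 is eliminated.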
The vector potentials appearing in Eqs.(\[tbdea\],\[tbdeb\]) depend on three invariant functions $E_{1}$, $E_{2},$ and $G$ that define time-like vector interactions (proportional to $\hat{P}$) and space-like vector interactions (orthogonal to $\hat{P}$, with $\partial_{\mu}\equiv \partial/\partial x^{\mu}$) [@cra87; @bckr] $$\begin{aligned} A_{1}^{\mu} & =\big((\epsilon_{1}-E_{1})-i{\frac{G}{2}}\gamma_{2}\cdot (\frac{\partial E_{1}}{E_{2}}+\partial G)\gamma_{2}\cdot\hat{P}\big )\hat {P}^{\mu}+(1-G)p^{\mu}-{\frac{i}{2}}\partial G\cdot\gamma_{2}\gamma_{2}^{\mu }\nonumber\\ A_{2}^{\mu} & =\big((\epsilon_{2}-E_{2})+i{\frac{G}{2}}\gamma_{1}\cdot (\frac{\partial E_{2}}{E_{1}}+\partial G)\gamma_{1}\cdot\hat{P}\big )\hat {P}^{\mu}-(1-G)p^{\mu}+{\frac{i}{2}}\partial G\cdot\gamma_{1}\gamma_{1}^{\mu}, \label{vecp}$$ while the scalar potentials $S_{i}$ depend on $G$ and two additional invariant functions $M_{1}$ and $M_{2}$ $$\begin{aligned} S_{1} & =M_{1}-m_{1}-{\frac{i}{2}}G\gamma_{2}\cdot{\frac{\partial M_{1}}{M_{2}}}\nonumber\\ S_{2} & =M_{2}-m_{2}+{\frac{i}{2}}G\gamma_{1}\cdot{\frac{\partial M_{2}}{M_{1}}.} \label{scalp}$$ Note that the terms in \[vecp\] and \[scalp\] which are explicitly spin-dependent through the gamma matrices are essential in order to satisfy the compatibility condition \[cmpt\]. Later on, when the equation is reduced to second-order Pauli-form, yet other spin dependences eventually arise from gamma matrix terms (that, when squared, lose their gamma matrix dependence). These are typical of what occurs in the reduction of the one-body Dirac equation to the Pauli form. The gamma matrices also give rise to spin independent terms in the Pauli-forms. These terms emerge in a manner similar to the above two sources of spin dependent terms in the Pauli-form of the equations. In the case in which the space-like and time-like vectors are not independent but combine into electromagnetic-like four-vectors, the constituent vector interactions appear in a more compact form $$\begin{aligned} A_{1}^{\mu} & =\big(\epsilon_{1}-\frac{G(\epsilon_{1}-\epsilon_{2})}{2}+\frac{(\epsilon_{1}-\epsilon_{2})}{2G}\big)\hat{P}^{\mu}+(1-G)p^{\mu }-\frac{i}{2}\partial G\cdot\gamma_{2}\gamma_{2}^{\mu}\nonumber\\ A_{2}^{\mu} & =\big(\epsilon_{2}-\frac{G(\epsilon_{2}-\epsilon_{1})}{2}+\frac{(\epsilon_{1}-\epsilon_{2})}{2G}\big)\hat{P}^{\mu}-(1-G)p^{\mu }+\frac{i}{2}\partial G\cdot\gamma_{1}\gamma_{1}^{\mu}. \label{emvec}$$ In that case $E_{1},E_{2}$ and $G$ are related to each other[@cra84; @cra87] (${\partial}E_{1}/E_{2}=-\partial\log G$) and for our QCD applications (as well as for QED) are functions of only one invariant function $\mathcal{A}(r)$ in which $r$ is the invariant $$r\equiv\sqrt{x_{\perp}^{2}}.$$ They take the forms $$\begin{aligned} E_{1}^{2}(\mathcal{A}) & =G^{2}(\epsilon_{1}-\mathcal{A})^{2},\nonumber\\ E_{2}^{2}(\mathcal{A}) & =G^{2}(\epsilon_{2}-\mathcal{A})^{2}, \label{tvecp1}$$ in which $$G^{2}={\frac{1}{(1-2\mathcal{A}/w)}.} \label{gp}$$ In the forms of these equations used below, Todorov’s collective energy variable $$\epsilon_{w}=(w^{2}-m_{1}^{2}-m_{2}^{2})/2w,$$ will eventually appear. 
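A quick consistency check on these electromagnetic-like forms (an observation added here; reading it as the two-body counterpart of the free-particle relation is our gloss): using $\epsilon_{1}+\epsilon_{2}=w$ and $\epsilon_{1}-\epsilon_{2}=(m_{1}^{2}-m_{2}^{2})/w$ one finds $$E_{1}^{2}-E_{2}^{2}=G^{2}\big[(\epsilon_{1}-\mathcal{A})^{2}-(\epsilon_{2}-\mathcal{A})^{2}\big]=\frac{(\epsilon_{1}-\epsilon_{2})(w-2\mathcal{A})}{1-2\mathcal{A}/w}=m_{1}^{2}-m_{2}^{2},$$ independent of $\mathcal{A}(r)$, exactly as $\epsilon_{1}^{2}-\epsilon_{2}^{2}=m_{1}^{2}-m_{2}^{2}$ for the free constituents.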
In general $M_{1}$ and $M_{2}$ are related to each other [@cra82; @cra87] and for QCD applications are functions of two invariant functions $\mathcal{A}(r)$ and $S(r)$ appearing in the forms: $$\begin{aligned} M_{1}^{2}(\mathcal{A},S) & =m_{1}^{2}+G^{2}(2m_{w}S+S^{2})\nonumber\\ M_{2}^{2}(\mathcal{A},S) & =m_{2}^{2}+G^{2}(2m_{w}S+S^{2}), \label{mp}$$ in which $$m_{w}=m_{1}m_{2}/w.$$ (In these equations, $m_{w}$ and $\epsilon_{w}$ are the relativistic reduced mass and energy of the fictitious particle of relative motion introduced by Todorov [@tod71; @tod76], which satisfy the effective one-body Einstein condition $$\epsilon_{w}^{2}-m_{w}^{2}=b^{2}(w).$$ In the limit in which one of the particles becomes infinitely heavy, $m_{w}$ and $\epsilon_{w}$ reduce to the mass and energy of the lighter particle.) The invariant function $S(r)$ is primarily responsible for the constituent scalar potentials since $S_{i}=0$ if $S(r)=0$, while $\mathcal{A}(r)$ contributes to the $S_{i}$ (if $S(r)\neq0$) as well as to the vector potentials $A_{i}^{\mu}$. Originally, we derived the general forms of Eqs.(\[mp\],\[tvecp1\],\[gp\]) for the scalar and vector potentials using classical field theoretic arguments [@yng; @cra92] (see also [@tod71; @cra82]). Surprisingly, the resulting forms for the mass and energy potential functions $M_{i}$, $G$ and $E_{i}$ automatically embody collective minimal substitution rules for the spin-independent parts of the Schrödinger-like forms of the equations. Classically those forms turn out to be modifications of the Einstein condition for the free effective particle of relative motion $$p^{2}+m_{w}^{2}=\epsilon_{w}^{2}.$$ For the vector interaction they automatically generate the replacement of $\epsilon_{w}$ by $\epsilon_{w}-\mathcal{A}$ and for the scalar interaction the replacement of $m_{w}$ by $m_{w}+S$. The part of Eq.(\[clpds\]) that results from the vector and scalar interactions then takes the form $$(p^{2}+2m_{w}S+S^{2}+2\epsilon_{w}\mathcal{A}-\mathcal{A}^{2})\phi_{+}=b^{2}\phi_{+}, \label{mnml}$$ which, using $b^{2}=\epsilon_{w}^{2}-m_{w}^{2}$, is just $p^{2}\phi_{+}=[(\epsilon_{w}-\mathcal{A})^{2}-(m_{w}+S)^{2}]\phi_{+}$, displaying both substitutions at once. We originally found these forms starting from relativistic classical field theory. The deceptively simple form of Eq.(\[mnml\]) in fact incorporates retarded and advanced effects through its dependence on the c.m. energy $w$. On the other hand, recently Jallouli and Sazdjian [@saz97] obtained Eqs.(\[tvecp1\]) and (\[mp\]) in quantum field theory after performing a necessarily laborious eikonal summation to all orders of ladder and cross-ladder diagrams together with all constraint diagrams (Lippmann-Schwinger-like iterations of the simple Born diagram) [@infrared]. Thus, the structure first discovered simply in the correspondence limit has now been verified through direct but difficult derivation from perturbative quantum field theory. These equations contain an important hidden hyperbolic structure (which we could have used to introduce the interactions in the first place).
To employ it we introduce two independent invariant functions $L(x_{\perp})$ and $\mathcal{G}(x_{\perp})$, in terms of which the invariant functions of Eqs.(2.10,2.11) take the forms: $$\begin{aligned} M_{1} & =m_{1}\ \cosh L\ +m_{2}\sinh L\nonumber\\ M_{2} & =m_{2}\ \cosh L\ +m_{1}\ \sinh L \label{hyp}$$$$\begin{aligned} E_{1} & =\epsilon_{1}\ \cosh\mathcal{G}\ -\epsilon_{2}\sinh\mathcal{G}\nonumber\\ E_{2} & =\epsilon_{2}\ \cosh\mathcal{G}\ -\epsilon_{1}\ \sinh\mathcal{G} \label{epot}$$$$G=\exp\mathcal{G}.$$ In terms of $\mathcal{G}$ and the constituent momenta $p_{1}$ and $p_{2}$ , the individual four-vector potentials of Eq.(\[tvecp1\]) take the suggestive forms $$\begin{aligned} A_{1} & =[1-\mathrm{\cosh}(\mathcal{G})]p_{1}+\mathrm{\sinh}(\mathcal{G})p_{2}-\frac{i}{2}(\partial\exp\mathcal{G}\cdot\gamma_{2})\gamma _{2}\nonumber\\ A_{2} & =[1-\mathrm{\cosh}(\mathcal{G})]p_{2}+\mathrm{\sinh}(\mathcal{G})p_{1}+\frac{i}{2}(\partial\exp\mathcal{G}\cdot\gamma_{1})\gamma_{1} \label{veca}$$ Eqs.(\[hyp\]), (\[epot\]) and (\[veca\]) together display a further consequence of the compatibility condition, a kind of relativistic version of Newton’s third law in the sense that the two sets of  constituent scalar and vector potentials are each given in terms of just one invariant function, $S$ and $\mathcal{A}$ respectively. In terms of these functions the coupled two-body Dirac equations then take the form $$\begin{aligned} \mathcal{S}_{1}\psi & =\big(-G\beta_{1}\Sigma_{1}\cdot\mathcal{P}_{2}+E_{1}\beta_{1}\gamma_{51}+M_{1}\gamma_{51}-G{\frac{i}{2}}\Sigma_{2}\cdot\partial(\mathcal{G}\beta_{1}+L\beta_{2})\gamma_{51}\gamma_{52}\big)\psi=0\nonumber\\ \mathcal{S}_{2}\psi & =\big(G\beta_{2}\Sigma_{2}\cdot\mathcal{P}_{1}+E_{2}\beta_{2}\gamma_{52}+M_{2}\gamma_{52}+G{\frac{i}{2}}\Sigma_{1}\cdot\partial(\mathcal{G}b_{2}+L\beta_{1})\gamma_{51}\gamma_{52}\big)\psi=0 \label{tbdes}$$ in which $$\mathcal{P}_{i}\equiv p-{\frac{i}{2}}\Sigma_{i}\cdot\partial\mathcal{G}\Sigma_{i}$$ depending on gamma matrices with block forms $$\beta_{1}=\bigg({\genfrac{}{}{0pt}{}{1_{8}}{0}}{\genfrac{}{}{0pt}{}{0}{-1_{8}}}\bigg),\ \ \gamma_{51}=\bigg({\genfrac{}{}{0pt}{}{0}{1_{8}}}{\genfrac{}{}{0pt}{}{1_{8}}{0}}\bigg),\ \ \beta_{1}\gamma_{51}\equiv\rho_{1}=\bigg({\genfrac{}{}{0pt}{}{0}{-1_{8}}}{\genfrac{}{}{0pt}{}{1_{8}}{0}}\bigg)$$ $$\beta_{2}=\bigg({\genfrac{}{}{0pt}{}{\beta}{0}}{\genfrac{}{}{0pt}{}{0}{\beta}}\bigg),\ \beta=\bigg({\genfrac{}{}{0pt}{}{1_{4}}{0}}{\genfrac{}{}{0pt}{}{0}{-1_{4}}}\bigg)$$ $$\gamma_{52}=\bigg({\genfrac{}{}{0pt}{}{\gamma_{5}}{0}}{\genfrac{}{}{0pt}{}{0}{\gamma_{5}}}\bigg),\ \gamma_{5}=\bigg({\genfrac{}{}{0pt}{}{0}{1_{4}}}{\genfrac{}{}{0pt}{}{1_{4}}{0}}\bigg)$$ $$\beta_{2}\gamma_{52}\equiv\rho_{2}=\bigg({\genfrac{}{}{0pt}{}{\rho}{0}}{\genfrac{}{}{0pt}{}{0}{\rho}}\bigg),\ \rho=\bigg({\genfrac{}{}{0pt}{}{0}{-1_{4}}}{\genfrac{}{}{0pt}{}{1_{4}}{0}}\bigg)$$ $$\beta_{1}\gamma_{51}\gamma_{52}=\bigg({\genfrac{}{}{0pt}{}{0}{-\gamma_{5}}}{\genfrac{}{}{0pt}{}{\gamma_{5}}{0}}\bigg),$$ $$\beta_{2}\gamma_{52}\gamma_{51}=\bigg({\genfrac{}{}{0pt}{}{0}{\rho}}{\genfrac{}{}{0pt}{}{\rho}{0}}\bigg).$$ $$\Sigma_{i}=\gamma_{5i}\beta_{i}\gamma_{\perp i}$$ As described in Appendix A, a  procedure analogous to the Pauli reduction procedure of the one-body Dirac equation case yields $$\lbrack p^{2}+2m_{w}S+S^{2}+2\epsilon_{2}\mathcal{A}-\mathcal{A}^{2}$$$$-[2\mathcal{G}^{\prime}-\frac{E_{2}M_{2}+E_{1}M_{1}}{E_{2}M_{1}+E_{2}M_{1}}(L-\mathcal{G})^{\prime}]i\hat{r}\cdot 
p-{\frac{1}{2}}\nabla^{2}\mathcal{G}-{\frac{1}{4}}{(\mathcal{G})^{\prime}}^{2}-(\mathcal{G}^{\prime }+L^{\prime})^{2}+\frac{E_{2}M_{2}+E_{1}M_{1}}{E_{2}M_{1}+E_{2}M_{1}}{\frac {1}{2}}\mathcal{G}^{\prime}(L-\mathcal{G}^{\prime})$$$$+\frac{L\cdot(\sigma_{1}+\sigma_{2})}{r}[\mathcal{G}^{\prime}-{\frac{1}{2}}\frac{E_{2}M_{2}+E_{1}M_{1}}{E_{2}M_{1}+E_{2}M_{1}}(L-\mathcal{G})^{\prime }]-\frac{L\cdot(\sigma_{1}-\sigma_{2})}{2r}\frac{E_{2}M_{2}-E_{1}M_{1}}{E_{2}M_{1}+E_{2}M_{1}}(L-\mathcal{G})^{\prime}$$$$+\sigma_{1}\cdot\sigma_{2}({\frac{1}{2}}\nabla^{2}\mathcal{G}+{\frac{1}{2r}}L^{\prime}+{\frac{1}{2}}(\mathcal{G}^{\prime})^{2}-{\frac{1}{2}}\mathcal{G}^{\prime}(L-\mathcal{G})^{\prime}\frac{E_{2}M_{2}+E_{1}M_{1}}{E_{2}M_{1}+E_{2}M_{1}})$$$$+\sigma_{1}\cdot\hat{r}\sigma_{2}\cdot\hat{r}({\frac{1}{2}}\nabla^{2}L-{\frac{3}{2r}}L^{\prime}+\mathcal{G}^{\prime}L^{\prime}-{\frac{1}{2}}L^{\prime}(L-\mathcal{G})^{\prime}\frac{E_{2}M_{2}+E_{1}M_{1}}{E_{2}M_{1}+E_{2}M_{1}})$$$$+\frac{i}{2}(L+\mathcal{G})^{\prime}(\sigma_{1}\cdot\hat{r}\sigma_{2}\cdot p+\sigma_{2}\cdot\hat{r}\sigma_{1}\cdot p)+\frac{i}{2}(L-\mathcal{G}){\frac{E_{1}M_{2}-E_{2}M_{1}}{E_{2}M_{1}+E_{2}M_{1}}}{\frac{L\cdot(\sigma _{1}\times\sigma_{2})}{r}}]\phi_{+} \label{sch}$$$$=b^{2}(w)\phi_{+}$$ Eq.(\[sch\]) is the coupled four-component Schrödinger-like form of our equations that we use for our quark model bound state calculations for the mesons in the present paper. It can be solved nonperturbatively not only for quark model calculations but also for QED calculations since in that case every term is quantum-mechanically well defined (less singular than $-1/4r^{2}$).   From this equation we obtain two coupled radial Schrödinger-like equations in the general case. But for $j=0$ or spin singlet states these equations reduce to uncoupled equations. The extra component for the general case arises from orbital angular momentum mixing or spin mixing, the latter absent for equal mass states. The detailed radial forms of these equations are given in Appendix A. For the case of QED ( $S=0$, $\mathcal{A}=-\alpha/r),$ we have solved these coupled Schrödinger-like equations numerically obtaining results that are explicitly accurate through order $\alpha^{4}$ (with errors on the order of $\alpha^{6}$)[@bckr]. We have even obtained analytic solutions to the full system of coupled 16 component Dirac equations in the important case of spin-singlet positronium [@va86]. For both numerical and analytic solution, the results agree with those produced by perturbative treatment of these equations and with standard spectral results [@cnl]. Meson Spectroscopy ================== We use the constraint Eq.(\[sch\]) to construct a relativistic naive quark model by choosing the two invariant functions $\mathcal{G}$ and $L$ or equivalently $\mathcal{A}$ and $S$ to incorporate a version of the static quark potential originally obtained from QCD by Adler and Piran [@adl] through a nonlinear effective action model for heavy quark statics. They used the renormalization group approximation to obtain both total flux confinement and a  linear static potential at large distances. 
Their model uses nonlinear electrostatics with displacement and electric fields related through a nonlinear constitutive equation, with the effective dielectric constant given by a leading log-log model which fixes all parameters in their model apart from a mass scale $\Lambda$. Their static potential contains an infinite additive constant which in turn results in the inclusion of an unknown constant $U_{0}$ in the final form of their potential (hereafter called $V_{AP}(r)$). We insert into Eq.(\[sch\]) invariants $\mathcal{A}$ and $S$ with forms determined so that the sum $\mathcal{A}+S$ appearing as the potential in the nonrelativistic limit of our equations becomes the Adler-Piran nonrelativistic $Q\bar{Q}$ potential (which depends on two parameters $\Lambda$ and $U_{0}$) plus the Coulomb interaction between the quark and antiquark. That is, $$V_{AP}(r)+V_{coul}=\Lambda(U(\Lambda r)+U_{0})+\frac{e_{1}e_{2}}{r}=\mathcal{A}+S\ . \label{asap}$$ As determined by Adler and Piran, the short and long distance behaviors of $U(\Lambda r)$ generate known lattice and continuum results through the explicit appearance of an effective running coupling constant in coordinate space. That is, the Adler-Piran potential incorporates asymptotic freedom through $$\Lambda U(\Lambda r\ll1)\sim1/(r\ln\Lambda r),$$ and linear confinement through $$\Lambda U(\Lambda r\gg1)\sim\Lambda^{2}r.$$ The long distance ($\Lambda r>2$) behavior of the static potential $V_{AP}(r)$ is given explicitly by $$\Lambda(c_{1}x+c_{2}\ln(x)+\frac{c_{3}}{\sqrt{x}}+\frac{c_{4}}{x}+c_{5})$$ in which $x=\Lambda r$, while the coefficients $c_{i}$ are given by the Adler-Piran leading log-log model [@adl]. In addition to obtaining these analytic forms for short and long distances they converted the numerically obtained values of the potential at intermediate distances to a compact analytic expression. The nonrelativistic analysis used by Adler and Piran, however, does not determine the relativistic transformation properties of the potential. How this potential is apportioned between vector and scalar is therefore somewhat, although not completely, arbitrary. In earlier work [@cra88] we divided the potential in the following way among three relativistic invariants $\mathcal{V}(r),S$, and $\mathcal{A}$. (In our former construction, the additional invariant $\mathcal{V}$ was responsible for a possible independent time-like vector interaction.) $$\begin{aligned} S & =\eta\Lambda(c_{1}x+c_{2}\ln(x)+\frac{c_{3}}{\sqrt{x}}+c_{5}+U_{0}),\nonumber\\ \mathcal{V} & =(1-\eta)\Lambda(c_{1}x+c_{2}\ln(x)+\frac{c_{3}}{\sqrt{x}}+c_{5}+U_{0}),\nonumber\\ \mathcal{A} & =V_{A}-S-\mathcal{V},\end{aligned}$$ in which $\eta={\frac{1}{2}}$. That is, we assumed that (with the exception of the Coulomb-like term ($c_{4}/x$)) the long distance part was equally divided between scalar and a proposed time-like vector. In the present paper we drop the time-like vector for reasons detailed below and assume instead that the scalar interaction is solely responsible for the long distance confining terms ($\eta=1$). The attractive ($c_{4}=-0.58$) QCD Coulomb-like portion (not to be confused with the electrostatic $V_{coul}$) is assigned completely to the electromagnetic-like part $\mathcal{A}$. That is, the constant portion of the running coupling constant corresponding to the exchange diagram is expected to be electromagnetic-like. Elsewhere we have treated another model explicitly containing these features: the Richardson potential.
Its momentum space form $$\tilde{V}(\mathbf{q})\sim\frac{1}{\mathbf{q}^{2}\ln(1+\mathbf{q}^{2}/\Lambda^{2})}$$ interpolates in a simple way between asymptotic freedom, $\tilde{V}(\mathbf{q})\sim1/[\mathbf{q}^{2}\ln(\mathbf{q}^{2}/\Lambda^{2})]$, and linear confinement, $\tilde{V}(\mathbf{q})\sim1/\mathbf{q}^{4}$. Even though the Richardson potential is not tied to any field theoretic base in the intermediate region (unlike the Adler-Piran potential) and does not give as good fits to the data, it does provide a convenient form for displaying our points about the static quark potential. The Richardson radial form is $$V(r)=8\pi\Lambda^{2}r/27-8\pi f(\Lambda r)/(27r).$$ For $r\rightarrow0$, $f(\Lambda r)\rightarrow-1/\ln(\Lambda r)$, while for $r\rightarrow\infty$, $f(\Lambda r)\rightarrow1$. Thus, in this model, if the confining part of the potential is a world scalar, then in the large $r$ limit the remaining portion, regarded as an electromagnetic-like interaction corresponding to our invariant function $\mathcal{A}(r)$, would be an attractive $1/r$ potential with a coupling constant on the order of 1. This is in reasonable agreement with the Adler model, which also has an attractive $1/r$ part. Support for the assumption that the $c_{4}$ term belongs only to $\mathcal{A}$ also arises from phenomenological considerations. We find that attempts to assign the $c_{4}$ term to the scalar potential have a drastic effect on the spin-spin and spin-orbit splittings. In fact, using this term in $S$ through Eq.(\[mp\]) generates spin-spin and spin-orbit splittings that are much too small. In our previous work, we divided the confining part equally between scalar and time-like vector so that the spin-orbit multiplets would not be inverted. This was done in order to obtain from our model the $a_{0}(980)$ meson, which was then considered the prime candidate for the relativistic counterpart of the $^{3}P_{0}$ meson. However, recent analysis indicates that that meson may instead be a meson-meson or four quark bound state (see, however, [@ishida], which even interprets the $a_{0}(980)$ meson as part of a new scalar ($^{1}S_{0}$) meson $q\bar{q}$ multiplet outside of the usual quark model), while a meson with mass of 1450 MeV may be the correct candidate for the quark model state [@prtl]. Interpretation of this other state as the $^{3}P_{0}$ meson would in fact require a partial inversion of the spin-orbit triplet (from what one would expect based on the positronium analog). This partial inversion is consistent with the $^{3}P_{0}$ candidate for the $u\bar{s}$ system also appearing in a position that partially inverts the spin-orbit splitting. Since the sole purpose of including $\mathcal{V}$ in our previous treatment was to prevent the inversion, we exclude it from our present treatment. In our older treatment [@cra88], we neglected the tensor coupling, unequal mass spin-orbit difference couplings, and the $u-d$ quark mass differences. In the present treatment, we retain the entire interaction present in our equations, thereby keeping each of these effects. In our former treatment we also performed a decoupling between the upper-upper and lower-lower components of the wave functions for spin-triplet states, which turned out to be defective but which we subsequently corrected in our numerical test of our formalism for QED [@bckr]. The corrected decoupling (appearing in Eq.(\[sch\])) is included in the new meson calculations appearing in this paper.
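As a brief aside, the interpolation property of the Richardson momentum-space form quoted above is easy to confirm numerically; the following sketch (ours, with an arbitrary illustrative value of $\Lambda$) checks the two quoted limits.

```python
import numpy as np

# Quick numerical look (ours) at the interpolation property of the Richardson
# form, V(q) ~ 1/[q^2 ln(1 + q^2/Lambda^2)]; Lam below is an arbitrary value.

Lam = 0.4

def V_tilde(q):
    return 1.0 / (q**2 * np.log(1.0 + q**2 / Lam**2))

# small q: ln(1+x) ~ x, so V ~ Lambda^2/q^4 (momentum-space linear confinement)
q = 1e-3
print(V_tilde(q) * q**4 / Lam**2)                      # -> ~1
# large q: V ~ 1/[q^2 ln(q^2/Lambda^2)] (asymptotic freedom)
q = 1e3
print(V_tilde(q) * q**2 * np.log(q**2 / Lam**2))       # -> ~1
```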
In the present investigation, we compute the best fit meson spectrum for the following apportionment of the Adler-Piran potential: $$\mathcal{A}=\exp(-\beta\Lambda r)[V_{AP}-\frac{c_{4}}{r}]+\frac{c_{4}}{r}+\frac{e_{1}e_{2}}{r}, \label{apa}$$ $$S=V_{AP}+\frac{e_{1}e_{2}}{r}-\mathcal{A}=(V_{AP}-\frac{c_{4}}{r})(1-\exp(-\beta\Lambda r)). \label{aps}$$ In order to covariantly incorporate the Adler-Piran potential into our equations, we treat the short distance portion as purely electromagnetic-like (in the sense of the transformation properties of the potential). Through the additional parameter $\beta$, the exponential factor gradually turns off the electromagnetic-like contribution to the potential at long distance except for the $1/r$ portion mentioned above, while the scalar portion gradually turns on, becoming fully responsible for the linear confining and subdominant terms at long distance. Altogether our two invariant potential functions depend on three parameters: $\Lambda,U_{0},$ and $\beta$. When inserted into the constraint equations, $S$ and $\mathcal{A}$ become relativistic invariant functions of the invariant separation $r=\sqrt{x_{\perp}^{2}}$. The covariant structures of the constraint formalism then embellish the central static potential with accompanying spin-dependent and recoil terms. In general applications of these two-body Dirac equations one must ensure that the values assumed by $\mathcal{A}$ and $S$ always result in real interaction functions $E_{i},M_{i},$ and $G$ while preserving the correct heavy particle limits. In particular, a large repulsive $\mathcal{A}$ (one with $\mathcal{A}>w/2$) will give an imaginary $G$, while a large attractive $S$ would lead, in the limit when one particle becomes heavy, to an incorrect form of the one-body Dirac equation (for $m_{2}\rightarrow\infty$ the interaction mass potential function $M_{1}\rightarrow|m_{1}+S|$ instead of $m_{1}+S$). In the calculations contained in the present paper, the best fit parameters turn out to be such that $\mathcal{A}$ always remains attractive while $S$ always remains repulsive, so we need not make any modifications. Such problems do arise in the nucleon-nucleon scattering problem. See [@bliu] for a discussion of these problems and their resolution. Numerical Spectral Results ========================== Tabulation and Discussion of Computed Meson Spectra --------------------------------------------------- We now use our formalism as embodied in Eqs.(\[sch\]) and (\[apa\],\[aps\]) to calculate the full meson spectrum including the light-quark mesons. (As a check on these calculations we have also used the older forms derived in [@bckr].) Note that the nonrelativistic quark model, when used in conjunction with realistic QCD potentials such as Richardson’s potential or the Adler-Piran potential, fails for light mesons since the ordinary nonrelativistic Schrödinger equation’s lack of relativistic kinematics leads to increasing meson masses as the quark masses drop below a certain point [@cra81], thereby spoiling proper treatment of the pion, as well as other states. Here, we shall see how our relativistic equations remedy this situation. In addition to including the proper relativistic kinematics, our equations also contain energy dependence in the dynamical quasipotential. Mathematically, this feature turns our equations into wave equations that depend nonlinearly on the eigenvalue.
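To make the nonlinear eigenvalue dependence concrete, the sketch below (ours, and deliberately minimal rather than the iteration scheme actually used for the fits) solves only the spin-independent part of the quasipotential, Eq.(\[mnml\]), for an s-wave on a radial grid and iterates $w$ until the dynamical eigenvalue agrees with the kinematic invariant $b^{2}(w)$; the Coulomb case $\mathcal{A}=-\alpha/r$, $S=0$ is used as a simple check, with an exaggerated coupling so that a coarse grid suffices.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Minimal illustrative sketch (ours): solve the spin-independent part of the
# quasipotential, Eq. (mnml),
#     (p^2 + 2 m_w S + S^2 + 2 eps_w A - A^2) phi_+ = b^2(w) phi_+,
# for an s-wave on a uniform radial grid, iterating w until the dynamical
# eigenvalue matches the kinematic invariant b^2(w).

m1, m2 = 1.0, 1.0          # illustrative masses
alpha = 0.3                # exaggerated coupling for a coarse grid

def A_inv(r):              # electromagnetic-like invariant A(r); Coulomb check
    return -alpha / r

def S_inv(r):              # scalar invariant S(r); zero in this check
    return np.zeros_like(r)

N, rmax = 4000, 80.0
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]

def lowest_b2(w):
    """Lowest eigenvalue of -u'' + Phi(r, w) u = b^2 u at fixed w."""
    eps_w = (w**2 - m1**2 - m2**2) / (2.0 * w)   # Todorov energy
    m_w = m1 * m2 / w                            # Todorov reduced mass
    A, S = A_inv(r), S_inv(r)
    Phi = 2.0 * m_w * S + S**2 + 2.0 * eps_w * A - A**2
    diag = 2.0 / h**2 + Phi
    off = -np.ones(N - 1) / h**2
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]

def w_from_b2(b2):
    """Kinematic inversion: w = sqrt(m1^2 + b^2) + sqrt(m2^2 + b^2)."""
    return np.sqrt(m1**2 + b2) + np.sqrt(m2**2 + b2)

w = m1 + m2                # starting guess
for _ in range(30):
    w_new = w_from_b2(lowest_b2(w))
    if abs(w_new - w) < 1e-9:
        break
    w = w_new

mu = m1 * m2 / (m1 + m2)
print("self-consistent w  :", w)
print("nonrel. Bohr guess :", m1 + m2 - mu * alpha**2 / 2)   # for comparison only
```

The same loop structure would apply if $\mathcal{A}$ and $S$ were replaced by the apportioned Adler-Piran invariants of Eqs.(\[apa\],\[aps\]) and the full spin-dependent potential of Eq.(\[sch\]) were included; the sketch is only meant to display the self-consistency in $w$.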
The solution of these equations, which we have treated in detail elsewhere (see [@cra88; @crcmp]), requires an efficient iteration scheme to supplement our algorithm for the eigenvalue $b^{2}(w)$ when our equations are written as coupled Schrödinger-like forms. We display our results in Table I at the end of the paper. In the first two columns of each of the tables we list quantum numbers and experimental rest mass values for 89 known mesons. We include all well known and plausible candidates listed in the standard reference ([@prtl]). We omit only those mesons with substantial flavor mixing. In the tables, the quantum numbers listed are those of the $\phi_{+}$ part of the sixteen-component wave function. To generate the fits, in addition to the quark masses we employ the parameters $\Lambda,U_{0}$ and $\beta$. We merely insert the static Adler-Piran potential into our relativistic wave equations just as we have inserted the Coulomb potential $\mathcal{A}=-\alpha/r$ to obtain the results of QED [@va86; @bckr]. Note especially that we use a single $\Phi(\mathcal{A},S)$ for all quark mass ratios, hence a single structure for all the $\bar{Q}Q,\ \bar{Q}q,$ *and* $\bar{q}q$ mesons in a single overall fit. In the third column in Table I we present the results for the model defined by Eqs.(\[apa\],\[aps\]). The entire confining part of the potential in this model transforms as a world scalar. In our equations, this structure leads to linear confinement at long distances and quadratic confinement at extremely long distances (where the quadratic contribution $S^{2}$ outweighs the linear term $2m_{w}S$). At distances at which $\exp(-\beta\Lambda r)\ll1$, the corresponding spin-orbit, Thomas, and Darwin terms are dominated by the scalar interaction, while at short distances ($\exp(-\beta\Lambda r)\sim1$) the electromagnetic-like portion of the interaction gives the dominant contribution to the fine structure. Furthermore, because the signs of each of the spin-orbit and Darwin terms in the Pauli-form of our Dirac equations are opposite for the scalar and vector interactions, the spin-orbit contributions of those parts of the interaction produce opposite effects, with degrees of cancellation depending on the size of the quarkonium atom. We obtain the meson masses given in column three as the result of a least squares fit using the known experimental errors, an assumed calculational error of 1.0 MeV, and an independent error conservatively taken to be 5% of the total width of the meson. We employ the calculational error not to represent the uncertainty of our algorithm but instead to prevent the mesons that are stable with respect to the strong interaction from being weighted too heavily. Our $\chi^{2}$ is quoted per degree of freedom, i.e., divided by the number of data (89) minus the number of parameters (8). The resulting best fit turns out to have quark masses $m_{b}=4.877,\ m_{c}=1.507,\ m_{s}=0.253,\ m_{u}=0.0547,\ m_{d}=0.0580\ \mathrm{GeV}$, along with potential parameters $\Lambda=0.216,\ \Lambda U_{0}=1.865\ \mathrm{GeV}$ and inverse distance parameter $\beta=1.936$. This value of $\beta$ implies that (in the best fit) as the quark separation increases, our apportioned Adler-Piran potential switches from primarily vector to scalar at about 0.5 fermi. This shift is a relativistic effect since the effective nonrelativistic limit of the potential ($\mathcal{A}+S$) exhibits no such shift (i.e., by construction $\beta$ drops out). In Table I, the numbers given in parentheses to the right of the experimental meson masses are experimental errors in $\mathrm{MeV}$.
The numbers given in parentheses to the right of the predicted meson masses are the contribution of that meson’s calculation to the total $\chi^{2}$ of 101. The 17 mesons that contain a $b$ (or $\bar{b}$) quark contribute a total of about 5.4 to the $\chi^{2}$, at an average of about 0.3 each. This is the lowest contribution of those given by any family. Since the Adler-Piran potential was originally derived for static quarks, one should not be surprised to find that most of the best fit mesons are members of the least relativistic of the meson families. Note, however, that five of the best fit mesons of this type contain highly relativistic $u$ and $s$ quarks (for which our equation reduces essentially to the one-body Dirac equation for the light quark). The 24 mesons that contain a $c$ (or $\bar{c}$) quark contribute a total of about 50.7 to the $\chi^{2}$, at an average of about 2.2 each. This is the highest contribution of those given by any family. A significant part of this contribution is due to the $\psi$ meson mass being about 32 MeV off its experimental value. Another part of the contribution is due to the fact that the mass of the high orbital excitation of the $D^{\ast}$ tensor meson is 80 MeV below its experimental value. In addition, the high orbital excitation of the $D_{s}^{\ast}$ is 60 MeV low. The 24 mesons that contain an $s$ (or $\bar{s}$) quark contribute a total of about 46.3 to the $\chi^{2}$, at an average of about 1.9 each, less than that for the $c$-quark mesons. This is important because the $s$ quarks are lighter than the $c$ quarks. Part of the reason for this unexpected effect is that our $\chi^{2}$ fitting procedure accounts for the fact that our meson model ignores the level shifts (due to the instability of many of the mesons that contain an $s$ quark) through the introduction of a theoretical error on the order of 5% of the width of the unstable mesons. The 36 mesons that contain a $u$ (or $\bar{u}$) quark contribute a total of about 54.6 to the $\chi^{2}$, at an average of about 1.5 each, while the 16 mesons on our list that contain a $d$ (or $\bar{d}$) quark contribute a total of about 18.6 to the $\chi^{2}$, at an average of about 1.2 each. The worst fits produced by our model are those to the $\psi$ and the $D^{*}$ and $D_{s}^{*}$ high orbital excitations. Although two of these mesons contain the light $u$ and $d$ quarks, in our fit the more relativistic bound states are not in general fit less well. In fact, the $\pi,K,D$ and $\rho$ mesons are fit better than these two excited $D^{*}$ and $D_{s}^{*}$ mesons. We see that overall, the two-body Dirac equations together with the relativistic version of the Adler-Piran potential account very well for the meson spectrum over the entire range of relativistic motions, using just the two parametric functions $\mathcal{A}$ and $S$. We now examine another important feature of our method: the goodness with which our equations account for spin-dependent effects (both fine and hyperfine splittings). Table I shows that the best fit versus experimental ground state singlet-triplet splittings for the $b\bar{u},\ b\bar{s},\ c\bar{c},\ c\bar{u},\ c\bar{d},\ c\bar{s},\ s\bar{u},\ s\bar{d},\ u\bar{d}$ systems are 48 vs 46, 59 vs 47, 151 vs 117, 134 vs 142, 132 vs 142, 147 vs 144, 418 vs 398, 418 vs 398, and finally 648 vs 627 MeV. We obtain a uniformly good fit for all hyperfine ground state splittings except for the $\eta_{c}-\psi$ system.
The problem with the fit for that system of mesons occurs because the $D^{\ast}\ ^{3}P_{2}$ and $D_{s}^{\ast}\ ^{3}P_{2}$ states are significantly low while the $\psi$ is significantly high. Furthermore, the singlet and triplet $P$ states are uniformly low. Correcting the $\psi$ mass would require lowering the $c$ quark mass, while raising the charmonium and $D^{\ast},D_{s}^{\ast}$ $P$ state masses would require raising the $c$ quark mass. Reducing one discrepancy would worsen the other. Below, we will uncover what we believe is the primary cause for this discrepancy as we examine other aspects of the spectrum. For the spin-orbit splittings we obtain values for the $R$ ratios $(^{3}P_{2}-^{3}P_{1})/(^{3}P_{1}-^{3}P_{0})$ of 0.71,0.67,0.42,-0.19,-0.58,-3.35 for the two $b\bar{b}$ triplets and the $c\bar{c},s\bar{s},u\bar{s},u\bar{d}$ spin triplets, compared to the experimental ratios of 0.66,0.61,0.48,0.09,-0.97,-0.4. This fit ranges from very good in the case of the light $\Upsilon$ multiplet to miserably bad for the two lightest multiplets. From the experimental point of view, some of the problem may be blamed on the uncertain status of the $^{3}P_{0}$ light quark meson bound states and the spin-mixing in the case of the $K^{\ast}$ multiplet. From the theoretical point of view, the lack of any mechanism in our model to account for the effects of decay rates on level shifts undoubtedly has an effect. Another likely cause is that as we proceed from the heavy mesons to the light ones, the radial size of the meson grows so that the long distance interactions, in which the scalar interaction becomes dominant, play a more important role. The spin-orbit terms due to scalar interactions are opposite in sign to, and tend (at long distance) to dominate, the spin-orbit terms due to vector interactions. This results in partial to full multiplet inversions as we proceed from the $s\bar{s}$ to the $u\bar{d}$ mesons. This inversion mechanism may also be responsible for the problems of the two orbitally excited $D^{\ast}$ and $D_{s}^{\ast}$ mesons described above. It may be responsible as well for the problem of the singlet and triplet $P$ states since the scalar interaction tends to offset the dominant shorter range vector interaction, at least slightly. We also examine the effect of the hyperfine structure of our equations on the splitting between the $^{1}P_{1}$ state and the weighted sum $[5(^{3}P_{2})+3(^{3}P_{1})+1(^{3}P_{0})]/9$ of bound states. We obtain pairs of values equal to 3.520,3.520; 1.408,1.432; 1.392,1.416 for the $c\bar{c},u\bar{s},u\bar{d}$ families versus the experimental pairs of 3.526,3.525; 1.402,1.375; 1.231,1.303. The agreement of the theoretical and experimental mass differences is excellent for the $\psi$ system, slightly too large and of the wrong sign for the $K$ system, and too small and of the wrong sign for the $u\bar{d}$ system. Part of the cause of this pattern is that pure scalar confinement worsens the fit for the light mesons because of its tendency to reverse the spin-orbit splitting, thereby shifting the center of gravity. The agreement for the light systems is nevertheless considerably better than that in the case of the fine structure splitting $R$ ratios. Another part of the discrepancy may be due to the uncertain status of the light $^{3}P_{0}$ meson as well as the spin-mixing in the case of the $K^{\ast}$ multiplet.
Note that in the case of unequal mass $P$ states, our calculations of the two values incorporate the $\vec{L}\cdot(\vec{s}_{1}-\vec{s}_{2})$ spin-mixing effects. (The use of nonrelativistic notation is only for convenience.) These differences between heavy and light meson systems also occur in the mixing due to the tensor term between radial $S$ and orbital $D$ excitations of the spin-triplet ground states. This mixing occurs most notably in the $c\bar{c},u\bar{s}$ and $u\bar{d}$ systems. The three pairs of values that we obtain are 3.808,3.688; 1.985,1.800; 1.986,1.775 respectively, versus the data 3.770,3.686; 1.714,1.412; 1.700,1.450. Our results are quite reasonable for the charmonium system but underestimate considerably the splitting for the light quark systems. As happened for the significant disagreement in the case of the fine structure, our results here worsen significantly for the light meson systems. The spectroscopy of the lighter mesons is undoubtedly more complex due to their extreme instability (not accounted for in our approach). Note, however, that for the spin-spin hyperfine splittings of the ground states the more relativistic (lighter quark) systems yield masses that agree at least as well with the experimental data as do the heavier systems. This same mixed behavior shows up again for the radial excitations. The incremental $\chi^{2}$ contribution for the six $^{3}S_{1}$ states of the $\Upsilon$ system is just 1.8. It is 12.9 over three states for the triplet charmonium system (primarily due to the $\psi$ deviation), 3.0 for the two $\phi$ states, 1.6 for the three $^{1}S_{0}$ states of the $K$ system (note, however, that these fits include expected errors due to the lack of level shift mechanisms and are thus reduced), 7.3 for the two $^{3}S_{1}$ states of the $K^{\ast}$ system, 2.2 for the three triplet $u\bar{d}$ states and 8.2 for the three singlet $u\bar{d}$ states. The $\chi^{2}$ contribution at first increases, then decreases with the lighter systems. Overall, the masses are much too large for the radially excited light quark mesons. These discrepancies may be due both to neglect of decay-induced level shifts and to the increase of the confining force at large $r$ from linear to quadratic (there is no term to compensate for the quadratic $S^{2}$ term). The isospin splitting that we obtain for the spin singlet $B$ meson system is 1 MeV. Our calculation includes the contribution from the $u-d$ mass difference of 3.3 MeV as well as that due to different charge states. The effect of the latter tends to offset the effects of the former since the $b$ and $\bar{u}$ have charges of the same sign while the $b$ and $\bar{d}$ have charges of opposite sign, and $m_{d}>m_{u}$. In the experimental data this offset is complete (the measured splitting is 0). In the case of the $D^{+}-D^{0}$ splitting our mass difference of 7 MeV represents the combined effects of the $u-d$ mass difference and the slightly increased electromagnetic binding present in the case of the $D^{0}$ and the slightly decreased binding in the case of the $D^{+}$. The experimental mass difference is just 4 MeV. These effects work in the same way for the spin-triplet splitting, resulting in the theoretical value 5 MeV compared with the experimental value 3 MeV. For the $^{3}P_{2}$ isodoublet we obtain 4 MeV versus about 0 for the experimental values. Our isospin splittings are enhanced because of the large $u-d$ quark mass difference that gives the best overall fit.
For the $K-K^{\ast}$ family the experimental value for the isospin splitting is 4 MeV for the singlet and triplet ground states. This splitting actually grows for the orbital excitation ($K_{2}^{\ast}$) to 7 MeV. The probable reason for this increase is that at the larger distances the weak influence of the Coulomb differences becomes small and only the actual $u-d$ mass difference influences the result (although it does seem rather large). It is difficult to understand why our results stay virtually zero for all three isodoublets. Note that, as with the $B$ doublets, the theoretical contributions of the combined effects of the $u-d$ mass differences and the electrostatic effects tend to cancel. However, the experimental masses do not show this expected cancellation. ### Implications of our Model for the New 2.32 GeV D$_{s}^{\ast}$ Meson Recently, the BaBar Collaboration [@bbr] found evidence for a new $0^{+}$ strange-charmed meson at 2.32 GeV. Using the parameters above and assuming the state is a $^{3}P_{0}$ $c\bar{s}$ meson we find a predicted mass of 2.35 GeV, about 130 MeV below our predicted value for the $^{3}P_{2}$ counterpart. The corresponding mass difference in the Godfrey-Isgur model is 2.590-2.480=110 MeV. Both are well off the experimental mark of 2.572-2.317=255 MeV. It is not surprising that its place in the quark model has been the subject of some debate. Overall comparison with the experimental data shows that the primary strength of our approach is that it provides very good estimates for the ground states for all families of mesons and for the radial excitation and fine structure splittings for the heavier mesons. On the other hand, it overestimates the radial and orbital excitations for the light mesons. Its worst results are those for the fine structure splittings for the $u\bar{s},d\bar{s}$ and $u\bar{d}$ mesons. Both weaknesses are probably due to long distance scalar potential effects. Below, we shall discuss other aspects of our fit to the spectrum when we compare its results to those of other approaches to the relativistic two-body bound state problem. Explicit Numerical Construction of Meson Wave Functions ------------------------------------------------------- There are 89 mesons in our fit to the meson spectrum. An important advantage of the constraint formalism is that its local wave equation provides us with a direct way to picture the wave functions. As examples, we present the wave functions that result from our overall spectral fit for three mesons: the $\pi$ (Figure 1), for which we present the radial part of $\phi_{+}=\psi_{1}+\psi_{4}$ that solves Eq.(\[spi\]); the $\rho$ (Figures 2 and 3), for which we present the radial parts of the wave functions $\phi_{+}$ for both $S$ and $D$ states that solve Eqs.(\[swv\],\[dwv\]); and the $\psi/J$ (Figures 4 and 5), for which we present the radial parts of the wave functions $\phi_{+}$ for both $S$ and $D$ states that also solve Eqs.(\[swv\],\[dwv\]). In each plot the scale $r_{0}$ is proportional to the Compton wavelength corresponding to the nonrelativistic reduced mass $\mu$ of the two quark system.
In the table below, for each of the plotted mesons, we give the scale factor $r_{0}$ and the root mean square radius (in Fermis) computed from these meson wave functions$.$ For the $\rho$ and $\psi$ mesons we also give the computed probabilities for residing in the $S$ and $D$ states. $$\begin{tabular} [c]{lllll}Meson & $r_{0}\mu$ & $\sqrt{<r^{2}>}$ & $S$ & $D$\\ $\pi$ & 0.0004 & 0.21$\mathrm{fm}$ & 1.00 & 0.0\\ $\rho$ & 0.013 & 0.73\textrm{fm} & 0.861 & 0.139\\ $\psi$ & 0.084 & 0.36\textrm{fm} & 0.9974 & 0.0026 \end{tabular} \ \ \ \ \ \ $$ Using a scheme outlined in Appendix B, we obtain an analytic approximation to the meson wave functions in terms of harmonic oscillator wave functions.  The two primary parameters we use for each meson are the scale factor $a$ and the leading power (short distance behavior) exponent $k$.  In addition we take as parameters the coefficients of the associated Laguerre polynomials. We write the radial wave function for each meson in the form $$u(r)\doteq\sum_{n=0}^{N}c_{n}v_{n}(r)$$ where $$v_{n}(r)\ =\sqrt{\frac{2(n!)}{(n+k-1/2)!}}\exp(-y^{2}/2)y^{k}L_{n}^{k-1/2}(y^{2})$$ in which $y=r/a=\alpha e^{x}$ and (with $z=y^{2}$) $$L_{n}^{k-1/2}(z)=\frac{e^{z}z^{-k+1/2}}{n!}\frac{d^{n}}{dz^{n}}(e^{-z}z^{k+n-1/2}).$$ We then vary the two parameters $a$ and $k$ to obtain the best fit$.$ The coefficients are fixed by $$c_{n}=\int_{0}^{+\infty}v_{n}(r)u(r)dr$$ For meson radial wave functions with more than one component  (like the $\psi/J)$ we fit each component separately.  In the table below we give a typical list for parameters $a,k,c_{n}$ for the $\pi$, $\rho$, and $\psi/J $.    $$\begin{tabular} [c]{llll} & $\pi$ & $\rho$ & $\psi/J$\\ $k$ & 2.30734E-001 \ & 9.85790E-001 \ & 9.27248E-001 \ \\ $\alpha^{2}$ & 1.22106E--004 \ \ \ \ & 2.04708E-001 \ & 5.85947E--002 \ \ \\ $c_{0}$ & -9.70613E-001 \ & 5.68290E-001 \ & 8.63401E-001 \ \\ $c_{1}$ & 1.97188E-001 \ & -5.54267E-001 \ & -3.77851E-001 \ \\ $c_{2}$ & -1.18926E-001 \ & 4.55647E-001 \ & 2.70111E-001 \ \\ $c_{3}$ & 3.93232E--002 \ \ & -2.95969E-001 \ & -1.44888E-001 \ \\ $c_{4}$ & -4.74935E--002 \ \ & 2.11945E-001 \ & 1.05621E-001 \ \\ $c_{5}$ & 1.59519E--002 \ \ & -1.29901E-001 \ & -5.85549E--002 \ \ \\ $c_{6}$ & -2.21638E--002 \ \ & 8.87707E--002 \ \ & 4.46522E--002 \ \ \\ $c_{7}$ & 9.35388E--003 \ \ \ & -5.36537E--002 \ \ & -2.44101E--002 \ \ \\ $c_{8}$ & -1.12997E--002 \ \ & 3.57731E--002 \ \ & 1.98781E--002 \ \ \\ $c_{9}$ & 5.74799E--003 \ \ \ & -2.16185E--002 \ \ & -1.03167E--002 \ \ \\ $c_{10}$ & -6.24195E--003 \ \ \ & 1.42167E--002 \ \ & 9.24913E--003 \ \ \ \\ $c_{11}$ & 3.44862E--003 \ \ \ & -8.57381E--003 \ \ \ & -4.34130E--003 \ \ \ \\ $c_{12}$ & -3.63673E--003 \ \ \ & 5.67698E--003 \ \ \ & 4.49675E--003 \ \ \ \\ $c_{13}$ & 2.04307E--003 \ \ \ & -3.31349E--003 \ \ \ & -1.77086E--003 \ \ \ \\ $c_{14}$ & -2.16019E--003 \ \ \ & 2.33901E--003 \ \ \ & 2.29266E--003 \ \ \ \\ $c_{15}$ & 1.22870E--003 \ \ \ & -1.19431E--003 \ \ \ & -6.63516E--004 \ \ \ \ \\ $c_{16}$ & -1.26919E--003 \ \ \ & 1.03806E--003 \ \ \ & 1.23170E--003 \ \ \ \\ $c_{17}$ & 7.72030E--004 \ \ \ \ & -3.42741E--004 \ \ \ \ & -1.93158E--004 \ \ \ \ \\ $c_{18}$ & -7.16255E--004 \ \ \ \ & 5.20857E--004 \ \ \ \ & 6.97788E--004 \ \ \ \ \\ $c_{19}$ & 5.18700E--004 \ \ \ \ & -5.02603E-006 \ & -3.64677E-007 \ \\ $c_{20}$ & -3.71156E--004 \ \ \ \ & & \\ $c_{21}$ & 3.77233E--004 \ \ \ \ & & \\ $c_{22}$ & -1.56718E--004 \ \ \ \ & & \end{tabular}$$ We note several features.  
First, the fit to the $\pi$ wave function appears to converge significantly more slowly than those for the $\rho$ and $\psi/J$. (We do not present plots comparing the numerical wave functions with the harmonic oscillator wave function fits since there are no visible differences.) Also note that the $\pi$’s short distance behavior is distinctly different from that of the other two, having a stronger radial dependence at the origin. All three wave functions possess polynomial coefficients that exhibit an oscillatory behavior. Numerical Evidence for Goldstone Boson Behavior ----------------------------------------------- In our equations, the pion is a Goldstone boson in the sense that its computed mass tends toward zero in the limit in which the quark mass goes toward zero. This may be seen in Figure 6 (units are in $\operatorname{MeV}$). Note that the $\rho$ meson mass approaches a finite value in the chiral limit. This non-Goldstone behavior also holds for the excited pion states. None of the alternative approaches discussed in the following sections has displayed this Goldstone property of the pion. Another distinction we point out is that our $u$ and $d$ quark masses (on the order of 55-60 $\operatorname{MeV}$) are significantly smaller than the constituent quark masses appearing in most other models (on the order of 300 $\operatorname{MeV}$), closer to the small current quark masses of a few $\operatorname{MeV}$. Note, however, that the shape of our pion curve is not what one would expect from the Goldberger-Treiman relation $$m_{q}=m_{\pi}^{2}F_{\pi}.$$ Thus this aspect of our model requires further investigation. Comparison of Structures of Two-Body Dirac Equations with Those of Alternative Approaches ========================================================================================== So far, we have obtained spectral results given by our equations when solved in their own most convenient form. In Sections VI-IX we shall compare our results with recent universal fits to the meson spectrum produced by a number of other authors. These approaches employ equations whose structures (at first sight) appear radically different from ours. However, as we have shown elsewhere [@va86], because our approach starts from a pair of coupled but compatible Dirac equations, these equations can be rearranged in a multitude of forms all possessing the same solutions. Among the rearrangements are those with structures close to those of the authors whose spectral fits we shall shortly examine. In order to see how structural differences in each case may lead to differences in the resulting numerical spectra, we shall begin by considering relevant rearrangements of the two-body Dirac equations. The first two alternative approaches which we shall discuss use truncated versions of the Bethe-Salpeter equation (Salpeter and quasipotential) while the third uses a modified form of the Breit equation. In order to relate the detailed predictions of our approach to these alternatives, we need to relate our minimal substitution method for the introduction of interactions to the introduction of interactions through the use of kernels that dominates the older approaches. The field-theoretic kernel employs a direct product of gamma matrices times some function of the relative momentum or coordinate. What is the analog of the kernel in our approach?
In earlier work we found that we could obtain our external potential or minimal interaction form of our two-body Dirac equations from yet another form displaying a remarkable hyperbolic structure. We were able to recast our compatible Dirac equations (\[tbdea\],\[tbdeb\]) as $$\begin{aligned} \mathcal{S}_{1}\psi & =(\cosh(\Delta)\mathbf{S}_{1}+\sinh(\Delta)\mathbf{S}_{2})\psi=0,\nonumber\\ \mathcal{S}_{2}\psi & =(\cosh(\Delta)\mathbf{S}_{2}+\sinh(\Delta)\mathbf{S}_{1})\psi=0, \label{cnhyp}$$ in which [@jmath] $$\begin{aligned} \mathbf{S}_{1}\psi & \equiv(\mathcal{S}_{10}\cosh(\Delta)+\mathcal{S}_{20}\sinh(\Delta))\psi=0,\nonumber\\ \mathbf{S}_{2}\psi & \equiv(\mathcal{S}_{20}\cosh(\Delta)+\mathcal{S}_{10}\sinh(\Delta))\psi=0, \label{cnmyp}$$ in terms of the free Dirac operators $$\begin{aligned} \mathcal{S}_{10}\psi & =\big(-\beta_{1}\Sigma_{1}\cdot p+\epsilon_{1}\beta_{1}\gamma_{51}+m_{1}\gamma_{51}\big)\psi\nonumber\\ \mathcal{S}_{20}\psi & =\big(\beta_{2}\Sigma_{2}\cdot p+\epsilon_{2}\beta_{2}\gamma_{52}+m_{2}\gamma_{52}\big)\psi\label{s0}$$ and the kernel $$\Delta={\frac{1}{2}}\gamma_{51}\gamma_{52}[L(x_{\perp})+\gamma_{1}\cdot\gamma_{2}\mathcal{G}(x_{\perp})]. \label{del}$$ We then recover the explicit external potential forms of our equations, (\[tbdea\],\[tbdeb\]), from (\[cnhyp\],\[cnmyp\]) by moving the free Dirac operators $\mathcal{S}_{i0}$ to the right to operate on the wave function. This rearrangement produces the derivative recoil terms apparent in Eqs.(\[tbdea\],\[tbdeb\]). $\Delta$ may take any one of (or a combination of) eight invariant forms. In terms of $$\mathcal{O}_{1}=-\gamma_{51}\gamma_{52},$$ these become $\Delta(x_{\perp})=-L(x_{\perp})\mathcal{O}_{1}/2$, $\gamma_{1}\cdot\hat{P}\gamma_{2}\cdot\hat{P}J(x_{\perp})\mathcal{O}_{1}/2$, $\gamma_{1\perp}\cdot\gamma_{2\perp}\mathcal{G}(x_{\perp})\mathcal{O}_{1}/2$, or $\alpha_{1}\cdot\alpha_{2}\mathcal{F}(x_{\perp})\mathcal{O}_{1}/2$ for scalar, time-like vector, space-like vector, or tensor (polar) interactions, respectively. Note that in our $\Delta(x_{\perp})$ in Eq.(\[del\]) above, $\mathcal{G}(x_{\perp})$ enters multiplied by the electromagnetic-like combination $\gamma_{1}\cdot\gamma_{2}=-\gamma_{1}\cdot\hat{P}\gamma_{2}\cdot\hat{P}+\gamma_{1\perp}\cdot\gamma_{2\perp}$ of time-like and space-like parts. This structure appears as a result of our use of the Lorentz gauge to introduce vector interactions in the classical version of the constraint equations or as a result of our use of the Feynman gauge to treat the field-theoretic version [@infrared]. The axial counterparts to the constraints with polar interactions are given by (note the minus sign compared with the plus sign in Eqs.(\[cnhyp\])) [@jmath] $$\begin{aligned} \mathcal{S}_{1}\psi & =(\cosh(\Delta)\mathbf{S}_{1}-\sinh(\Delta)\mathbf{S}_{2})\psi=0\\ \mathcal{S}_{2}\psi & =(\cosh(\Delta)\mathbf{S}_{2}-\sinh(\Delta)\mathbf{S}_{1})\psi=0,\nonumber\end{aligned}$$ in which $\mathbf{S}_{1}$ and $\mathbf{S}_{2}$ are still given by (\[cnmyp\]), with axial counterparts to the above $\Delta$’s given by $C(x_{\perp})/2$, $\gamma_{51}\gamma_{1}\cdot\hat{P}\gamma_{52}\gamma_{2}\cdot\hat{P}H(x_{\perp})\mathcal{O}_{1}/2$, $\gamma_{51}\gamma_{1\perp}\cdot\gamma_{52}\gamma_{2\perp}I(x_{\perp})\mathcal{O}_{1}/2$, and $\sigma_{1}\cdot\sigma_{2}Y(x_{\perp})\mathcal{O}_{1}/2$, respectively.
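Before proceeding, it is straightforward to confirm numerically that the hyperbolic parametrization introduced earlier (Eqs.(\[hyp\]),(\[epot\])) is equivalent to the explicit forms of Eqs.(\[tvecp1\]) and (\[gp\]); the following spot-check (ours) does so at a few random kinematic points.

```python
import numpy as np

# Numerical spot-check (ours) that the hyperbolic parametrization of Eq. (epot),
# E1 = eps1 cosh(curlyG) - eps2 sinh(curlyG), E2 = eps2 cosh(curlyG) - eps1 sinh(curlyG),
# with G = exp(curlyG) and G^2 = 1/(1 - 2A/w), reproduces E_i = G (eps_i - A)
# of Eqs. (tvecp1), (gp) once eps1 + eps2 = w is used.

rng = np.random.default_rng(0)
for _ in range(3):
    w = rng.uniform(2.0, 5.0)
    m1, m2 = rng.uniform(0.3, 1.5, size=2)
    A = rng.uniform(-1.0, 0.4 * w)                  # keep A < w/2 so G is real
    eps1 = (w**2 + m1**2 - m2**2) / (2.0 * w)
    eps2 = w - eps1
    G = 1.0 / np.sqrt(1.0 - 2.0 * A / w)
    curlyG = np.log(G)
    E1_hyp = eps1 * np.cosh(curlyG) - eps2 * np.sinh(curlyG)
    E2_hyp = eps2 * np.cosh(curlyG) - eps1 * np.sinh(curlyG)
    print(np.isclose(E1_hyp, G * (eps1 - A)), np.isclose(E2_hyp, G * (eps2 - A)))
```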
The advantage of the hyperbolic form is that with its aid we may first choose among the 8 interaction types in an unambiguous way to introduce the interaction (without struggling to restore compatibility) and then, for computational convenience, transform the Dirac equations to external potential form. In the weak-potential limit of our equations, the coefficients of $\gamma_{51}\gamma_{52}$ in the expansion of our $\Delta$ interaction matrix in Eq.(\[del\]) directly correspond to the interaction kernels of the Bethe-Salpeter equation. Note, however, that because of the hyperbolic structure, what we call a vector interaction actually corresponds to a particular combination of vector and pseudovector interactions in the older approaches (see Eq.(\[eff\]) below). This difference in classification of interactions becomes apparent when we put our equations into a Breit-like form. Consider the linear combination $$\beta_{1}\gamma_{51}\mathbf{S}_{1}+\beta_{2}\gamma_{52}\mathbf{S}_{2}. \label{add}$$ For later convenience, we form the interaction matrix $$\mathcal{D}(x_{\perp})={\frac{1}{2}}\beta_{1}\gamma_{51}\beta_{2}\gamma_{52}\Delta(x_{\perp}).$$ After simplification, the linear combination Eq.(\[add\]) of our two hyperbolic equations becomes $$w\Psi=[H_{10}+H_{20}+V(x_{\perp},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},\gamma_{51},\gamma_{52})]\Psi$$ in which $$\Psi=\exp(-\mathcal{D})\psi$$ and $$H_{10}=\alpha_{1}\cdot p_{\perp}+\beta_{1}m_{1},\ H_{20}=-\alpha_{2}\cdot p_{\perp}+\beta_{2}m_{2}.$$ For the electromagnetic vector kernel $\Delta(x_{\perp})={\frac{1}{2}}[\gamma_{51}\gamma_{52}]\gamma_{1}\cdot\gamma_{2}\mathcal{G}(x_{\perp})$, $\mathcal{D}$ then becomes $$\mathcal{D}={\frac{1}{2}}\mathcal{G}(x_{\perp})(\alpha_{1}\cdot\alpha_{2}-1),$$ so that the relativistic Breit-like equation takes the c.m. form $$w\Psi=[\boldsymbol{\alpha}_{1}\cdot\mathbf{p}-\boldsymbol{\alpha}_{2}\cdot\mathbf{p}+\beta_{1}m_{1}+\beta_{2}m_{2}+w(1-\exp[\mathcal{G}(\mathbf{r})(\boldsymbol{\alpha}_{1}\cdot\boldsymbol{\alpha}_{2}-1)])]\Psi. \label{edgau}$$ In lowest order this equation takes on the familiar form for four-vector interactions (seemingly missing the traditional Darwin interaction piece $\sim\mathbf{\hat{r}}\cdot\boldsymbol{\alpha}_{1}\mathbf{\hat{r}}\cdot\boldsymbol{\alpha}_{2}$): $$w\Psi=[\boldsymbol{\alpha}_{1}\cdot\mathbf{p}-\boldsymbol{\alpha}_{2}\cdot\mathbf{p}+\beta_{1}m_{1}+\beta_{2}m_{2}-w\mathcal{G}(\mathbf{r})(\boldsymbol{\alpha}_{1}\cdot\boldsymbol{\alpha}_{2}-1)]\Psi. \label{sims}$$ However, as we first showed in [@cra94], expanding the simple structure of Eq.(\[edgau\]) to higher order in fact generates the correct Darwin dynamics. As a consequence, our unapproximated equation yields analytic and numerical agreement with the field theoretic spectrum through order $\alpha^{4}$. Explicitly, our full interaction is $$\begin{aligned} \exp[(\boldsymbol{\alpha}_{1}\cdot\boldsymbol{\alpha}_{2}-1)\mathcal{G}] & ={\frac{\exp(-\mathcal{G})}{4}}[3\cosh(\mathcal{G})+\cosh(3\mathcal{G})+\gamma_{51}\gamma_{52}(3\sinh(\mathcal{G})-\sinh(3\mathcal{G}))\nonumber\\ & +\boldsymbol{\alpha}_{1}\cdot\boldsymbol{\alpha}_{2}(\sinh(3\mathcal{G})+\sinh(\mathcal{G}))+\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2}(\cosh(\mathcal{G})-\cosh(3\mathcal{G}))] \label{eff}$$ so that our Breit-like potential contains a combination of vector and pseudovector interactions originating from the four-vector potentials of the original constraint equations in external-potential form.
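Equation (\[eff\]) can be verified independently. The following numerical check (ours, purely illustrative) builds the $16\times16$ matrices explicitly, assuming the Dirac representation and the identification $\boldsymbol{\alpha}_{i}=\gamma_{5}\boldsymbol{\Sigma}_{i}$ for each particle, and compares the matrix exponential with the right-hand side of Eq.(\[eff\]).

```python
import numpy as np
from scipy.linalg import expm

# Numerical check (ours) of the operator expansion in Eq. (eff).  Assumed
# conventions: Dirac representation, alpha_i = gamma5 Sigma_i; the sigma1.sigma2
# of the text is realized below as Sigma1.Sigma2.

I2x2, I16 = np.eye(2), np.eye(16)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma5 = np.kron(sx, I2x2)                     # off-diagonal identity blocks
Sigma = [np.kron(I2x2, s) for s in (sx, sy, sz)]
alpha = [gamma5 @ S for S in Sigma]            # alpha_i = gamma5 Sigma_i

a1_dot_a2 = sum(np.kron(a, a) for a in alpha)  # alpha1 . alpha2 (16x16)
g51_g52 = np.kron(gamma5, gamma5)
s1_dot_s2 = sum(np.kron(S, S) for S in Sigma)

G = 0.37                                       # arbitrary test value
lhs = expm((a1_dot_a2 - I16) * G)
rhs = (np.exp(-G) / 4.0) * (
    (3.0 * np.cosh(G) + np.cosh(3.0 * G)) * I16
    + (3.0 * np.sinh(G) - np.sinh(3.0 * G)) * g51_g52
    + (np.sinh(3.0 * G) + np.sinh(G)) * a1_dot_a2
    + (np.cosh(G) - np.cosh(3.0 * G)) * s1_dot_s2
)
print("max deviation:", np.abs(lhs - rhs).max())   # ~1e-15
```

The agreement is at machine precision, which also serves as a quick consistency check on the sign and coefficient structure of Eq.(\[eff\]).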
In this section we have seen how the two-body Dirac equations with field-theoretic interaction structure automatically retain the correct Darwin structure of QED [@qed]. Such a demonstration should be carried out for each alternative treatment (if possible) in order to check that truncations and numerical procedures have not destroyed its own version of the field-theoretic Darwin structure for its treatment of the vector interaction of QED (and associated vector structures in QCD). Explicitly, in our own work we find that including all the couplings to smaller components of the wave function is crucial not only for our nonperturbative QED spectral results (see [@bckr]) but also for our good results for $\pi-\rho$ splittings and the Goldstone behavior of the pion as the quark mass tends toward zero. Without those couplings the good results for the positronium splittings and light mesons evaporate. The Wisconsin Model of Gara, Durand, Durand, and Nickisch ========================================================== Definition of The Model and Comparison of Structure with Two-Body Dirac Approach -------------------------------------------------------------------------------- The authors of reference [@wisc] base their analysis of quark-antiquark bound states on the reduced Salpeter equation containing a mixture of scalar and vector interactions between quarks of the same or different flavors. When rewritten in a notation that aids comparison with our approach, their bound state equation takes the c.m. form $$\lbrack w-\omega_{1}-\omega_{2}]\Phi(\mathbf{p})=\Lambda^{+}(\mathbf{p})\gamma^{0}\int{\frac{d^{3}p^{\prime}}{(2\pi)^{3}}}[\mathcal{A}(\mathbf{p}-\mathbf{p}^{\prime})\gamma_{\mu}\Phi(\mathbf{p}^{\prime})\gamma^{\mu}+S(\mathbf{p}-\mathbf{p}^{\prime})\Phi(\mathbf{p}^{\prime})]\gamma^{0}\Lambda^{-}(-\mathbf{p}) \label{wisc}$$ in which $\mathcal{A}$ and $S$ are functions that parametrize the electromagnetic-like and scalar interactions, $\Lambda^{\pm}$ are projection operators, $w$ is the c.m. energy, $\omega_{i}=(\mathbf{p}^{2}+m_{i}^{2})^{1/2}$, while $\Phi$ is a four by four matrix wave function represented in block matrix form as $$\Phi=\bigg[{\genfrac{}{}{0pt}{}{\phi^{+-}}{\phi^{--}}}{\genfrac{}{}{0pt}{}{\phi^{++}}{\phi^{-+}}}\bigg]$$ They obtain this equation from the full Bethe-Salpeter equation by making an assumption equivalent to using a position-space description in which they calculate the interaction potential with the equal time constraint, neglecting retardation. (These are the usual ad-hoc assumptions that in our approach are automatic consequences (in covariant form) of our two simultaneous, compatible Dirac equations.) These restrictions turn Eq.(\[wisc\]) into the standard Salpeter equation. In addition, the Wisconsin group employs what we call the weak potential assumption: $(w+\omega_{1}+\omega_{2})\gg V$. This assumption turns Eq.(\[wisc\]) into the reduced Salpeter equation which, because of the properties of the projection operator, allows the Wisconsin group to perform a Gordon reduction of its equation to obtain a reduced final equation in terms of $\phi^{++}$ alone. In our approach we make no such weak potential assumption and therefore must deal directly with the fact that our Dirac equations themselves relate components of the sixteen component wave function to one another.
Unlike what happens in the reduced Salpeter equation, in our method this coupling leads to potential dependent denominators, a strong potential structure that we found crucial in demonstrating that our formalism yields legitimate relativistic two-body equations. Just as we do, however, the Wisconsin group works in coordinate space where the dynamical potentials are local and easy to handle. However, in their method upon Fourier transformation the kinetic factors $\omega_{i}$ then become nonlocal operators. In contrast, the entire dynamical structure of our two-body Dirac equations is local as long as the potentials are local. The Wisconsin group uses local static potentials that play the role of our Adler-Piran potential: $$\begin{aligned} \mathcal{A}(r) & =-{\frac{4}{3}}\frac{\alpha_{s}(r)}{r}e^{-\mu^{\prime}r}+\delta(-\frac{\beta}{r}+\Lambda r)(1-e^{-\mu r})\nonumber\\ S(r) & =(1-\delta)(-\frac{\beta}{r}+Br)(1-e^{-\mu r})+(C+C_{1}r+C_{2}r^{2})(1-e^{-\mu r})e^{-\mu r} \label{wsint}$$ Note that Gara et al introduce a confining electromagnetic-like vector potential proportional to a parameter $\delta$. This differs from our approach in which the (dominant) linear portion of the confinement potential has no electromagnetic part. Like Adler’s potential, theirs has a long range $1/r$ part (the so-called Luscher term). Its short range part is electromagnetic-like just as is ours, and like Adler’s is obtained from a renormalization group equation. They base their analysis on a nonperturbative, numerical solution of the reduced Salpeter equation Eq.(\[wisc\]) with interaction Eq.(\[wsint\]). Comparison of Wisconsin Fit with that of Two-Body Dirac Equations ----------------------------------------------------------------- In Table II we include the Wisconsin variable-$\delta$ (vector and scalar confinement) best fit results, and the best fit results our method gives when restricted to the 25 mesons they consider. For uniformity of presentation we give all of the Wisconsin results in terms of absolute masses (rather than the mass differences and averages these authors presented for the spin-orbit triplets). Although Gara et al. did not perform the same $\chi^{2}$ fit that we do, we present (in parentheses) the incremental $\chi^{2}$ contribution for each meson so that we can easily compare the results of the two methods. We also compare their $R$ values and $^{3}P\ avg.$ to ours directly in the discussion below. Our results are closer to the experimental results for 16 of the 25 mesons. In detail, their $R$ values for the  $\Upsilon$ and $\psi$ families of 0.83,0.78, and 0.60 are less accurate than two of our values of 0.64,0.68, 0.35 respectively. Their $^{3}P$ averages $[5(^{3}P_{2})+3(^{3}P_{1})+1(^{3}P_{0})]/9$ of 9.902 ,10.262, 3.513 and ours (9.901, 10.264, 3.513) are essentially the same compared to the experimental results of 9.900, 10.273, 3.525 MeV. Their hyperfine splittings for the two charmonium multiplets of 200 and 47 MeV are significantly worse than our fits of 150 and 79 MeV. Their hyperfine splittings for the mesons with one $d$ or $s$ quark are 27, 51, and 127 MeV. Our fits of 128,138, 420 MeV respectively are much closer to the experimental results of 141, 141, 398 MeV. The radial excitation energies for the two lowest $\Upsilon$ excitations and the singlet and triplet charmonium excitations are again accounted for significantly better by three of four of our values of 569,335,636,568 MeV for the results in the last column than by the Wisconsin results of 602,331,654,491 MeV. 
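The comparison quantities used here and in the preceding subsections are elementary to compute; for convenience we record a small helper (ours), spot-checked against approximate PDG charmonium $\chi_{c}(1P)$ masses, which reproduces the experimental values of $R\simeq0.47$ and a $^{3}P$ average of about 3.525 GeV quoted earlier in the text.

```python
# Small helpers (ours) for the spin-orbit comparison quantities used in the
# text: R = (3P2 - 3P1)/(3P1 - 3P0) and the weighted triplet average
# [5(3P2) + 3(3P1) + (3P0)]/9.

def R_ratio(m_3P2, m_3P1, m_3P0):
    return (m_3P2 - m_3P1) / (m_3P1 - m_3P0)

def triplet_P_average(m_3P2, m_3P1, m_3P0):
    return (5.0 * m_3P2 + 3.0 * m_3P1 + m_3P0) / 9.0

# Spot check with approximate PDG charmonium chi_c(1P) masses (GeV):
chi_c2, chi_c1, chi_c0 = 3.5562, 3.5107, 3.4148
print(round(R_ratio(chi_c2, chi_c1, chi_c0), 2))             # ~0.47
print(round(triplet_P_average(chi_c2, chi_c1, chi_c0), 3))   # ~3.525
```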
In summary, the major strength of our approach is reflected in its better fits to the hyperfine splittings and radial excitations. The Wisconsin group’s results for the fine structure splittings are overall about the same as ours. Moreover, even a casual glance at the results shows one glaring discrepancy that results from their approach: their hyperfine splittings for the light quark mesons. The cause of this is probably the fact that their reduced Salpeter approach does not include coupling of the upper-upper piece to the other 12 components of the 16-component wave function. In fact, the lighter the meson, the worse is their result. In our QED numerical investigations we found that couplings to the other components of the wave function were essential in order to obtain agreement with the standard perturbative spectral results of QED. We have found that the same strong-potential effects that led to our successful results in QED are responsible for the goodness of our hyperfine splittings, particularly for the mesons containing the light quarks. It would be important to test the Wisconsin group’s procedure (with its deleted couplings to the other wave function components) numerically with $\mathcal{A}=-\alpha/r$ and $S=0$ for positronium to determine whether the problems that the Wisconsin model has with mesonic hyperfine splittings in QCD are reflected in its results for QED. Gara et al. point out that in their approach the straight-line Regge trajectories ($j$ versus $w^{2}$) for the light quark systems are much too steep, with slopes greater than twice the observed slopes for pure scalar confinement. The best fit experimental slope and intercept values for the $\rho,a_{2},\rho_{3}$ trajectory are (0.88, 0.48). The slope and intercept values that we obtain for our model in Table I are (0.87, 0.47), in excellent agreement with the best experimental fit. For the $\phi_{1},f_{2},\phi_{3}$ trajectory the experimental values are (0.83, 0.11) while our model of Table I produces the set of values (0.85, 0.095). The intercepts are not as accurate as those for the $\rho$ trajectory, although our results actually produce a tighter fit to a straight-line trajectory than do the experimental results. Finally we come to the $\pi,b_{1},\pi_{2}$ trajectory. We obtain the values (0.57, -0.04). Compared to the experimental values of (0.72, -0.04) our slope is about 25-30% too small, although our fit to the straight line is just as tight. The probable reason for the relative advantage of our results over those of the Wisconsin group is that our bound state equations include a strong potential structure and are not limited by the weak potential approximation built into the reduced Salpeter equation.

The Iowa State Model of Sommerer, El-Hady, Spence and Vary
===========================================================

Definition of The Model and Comparison of Structure with Two-Body Dirac Approach
--------------------------------------------------------------------------------

The Iowa State group introduces a new relativistic quasipotential reduction of the Bethe-Salpeter equation. They use the well-known fact that there are an infinite number of such reductions [@yaes] to construct a formal quasipotential parametrized in terms of two independent constants. They show that when all of the most often used reductions are specialized to QED, they fail to numerically reproduce the correct ground state result for singlet positronium through order $\alpha^{4}$ [@qed1].
These authors then fix the free parameters in their quasipotential by requiring that their resulting ground state energy lie close to the well-known perturbative value. In addition, the form of the quasipotential reduction they use produces a projection to positive energy states only. The Iowa State group uses a scalar linear confinement plus massless vector boson exchange potential with the kernel $$\frac{-4\pi\alpha_{s}\gamma_{0}\gamma_{\mu}\times\gamma_{0}\gamma^{\mu}}{-(q-q^{\prime})^{2}}+4\pi b\lim_{\mu\rightarrow0}\big[{\frac{\partial}{\partial\mu}}\big]^{2}\frac{\gamma_{0}\times\gamma_{0}}{-(q-q^{\prime})^{2}+\mu^{2}}$$ The QCD coupling $\alpha_{s}$ that they use is treated as a running coupling constant that depends on the momentum transfer and two parameters. Their quasipotential reduction incorporates zero relative energy in the c.m. frame.

Comparison of Fit with that of Constraint Approach
--------------------------------------------------

In Table III, we give the Iowa State group’s results for a set of mesons together with our results for the same set of mesons. In the fourth column of this table we present the results we would obtain from our approach if we limited our fit just to the 47 mesons used by the Iowa State group. We use the same RMS fitting procedure used by these authors instead of the $\chi^{2}$ fit used in our Table I. The results are quite similar, 50 for the Iowa State model and 53 for our model. Of the 47 mesons in their table, our fits are closer to the data in 25. Thus, according to this crude measure there is no significant difference between the results of the two approaches.[@rncpl] We proceed now with a detailed comparison. Their $R$ values for the two bottomonium and one charmonium multiplets are 3.25, 1.09, and 1.09. Our $R$ values of 0.70, 0.74, and 0.44 are considerably closer to the experimental ratios of 0.66 (0.61), 0.56 (0.61), 0.47. (We make no comparison for the three light quark multiplets ($s\bar{s},s\bar{u},u\bar{d}$) since the Iowa State group did not calculate the $^{3}P_{0}$ states.) We note, however, that for the pairs of $s\bar{u}$ and $u\bar{d}$ their results for the $^{3}P_{2}$ - $^{3}P_{1}$ splittings are substantially better than our results. In particular, unlike our results, theirs do not have an inversion of the splitting. Our poor results for these splittings are likely due to a larger influence of the scalar than the vector portion of the spin-orbit interaction. Comparing their $^{3}P$ averages $[5(^{3}P_{2})+3(^{3}P_{1})+1(^{3}P_{0})]/9$ of 9.859, 3.497, 1.433, and 1.015 GeV for the lowest lying spin-orbit multiplets listed in the table with our values of 9.902, 3.516, 1.470, and 1.386 and the experimental results of 9.900, 3.527, 1.503, and 1.303 GeV, we see that ours are closer in each case to the experimental results. We see also that for charmonium, our average is nearly equal to our $^{1}P_{1}$ level while the Iowa State results are 75 MeV higher than their $^{1}P_{1}$ level. For the $u\bar{d}$ system, our average is 25 MeV higher than our $^{1}P_{1}$ level while theirs is 122 MeV above their calculated $^{1}P_{1}$ level. Their values of the hyperfine ($^{3}S_{1}-^{1}S_{0}$) splittings are 98, 48, 100, 108, 421, and 677 MeV for the two charmonium multiplets and the $D-D^{\ast},D_{s}-D_{s}^{\ast},K-K^{\ast},\pi-\rho$ pairs.
Comparison with the experimental splittings of 117, 92, 142, 139, 398, and 628 MeV and our results of 159, 82, 137, 156, 376, and 593 MeV shows the constraint results closer to the experimental splittings on all but the ground state charmonium pair. (We have commented earlier on the origin of the discrepancy between our $\psi$ value and the experimental result.) We next wish to compare the results generated in both approaches for the spin-spin effect embodied in the $^{3}P_{1}$ - $^{1}P_{1}$ splittings. For the $c\bar{c},s\bar{u},u\bar{d}$ pairs the Iowa State results are 10, 43, and 4 MeV compared to the experimental results of 15, 136 (129), 28 (0) MeV and the two-body Dirac results of 16, 78, and 279 MeV. For the heavier two pairs, the constraint splitting results are substantially closer to the experimental results. This resembles the similar spin-spin pattern found in the $S$-state hyperfine splittings. Our poor result for the $u\bar{d}$ meson has the same origin as our poor result for the $R$ value mentioned above. Finally, we compare the radial excitations. The six upsilon states in the experimental column of the table occur at intervals of 562, 332, 225, 285, and 254 MeV. The three charmonium triplet states and the two charmonium singlet states occur at intervals of 589, 354, and 614 MeV while the two $s\bar{s}$ and $u\bar{d}$ states occur at intervals of 661 and 1160 MeV. The corresponding Iowa State intervals are 544, 335, 270, 259, 226, 597, 416, 647, 625, and 1304 MeV while our intervals are 578, 345, 260, 218, 191, 560, 395, 637, 753, and 1331 MeV. The Iowa State radial excitation splittings are closer to the experimental values on four of the five upsilon splittings, one of the three charmonium splittings and both of the lighter quark splittings. Even though the RMS values obtained in each approach are nearly the same, on most of the detailed comparisons made above the constraint approach appears to give better fits. The exceptions to this are the radial excitations and some of the heavier light-meson excitations. The largest portion of our RMS values comes from the heavy-light meson orbital and radial excitations. We have long argued that any proposed relativistic wave equation should be tested in terms of its ability to reproduce known perturbative results of QED and other relevant relativistic quantum field theories when solved nonperturbatively, before being applied to QCD. The Iowa State group in fact adopts this philosophy in order to resolve an ambiguity in the construction of the quasipotential in their wave equation by demanding that it reproduce the ground state level of singlet positronium numerically. This requirement fixes the values of the two parameters of their quasipotential mentioned above. In contrast, the constraint approach has no free parameters of the type used by [@iowa] for the quasipotential reductions. Instead, its Green function is fixed. While within the constraint approach the connection between the kernel and the invariant constraint functions (e.g. $\mathcal{G},L$) does involve some freedom of choice (see Eqs.(\[tvecp1\],\[gp\],\[mp\])), that freedom is not determined by the requirement that the model fit a particular state but instead is fixed by fundamental dynamical requirements following equivalently from classical or from quantum field theory and resulting in the appearance of a minimal form of the potential (see Eq.(\[mnml\]) and below). Several features separate the two approaches.
First, as we found in [@bckr], the QED results provided by our equation agree with those of standard perturbative QED for more than just the ground state, while it is unknown whether the parameters of the Iowa State model, fixed by its fit to the singlet ground state of positronium, would work for the other states. Second, the constraint approach generates similar structures for scalar interactions and for combined vector and scalar interactions, in agreement with the corresponding perturbative field-theoretic results, while again it is unknown whether the Iowa State parameters that gave good fits to the singlet ground state of positronium would work in the presence of other potentials. Third, the match to singlet positronium that we obtained was an analytic consequence of our equations for QED and therefore a test of those equations [@exct], not the result of a numerical fit. Fourth, our approach includes essential contributions from all sixteen components of the relativistic wave function, not just the positive energy components [@pstv]. Fifth, an important consequence of the fully relativistic dynamics and gauge-theoretic structure of the constraint equations is that they produce values of the light quark masses closer to current algebra values than do alternative approaches. The quark masses that we obtained in our comparison fit with the Iowa State model are $m_{s}=314$ MeV and $m_{u}=m_{d}=67$ MeV, which are significantly closer to the current algebra values of $m_{s}\sim125$ MeV and $m_{u},m_{d}\sim 3-6~$MeV than the Iowa State model’s values of 405 and 346 MeV respectively.

The Breit Equation Model of Brayshaw
====================================

Definition of The Model and Comparison of Structure with Two-Body Dirac Approach
--------------------------------------------------------------------------------

Brayshaw [@bry] treats quarkonium with the aid of the Breit equation and an interaction Hamiltonian with five distinct parts, four of which are independent. As usually done for the Breit equation, the times associated with each particle are identified or related in some favored frame (normally the c.m. frame) selected so that the relative time does not enter the potential. In that frame Brayshaw uses the equation $$H\Psi=(H_{0}+H_{C}+H_{B}+H_{S}+H_{I}+H_{L})\Psi=w\Psi\label{bry}$$ in which $H_{0}$ is the free Breit Hamiltonian $$H_{0}=\boldsymbol{\alpha}_{1}\cdot\mathbf{p}-\boldsymbol{\alpha}_{2}\cdot\mathbf{p}+\beta_{1}m_{1}+\beta_{2}m_{2}$$ while $H_{C}$ and $H_{B}$ are a Coulomb and an associated Breit interaction $$\begin{aligned} H_{C} & =\frac{c_{1}}{r}\nonumber\\ H_{B} & =-\frac{c_{1}(\boldsymbol{\alpha}_{1}\cdot\boldsymbol{\alpha}_{2}+\boldsymbol{\alpha}_{1}\cdot\mathbf{\hat{r}}\,\boldsymbol{\alpha}_{2}\cdot\mathbf{\hat{r}})}{2r}.\end{aligned}$$ As indicated in our discussion about the Salpeter equation in Section (VI), this part of the interaction comes from the vector portion of the kernel. The author acknowledges the difficulties associated with the Breit interaction, pointing out that the radial equation has a singularity at a radial separation of $r_{0}=-c_{1}/w>0$. He bypasses Breit’s proposal that this interaction be used only in first order perturbation theory by using only positive energy spinors in his variational procedures. We point out that this was not necessary in our approach since the hyperbolic structure of our eight basic interactions avoids problems inherent in Breit’s formulation [@cwyw]. In particular, it avoids the appearance of midpoint singularities.
Unfortunately, just like the Wisconsin group, having avoided the pitfalls of the Breit equation, he uses his replacement without testing whether or not his formalism would yield the standard QED results numerically if he limited his interaction to the usual Coulomb interaction. Once again such a test would (if successful) help eliminate the possibility that the wave equation introduces spurious physics. In Eq.(\[bry\]), $H_{L}$ is a long range confining portion which incorporates the requirement that the wave function vanish identically for radial separations $r>a$ with a boundary condition at $r=a$. Brayshaw argues for this term over and above a linear confinement piece on the grounds that at some separation $r_{p}$ corresponding to a threshold energy $E_{p}$ , production of $q\bar{q}$ pairs should become energetically favorable. His radial parameter $a$ plays the role of $r_{p}$ in specifying the range at which such effects (among others) dominate confinement. He expects that $a$ is on the order of $\langle r\rangle$ for the light quark mesons while wave functions for the heavy quark mesons would have fallen to zero for $r<<a$. When introducing the explicit form of his linear confinement potential, the author finds that it cannot simply be added as a Lorentz scalar to the Hamiltonian since such a term produces far too large a mass shift for the light quark systems. Instead he chooses $$H_{I}=c_{2}(\beta_{1}+\beta_{2})r. \label{bra}$$ which he shows contributes very weakly for the light quark systems, while contributing significantly for the heavy quark systems with an intermediate contribution for the hydrogen-like intermediate mass mesons. Unfortunately, however, we note the important fact that the Lorentz transformation character of this confining interaction is ambiguous, being neither scalar ($\sim \beta_{1}\beta_{2}$) nor (time-like) vector ($\sim1_{1}1_{2}$). Finally Brayshaw introduces a special short range attractive piece solely in order to obtain a good fit to the pion and kaon. Instead of a spin-dependent contact term used in a number of semirelativistic approaches [@licht; @rob; @isgr] he uses $$H_{S}=H_{B}(1_{1}1_{2}+\beta_{1}\beta_{2})\frac{c_{4}r\theta(b-r)}{2(m_{1}+m_{2}+c_{4})}$$ This term resembles a cross term between a linear confinement piece and the Breit term that might emerge from some sort of iteration. The short range character of this part-scalar, part-vector interaction is specified through taking $b<<a$. In contrast, our approach possesses a short range spin-spin interaction that is quantum mechanically well defined and which arises straightforwardly from the Schrödinger reduction of our Dirac equations. We do not need to add it in by hand. Comparison of Fit with that of Constraint Approach -------------------------------------------------- In spite of its ad hoc nature, we have included the procedure of Brayshaw among our comparisons because it turns out that his resultant fit for the 56 mesons (that overlap with our fit) is quite good, just slightly worse than our fit. In Table IV we include in the fourth column the fit we would obtain with our model if we included only the 56 mesons that our fit has in common with Brayshaw’s. On a meson by meson basis we compare by using incremental $\chi^{2}$ values. Of the 56 mesons in the table, our fits are closer to data in only 26, although overall our fit is better. 
However, this overall difference may not be as significant as in the previous examples because here we did not use identical fitting procedures for both models. Brayshaw’s $R$ values for the two upsilon, the one charmonium, the $K^{\ast}$, $\phi$ and $\rho-\pi$ triplet $P$ multiplets are 0.47, 0.34, 0.32, 0.55, 0.25, and 0.19 and are distinctly different from our values of 0.66, 0.69, 0.39, -0.71, -0.25, and -5.67 and the experimental numbers of 0.66, 0.61, 0.48, 0.09, -0.97, and -0.4. Although the constraint/Adler-Piran combination is distinctly better than the Breit/Brayshaw approach for the heavier mesons, both give poor $R$ results for the lighter mesons. All of his light spin-orbit multiplets have masses that increase monotonically with $j$, unlike the pattern of the experimental numbers. Although our results show a non-monotonic pattern, that pattern also differs from that of the data. Note that the details of our patterns are greatly influenced by the presence of the scalar potential. Brayshaw’s approach includes (see $H_{S}$) a partial Hamiltonian that governs intermediate range behavior, in which time-like and scalar interactions contribute equally. This may be responsible for the difference between his monotonic pattern and that displayed by the data. Comparing his $^{3}P$ averages $[5(^{3}P_{2})+3(^{3}P_{1})+1(^{3}P_{0})]/9$ to the $^{1}P_{1}$ mesons for the charmonium, $K^{\ast}$, and $\rho-\pi$ systems we find the following three pairs of numbers (in GeV): 3.517, 3.498; 1.335, 1.355; 1.251, 1.202. Comparison to our numbers of 3.519, 3.520; 1.435, 1.421; 1.434, 1.411 and the experimental numbers of 3.526, 3.525; 1.402, 1.375; 1.231, 1.303 shows that our approach gives better agreement for the heavier mesons, his somewhat better for the lighter, while both do about the same for the $K^{\ast}$. His values of the hyperfine splittings are 118, 100, 143, 158, 410, and 636 MeV for the two charmonium multiplets and the $D-D^{\ast},D_{s}-D_{s}^{\ast},K-K^{\ast},\pi-\rho$ pairs. Comparing with the experimental splittings of 117, 92, 142, 144, 398, and 627 MeV shows a clear pattern of excellent to good results for the heaviest, lightest, and the intermediate more hydrogen-like mesons. Our results are 151, 79, 133, 145, 416, and 647 MeV. Our ground state charmonium result is not nearly as good as Brayshaw’s while for the others we have about the same quality of fit. It may be that his choice of $H_{S}$ rectifies the problem our treatment encounters. But the disadvantage of this is that his $R$ values for the heavy mesons are worse. This effect appears to be similar to the trouble we encountered, mentioned in our discussion of Table I in Sec. IVA. For the radial excitations, the four upsilon states in the data portion of the table occur at intervals of 563, 332, and 225 MeV while the three charmonium triplet states and the two charmonium singlet states occur at intervals of 589, 354, and 614 MeV. The pion excitation is 1160 MeV. The corresponding Brayshaw intervals are 555, 335, 320, 551, 566, 569, and 888 MeV while our intervals are 572, 337, 257, 564, 395, 636, and 1403 MeV. With the exception of the second radial triplet upsilonium and charmonium excitation intervals, the fits of both models are of about the same quality. Note that the two models’ excited pion predictions bracket the experimental result. This appears to be a common feature of the radial and orbital excitations of the light quark mesons, with his results on average closer to the experimental values. Our results are, on average, better for the heavier mesons.
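The interval bookkeeping used in these comparisons is elementary, and a hedged Python sketch may make it explicit; the numbers below are simply the upsilon intervals quoted in the preceding paragraph, re-used as sample input rather than taken from the tables.

```python
import numpy as np

# Hedged sketch of the radial-excitation comparison performed in the text:
# absolute deviations of each model's level intervals from experiment.
# Sample input: the upsilon intervals (MeV) quoted above.
exp  = np.array([563.0, 332.0, 225.0])
bray = np.array([555.0, 335.0, 320.0])   # Brayshaw intervals
ours = np.array([572.0, 337.0, 257.0])   # constraint-approach intervals

dev_bray = np.abs(bray - exp)
dev_ours = np.abs(ours - exp)
print("Brayshaw deviations  :", dev_bray)
print("constraint deviations:", dev_ours)
print("constraint closer on", int(np.sum(dev_ours < dev_bray)), "of", len(exp), "intervals")
```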
However, his apparently good fit emerges from a potential structure that has ambiguous Lorentz transformation properties. The potentials are chosen in a patchwork manner using the 5 parameters $a,c_{1},c_{2},c_{3},c_{4}$ (he sets $b=a/10$). In terms of Lorentz transformation properties his scheme uses four invariant functions (scalar, time-like, electromagnetic-like, and mixed ($H_{S},H_{B}$ and $H_{I}$)). The Adler-Piran potential that we use has only two invariant functions corresponding to scalar and electromagnetic-like interactions. The constraint approach is not a patchwork; instead its wave equation itself (once $\mathcal{A}$ and $S$ are chosen) fixes the spin, orbital and radial aspects of its potential and its spectra. We also note that just as in the case of the Wisconsin model, Brayshaw has not tested the nonperturbative reliability of his equation. On the other hand an important result of his approach is that the $u,d$ quark masses required for his fit are very small (10 MeV) and significantly closer to the current quark mass values than ours. His strange quark mass (200 MeV) is also closer to the proposed current quark mass values than our value. The most important warning provided by Brayshaw’s approach is that an ad hoc structure with ambiguous Lorentz properties can do so well at fitting the spectrum.

The Semirelativistic Model of Godfrey and Isgur
===============================================

Definition of The Model and Comparison of Structure with Two-Body Dirac Approach
--------------------------------------------------------------------------------

We begin with a general discussion of semirelativistic quark models (with and without full relativistic kinematics). We term a quark model semirelativistic if it uses a two-body wave equation that takes one of the following three forms in the c.m. frame: $$\begin{aligned} (\mathbf{p}^{2}+\Phi(\mathbf{r},\mathbf{s}_{1},\mathbf{s}_{2}))\psi & =(w-m_{1}-m_{2})\psi\nonumber\\ (\sqrt{\mathbf{p}^{2}+m_{1}^{2}}+\sqrt{\mathbf{p}^{2}+m_{2}^{2}}+\Phi(\mathbf{r},\mathbf{s}_{1},\mathbf{s}_{2}))\psi & =w\psi\nonumber\\ (\mathbf{p}^{2}+\Phi(\mathbf{r},\mathbf{s}_{1},\mathbf{s}_{2}))\psi & =b^{2}(w)\psi.\end{aligned}$$ In each of these equations $\mathbf{p}^{2}$ is the square of the c.m. relative momentum while $\Phi(\mathbf{r},\mathbf{s}_{1},\mathbf{s}_{2})$ is an effective potential which includes central, spin-orbit, spin-spin, tensor and possibly Darwin terms. In each, the wave function has four components with no coupling to lower-lower components. The most important difference between the first form and the others is that the latter two have exact relativistic kinematics. The former is almost always called a nonrelativistic quark model although strictly speaking almost all spin dependences (at least those that arise from vector and scalar interactions) vanish in the nonrelativistic limit. These equations differ from the Two-Body Dirac equations and the Breit and instantaneous Bethe-Salpeter approaches primarily in that their spin-dependences are put in by hand, abstracted from the Fermi-Breit reductions of the Breit and instantaneous Bethe-Salpeter approaches. For Coulomb-like potentials originating in the Coulomb gauge, these terms contain singular potentials. Consequently they must either be treated purely perturbatively (thus ruling out application to the light quark mesons) or through the introduction of smoothing parameters that may or may not be features of the actual potential.
The two-body Dirac equations of constraint dynamics, like their one-body cousin, have a natural smoothing mechanism, namely potential-dependent denominators in the spin-dependent and Darwin terms of the resultant Schrödinger-like form, that eliminates the necessity for ad hoc introduction of such parameters. The Breit equation may also possess a natural smoothing mechanism, but a nonperturbative treatment of it leads to erroneous results in QED [@kro81]. The instantaneous Salpeter equation may have a natural smoothing mechanism, but has not been tested nonperturbatively for QED even though the equation is over 50 years old. Authors who have attempted to use these types of semirelativistic equations to treat the entire meson spectrum include Lichtenberg [@licht] (the third type), Stanley and Robson [@rob] and Godfrey and Isgur [@isgr] (the second type), and Morpurgo, Ono, and Schöberl [@mor90] (the first type). Each of these authors ignores the spin-independent part of the Fermi-Breit interaction. This neglect is not justifiable since this part of the interaction will have an effect on $S$ states that is significantly different from its effect on non-$S$ states, being normally short ranged compared with the rest of the central force part of the problem. In this paper, we select one of these models for our final comparison, the model of Godfrey and Isgur, since this model, even though already 18 years old, is by far the most often cited in recent experimental works and theoretical papers on rival approaches. As we have said, Godfrey and Isgur assume a semirelativistic wave equation of the second type possessing exact relativistic kinematics but in the inconvenient sum-of-square-roots form. They then determine the form of the interaction in the following way. They assume that the confining piece of the interaction is a world scalar. They modify the Coulomb potential with the aid of a smoothing function. At the same time they appear to ignore the Darwin term (i.e., the spin-independent contact term present in the one-body limit) in the on-shell reduction of the $q\bar{q}$ scattering amplitude. Although they modify the short range part of their interaction with the aid of a smearing function, this modification does not compensate for the ignored Darwin term. We have shown elsewhere [@yng], [@cra84d] that the Darwin interactions for scalar and vector interactions lead, through a canonical transformation, to the quadratic local terms $S^{2}$ and $\mathcal{A}^{2}$ that appear in our equations. Since the authors have ignored this part of the Darwin interactions their results contain none of the dynamical consequences of the $S^{2}$ or $-\mathcal{A}^{2}$ pieces. What portion of the Darwin term they include they parametrize separately, just as they do the other portions of the Fermi-Breit interaction. These terms include the spin-spin contact term, the spin-orbit terms, and the tensor terms. In our opinion, this patchwork way of handling the physics blurs the relativistic significance of their quark model. In our two-body Dirac equations the Darwin portion and each of the spin-dependent portions are tied directly to and fixed by the Lorentz forms $L(x_{\perp}),\mathcal{G}(x_{\perp})$ of the interaction, which are in turn set by the $S,\mathcal{A}$ invariant potentials. In QED these fixed terms yield the correct spectrum with no additional parameters needed to adjust their relative sizes.
In addition to bypassing the problems of singular spin-dependent terms by assuming a smoothing parameter, Godfrey and Isgur include nonlocal (momentum-dependent) potentials by replacing the mass-dependent $m_{i}^{-1}$ in the Fermi-Breit term by $(\mathbf{p}^{2}+m_{i}^{2})^{-1/2}$. They claim that this is necessary because the Fermi-Breit reduction (or the on-shell $q\bar{q}$ scattering amplitude in the c.m. frame) does not adequately express the full momentum dependence (or nonlocal nature) of the potential. While this might be true, we have found that such nonlocal behavior is not necessary to obtain very good results either in lowest-order QED or in the quark model. Like the Adler-Piran potential that we use in our approach, their potential includes a running coupling constant. In fact, by convolving a parametric Gaussian fit to the running coupling constant with the ${\frac{1}{\mathbf{q}^{2}}}$ kernel, they obtain their desired smoothing of the Coulomb potential, thus killing two birds with one stone. In addition, they are able to treat the zero isospin mesons like the $\eta$ and $\eta^{\prime}$ by including a phenomenological annihilation term. We leave out this term in our results of Tables I-IV and in our comparison with the results of Godfrey and Isgur in Table V. Lichtenberg [@licht] has compared an earlier version of our quark model for the meson spectrum with that of Godfrey and Isgur. The potential we used in that earlier version was the one-parameter Richardson potential, with the confinement piece chosen to be one-half time-like vector and one-half scalar. As Lichtenberg pointed out, Godfrey and Isgur obtained significantly better agreement with the data than we did. He states that this is because they use significantly more parameters than we do, including four in the potential and six to describe relativistic effects, ten altogether, compared to our one. However, we do not believe that as a general rule the number of parameters that appear in the potential is, in itself, of as much significance as how these parameters are distributed. For example, in our present and previous models there are two invariant functions, $\mathcal{A}$ and $S$, related to the single nonrelativistic (Adler-Piran) potential $V_{AP}$ that itself depends on two parameters. These parametric functions are not entirely independent, being related by Eqs.(\[asap\],\[apa\],\[aps\]). Specifying their form fixes both spin-independent and spin-dependent parts of the quasipotential $\Phi_{w}$. We might say that our formalism has five quark mass parameters and two parametric functions. Increasing the number of parameters that $\mathcal{A},S$ depend on may or may not increase the goodness of the fit. According to our way of counting, Godfrey and Isgur have independent parametric functions for the two spin-orbit parts of the potential, the spin-spin contact part, the tensor part, the scalar potential, and the spin-independent part of the vector potential, altogether six parametric functions. By this way of counting, the number of parametric functions would not increase no matter how many parameters are included in fixing the functional form of each of these six functions. Likewise, in our case, no matter how many parameters we use in fixing $\mathcal{A},S$, there are only two independent parametric functions. Our approach is distinct from that of Godfrey and Isgur in that we do not alter the functional form at the level of the spin-dependence but rather at the level of the kernels.
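To make the Coulomb smearing mentioned above concrete, the following Python sketch folds a pointlike $-\alpha/r$ with a normalized Gaussian of width $\sigma$, which yields the standard result $-\alpha\,\mathrm{erf}(r/\sqrt{2}\sigma)/r$, finite at the origin. This is only a generic illustration of Gaussian smearing; the width and the Gaussian parametrization of $\alpha_{s}$ assumed here are illustrative choices, not values taken from Godfrey and Isgur.

```python
import numpy as np
from scipy.special import erf

def smeared_coulomb(r, alpha=0.3, sigma=0.2):
    """Coulomb potential folded with a normalized Gaussian of width sigma.

    -alpha/r  ->  -alpha * erf(r / (sqrt(2) * sigma)) / r,
    which tends to -alpha * sqrt(2/pi) / sigma as r -> 0.
    alpha and sigma are illustrative values only (r, sigma in GeV^-1).
    """
    r = np.asarray(r, dtype=float)
    small = r < 1e-8
    out = np.empty_like(r)
    out[~small] = -alpha * erf(r[~small] / (np.sqrt(2.0) * sigma)) / r[~small]
    out[small] = -alpha * np.sqrt(2.0 / np.pi) / sigma
    return out

r = np.array([0.0, 0.05, 0.2, 1.0, 5.0])
print(smeared_coulomb(r))   # smooth at the origin
print(-0.3 / r[1:])         # pointlike Coulomb for comparison
```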
Finally, before we compare our present work with that of Godfrey and Isgur, we note that our present model differs from our earlier one used by Lichtenberg in his comparison of the two approaches. Our present treatment differs in its replacement of the Richardson potential by the Adler-Piran potential. The intermediate range form of the A-P potential is closely tied to an effective field theory related to QCD and is therefore superior to Richardson’s ansatz. Furthermore, in calculations based on our earlier treatment we ignored the tensor coupling and unequal mass spin-orbit difference couplings, which we explicitly include in the present calculations. We have also corrected a defect in the decoupling we used between the upper-upper and lower-lower components of the wave functions for spin-triplet states in our older treatment.

Comparison of Fit with that of Constraint Approach
--------------------------------------------------

We now compare the fit given by our model to that provided by the model of Godfrey and Isgur. In Table V we display in the fourth column the fit we would obtain with our model if we included only the 77 mesons that our fit has in common with that of Godfrey and Isgur. We then compare the fits by examining the incremental $\chi^{2}$ values for each meson. (In an RMS comparison they would obtain about 63 compared to our value of 79.) For the 77 mesons in their table, our fits are closer to the data in only 32; overall their fit is better. Generally speaking, our results are better on the newer mesons while their fit is better on the older mesons. A detailed comparison reveals the following. Their $R$ values for the two upsilon, the one charmonium, the $K^{\ast}$, $\phi$ and $\rho-\pi$ triplet $P$ multiplets are 0.29, 0.50, 0.57, 0.36, 0.42, and 0.47 and are distinctly different from our values of 0.68, 0.76, 0.41, -0.66, -0.21, and -4.00 and the experimental numbers of 0.66, 0.61, 0.48, 0.09, -0.97, and -0.4. As was true for the Brayshaw analysis, the constraint/Adler-Piran combination gives a distinctly better fit than the Godfrey-Isgur approach for the heavier mesons, while both give poor results for the lighter mesons. As was the case for Brayshaw’s spectrum, none of their light multiplets are inverted, whereas ours, although inverted, are not inverted in the same way as the experimental numbers. Again, our inversions are due to the action of the scalar potential. Godfrey and Isgur include a time-like contribution in the spin-orbit part of their Hamiltonian. This may be responsible for their lack of the partial inversion that appears in the data. Computing their $^{3}P$ averages $[5(^{3}P_{2})+3(^{3}P_{1})+1(^{3}P_{0})]/9$ along with the $^{1}P_{1}$ mesons for the charmonium, $K^{\ast}$ and $\rho-\pi$ systems we find the following three pairs of numbers (in GeV): 3.524, 3.520; 1.392, 1.340; 1.262, 1.220. Comparison with our numbers of 3.519, 3.520; 1.424, 1.411; 1.419, 1.397 and the experimental numbers of 3.526, 3.525; 1.402, 1.375; 1.231, 1.303 shows the constraint approach giving slightly better numbers for the heavier mesons and the $K^{\ast}$ while the Godfrey-Isgur results are somewhat better for the lighter mesons. Their $^{3}D$ average $[7(^{3}D_{3})+5(^{3}D_{2})+3(^{3}D_{1})]/15$ and their $^{1}D_{2}$ meson for the $K^{\ast}$ are 1.795 and 1.780 GeV while our results and the experimental results are 1.873, 1.879 and 1.774, 1.773 GeV respectively. Our results are relatively closer to one another while theirs are closer to the data in an absolute sense.
This is indicative of the general trend of our orbitally excited light mesons being somewhat high. We suspect that this is due to the $S^{2}$ behavior becoming dominant at longer distance, changing the behavior of the confining potential in the effective Schrödinger-like equation from linear to quadratic. Their values of the hyperfine splittings are 130, 60, 160, 150, 430, 130, 620, 150, and 120 MeV for the two charmonium multiplets, and the $D-D^{\ast}$, $D_{s}-D_{s}^{\ast}$, two $K-K^{\ast}$, and three $\pi-\rho$ pairs. Comparison with the experimental splittings of 117, 92, 142, 144, 398, -48, 627, 165, and 354 MeV and our results of 150, 78, 133, 145, 403, 208, 645, 239, and 166 MeV demonstrates that while our results are closer than theirs for most of the newer mesons and the $K-K^{\ast}$, their results are more in line for most of the older mesons. Again this shows a pattern of our method overestimating the radially excited states of the light mesons. Let us see if this trend of overestimation by the constraint approach continues for the radial excitations of fixed quantum numbers. The six upsilon states in the data portion of the table occur at intervals of 563, 332, 225, 285, and 154 MeV while the three charmonium triplet states and the two charmonium singlet states occur at intervals of 589, 354, and 614 MeV, whereas the three singlet $K$ and the two triplet $K^{\ast}$ states occur at intervals of 977, 370, and 520 MeV. Finally, the three pion and three rho excitations occur at 1160, 495 and 698, 654 MeV. The corresponding Godfrey-Isgur intervals are 540, 350, 280, 250, 220, 580, 420, 650, 980, 570, 680, 1150, 580, 680, and 550 MeV compared to our intervals of 570, 336, 256, 213, 186, 561, 393, 633, 1099, 495, 894, 1383, 634, 986, and 561 MeV. Again we encounter a pattern of our results being more accurate overall for the newer mesons while theirs are more accurate for the older ones (with our results too large for all of the older ones). Primarily what we learn from this comparison is that not only does the scalar interaction lead to partial triplet inversions for the lighter mesons, but it also yields radial and orbital excitations that are too high, for a related reason: the presence of the $S^{2}$ term in the effective potential. On the other hand, as Godfrey and Isgur themselves point out, their treatment of the relativistic effects is schematic, with no wave equation involved, allowing an uncontrolled approach in which there are no tightly fixed connections among the various spin-dependent and spin-independent parts of the effective potential $\Phi$. An important feature of our approach that differs significantly from the model of Godfrey and Isgur (as well as those of the Wisconsin and Iowa State groups) is the size of its resulting light quark masses. Our $u,d$ quark masses are about a factor of four or five smaller than theirs, significantly closer to the current algebra values. Godfrey and Isgur argue that since a constituent quark model requires dressed quarks of a finite size (to avoid singular potentials in their wave equation among other reasons), one should not expect the model quarks to have current-quark masses. We argue that a properly structured relativistic wave equation should not require finite quark sizes. Similar remarks have been made historically to justify tampering with the wave equation in QED to avoid treating singular terms. However, in QED those terms are perturbative artifacts. In fact, in the constraint equations for QED, they arise from a premature weak-potential approximation to terms that are actually well-behaved at the origin.
Similarly, when we apply the constraint approach to QCD we need no size parameters. Finally, we mention what we consider the major theoretical shortcoming in the approach of Godfrey and Isgur. The formalism that they use gives very good results on the hyperfine splittings for light and heavy mesons. However, it is unknown if this is an artifact of their smearing factors and the introduction of relativistic momentum-dependent corrections to the potentials (that is, through the replacement of quark masses $m$ by $\sqrt{p^{2}+m^{2}}$) needed to modify the singular nature of the potentials that they start with. It would be of interest to test the wave equation used by Godfrey and Isgur numerically with $\mathcal{A}=-\alpha/r$ and $S=0$ for positronium to see if any of their successes with mesonic hyperfine splittings are reflections of corresponding nonperturbative successes in QED. If their method were not able to obtain an acceptable fit to the QED spectral results through order $\alpha^{4}$, then the legitimacy of its fits in QCD would be seriously called into question. Without such tests one could not be sure whether the method they employ to avoid the singular potentials has distorted the dynamics. The constraint approach has passed this test: without introducing additional parameters, it faithfully reproduces the correct spectral results in QED.

Conclusion and Warnings About the Dangers of Relativistic and Nonrelativistic Spectral Fits
===========================================================================================

In this paper, we have investigated how well the relativistic constraint approach performs in comparison with selected alternatives when used to produce a single fit of experimental results over the whole meson spectrum. This approach is distinguished from others by its foundation: a set of coupled, compatible, fully covariant wave equations whose nonperturbative numerical solution yields the mass spectrum along with wave functions for the $q\bar{q}$ meson bound states. Its virtue, the generation of fully covariant spin structures, also serves to restrict and relate plausible interaction terms, just as the ordinary single-particle Dirac equation determines relations among Pauli spin dependences and fixes the proper strength of the Thomas precession term in electrodynamics. The dynamical structures of the constraint approach were originally discovered in classical relativistic mechanics but have since been verified for electrodynamics through diagrammatic summation in quantum field theory in the field-theoretic eikonal approximation [@saz97]. To use such relativistic equations to treat the phenomenological chromodynamic $q\bar{q}$ bound state, one must construct a relativistic interaction that possesses the limiting behaviors of QCD. In our approach we have done this by using the nonrelativistic static Adler-Piran potential to construct a plausible relativistic interaction that regenerates the AP potential as its nonrelativistic limit. In our equations, this process generates a host of accompanying interaction terms. When describing these interactions, one must guard against a semantic difficulty in the verbal classification of the various parts of the interaction as scalar, vector, pseudovector, etc. The various formalisms classify these in different ways, but in our equations the meaning of these terms can be readily determined by examining their roles in the defining equations Eqs.(\[tbdes\],\[cnhyp\],\[cnmyp\],\[del\]).
Once these terms have been introduced, the constraint formalism automatically produces a system of important accompanying terms: quadratic terms that dominate at long distance (reinforcing or undermining confinement), and spin dependences that accompany the chosen interactions and produce level splittings that agree or disagree with the experimental results in various parts of the spectrum. After identification of the relativistic transformation properties of the interaction terms, the constraint method leaves almost no leeway for fiddling with (unnecessary) cutoffs, etc. Some years ago, when applied to the $e^{-}e^{+}$ system, its structure proved restrictive enough to rule out within it the presence of postulated anomalous resonances [@bckr; @spence]. In recent work on the relation of our equations to the Breit and earlier Eddington-Gaunt equations for electromagnetic bound states, the method has explicitly demonstrated the importance of keeping spin couplings among pieces of the full 16-component wave functions whose counterparts are often truncated or discarded in alternative treatments [@cwyw; @va97]. The fits that we have examined as alternatives fall into different classes: motivated relativistic fits (constraint vs. truncations of standard field-theoretic treatments), ad hoc relativistic fits, and cautious semirelativistic fits. Among the relativistic ones, there is a danger exemplified by the Brayshaw model, which achieves relative success despite the dubious relativistic nature of its interaction. As always, what makes fits hard to judge at this stage is the ease with which one can achieve apparent success over limited regions of the spectrum using highly parametrized interactions. We have attempted to avoid this problem by limiting comparisons to published treatments that include both the light and heavy meson portions of the spectrum, not just one of the two sectors. Our choices for comparison are meant to be representative rather than exhaustive (see [@tjon] for other important treatments). With the exception of the Iowa State model [@iowa], all of the comparison models fail to test whether or not a nonperturbative treatment of their wave equations would yield the known results if the QCD kernels used were replaced by ones appropriate for QED. With the exception of the quark masses obtained by Brayshaw [@bry], our light quark masses are substantially closer to the current algebra values than are those produced by the other comparison models. In our application of the constraint approach, it is possible to describe the nonperturbative physics that accommodates an effective or constituent quark mass of the typical size used in the other approaches, one which at the same time has the size necessary to account for baryon magnetic moments. Even though our $u$ and $d$ quark masses are small compared with the constituent quark masses found in the competing approaches, if we compute the expectation value $\langle M_{i}(\mathcal{A},S)\rangle$ we find a range that includes those values. We find that this effective mass ranges from $64$ MeV for the pion to $390$ MeV for the rho. Its value depends not only on the quantum numbers of the meson but also on the flavor of the other quark. For example, for the $D$ meson we find $\langle M_{u}(\mathcal{A},S)\rangle=190~$MeV whereas for the $B$ we obtain 258 MeV. Finally, some authors have even produced unabashedly nonrelativistic fits.
They claim to obtain good fits to the meson spectrum through the use of variants of the nonrelativistic quark model (NRQM) [@mor90], [@martin]. These authors even claim success at fitting the light quark mesons, for which the assumptions $T\ll mc^{2}$, $|V|\ll mc^{2}$ of the nonrelativistic Schrödinger equation are patently false. What can account for the apparent success of the NRQM? Morpurgo states [@mor90] that the various potential models, including the nonrelativistic quark model, are merely different parametrizations of an underlying exact QCD Lagrangian description. That is, all use essentially the same spin and flavor structures. For example, for the mesons one can derive a parametrized mass with general form (for the present discussion restricted to $\pi,K,\rho,K^{\ast}$) $$\text{``parametrized mass''}=A+B(P_{1}^{s}+P_{2}^{s})+C\,\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2}+D(P_{1}^{s}+P_{2}^{s})\,\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2} \label{mor}$$ in which $P_{i}^{s}$ is the projector onto the strange quark sector. These authors say that this structure, although typical of an NRQM description, follows from QCD itself. They state that the form Eq.(\[mor\]) is common to all of the relativistic or semirelativistic quark models. They assert that any one of them can be successful, but not superior to any other, if it merely reproduces the spin-flavor structure of the general parametrization. Thus, from their point of view, selection of the best model is entirely a matter of taste and simplicity. We disagree with this assessment for the following reasons. First, the kinetic and potential parameters have significances beyond simply producing a fit for the two-body bound-state sector in isolation. When the spin-flavor structure in (\[mor\]) appears in the constraint approach, its accompanying constituent quark masses turn out to be closer to the current-quark masses than those produced by most other approaches, while the constraint method requires only two parametric functions to be used beyond the parameters of the constituent quark masses. The constraint scheme successfully uses one set of these parametric functions for the entire spectrum of meson states, including the radial as well as orbital excitations. But most importantly, within the bound-state spectrum itself, not all potentials fare equally well in our relativistic approach, even though it superficially shares the basic spin-flavor structure (\[mor\]). The essential point is that even in the simplest form of our equations, the parametrization is different from that given in the Morpurgo form in that its parameters $A,B,C,D$ are themselves dependent on the energy operator on the left-hand side. When that happens, some relativistic potentials do better than others. In particular, of those we investigated, the potential that works the best (the Adler-Piran potential) is one possessing many of the features important in lattice QCD calculations (e.g., linear and subdominant logarithmic confining pieces). The combination of the constraint approach with the Adler-Piran potential embodies more of the important physical effects contained in QCD-related effective or numerical field theories. Can one understand the apparent successes of the NRQM fits by starting from the relativistic treatments?
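Before turning to that question, a small numerical aside may make the limited content of the parametrization (\[mor\]) explicit. The hedged Python sketch below solves for $A,B,C,D$ from the four lowest pseudoscalar and vector masses, using $\langle\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2}\rangle=-3$ for spin singlets and $+1$ for triplets; the rounded PDG input masses and the resulting parameter values are illustrative only and are not taken from [@mor90].

```python
import numpy as np

# Hedged sketch: fix A, B, C, D in the parametrized mass of Eq. (mor)
# from the pi, rho, K, K* ground states.  <sigma1.sigma2> = -3 (singlet),
# +1 (triplet); (P1^s + P2^s) counts strange quarks.  Masses are rounded
# PDG values in GeV, used only for illustration.
masses = {"pi": 0.138, "rho": 0.775, "K": 0.496, "Kstar": 0.892}
rows = {  # columns multiply A, B, C, D
    "pi":    [1.0, 0.0, -3.0,  0.0],
    "rho":   [1.0, 0.0,  1.0,  0.0],
    "K":     [1.0, 1.0, -3.0, -3.0],
    "Kstar": [1.0, 1.0,  1.0,  1.0],
}
names = list(rows)
M = np.array([rows[n] for n in names])
b = np.array([masses[n] for n in names])
A, B, C, D = np.linalg.solve(M, b)
print(f"A={A:.3f}  B={B:.3f}  C={C:.3f}  D={D:.3f}  (GeV)")
```

With four inputs and four parameters the fit is of course exact, which is precisely the point made above: reproducing this spin-flavor structure by itself carries little dynamical information.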
Some authors [@martin88] and [@jaczko] have used bounds on the kinetic square-root operator $\sqrt{\mathbf{p}^{2}+m^{2}}$ to attempt to understand the apparent success of the nonrelativistic potential models for relativistic quark-antiquark states. Instead, we will give an explanation that starts directly from the relativistic constraint approach. Some years ago, Caswell and Lepage [@cas] rewrote a relativistic constraint equation in an effective nonrelativistic Schrodinger-like form. Here, we do the opposite and recast the NRQM Schrodinger equation in a form resembling the constraint equation. As we have seen our two-body Dirac equations lead to an effective Schrödinger-like equation of the form $$\lbrack p^{2}+\Phi_{w}(x_{\perp},\sigma_{1},\sigma_{2})]\psi=b^{2}(w)\psi$$ In the c.m. system this becomes $$\lbrack\mathbf{p}^{2}+\Phi_{w}(\mathbf{r},\boldsymbol{\sigma}_{1},\boldsymbol{\sigma }_{2})]\psi=b^{2}(w)\psi$$ Even though the stationary state nonrelativistic Schrödinger equation $$\lbrack\frac{\mathbf{p}^{2}}{2\mu}+V(\mathbf{r},\boldsymbol{\sigma}_{1},\boldsymbol{\sigma}_{2})]\psi=E_{B}\psi\label{nre}$$ has a similar form, the corresponding structures in each have entirely different physical significances. For example, in Eq.(\[nre\]), the vectors $\mathbf{p}$ and $\mathbf{r}$ are nonrelativistic quantities in contrast with their counterparts in the constraint approach that appear in the relativistic equation in the c.m. system. One can easily manipulate the nonrelativistic Schrödinger equation into a form similar in appearance to the constraint Schrödinger form by multiplying both sides of the equation by $2\mu$ and adding $b^{2}(w)-2\mu E_{B}$ to both sides. The result is $$\lbrack\mathbf{p}^{2}+\Phi_{w}(\mathbf{r},\boldsymbol{\sigma}_{1},\boldsymbol{\sigma }_{2})]\psi=b^{2}(w)\psi$$ in which $$\Phi_{w}(\mathbf{r},\boldsymbol{\sigma}_{1},\boldsymbol{\sigma}_{2})=2\mu V(\mathbf{r},\boldsymbol{\sigma}_{1},\boldsymbol{\sigma}_{2})+b^{2}(w)-2\mu E_{B}$$ In numerical calculations the $\mathbf{p}$ operator and $\mathbf{r}$ variable are treated in the same manner in calculations based on both the relativistic constraint equation and the nonrelativistic equation.  But as we have seen, they have different physical significances in each equation.  When used to fit parts of the meson spectrum, the apparent success of the NRQM from this point of view is then due to its incorporation of variables numerically indistinguishable from their covariant versions together with a potential that fortuitously coincides (for a limited range of states) with a covariant one modified by an energy dependent constant term that varies from state to state.   Pauli-form of the Two-Body Dirac Equations for $\phi_{+}=\psi_{1}+\psi_{4}$ and their Radial Forms ===================================================================================================== Reference [@long] sets out Two-Body Dirac Equations containing general covariant interactions along with their accompanying Schrödinger-like forms. The general interactions consist of the eight Lorentz invariant forms corresponding to scalar, time and space-like vector studied here along with five others: pseudoscalar, time- and space-like pseudovector, axial and polar tensor. When Eq.(\[tbdes\]) is written in terms of the four four-component spinors $\psi_{1...4}$ it decomposes into eight coupled equations. 
In [@long] Long and Crater showed how these may be rearranged in Pauli-form or Schrodinger-like equations in terms of the combination $\phi_{+}=\psi _{1}+\psi_{4}$ in the process providing a simpler coupling scheme than that used in [@bckr] which involves coupled equations between $\psi_{1}$ and $\psi_{4}$. Eq.(4.24) of reference [@long] yields the following equation (simplified here for electromagnetic-like interactions ($\partial J\equiv{\frac{\partial E_{1}}{E_{2}}}=-\partial G$) and scalar interactions alone): $$\lbrack E_{1}D_{1}^{-+}{\frac{1}{E_{1}M_{2}+E_{2}M_{1}}}(M_{2}D_{1}^{++}-M_{1}D_{2}^{++})$$$$+M_{1}D_{1}^{--}{\frac{1}{E_{1}M_{2}+E_{2}M_{1}}}(E_{2}D_{1}^{++}+E_{1}D_{2}^{++})]\phi_{+}$$$$=(E_{1}^{2}-M_{1}^{2})\phi_{+}$$ in which the kinetic-recoil terms appear through the combinations: $$D_{1}^{++}=\exp\mathcal{G}\Bigl[\sigma_{1}\cdot{p}+{\frac{i}{2}}{\sigma _{2}\cdot\partial}\bigl[L+\mathcal{G}(1-{\sigma_{1}\cdot\sigma_{2}})\bigr]\Bigr]$$ $$D_{2}^{++}=\exp\mathcal{G}\Bigl[\sigma_{2}\cdot{p}+{\frac{i}{2}}{\sigma _{1}\cdot\partial}\bigl[L+\mathcal{G}(1-{\sigma_{1}\cdot\sigma_{2}})\bigr]\Bigr]$$ $$D_{1}^{-+}=\exp\mathcal{G}\Bigl[\sigma_{1}\cdot{p}+{\frac{i}{2}}{\sigma _{2}\cdot\partial}\bigl[-L+\mathcal{G}(1-{\sigma_{1}\cdot\sigma_{2}})\bigr]\Bigr]$$$$D_{1}^{--}=\exp\mathcal{G}\Bigl[\sigma_{1}\cdot{p}+{\frac{i}{2}}{\sigma _{2}\cdot\partial}\bigl[L-\mathcal{G}(1+{\sigma_{1}\cdot\sigma_{2}})\bigr]\Bigr].$$ Manipulations using both sets of Pauli-matrices then lead to the form presented in the text in Eq.(\[sch\]). We obtain the radial forms of Eq.(\[sch\]) that we use for our numerical solution for the general fermion-antifermion system by forming standard matrix elements of spin-dependent operators (see Appendix C of Ref.([@bckr]  )). We start from the general wave function of the form $$\psi_{ijm}=\sum_{l,s}c_{ils}R_{ilsj}\mathcal{Y}_{lsjm};\ i=1,2,3,4$$ in which $R_{ilsj}={\frac{u_{ilsj}}{r}}$ is the associated radial wave function and $\mathcal{Y}_{lsjm}$ is the total angular momentum eigenfunction. 
In terms of $\mathcal{D}=E_{1}M_{2}+E_{2}M_{1}$ the corresponding radial forms then become $$s=0, \ \ j=l$$ $$\big\{-\frac{d^{2}}{dr^{2}}+\frac{j(j+1)}{r^{2}}+2m_{w}S+S^{2}+2\epsilon _{2}\mathcal{A}-\mathcal{A}^{2}$$$$-(2\mathcal{G}-\log(\mathcal{D})+\mathcal{G}+L)^{\prime}(\frac{d}{dr}-\frac {1}{r})$$$$-{\frac{1}{2}}\nabla^{2}(L+4\mathcal{G})-{\frac{1}{4}}(-L-2\mathcal{G}+2\log(\mathcal{D}))^{\prime}(-L-4\mathcal{G})^{\prime}\big\}u_{j0j}$$$$+\mathrm{\exp}(-\mathcal{G}-L)\frac{w(m_{1}-m_{2})}{\mathcal{D}}(-\mathcal{G}+L)^{\prime}\frac{\sqrt{j(j+1)}}{r}u_{j1j}=b^{2}(w)u_{j0j}, \label{spi}$$ coupled to $$s=1,\ \ j=l$$ $$\big\{-\frac{d^{2}}{dr^{2}}+\frac{j(j+1)}{r^{2}}+2m_{w}S+S^{2}+2\epsilon _{2}\mathcal{A}-\mathcal{A}^{2}$$ $$-(\mathcal{G}-L-\log(\mathcal{D}))^{\prime}\frac{d}{dr}-\frac{L^{\prime}}{r}$$ $$+{\frac{1}{2}}\nabla^{2}L+{\frac{1}{4}}(2\log(\mathcal{D})+(-L+2\mathcal{G}))^{\prime}L^{\prime}\big\}u_{j1j}$$ $$+\mathrm{\exp}(-\mathcal{G}-J)\frac{(\epsilon_{1}-\epsilon_{2})(m_{1}+m_{2})}{\mathcal{D}}(-\mathcal{G}+L)^{\prime}\frac{\sqrt{j(j+1)}}{r}u_{j0j}=b^{2}(w)u_{j1j},$$ and $s=1,j=l+1$ $$\bigl\{(-\frac{d^{2}}{dr^{2}}+\frac{j(j-1)}{r^{2}})+2m_{w}S+S^{2}+2\epsilon_{2}\mathcal{A}-\mathcal{A}^{2}$$$$+[\log(\mathcal{D})-2\mathcal{G}+\frac{1}{2j+1}(G+L)]^{\prime}\frac{d}{dr}$$$$\lbrack-j\log(\mathcal{D})+\frac{1}{2j+1}\big((4j^{2}+j+1)\mathcal{G}-\mathcal{G}-L\big )]^{\prime}\frac{1}{r}$$$$+{\frac{1}{4}}(-{(\mathcal{G}+L)^{\prime}}^{2})+{\frac{1}{2j+1}}\big(({\frac{1}{2}}\nabla^{2}L+\mathcal{G}^{\prime}({\frac{2j-3}{4}}\mathcal{G}+\mathcal{G}+L)^{\prime}-{\frac{1}{2}}\log^{\prime}(\mathcal{D})L^{\prime}\big)\bigl\}u_{j-11j}$$$$+\frac{\sqrt{j(j+1)}}{2j+1}\bigl\{2[\mathcal{G}+L]^{\prime}{\frac{d}{dr}}+[(-\mathcal{G}-L)(1-2j)+3\mathcal{G}]^{\prime}{\frac{1}{r}}$$$$+\nabla^{2}(L)-L^{\prime}(\log(\mathcal{D})-2\mathcal{G})^{\prime }\bigl\}u_{j+11j}=b^{2}(w)u_{j-11j}, \label{swv}$$ coupled to $s=1,j=l-1$ $$\bigl\{(-{\frac{d^{2}}{dr^{2}}}+\frac{(j+1)(j+2)}{r^{2}})+2m_{w}S+S^{2}+2\epsilon_{2}\mathcal{A}-\mathcal{A}^{2}$$ $$+[\log(\mathcal{D})-2\mathcal{G}-\frac{1}{2j+1}(G+L)]^{\prime}\frac{d}{dr}$$ $$\lbrack(j+1)\log(\mathcal{D})-{\frac{1}{2j+1}}\big((4j^{2}+7j+4)\mathcal{G}-\mathcal{G}-L\big )]^{\prime}{\frac{1}{r}}$$ $$+{\frac{1}{4}}(-{(\mathcal{G}+L)^{\prime}}^{2})-{\frac{1}{2j+1}}\big(({\frac{1}{2}}\nabla^{2}L+\mathcal{G}^{\prime}({\frac{2j+5}{4}}\mathcal{G}-\mathcal{G}-L-C)^{\prime}+{\frac{1}{2}}\log^{\prime}(\mathcal{D})L^{\prime}\big)\bigl\}u_{j+11j}$$ $$+{\frac{\sqrt{j(j+1)}}{2j+1}}\bigl\{2[\mathcal{G}+L]^{\prime}{\frac{d}{dr}}+[(-\mathcal{G}-L)(2j+3)+3\mathcal{G}]^{\prime}{\frac{1}{r}}$$ $$+2\nabla^{2}L+L^{\prime}(\mathrm{\log}(\mathcal{D})-2\mathcal{G})^{\prime }\bigl\}u_{j-11j}=b^{2}(w)u_{j+11j}. \label{dwv}$$ Numerical Construction of Meson Wave Functions ============================================== We obtain from our computer program a numerical wave function $\bar{u}(x)$ normalized so that $$\int_{-\infty}^{+\infty}\bar{u}(x)^{2}dx=1.$$ The radial variable is related to $x$ by $r=r_{0}e^{x}$ and the radial wave function $u(r)=\bar{u}(x)e^{-x/2}/\sqrt{r_{0}}$.  
Hence $$\int_{0}^{+\infty}u(r)^{2}dr=\int_{-\infty}^{+\infty}\bar{u}(x)^{2}dx.$$ Now let $v_{n}(r)$ be some radial basis functions that are orthonormalized so that $$\int_{0}^{+\infty}v_{n}(r)v_{n^{\prime}}(r)dr=\delta_{nn^{\prime}}.$$ Thus $$u(r)=\sum_{n=0}^{\infty}u_{n}v_{n}(r)$$ where $$u_{n}=\int_{0}^{+\infty}v_{n}(r)u(r)dr=\int_{-\infty}^{+\infty}\bar{v}_{n}(x)\bar{u}(x)dx.$$ Note that $\bar{v}_{n}(x)=v_{n}(r)e^{x/2}\sqrt{r_{0}}$ so that we can compute the $u_{n}$ in a straightforward way.  Thus we have as an approximation $$\begin{aligned} u(r) & \doteq\sum_{n=0}^{N}v_{n}(r)\int_{-\infty}^{+\infty}\bar{v}_{n}(x)\bar{u}(x)dx\nonumber\\ & =\sum_{n=0}^{N}c_{n}v_{n}(r)\equiv w_{N}(r).\end{aligned}$$ Now we use a least squares fit to determine the $c_{n}$ . In the limit of large $N$ we have $c_{n}\rightarrow u_{n}$ since we minimize the quantity $$\chi^{2}\equiv\int_{-\infty}^{+\infty}|\bar{u}(x)-\bar{w}_{N}(x)|^{2}dx$$ For the $v_{n}(r)$ we use harmonic oscillator (Laguerre)  functions defined by $$v_{n}^{k}(y)=c(n,k)e^{-y^{2}/2}y^{k}L_{n}^{k-1/2}(y^{2})$$ in which $c(n,k)=\sqrt{\frac{2(n!)}{a(n+k-1/2)!}}$ is the normalization constant and in terms of $z=y^{2}$ $$L_{n}^{k-1/2}(z)=\frac{e^{z}z^{-k+1/2}}{n!}\frac{d^{n}}{dz^{n}}(e^{-z}z^{k+n-1/2}).$$ So for example $$\begin{aligned} L_{0}^{k-1/2}(z) & =1\nonumber\\ L_{1}^{k-1/2}(z) & =k+1/2-z\nonumber\\ L_{2}^{k-1/2}(z) & =\frac{1}{2}[(5/2+k-z)L_{1}^{k-1/2}(z)-(1/2+k)L_{0}^{k-1/2}(z)\nonumber\\ & =[(k+3/2)(k+1/2)-2(k+3/2)z+z^{2}]/2\nonumber\\ & ...\nonumber\\ L_{n+1}^{k-1/2}(z) & =\frac{1}{n+1}[(2n+1/2+k-z)L_{n}^{k-1/2}(z)-(n+k-1/2)L_{n-1}^{k-1/2}(z)]\end{aligned}$$ Thus letting $\ y=r/a=\alpha e^{x}$ we obtain $$\begin{aligned} \bar{v}_{0}(x) & =c(0,k)\alpha^{k}\exp(x(2k+1)/2)\exp(-\alpha^{2}e^{2x}/2)\nonumber\\ \bar{v}_{1}(x) & =\sqrt{\frac{1}{k+1/2}}\bar{v}_{0}(x)(k+1/2-\alpha ^{2}e^{2x})\nonumber\\ \bar{v}_{2}(x) & =\sqrt{\frac{2!}{(k+1/2)(k+3/2)}}\bar{v}_{0}(x)[(k+3/2)(k+1/2)-2(k+3/2)\alpha^{2}e^{2x}+\alpha^{4}e^{4x}]/2.\nonumber\\ & ...\nonumber\\ \bar{v}_{n}(x) & =\sqrt{\frac{n!}{(k+1/2)..(k+n-1/2)}}\bar{v}_{0}(x)\sum_{m=0}^{n}(-)^{m}\frac{(n+k-1/2)!}{(n-m)!(k-1/2+m)!m!}(\alpha e^{x})^{2m}$$  Table VI - Comparison of Important Features of Approaches Treated in this Paper ================================================================================ $$\begin{tabular} [c]{llllll} & HC-PVA & Wisconsin & Iowa State & Brayshaw & Godfrey,Isgur\\ Wave Eqn & Two-Body Dirac & Reduced BSE & Quasipotential & Breit & None\\ Covariance & Explicit & Implicit & Implicit & Implicit & Implicit\\ Nonperturb. Tests & Strng. ptnl -QED & Wk ptnl. & Str. ptnl. & Str. ptnl & Str. ptnl.\\ \# of \ Parametric fns & 2 & 2 & 2 & 3 & 6\\ $\chi^{2}$ & 101 & 5169 vs 73 & RMS 50 vs 53 & 204 vs 111 & 85 vs 105\\ Locality & Local & Non-local & Non-local & Local & Non-local\\ Running coupling cnst. & Yes & Yes & Yes & No & Yes \end{tabular} $$ [99]{} G. Breit, Phys. Rev. 34, 553, 1929 G. Breit, Phys. Rev. 36, 383, 1930 G. Breit, Phys. Rev. 39, 616, 1932 H.A. Bethe and E. E. Salpeter, *Quantum Mechanics of One and* *Two Electron Atoms* (Springer, Berlin, 1957). W. Krolikowski, Acta Physica Polonica, **B12**, 891 (1981). Nonperturbative treatments of truncated versions of the Breit equation (with just the Coulomb term) have yielded the same results as a perturbative treatment of the same truncations but none of these have included the troublesome transverse photon parts. See J. Malenfant, Phys. Rev. A **43**, 1233 (1991), and T.C. Scott, J. Shertzer, and R.A. 
Moore, ibid **45**,4393 (1992) R. W. Childers, Phys. Rev. **D26**, 2902 (1982). C. W. Wong and C. Y. Wong Phys. Lett. **B301**, 1 (1993). C. W. Wong and C. Y. Wong Nucl. Phys. **A562**, 598 (1993). A generalization of the Breit equation which can be treated nonperturbatively based on constraint dynamics is given in H. W. Crater, C. W. Wong, C. Y. Wong, and P. Van Alstine, Intl. Jour. of Mod. Phys. **E** **5**, 589 (1996) P.A.M. Dirac, (Yeshiva University, Hew York, 1964). M. Kalb and P. Van Alstine, Yale Reports, C00-3075-146 (1976),C00-3075-156 (1976); P. Van Alstine, Ph.D. Dissertation Yale University, (1976). I. T. Todorov, Dynamics of Relativistic Point Particles as a Problem with Constraints, Dubna Joint Institute for Nuclear Research No. E2-10175, 1976; Ann. Inst. H. Poincare’ **A28**,207 (1978). L.P. Horwitz and F. Rohrlich, Phys. Rev. **D24**, 1928 (1981), F. Rohrlich, Phys. Rev. **D23** 1305,(1981). See also H. Sazdjian, Nucl. Phys. **B161**, 469 (1979).It is even a nontrivial task to disentangle the two superfluous relative time degrees of freedom from the kinematics for three noninteracting Dirac equations to obtain a collective constraint-like formalism [@saz89],[@drz99]. A. Komar, Phys. Rev. **D18**, 1881,1887 (1978). P. Droz-Vincent Rep. Math. Phys.,**8**,79 (1975). H. Sazdjian, Annals of Physics, **191**, 82 (1989). Ph. Droz-Vincent, Reduction of Relativistic Three-Body Kinematics, hep-th/9905119, May 17,1999 P. Van Alstine and H.W. Crater, J. Math. Phys. **23**, 1997 (1982) H. W. Crater and P. Van Alstine, Ann. Phys. (N.Y.) **148**, 57 (1983). H. Sazdjian, Phys. Rev. **D1 33**, 3401(1986), derives compatible two-body Dirac equations but from a different starting point without the use of supersymmetry.  See also H. Sazdjian, Phys. Rev. **D1 33**, 3425 (1986). F.A. Berezin and M. S. Marinov, JETP Lett.**21**, 678 (1975), Ann. Phys. (N.Y.) **104**, 336 (1977). C.A.P. Galvao and C. Teitelboim, J. Math. Phys. , 1863 (1980). See also A. Barducci, R. Casalbuoni and L. Lusanna, Nuovo Cimento **A32**,377 (1976). H. W. Crater and P. Van Alstine, Phys. Rev. Lett. **53**, 1577 (1984). H. W. Crater and P. Van Alstine, Phys. Rev. **D36**, 3007 (1987). N. Nakanishi, Suppl. Prog. Theor. Phys.**43** 1, (1969) H. W. Crater, R. Becker, C. Y. Wong and P. Van Alstine, Phys. Rev. **D1 46**, 5117 (1992). I. T. Todorov, Phys. Rev. **D3**, 2351, 1971 H. Jallouli and H. Sazdjian, Annals of Physics, **253**, 376 (1997). H.W. Crater and D. Yang, J. Math. Phys. **32** 2374, (1991). Our version of the constraint formalism fixes the relative values of these two parameters in terms of the invariant center of momentum energy. These fixed invariants are not free unlike the analogous center of mass parameter $\eta$ that often appears in the Bethe-Salpeter bound state formalism.  In that formalism $\eta$ is only restricted to lie between 0 and 1 (although in QED is it often taken to be $m_{i}/(m_{1}+m_{2}))$.  It is disconcerting that certain approaches that adopt this formalism use $\eta$ as a parameter in their spectral fits.  We refer in particular to the work of  P. Jain and H. Munczek, Phys. Rev. D **48**, 5403 (1993) which combines the Bethe-Salpeter bound state formalism with that of the Schwinger-Dyson equation. H. W. Crater and P. Van Alstine, Phys. Rev. **D30**, 2585 (1984). H. W. Crater and P. Van Alstine, Phys. Rev. **D46** 476, (1992). H. Sazdjian, Phys. Lett.**156B**, 381 (1985). See H. 
Sazdjian, Proceedings of the International Symposium on Extended Objects and Bound Systems, Kairuzawa, Japan, (1992), pp 117-130 and [@saz97] for a more recent treatment. H. W. Crater and P. Van Alstine, Phys. Rev. **D** **37**, 1982 (1988) P. Van Alstine and H. W. Crater, Phys. Rev. **D 34**, 1932 (1986). A. Eddington, Proc. Roy. Soc. A122, 358 (1929); J.A. Gaunt, Phil. Trans. Roy. Soc, Vol 228, 151, (1929);Proc. Roy. Soc. **A122**, 153 (1929). P. Van Alstine, H. W. Crater Found. of Phys., **27** ,67 (1997). B.L. Aneva, J.I. Krapchev, and V.A. Rizov, Bulg. J. Phys **2,**409, (1975). V.A. Rizov and I.T. Todorov, Fiz. Elem. Chastits At. Yadre **6**,669 (1975) \[translated in Sov. Jl. Part. Nucl. 6. 269 (1975)\]. G. Bohnert, R. Decker, A. Hornberg, H. Pilkuhn, H.G. Schlaile, Z. Phys. **2**, 23 (1986)). J. Schwinger, (Addison-Wesley, Reading, 1973), Vol. 2, pp.348-349. P.G. Bergmann, Rev. Mod. Phys. **33**, 510 (1961) M.H.L. Pryce, Proc. Roy. Soc. (London) **A195**,6 (1948). T.D. Newton and E.P. Wigner, Rev. Mod. Phys. **21**, 400 (1949). A.J. Hanson and T. Regge, Ann. Phys. (N.Y.) **87**, 498 (1974) H. W. Crater and P. Van Alstine, Found. Of Phys. , 297 (1994). G. Morpurgo, Phys. Rev. D1, **41**, 2865 (1990) Dillon and Morpurgo S. L. Adler and T. Piran, Phys. Lett., **117B**, 91 (1982) and references contained therein. A. Gara, B. Durand, and L. Durand, Phys. Rev D1 **40**,843 (1989), **42**, 1651 (1990). A.J. Sommerer, J.R. Spence and J.P. Vary, Mod. Phys. Lett. A**8**, 3537 (1994), A.J. Sommerer, J.R. Spence and J.P. Vary, Phys.Rev C, **49**, 513 (1994), A.J. Sommerer et al, Phys. Lett. B**348,** 277 (1995). D.D. Brayshaw, Phys. Rev. D **36**, 1465 (1987) S. Godfrey and N. Isgur, Phys. Rev. D1 **32**, 189 (1985) P. A. M. Dirac, Proc. R. Soc. London **A 117**, 610 (1928). P. Long and H. W. Crater, J. Math. Phys. **39**, 124 (1998) J.Mourad and H. Sazdjian , Journal of Physics G, **21**, 267 (1995). Mourad and Sazdjian show that the covariant constraint approach in QED is free order by order from the spurious infrared singularities that force other approaches to abandon the manifestly covariant Feynman  gauge in favor of the noncovariant Coulomb gauge. See also H. Jallouli and H. Sazdjian, J. Math. Phys. **38** 4951, (1997) For positronium our results agree with the standard results in E.E. Salpeter, Phys. Rev. **87** 328,(1952) and M. A. Stroscio, Phys. Rep. C**22**, 215(1975) without the annihilation diagram. Stroscio also gives the spectrum for the general hydrogenic (unequal mass) case but in the $j-j$ coupling scheme, rather than the $LS$ coupling scheme present in [@bckr] here. The results for the $LS$ scheme is given in [@krp79] but with a spectrum that does not have the correct $j=l$ spin mixing fine-structure even though the weak potential bound state equations given there are correct. The corrected spectrum appears in [@bckr] and in J. Connell, Phys. Rev. **D43**, 1393, (1991). Review of Particle Physics, D. E. Groom et al., The European Physical Journal **C15** 1 (2000) The Babar Collabotation: B. Aubert et al, Phys.Rev.Lett. **90** 242001,(2003) S. Ishida, M. Ishida, and T. Maeda, Prog. Theo. Phys. **104**, 785 (2000) B Liu and H. Crater,  Phys. Rev C **67** 024001 (2003) H. Crater, J. Comp. Phys. **115,** 470 (1994) H. W. Crater, and P. Van Alstine, J. Math. Phys.**31**, 1998 (1990). 
All this extra structure makes possible straight-forward nonperturbative solution of the constraint equations and may account for the differences between the resulting (correct field-theoretic) spectrum and the incorrect spectrum produced by nonperturbative solution of the usual Breit equation for QED [@va97]. H. Crater and P. Van Alstine, Phys. Lett., **100B,** 166 (1981) R.J. Yaes, Phys D **3**, 3086 (1971) The value of that binding energy to this order is $w-2m=m(-\alpha^{2}/4-21\alpha^{4}/64).$ The results presented in this version of the Iowa State model are significantly improved over those produced by a different earlier version of the model, in which the Iowa State Group used the Salpeter equation with included couplings between positive and negative energy states, and an additional Breit interaction, J.R. Spence and J.P. Vary, Phys. Rev. C**47,** 1282 (1993).  A significant part of the improvement may be due to alterations in the later paper [@iowa] in the short distance part of the QCD potential, including a running coupling constant. The Adler-Piran potential that we use also includes a running coupling constant, but in configuration space instead of momentum space. We point out that our analytic singlet positronium result for the ground state is the closed-form Sommerfeld-like result [@va86] $w=m\sqrt{2+2/\sqrt{1+{\frac{\alpha^{2}}{(1+\sqrt{\frac{1}{4}-\alpha^{2}}-\frac{1}{2})^{2}}}}}\dot{=}m(2-\frac{\alpha^{2}}{4}-\frac{21\alpha^{4}}{64})$ whose precise agreement with the perturbative results supports this reasoning. Note, however, that as long as $w$ is positive, our constituent c.m. energies are fixed to be positive by the constraints themselves (see [@bliu]). D.B. Lichtenberg et al, Z. Phys. **C19,** 19 (1983) D.P. Stanley and D. Robson, D**21** 31 **** (1980) J.R. Spence and J.P. Vary, Phys. Lett **B254**, 1 ,(1991) Others on this relatively short list include [@rob] and P.C. Termeijer and J.A. Tjon, Phys. Rev C **49**, 494 (1994). The latter authors use quasipotential equations in configuration space, including the Blankenbecler-Sugar equation and the equation of Mandelzweig and Wallace. Like ours, their equations contain the full Dirac structure of positive and negative energy states. We point out, however, that according to the work in [@iowa], both of these quasipotential reductions fail to reproduce numerically even the ground state of positronium correctly in contrast to the work of [@va86; @bckr] and [@iowa]. A. Martin, Phys. Lett. **B100**, 511 (1981) A.  Martin, Phys. Lett. **B214**, 561 (1988) G. Jazcko and L. Durand, Phys. Rev. D 58 1998, 114017-1,114017-9 W.E. Caswell and G.P. Lepage, Phys. Rev. **A18,** 863 (1977) **TABLE I - MESON MASSES FROM COVARIANT CONSTRAINT DYNAMICS** **TABLE II COMPARISON OF MESON MASSES FROM** **WISCONSIN MODEL II and COVARIANT CONSTRAINT DYNAMICS** **TABLE III -COMPARISON OF MESON MASSES FROM** **SPENCE-VARY MODEL and COVARIANT CONSTRAINT DYNAMICS** **TABLE IV - COMPARISON OF MESON MASSES FROM** **BRAYSHAW MODEL and COVARIANT CONSTRAINT DYNAMICS** **TABLE V - COMPARISON OF MESON MASSES FROM** **ISGUR-WISE MODEL and COVARIANT CONSTRAINT DYNAMICS**
--- abstract: | In this work we propose and analyze a numerical method for electrical impedance tomography of recovering a piecewise constant conductivity from boundary voltage measurements. It is based on standard Tikhonov regularization with a Modica-Mortola penalty functional and adaptive mesh refinement using suitable *a posteriori* error estimators of residual type that involve the state, adjoint and variational inequality in the necessary optimality condition and a separate marking strategy. We prove the convergence of the adaptive algorithm in the following sense: the sequence of discrete solutions contains a subsequence convergent to a solution of the continuous necessary optimality system. Several numerical examples are presented to illustrate the convergence behavior of the algorithm. **Keywords:** electrical impedance tomography, piecewise constant conductivity, Modica-Mortola functional, *a posteriori* error estimator, adaptive finite element method, convergence analysis author: - - 'Bangti Jin[^1]' - 'Yifeng Xu[^2]' bibliography: - 'eit.bib' title: Adaptive Reconstruction for Electrical Impedance Tomography with a Piecewise Constant Conductivity --- Introduction {#sect:intro} ============ Electrical impedance tomography (EIT) aims at recovering the electrical conductivity distribution of an object from voltage measurements on the boundary. It has attracted much interest in medical imaging, geophysical prospecting, nondestructive evaluation and pneumatic oil pipeline conveying etc. A large number of reconstruction algorithms have been proposed; see, e.g., [@LechleiterRieder:2006; @LechleiterHyvonen:2008; @HintermullerLaurain2008; @KnudsenLassasMueller:2009; @JinMaass:2010; @JinKhanMaass:2012; @HarrachUllrich:2013; @ChowItoZou:2014; @GehreJin:2014; @MalonedosSantosHolder:2014; @LiuKolehmainen:2015; @DunlopStuart:2015; @AlbertiAmmari:2016; @Klibanov:2017; @KlibanovLiZhang:2019; @ZhouHarrachSeo:2018; @HyvonenMustonen:2018; @XiaoLiuZhao:2018; @TanLvDong:2019] for a rather incomplete list. One prominent idea underpinning many imaging algorithms is regularization, especially Tikhonov regularization [@ItoJin:2014]. In practice, they are customarily implemented using the continuous piecewise linear finite element method (FEM) on quasi-uniform meshes, due to its flexibility in handling spatially variable coefficients and general domain geometry. The convergence analysis of finite element approximations was carried out in [@GehreJinLu:2014; @Rondi:2016; @Hinze:2018]. In several practical applications, the physical process is accurately described by the complete electrode model (CEM) [@ChengIsaacsonNewellGisser:1989; @SomersaloCheneyIsaacson:1992]. It employs nonstandard boundary conditions to capture characteristics of the experiment. In particular, around the electrode edges, the boundary condition changes from the Neumann to Robin type, which, according to classical elliptic regularity theory [@Grisvard:1985], induces weak solution singularity around the electrode edges; see, e.g., [@Pidcock:1995] for an early study. Further, the low-regularity of the sought-for conductivity distribution, especially that enforced by a nonsmooth penalty, e.g., total variation, can also induce weak interior singularities of the solution. Thus, a (quasi-)uniform triangulation of the domain can be inefficient for resolving these singularities, and the discretization errors around electrode edges and internal interfaces can potentially compromise the reconstruction accuracy. 
These observations motivate the use of an adaptive strategy to achieve the desired accuracy and to enhance the overall computational efficiency. For direct problems, the mathematical theory of adaptive FEM (AFEM), including *a posteriori* error estimation, convergence and computational complexity, has advanced greatly [@AinsworthOden:2000; @NSV:2009; @Ver:2013; @CFPP:2014]. A common AFEM consists of successive loops of the form $$\label{afem_loop} \mbox{SOLVE}\rightarrow\mbox{ESTIMATE}\rightarrow\mbox{MARK}\rightarrow\mbox{REFINE}.$$ The module `ESTIMATE` employs the given problem data and the computed solutions to provide computable quantities for the local errors, and it is what distinguishes different adaptive algorithms. In this work, we develop an adaptive EIT reconstruction algorithm for a piecewise constant conductivity. In practice, the piecewise constant nature is commonly enforced by a total variation penalty. However, this penalty is challenging for AFEM treatment (see, e.g., [@Bartels:2015] for image denoising). Thus, we take an indirect approach based on a Modica-Mortola type functional: $$\mathcal{F}_\varepsilon(\sigma) = \eps\int_\Om |\nabla \sigma|^2 {\,{\rm d}x}+ \frac{1}{\eps}\int_{\Om} W(\sigma) {\,{\rm d}x},$$ where the constant $\eps>0$ is small and $W:\mathbb{R}\to\mathbb{R}$ is the double-well potential, i.e., $$\label{eqn:double-well} W(s) = (s-c_0)^2(s-c_1)^2,$$ with $c_0,c_1>0$ being two known values that the conductivity $\sigma$ can take. The functional $\mathcal{F}_{\varepsilon}$ $\Gamma$-converges, up to a multiplicative constant, to the total variation semi-norm [@Modica:1987; @Modica:1977; @Alberti:2000]. The corresponding regularized least-squares formulation reads $$\label{eqn:tikh-MM-0} \inf_{\sigma\in \widetilde{\mathcal{A}}} \left\{\J_{\varepsilon}(\sigma) = \tfrac{1}{2} \|U(\sigma)-U^\delta\|^2 + \tfrac{\widetilde{\alpha}}{2}\mathcal{F}_\varepsilon(\sigma)\right\},$$ where $\tilde \alpha>0$ is a regularization parameter; see Section \[sect:ps\] for further details. In this work, we propose *a posteriori* error estimators and an adaptive reconstruction algorithm of the form for problem , based on a separate marking with three error indicators in the module `MARK`; see Algorithm \[alg\_afem\_eit\]. Further, we give a convergence analysis of the algorithm, in the sense that the sequence of state, adjoint and conductivity approximations generated by the adaptive algorithm contains a subsequence converging to a solution of the necessary optimality condition. The proof consists of two steps: Step 1 shows the subsequential convergence to a solution of a limiting problem, and Step 2 proves that the solution of the limiting problem satisfies the necessary optimality condition. The main technical challenges in the convergence analysis are the nonlinearity of the forward model, the nonconvexity of the double-well potential and the proper treatment of the variational inequality. The latter two are overcome by the pointwise convergence of discrete minimizers combined with Lebesgue’s dominated convergence theorem, and by AFEM analysis techniques for elliptic obstacle problems, respectively. The adaptive algorithm and its convergence analysis are the main contributions of this work. Last, we situate this work in the existing literature. In recent years, several adaptive techniques, including AFEM, have been applied to the numerical resolution of inverse problems. 
In a series of works [@BeilinaClason:2006; @BeilinaKlibanov:2010a; @BeilinaKlibanov:2010b; @BeilinaKlibanovKokurin:2010], Beilina et al studied AFEM in a dual-weighted residual framework for parameter identification. Feng et al [@FengYanLiu:2008] proposed a residual-based estimator for the state, adjoint and control by assuming convexity of the cost functional and high regularity on the control. Li et al [@LiXieZou:2011] derived *a posteriori* error estimators for recovering the flux and proved their reliability; see [@XuZou:2015a] for a plain convergence analysis. Clason et al [@ClasonKaltenbacher:2016] studied functional *a posteriori* estimators for convex regularized formulations. Recently, Jin et al [@JinXuZou:2016] proposed a first AFEM for the Tikhonov functional for EIT with an $H^1(\Omega)$ penalty, and also provided a convergence analysis. This work extends the approach in [@JinXuZou:2016] to the case of a piecewise constant conductivity. There are a number of major differences between this work and [@JinXuZou:2016]. First, the $H^1(\Omega)$ penalty in [@JinXuZou:2016] facilitates deriving the *a posteriori* estimator on the conductivity $\sigma$, by completing the square and a suitable approximation argument, which is not directly available for the Modica-Mortola functional $\mathcal{F}_\varepsilon$. Second, we develop a sharper error indicator associated with the crucial variational inequality than that in [@JinXuZou:2016], by means of a novel constraint-preserving interpolation operator [@ChenNochetto:2000] (see the proof of Theorem \[thm:gat\_mc\]); this represents the main technical novelty of this work. Third, Algorithm \[alg\_afem\_eit\] employs a separate marking for the estimators, instead of the collective marking in [@JinXuZou:2016], which automatically takes care of the different scalings of the estimators. The rest of this paper is organized as follows. In Section \[sect:ps\], we introduce the complete electrode model and the regularized least-squares formulation. In Section \[sect:alg\], we give the AFEM algorithm. In Section \[sec:numer\], we present extensive numerical results to illustrate its convergence and efficiency. In Section \[sect:conv\], we present the lengthy technical convergence analysis. Throughout, $\langle\cdot,\cdot\rangle$ and $(\cdot,\cdot)$ denote the inner products on the Euclidean space and $(L^2(\Omega))^d$, respectively, $\|\cdot\|$ denotes the Euclidean norm, and $\langle\cdot,\cdot\rangle$ is occasionally abused for the duality pairing between the Hilbert space $\mathbb{H}$ and its dual space. The superscript $\rm t$ denotes the transpose of a vector. The notation $c$ denotes a generic constant, which may differ at each occurrence, but is always independent of the mesh size and other quantities of interest. Regularized approach {#sect:ps} ==================== This part describes the regularized approach for recovering piecewise constant conductivities. Complete electrode model (CEM) ------------------------------ Let $\Omega$ be an open bounded domain in $\mathbb{R}^{d}$ $(d=2,3)$ with a polyhedral boundary $\partial\Omega$. We denote the set of electrodes by $\{e_l\}_{l=1}^L$; these are line segments/planar surfaces on $\partial\Omega$ satisfying $\bar{e}_i\cap\bar{e}_k=\emptyset$ if $i\neq k$. The applied current on electrode $e_l$ is denoted by $I_l$, and the vector $I=(I_1,\ldots,I_L)^\mathrm{t}\in\mathbb{R}^L$ satisfies $\sum_{l=1}^LI_l=0$, i.e., $I\in \mathbb{R}_\diamond^L :=\{V\in \mathbb{R}^L: \sum_{l=1}^LV_l=0\}$. 
The electrode voltage $U=(U_1,\ldots,U_L)^\mathrm{t}$ is normalized, i.e., $U\in\mathbb{R}_\diamond^L$. Then the CEM reads: given the conductivity $\sigma$, positive contact impedances $\{z_l\}_{l=1}^L$ and input current $I\in\mathbb{R}_\diamond^L$, find $(u,U)\in H^1(\Omega)\otimes\mathbb{R}_\diamond^L$ such that [@ChengIsaacsonNewellGisser:1989; @SomersaloCheneyIsaacson:1992] $$\label{eqn:cem} \left\{\begin{aligned} \begin{array}{ll} -\nabla\cdot(\sigma\nabla u)=0 & \mbox{ in }\Omega,\\[1ex] u+z_l\sigma\frac{\partial u}{\partial n}=U_l& \mbox{ on } e_l, l=1,2,\ldots,L,\\[1ex] \int_{e_l}\sigma\frac{\partial u}{\partial n}\mathrm{d}s =I_l& \mbox{ for } l=1,2,\ldots, L,\\ [1ex] \sigma\frac{\partial u}{\partial n}=0&\mbox{ on } \partial\Omega\backslash\cup_{l=1}^Le_l. \end{array} \end{aligned}\right.$$ The inverse problem is to recover the conductivity $\sigma$ from a noisy version $U^\delta$ of the electrode voltage $U(\sigma^\dagger)$ (for the exact conductivity $\sigma^\dag$) corresponding to one or multiple input currents. Below the conductivity $\sigma$ is assumed to be piecewise constant, i.e., in the admissible set $$\mathcal{A}:=\{\sigma\in L^\infty(\Om):~\sigma=c_0+(c_1-c_0)\chi_{\Om_1}\},$$ where the constants $c_1>c_0>0$ are known, $\overline{\Om}_1\subset\Om$ is an unknown open set with a Lipschitz boundary and $\chi_{\Om_1}$ denotes its characteristic function. We denote by $\mathbb{H}$ the space $H^1(\Omega)\otimes \mathbb{R}_\diamond^L$ with its norm given by $$\|(u,U)\|_{\mathbb{H}}^2 = \|u\|_{H^1(\Omega)}^2 + \|U\|^2.$$ A convenient equivalent norm on the space $\mathbb{H}$ is given below. \[lem:normequiv\] On the space $\mathbb{H}$, the norm $\|\cdot\|_\mathbb{H}$ is equivalent to the norm $\|\cdot\|_{\mathbb{H},*}$ defined by $$\|(u,U)\|_{\mathbb{H},*}^2 = \|\nabla u\|_{L^2(\Omega)}^2 + \sum_{l=1}^L\|u-U_l\|_{L^2(e_l)}^2.$$ The weak formulation of the model reads [@SomersaloCheneyIsaacson:1992]: find $(u,U)\in \mathbb{H}$ such that $$\label{eqn:cemweakform} a(\sigma,(u,U),(v,V)) = \langle I,V\rangle \quad \forall (v,V)\in \mathbb{H},$$ where the trilinear form $a(\sigma,(u,U),(v,V))$ on $\mathcal{A}\times\mathbb{H}\times\mathbb{H}$ is defined by $$a(\sigma,(u,U),(v,V)) = (\sigma \nabla u ,\nabla v ) +\sum_{l=1}^Lz_l^{-1}(u-U_l,v-V_l)_{L^2(e_l)},$$ where $(\cdot,\cdot)_{L^2(e_l)}$ denotes the $L^2(e_l)$ inner product. For any $\sigma\in\mathcal{A}$, $\{z_l\}_{l=1}^L$ and $I\in \Sigma_\diamond^L$, the existence and uniqueness of a solution $(u,U)\in\mathbb{H}$ to follows from Lemma \[lem:normequiv\] and Lax-Milgram theorem. Regularized reconstruction -------------------------- For numerical reconstruction with a piecewise constant conductivity, the total variation (TV) penalty is popular. The conductivity $\sigma$ is assumed to be in the space $\mathrm{BV}(\Om)$ of bounded variation [@AttouchButtazzoMichaille:2006; @Evans:2015], i.e., $$\mathrm{ BV} (\Om) = \left\{ v \in L^1(\Om): |v|_{\mathrm{TV}(\Om)}<\infty \right\},$$ equipped with the norm $\|v\|_{\mathrm{BV}(\Om)}=\|v\|_{L^1(\Om)}+|v|_{\mathrm{TV}(\Om)}$, where $$|v|_{\mathrm{TV}(\Om)}:=\sup\left\{\int_\Om v \nabla\cdot\bold{\phi}{\,{\rm d}x}:~\bold{\phi}\in (C_c^1(\Om))^d,~\|\bold{\phi}(x)\|\leq 1\right\}.$$ Below we discuss only one dataset, since the case of multiple datasets is similar. 
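Before turning to the regularized formulation, it may help to see the discrete structure of the forward map $\sigma\mapsto U(\sigma)$ defined by the weak formulation above. The sketch below is a minimal illustration and not the implementation used later in the paper: it assumes a structured triangulation of $\Omega=(-1,1)^2$, only four electrodes of length one (one centred on each side), unit contact impedances, a constant conductivity and a hand-picked current pattern, and it removes the one-dimensional kernel by a Lagrange multiplier enforcing $\sum_l U_l=0$.

```python
# Minimal complete-electrode-model (CEM) forward solve with P1 elements.
# Illustration only: structured mesh of (-1,1)^2, four electrodes of length 1
# (one centred on each side), unit contact impedances, constant conductivity.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 32                                        # cells per side, mesh size h = 2/n
h = 2.0 / n
xs = np.linspace(-1.0, 1.0, n + 1)
X, Y = np.meshgrid(xs, xs, indexing="ij")
nodes = np.column_stack([X.ravel(), Y.ravel()])
nid = lambda i, j: i * (n + 1) + j
tris = []
for i in range(n):
    for j in range(n):
        a, b, c, d = nid(i, j), nid(i + 1, j), nid(i + 1, j + 1), nid(i, j + 1)
        tris += [(a, b, c), (a, c, d)]

N, L = len(nodes), 4
sigma = np.ones(len(tris))                    # piecewise constant conductivity
z = np.ones(L)                                # contact impedances

# stiffness part (sigma grad u, grad v)
A = sp.lil_matrix((N, N))
for t, (a, b, c) in enumerate(tris):
    P = nodes[[a, b, c]]
    E = np.column_stack([P[1] - P[0], P[2] - P[0]])   # edge vectors of the element
    area = 0.5 * abs(np.linalg.det(E))
    G = np.zeros((3, 2))                      # gradients of the three hat functions
    G[1:, :] = np.linalg.inv(E)               # rows of E^{-1} = grad lambda_1, lambda_2
    G[0, :] = -G[1, :] - G[2, :]
    Kloc = sigma[t] * area * (G @ G.T)
    for p, ip in enumerate((a, b, c)):
        for q, iq in enumerate((a, b, c)):
            A[ip, iq] += Kloc[p, q]

# electrode terms: sum_l (1/z_l) int_{e_l} (u - U_l)(v - V_l) ds
def on_electrode(pt, l):
    sides = [(0, -1.0), (0, 1.0), (1, -1.0), (1, 1.0)]   # x=-1, x=+1, y=-1, y=+1
    ax, val = sides[l]
    return np.isclose(pt[ax], val) and abs(pt[1 - ax]) <= 0.5 + 1e-12

B = np.zeros((N, L))                           # B[i,l] = (1/z_l) int_{e_l} phi_i ds
D = np.zeros(L)                                # D[l]   = (1/z_l) |e_l|
bnd = [i for i, p in enumerate(nodes)
       if np.isclose(abs(p[0]), 1.0) or np.isclose(abs(p[1]), 1.0)]
for i in bnd:                                  # boundary edges = boundary-node pairs at distance h
    for j in bnd:
        if j <= i or not np.isclose(np.linalg.norm(nodes[i] - nodes[j]), h):
            continue
        for l in range(L):
            if on_electrode(nodes[i], l) and on_electrode(nodes[j], l):
                Mloc = (h / (6.0 * z[l])) * np.array([[2.0, 1.0], [1.0, 2.0]])
                for p, ip in enumerate((i, j)):
                    for q, iq in enumerate((i, j)):
                        A[ip, iq] += Mloc[p, q]
                    B[ip, l] += h / (2.0 * z[l])
                D[l] += h / z[l]

# block system [[A, -B], [-B^T, diag(D)]] (u, U) = (0, I), grounded by sum(U) = 0
I_cur = np.array([1.0, -1.0, 1.0, -1.0])       # applied currents, sum = 0
one = sp.csr_matrix(np.ones((L, 1)))
S = sp.bmat([[A.tocsr(), sp.csr_matrix(-B), None],
             [sp.csr_matrix(-B.T), sp.diags(D), one],
             [None, one.T, None]], format="csr")
rhs = np.concatenate([np.zeros(N), I_cur, [0.0]])
sol = spla.spsolve(S, rhs)
u, U = sol[:N], sol[N:N + L]                   # nodal potential and electrode voltages
print("electrode voltages U =", np.round(U, 4), " sum(U) = %.1e" % U.sum())
```

The resulting linear system couples the nodal values of $u$ and the electrode voltages $U$ only through the boundary integrals over the electrodes; within this sketch, replacing the constant conductivity by an elementwise constant vector is all that is needed to evaluate the forward map inside an optimization loop.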
Then Tikhonov regularization leads to the following minimization problem: $$\label{eqn:tikh} \min_{\sigma\in\mathcal{A}}\left\{\J(\sigma) = \tfrac{1}{2} \|U(\sigma)-U^\delta\|^2 + \alpha|\sigma|_{{\rm TV}(\Om)}\right\},$$ where the scalar $\alpha>0$ is a regularization parameter. The problem has at least one minimizer [@Rondi:2008; @GehreJinLu:2014]. Since $\sigma$ is piecewise constant, by the Lebesgue decomposition theorem [@AttouchButtazzoMichaille:2006], the TV term $|\sigma|_{\rm TV(\Omega)}$ in reduces to $\int_{S_{\sigma}} |[\sigma]| {\,{\rm d}}\mathcal{H}^{d-1} $, where $S_\sigma$ is the jump set, $[\sigma]=\sigma^+-\sigma^-$ denotes the jump across $S_\sigma$ and $\mathcal{H}^{d-1}$ refers to the $(d-1)$-dimensional Hausdorff measure. The numerical approximation of requires simultaneously treating two sets of different Hausdorff dimensions (i.e., $\Om$ and $S_\sigma$), which is very challenging. Thus, we replace the TV term $|\sigma|_{\rm TV(\Omega)}$ in by a Modica–Mortola type functional [@Modica:1977] $$\mathcal{F}_\varepsilon(z):= \left\{\begin{array}{ll} \varepsilon \|\nabla z\|^2_{L^2(\Om)}+\frac{1}{\varepsilon}\int_{\Om}W(z){\,{\rm d}x}& \mbox{if}~z\in H^1(\Om),\\ +\infty & \mbox{otherwise}, \end{array} \right.$$ where $\varepsilon$ is a small positive constant controlling the width of the transition interface, and $W: \mathbb{R}\rightarrow\mathbb{R}$ is the double-well potential given in . The functional $\mathcal{F}_\varepsilon$ was first proposed to model phase transitions of two immiscible fluids in [@Cahn:1958]. It is connected with the TV semi-norm as follows [@Modica:1977; @Modica:1987; @Alberti:2000]; see [@Braides:2002] for an introduction to $\Gamma$-convergence. \[thm:G-conv\] With $c_W=\int_{c_0}^{c_1}\sqrt{W(s)}{\,{\rm d}}s$, let $$\mathcal{F}(z):=\left\{\begin{array}{ll} 2c_W|z|_{\rm TV(\Om)} & \mbox{if}~z\in \mathrm{BV}(\Om)\cap\mathcal{A}, \\ +\infty & \mbox{otherwise}. \end{array} \right.$$ Then $\mathcal{F}_\varepsilon$ $\Gamma$-converges to $\mathcal{F}$ in $L^1(\Om)$ as $\varepsilon\to 0^+$. Let $\{\eps_n\}_{n\geq1}$ and $\{v_{n}\}_{n\geq 1}$ be given sequences such that $\eps_n\to 0^+$ and $\{\F_{\eps_n}(v_n)\}_{n\geq1}$ is bounded. Then $\{v_n\}_{n\geq1}$ is precompact in $L^1(\Om)$. The proposed EIT reconstruction method reads $$\label{eqn:tikh-MM} \inf_{\sigma\in \widetilde{\mathcal{A}}} \left\{\J_{\varepsilon}(\sigma) = \tfrac{1}{2} \|U(\sigma)-U^\delta\|^2 + \tfrac{\widetilde{\alpha}}{2}\mathcal{F}_\varepsilon(\sigma)\right\},$$ where $\widetilde{\alpha}=\alpha/c_W$, and the admissible set $\widetilde{\mathcal{A}}$ is defined as $$\widetilde{\mathcal{A}}:=\left\{{\sigma \in H^1(\Omega)}: c_0\leq \sigma(x)\leq c_1\mbox{ a.e. } x\in\Omega\right\}.$$ Now we recall a useful continuity result for the forward map [@GehreJinLu:2014 Lemma 2.2], which gives the continuity of the fidelity term in the functional $\mathcal{J}_\varepsilon$. See also [@JinMaass:2010; @DunlopStuart:2015] for related continuity results. \[lem:fm\_cont\] Let $\{\sigma_n\}_{n\geq1}\subset\widetilde{\mathcal{A}}$ satisfy $\sigma_n\to\sigma^\ast$ in $L^1(\Om)$. Then $$\label{eqn:fm_cont} \lim_{n\to\infty}\|\left(u(\sigma_n)-u(\sigma^\ast),U(\sigma_n)- U(\sigma^\ast)\right)\|_{\mathbb{H}}=0.$$ Lemma \[lem:fm\_cont\] implies that $\{\mathcal{J}_\varepsilon\}_{\varepsilon>0}$ are continuous perturbations of $\mathcal{J}$ in $L^1(\Om)$. 
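As a quick numerical sanity check of Theorem \[thm:G-conv\], the following one-dimensional computation evaluates $\mathcal{F}_\varepsilon$ at a smooth transition between the two wells; the well values $c_0=1$, $c_1=2$ and the logistic transition profile are assumptions made only for this sketch.

```python
# One-dimensional illustration of the Gamma-limit: for a single transition
# between the wells, F_eps of a smooth (logistic) profile stays close to the
# interfacial energy 2*c_W.  The well values c0 = 1, c1 = 2 and the logistic
# profile are assumptions of this sketch.
import numpy as np

c0, c1 = 1.0, 2.0
W = lambda s: (s - c0) ** 2 * (s - c1) ** 2
trap = lambda f, x: np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoid rule

s = np.linspace(c0, c1, 20001)
c_W = trap(np.sqrt(W(s)), s)            # = 1/6 for this double-well potential

x = np.linspace(0.0, 1.0, 200001)
for eps in (0.05, 0.02, 0.01, 0.005):
    # near-optimal phase-field profile: transition of width O(eps) at x = 0.5
    sig = c0 + (c1 - c0) / (1.0 + np.exp(-(c1 - c0) * (x - 0.5) / eps))
    grad = np.gradient(sig, x)
    F_eps = trap(eps * grad ** 2 + W(sig) / eps, x)
    print(f"eps = {eps:5.3f}:  F_eps = {F_eps:.5f}   (2 c_W = {2 * c_W:.5f})")
```

For this potential $c_W=1/6$, and the computed values stay close to $2c_W=1/3$ for every $\varepsilon$, the interfacial energy that the $\Gamma$-limit assigns to a single interface between the two wells (here $c_1-c_0=1$, so this coincides with $2c_W|z|_{\rm TV(\Om)}$); a profile varying on a scale much different from $\varepsilon$ would yield a strictly larger value.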
Then the stability of $\Gamma$-convergence [@Alberti:2000 Proposition 1(ii)] [@Braides:2002 Remark 1.4] and Theorem \[thm:G-conv\] indicate that $\J_\eps$ $\Gamma$-converges to $\mathcal{J}$ with respect to $L^1(\Om)$, and $\mathcal{J}_{\varepsilon}$ can (approximately) recover piecewise constant conductivities. Next we show the existence of a minimizer to $\J_\eps$. \[thm:tikh-MM\] For each $\varepsilon>0$, there exists at least one minimizer to problem . Since $\mathcal{J}_\varepsilon$ is nonnegative, there exists a minimizing sequence $\{\sigma_n\}_{n\geq1}\subset \widetilde{\mathcal{A}}$ such that $\mathcal{J}_\varepsilon(\sigma_n)\to m_\eps:=\inf_{\sigma\in\widetilde{\mathcal{A}}}\mathcal{J}_\varepsilon(\sigma)$. Thus, $\sup_n\|\nabla\sigma_n\|_{L^2(\Om)}<\infty$, which, along with $c_0\leq \sigma_n\leq c_1$, yields $\|\sigma_n\|_{H^1(\Om)}\leq c$. Since $\widetilde{\mathcal{A}}$ is closed and convex, there exist a subsequence, relabeled as $\{\sigma_n\}_{n\geq 1}$, and some $\sigma^\ast\in\widetilde{\mathcal{A}}$ such that $$\label{pf:min_cont01} \sigma_{n}\rightharpoonup\sigma^\ast\quad\mbox{weakly in}~H^1(\Om),\quad \sigma_n\to\sigma^\ast\quad\mbox{in}~L^1(\Om),\quad\sigma_n\to\sigma^\ast\quad\mbox{a.e. in}~\Om.$$ Since $W(s)\in C^2[c_0,c_1]$, $\{W(\sigma_n)\}_{n\geq1}$ is uniformly bounded in $L^\infty(\Omega)$ and converges to $W(\sigma^\ast)$ almost everywhere in $\Om$. By Lebesgue’s dominated convergence theorem [@Evans:2015 p. 28, Theorem 1.19], $\int_\Om W(\sigma_n){\,{\rm d}x}\to\int_\Om W(\sigma^\ast){\,{\rm d}x}.$ By Lemma \[lem:fm\_cont\] and the weak lower semi-continuity of the $H^1(\Omega)$-seminorm, we obtain $$\J_\varepsilon(\sigma^\ast)\leq \liminf_{n\to\infty}\J_{\varepsilon}(\sigma_n)\leq \lim_{n\to\infty}\J_{\varepsilon}(\sigma_n)=m_\eps.$$ Thus $\sigma^\ast$ is a global minimizer of the functional $\mathcal{J}_\varepsilon $. To obtain the necessary optimality system of , we use the standard adjoint technique. The adjoint problem for reads: find $(p,P)\in\mathbb{H}$ such that $$\label{eqn:cem-adj} a(\sigma,(p,P),(v,V)) = \langle U(\sigma)-U^\delta,V\rangle \quad \forall (v,V)\in \mathbb{H}.$$ By straightforward computation, the Gâteaux derivative $\mathcal{J}_\varepsilon'(\sigma)[\mu]$ of the functional $\mathcal{J}_{\varepsilon}$ at $\sigma\in\widetilde{\mathcal{A}}$ in the direction $\mu\in H^1(\Om)$ is given by $$\mathcal{J}_{\varepsilon}'(\sigma)[\mu] = \widetilde{\alpha}\left[\eps(\nabla\sigma,\nabla\mu) + \tfrac{1}{2\eps}(W'(\sigma),\mu)\right] - (\mu \nabla u(\sigma),\nabla p(\sigma)).$$ Then the minimizer $\sigma^*$ to problem and the respective state $(u^*,U^*)$ and adjoint $(p^*,P^*)$ satisfy the following necessary optimality system: $$\label{eqn:cem-optsys} \left\{\begin{aligned} &a(\sigma^*,(u^*,U^*),(v,V)) = \langle I,V\rangle \quad \forall (v,V)\in \mathbb{H},\\ &a(\sigma^*,(p^*,P^*),(v,V)) = \langle U^*-U^\delta,V\rangle \quad\forall(v,V)\in\mathbb{H},\\ & \widetilde{\alpha}\varepsilon (\nabla\sigma^{\ast},\nabla(\mu-\sigma^{\ast})) + \tfrac{\widetilde{\alpha}}{2\varepsilon}(W'(\sigma^\ast),\mu-\sigma^\ast) - ((\mu-\sigma^{\ast})\nabla u^{\ast},\nabla p^{\ast})\geq 0\quad\forall \mu\in\widetilde{\mathcal{A}}, \end{aligned}\right.$$ where the variational inequality at the last line is due to the box constraint in the admissible set $\widetilde{\mathcal{A}}$. The optimality system forms the basis of the adaptive algorithm and its convergence analysis. Adaptive algorithm {#sect:alg} ================== Now we develop an adaptive FEM for problem . 
Let $\cT_0$ be a shape regular triangulation of $\overline{\Omega}$ into simplicial elements, each intersecting at most one electrode surface $e_l$, and $\mathbb{T}$ be the set of all possible conforming triangulations of $\overline{\Omega}$ obtained from $\cT_0$ by the successive use of bisection. Then $\mathbb{T}$ is uniformly shape regular, i.e., the shape-regularity of any mesh $\mathcal{T}\in\mathbb{T}$ is bounded by a constant depending only on $\cT_0$ [@NSV:2009; @Traxler:1997]. Over any $\cT\in\mathbb{T}$, we define a continuous piecewise linear finite element space $$V_\cT = \left\{v\in C(\overline{\Omega}): v|_T\in P_1(T)\ \forall T\in\cT\right\},$$ where $P_1(T)$ consists of all linear functions on $T$. The space $V_\cT$ is used for approximating the state $u$ and adjoint $p$, and the discrete admissible set $\widetilde{\mathcal{A}}_\mathcal{T}$ for the conductivity is given by $$\widetilde{\mathcal{A}}_\cT:= V_\cT \cap {\widetilde{\mathcal{A}}}.$$ Given $\sigma_\cT\in \Atilde_\cT$, the discrete analogue of problem is to find $(u_\cT,U_\cT) \in \mathbb{H}_\cT \equiv V_\cT \otimes \mathbb{R}^L_\diamond$ such that $$\label{eqn:dispoly} a(\sigma_\cT,(u_\cT,U_\cT),(v_\cT,V)) = \langle I, V\rangle \quad \forall (v_{\cT},V)\in \mathbb{H}_\cT.$$ Then we approximate problem by minimizing the following functional over $\Atilde_\cT$: $$\begin{aligned} \label{eqn:discopt} J_{\eps,\cT}(\sigma_{\cT}) = \tfrac{1}{2}\|U_{\cT}(\sigma_{\cT})-U^\delta\|^2 + \tfrac{\widetilde{\alpha}}{2}\mathcal{F}_\varepsilon(\sigma_\cT).\end{aligned}$$ Then, similar to Theorem \[thm:tikh-MM\], there exists at least one minimizer $\sigma_\cT^*$ to , and the minimizer $\sigma_{\cT}^{\ast}$ and the related state $(u^*_\cT,U^*_\cT)\in \mathbb{H}_\cT$ and adjoint $(p^*_\cT,P^*_\cT)\in\mathbb{H}_{\cT}$ satisfy $$\label{eqn:cem-discoptsys} \left\{\begin{aligned} &a(\sigma_\cT^*,(u^*_\cT,U_\cT^*),(v,V)) = \langle I,V\rangle \quad\forall(v,V)\in \mathbb{H}_{\cT},\\ &a(\sigma_\cT^*,(p^*_\cT,P^*_\cT),(v,V)) = \langle U^*_\cT-U^\delta,V\rangle\quad \forall(v,V)\in\mathbb{H}_{\cT},\\ &\widetilde{\alpha}\varepsilon(\nabla\sigma_{\cT}^{\ast},\nabla(\mu-\sigma_{\cT}^{\ast})) + \frac{\widetilde{\alpha}}{2\varepsilon}(W'(\sigma_\cT^\ast),\mu-\sigma_\cT^\ast) -((\mu-\sigma_{\cT}^{\ast})\nabla u_{\cT}^{\ast},\nabla p_{\cT}^{\ast})\geq 0\quad\forall\mu\in\Atilde_{\cT}. \end{aligned}\right.$$ Further, $(u_{\cT}^\ast,U_{\cT}^\ast)$ and $(p_{\cT}^\ast,P_{\cT}^\ast)$ depend continuously on the problem data, i.e., $$\label{stab-discpolyadj} \|(u_{\cT}^\ast,U_{\cT}^\ast)\|_{\mathbb{H}}+\|(p_{\cT}^\ast,P_{\cT}^\ast)\|_{\mathbb{H}}\leq c(\|I\|+\|U^\delta\|),$$ [where the constant $c$ can be made independent of $\alpha$ and $\eps$.]{} To describe the error estimators, we first recall some useful notation. The collection of all faces (respectively all interior faces) in $\cT\in\mathbb{T}$ is denoted by $\mathcal{F}_{\cT}$ (respectively $\mathcal{F}_{\cT}^i$) and its restriction to the electrode $\bar{e}_{l}$ and $\partial\Omega\backslash\cup_{l=1}^Le_l$ by $\mathcal{F}_{ \cT}^{l}$ and $\mathcal{F}_{\cT}^{c}$, respectively. A face/edge $F$ has a fixed normal unit vector $\boldsymbol{n}_{F}$ in $\overline{\Omega}$ with $\boldsymbol{n}_{F}=\boldsymbol{n}$ on $\partial\Omega$. The diameter of any $T\in\cT$ and $F\in\mathcal{F}_{\cT}$ is denoted by $h_{T}:=|T|^{1/d}$ and $h_{F}:=|F|^{1/(d-1)}$, respectively. 
For the solution $(\sigma^{\ast}_{\cT},(u _{\cT}^{\ast},U_{\cT}^{\ast}), (p_{\cT}^{\ast},P_{\cT}^{\ast}))$ to problem , we define two element residuals for each element $T\in\cT$ and two face residuals for each face $F\in\mathcal{F}_{\cT}$ by $$\begin{aligned} R_{T,1}(\sigma_{\cT}^{\ast},u^{\ast}_{\cT}) & = \nabla\cdot(\sigma_{\cT}^{\ast}\nabla u^{\ast}_{\cT}),\\ R_{T,2}(\sigma_{\cT}^\ast,u^{\ast}_{\cT},p^{\ast}_{\cT})& =\tfrac{\widetilde{\alpha}}{2\varepsilon}W'(\sigma_\cT^\ast)-\nabla u^{\ast}_{\cT}\cdot\nabla p^{\ast}_{\cT},\\ J_{F,1}(\sigma_{\cT}^{\ast},u^{\ast}_{\cT},U_{\cT}^{\ast}) &= \left\{\begin{array}{lll} [\sigma_{\cT}^{\ast}\nabla u_{\cT}^{\ast}\cdot\boldsymbol{n}_{F}]\quad& \mbox{for} ~~F\in\mathcal{F}_{\cT}^i,\\ [1ex] \sigma_{\cT}^{\ast}\nabla u_{\cT}^{\ast}\cdot\boldsymbol{n}+(u_{\cT}^{\ast}-U_{\cT,l}^{\ast})/z_{l}\quad& \mbox{for} ~~ F\in\mathcal{F}_{\cT}^l,\\ [1ex] \sigma_{\cT}^{\ast}\nabla u_{\cT}^{\ast}\cdot\boldsymbol{n}\quad& \mbox{for} ~~F\in\mathcal{F}_{\cT}^{c}, \end{array}\right.\\ J_{F,2}(\sigma^{\ast}_{\cT}) &= \left\{\begin{array}{lll} \widetilde{\alpha}\varepsilon[\nabla\sigma_{\cT}^{\ast}\cdot\boldsymbol{n}_{F}]\quad& \mbox{for} ~~F\in\mathcal{F}_{\cT}^i,\\ [1ex] \widetilde{\alpha}\varepsilon\nabla\sigma_{\cT}^{\ast}\cdot\boldsymbol{n}\quad& \mbox{for} ~~F\in \mathcal{F}_\cT^l \cup \mathcal{F}_\cT^c, \end{array}\right. \end{aligned}$$ where $[\cdot]$ denotes the jump across interior face $F$. Then for any element $T\in \cT$, we define the following three error estimators $$\begin{aligned} \eta_{\cT,1}^{2}(\sigma_{\cT}^{\ast},u_{\cT}^{\ast},U_{\cT}^{\ast},T) & :=h_{T}^{2}\|R_{T,1}(\sigma^{\ast}_{\cT},u^{\ast}_{\cT})\|_{L^{2}(T)}^{2} +\sum_{F\subset\partial T}h_{F}\|J_{F,1}(\sigma^{\ast}_{\cT},u^{\ast}_{\cT},U^{\ast}_{\cT})\|_{L^{2}(F)}^{2},\\ \eta_{\cT,2}^{2}(\sigma_{\cT}^{\ast},p_{\cT}^{\ast},P_{\cT}^{\ast},T) &:=h_{T}^{2}\|R_{T,1}(\sigma^{\ast}_{\cT},p^{\ast}_{\cT})\|_{L^{2}(T)}^{2} +\sum_{F\subset\partial T}h_{F}\|J_{F,1}(\sigma^{\ast}_{\cT},p^{\ast}_{\cT},P^{\ast}_{\cT})\|_{L^{2}(F)}^{2},\\ \eta_{\cT,3}^{q}(\sigma_{\cT}^{\ast},u_{\cT}^{\ast},p_{\cT}^{\ast},T) &:=h_{T}^{q}\|R_{T,2}(\sigma_{\cT}^\ast,u_{\cT}^{\ast},p_{\cT}^{\ast})\|^{q}_{L^{q}(T)}+\sum_{F\subset\partial T}h_{F}\|J_{F,2}(\sigma_{\cT}^{\ast})\|^{q}_{L^{q}(F)}\end{aligned}$$ with $q=d/(d-1)$. The estimator $\eta_{\cT,1}(\sigma_{\cT}^{\ast},u_{\cT}^{\ast},U_{\cT}^{\ast},T)$ is identical with the standard residual error indicator for the direct problem: find $(\tilde u,\tilde U)\in \mathbb{H}$ such that $$a(\sigma_\cT^*,(\tilde u,\tilde U),(v,V)) = \langle I,V\rangle, \quad \forall (v,V)\in \mathbb{H}.$$ It differs from the direct problem in by replacing the conductivity $\sigma^*$ with $\sigma^*_\cT$ instead, and is a perturbation of the latter case. The perturbation is vanishingly small in the event of the conjectured (subsequential) convergence $\sigma^*_\cT\to \sigma^*$. The estimator $\eta_{\cT,2}(\sigma_{\cT}^{\ast},p_{\cT}^{\ast}, P_{\cT}^{\ast},T)$ admits a similar interpretation. These two estimators are essentially identical with that for the $H^1 (\Omega)$ penalty in [@JinXuZou:2016], and we refer to [@JinXuZou:2016 Section 3.3] for a detailed heuristic derivation. The estimator $\eta_{\cT,3}(\sigma_{\cT}^{\ast},u_{\cT}^{\ast},p_{\cT}^{\ast},T)$ is related to the variational inequality in the necessary optimality condition , and roughly provides a quantitative measure how well it is satisfied. 
The estimator (including the exponent $q$) is motivated by the convergence analysis; see the proof of Theorem \[thm:gat\_mc\] and Remark \[rmk:gat\_mc\] below. It represents the main new ingredient for problem , and differs from that for the $H^1(\Omega)$ penalty in [@JinXuZou:2016]. The estimator $\eta_{k,3}$ improves that in [@JinXuZou:2016], i.e., $$\eta_{\cT,3}^2(\sigma_{\cT}^{\ast},u_{\cT}^{\ast},p_{\cT}^{\ast},T) :=h_{T}^4\|R_{T,2}(\sigma_{\cT}^\ast,u_{\cT}^{\ast},p_{\cT}^{\ast})\|^2_{L^2(T)}+\sum_{F\subset\partial T}h_{F}^2\|J_{F,2}(\sigma_{\cT}^{\ast})\|^2_{L^2(F)},$$ in terms of the exponents on $h_T$ and $h_F$. This improvement is achieved by a novel constraint preserving interpolation operator defined in below. Now we can formulate an adaptive algorithm for ; see Algorithm \[alg\_afem\_eit\]. Below we indicate the dependence on the mesh $\cT_k$ by the subscript $k$, e.g., $\mathcal{J}_{\varepsilon,k}$ for $\mathcal{J}_{\varepsilon,\cT_k}$. Specify an initial mesh $\cT_{0}$, and set the maximum number $K$ of refinements. Solve problem - over $\cT_{k}$ for $(\sigma_k^{\ast},(u_{k}^{\ast},U_{k}^{\ast}))\in\Atilde_{k}\times\mathbb{H}_{k}$ and for $(p_k^\ast,P_k^\ast)\in \mathbb{H}_k$. Mark three subsets $\mathcal{M}_{k}^i\subseteq\cT_{k}$ ($i=1,2,3$) such that each $\mathcal{M}_k^i$ contains at least one element $\widetilde{T}_k^i\in\cT_{k}$ ($i=1,2,3$) with the largest error indicator: $$\label{eqn:marking} \eta_{k,i}(\widetilde{T}_k^i)=\max_{T\in\cT_{k}}\eta_{k,i}.$$ Then $\mathcal{M}_{k}:=\mathcal{M}_{k}^1\cup\mathcal{M}_{k}^2\cup\mathcal{M}_{k}^3$. Refine each element $T$ in $\mathcal{M}_{k}$ by bisection to get $\cT_{k+1}$. Check the stopping criterion. Output $(\sigma_k^*,(u_k^*,U_k^*),(p_k^*,P_k^*))$. The `MARK` module selects a collection of elements in the mesh $\mathcal{T}_k$. The condition covers several commonly used marking strategies, e.g., maximum, equidistribution, modified equidistribution, and Dörfler’s strategy [@Siebert:2011 pp. 962]. Compared with a collective marking in AFEM in [@JinXuZou:2016], Algorithm \[alg\_afem\_eit\] employs a separate marking to select more elements for refinement in each loop, which leads to fewer iterations of the adaptive process. The error estimators may also be used for coarsening, which is relevant if the recovered inclusions change dramatically during the iteration. However, the convergence analysis below does not carry over to coarsening, and it will not be further explored. Last, we give the main theoretical result: for each fixed $\eps>0$, the sequence of discrete solutions $\{\sigma_k^\ast, (u_k^\ast, U_k^\ast), (p_k^\ast, P_k^\ast)\}_{k\geq0}$ generated by Algorithm \[alg\_afem\_eit\] contains a subsequence converging in $H^1(\Om)\times\mathbb{H}\times\mathbb{H}$ to a solution of system . The proof is lengthy and technical, and thus deferred to Section \[sect:conv\]. 
\[thm:conv\_alg\] The sequence of discrete solutions $\{\sigma_{k}^\ast,(u_{k}^\ast,U_{k}^\ast),(p_{k}^\ast,P_{k}^\ast)\}_{k\geq0}$ generated by Algorithm \[alg\_afem\_eit\] contains a subsequence $\{\sigma_{k_j}^\ast,(u_{k_j}^\ast,U_{k_j}^\ast),(p_{k_j}^\ast, P_{k_j}^\ast)\}_{j\geq0}$ convergent to a solution $(\sigma^\ast,(u^\ast,U^\ast),(p^\ast,P^\ast))$ of system : $$\|\sigma^\ast_{k_j}-\sigma^\ast\|_{H^1(\Omega)},~\|(u^\ast_{k_j}-u^\ast,U^\ast_{k_j}-U^\ast)\|_{\mathbb{H}},~ \|(p^\ast_{k_j}-p^\ast,P^\ast_{k_j}-P^\ast)\|_{\mathbb{H}}\rightarrow 0\quad\mbox{as}~j\rightarrow\infty.$$ Numerical experiments and discussions {#sec:numer} ===================================== Now we present numerical results to illustrate Algorithm \[alg\_afem\_eit\] on a square domain $\Omega=(-1,1)^2$. There are sixteen electrodes $\{e_l\}_{l=1}^L$ (with $L=16$) evenly distributed along $\partial\Omega$, each of length $1/4$. The contact impedances $\{z_l\}_{l=1}^L$ are all set to unity. We take ten sinusoidal input currents, and for each voltage $U(\sigma^\dag)\in\mathbb{R}^L_\diamond$, generate the noisy data $U^\delta$ by $$\label{eqn:noisydata} U^\delta_l = U_l(\sigma^\dag) + \epsilon \max_l|U_l(\sigma^\dag)|\xi_l,\ \ l=1,\ldots, L,$$ where $\epsilon$ is the (relative) noise level, and $\{\xi_l\}_{l=1}^L$ follow the standard normal distribution. Note that $\epsilon=\text{1e-2}$ refers to a relatively high noise level for EIT. The exact data $U(\sigma^\dag)$ is computed using a much finer uniform mesh, to avoid the most obvious form of “inverse crime”. In the experiments, we fix $K$ (the number of refinements) at $15$, $q$ (the exponent in $\eta_{k,3}^q$) at $2$, and $\varepsilon$ (the parameter in the functional $\mathcal{F}_\varepsilon$) at $\text{1e-2}$. The marking strategy in the module `MARK` selects a minimal refinement set $\mathcal{M}_k{:=\cup_{i=1}^3\mathcal{M}_{k}^i}\subseteq\cT_k$ such that $$\begin{aligned} \eta_{k,1}^2(\sigma_k^{\ast},u_k^{\ast},U_k^{\ast},\mathcal{M}_{k}^1) &\geq\theta \eta_{k,1}^2(\sigma_k^{\ast},u_k^{\ast},U_k^{\ast}),\quad \eta_{k,2}^2(\sigma_k^{\ast},p_k^{\ast},P_k^{\ast},\mathcal{M}_{k}^2)\geq\theta \eta_{k,2}^2(\sigma_k^{\ast},p_k^{\ast},P_k^{\ast}),\\ &\eta_{k,3}^2(\sigma_k^{\ast},u_k^{\ast},p_k^{\ast},\mathcal{M}_{k}^3)\geq\theta \eta_{k,3}^2(\sigma_k^{\ast},u_k^{\ast},p_k^{\ast}), \end{aligned}$$ with a threshold $\theta=0.7$. The refinement is performed with one popular refinement strategy, i.e., newest vertex bisection [@Mitchell:1989]. Specifically, it connects the midpoint $x_T$ of a reference edge $F$ of an element $T\in\cT_k$, as the newest vertex, to the node opposite $F$, and employs the two edges opposite to the midpoint $x_T$ as reference edges of the two newly created triangles in $\cT_{k+1}$. Problem - is solved by a Newton type method; see Appendix \[app:newton\] for the details. The conductivity on $\cT_0$ is initialized to $\sigma_0=c_0$, and then for $k=1,2,\ldots$, $\sigma_{k-1}^*$ (defined on $\cT_{k-1}$) is interpolated to $\cT_k$ to warm-start the optimization. The regularization parameter $\widetilde\alpha$ in is determined in a trial-and-error manner. All computations are performed using `MATLAB` 2018a on a personal laptop with 8.00 GB RAM and a 2.5 GHz CPU.

[Figure: (a) true conductivity; (b), (c) adaptive; (d), (e) uniform.]

The first set of examples is concerned with two inclusions. 
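The separate marking step just described admits a compact implementation. The following sketch is an illustration only (it is not the authors' `MATLAB` code, and the indicator values are random placeholders); for each estimator it selects a minimal bulk set with threshold $\theta=0.7$ and refines the union.

```python
# Sketch of the separate marking step in the module MARK: for each estimator
# i = 1, 2, 3 a (cardinality-)minimal set M_i with
#     eta_i^2(M_i) >= theta * eta_i^2(T_k)
# is selected, and the union M_k = M_1 U M_2 U M_3 is refined.  The indicator
# values below are random placeholders, not computed estimators.
import numpy as np

def dorfler_mark(eta_sq, theta=0.7):
    """Indices of a minimal set whose squared indicators sum to >= theta * total."""
    order = np.argsort(eta_sq)[::-1]              # largest indicators first
    csum = np.cumsum(eta_sq[order])
    m = np.searchsorted(csum, theta * csum[-1]) + 1
    return order[:m]

rng = np.random.default_rng(0)
n_elem = 1000
eta1_sq = rng.random(n_elem)          # state estimator, O(1)
eta2_sq = 1e-4 * rng.random(n_elem)   # adjoint estimator, much smaller
eta3_sq = 1e-2 * rng.random(n_elem)   # variational-inequality estimator

marked = set()
for eta_sq in (eta1_sq, eta2_sq, eta3_sq):
    marked |= set(dorfler_mark(eta_sq, theta=0.7).tolist())
print(f"{len(marked)} of {n_elem} elements marked for bisection")
```

We now return to the numerical examples.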
\[exam2\] The background conductivity is $\sigma_0(x)=1$.

- (i) The true conductivity $\sigma^\dag$ is given by $ \sigma_0(x)+\chi_{B_1}(x) + \chi_{B_2}(x)$, where $B_1$ and $B_2$ denote two circles centered at $(0,0.5)$ and $(0,-0.5)$, respectively, both of radius 0.3.
- (ii) The true conductivity $\sigma^\dag$ is given by $ \sigma_0(x)+1 + 1.2e^{-\frac{25(x_1^2+(x_2-0.5)^2)}{2}} + 1.2e^{-\frac{25(x_1^2+(x_2+0.5)^2)}{2}}$, i.e., two Gaussian bumps centered at $(0,0.5)$ and $(0,-0.5)$.
- (iii) The true conductivity $\sigma^\dag$ is given by $ \sigma_0(x)+5\chi_{B_1}(x) + 5\chi_{B_2}(x)$, where $B_1$ and $B_2$ denote two circles centered at $(0,0.5)$ and $(0,-0.5)$, respectively, both of radius 0.3.

The numerical results for Example \[exam2\](i) with $\epsilon=\text{1e-3}$ and $\epsilon=\text{1e-2}$ are shown in Figs. \[fig:exam2i-recon\]–\[fig:exam2-efficiency\], where d.o.f. denotes the number of degrees of freedom of the mesh. It is observed from Fig. \[fig:exam2i-recon\] that with both uniform and adaptive refinements, the final recoveries have comparable accuracy and capture well the inclusion locations.

[Figure: adaptively refined meshes, with d.o.f. 81, 119, 173, 277, 390, 605, 812, 1266, 1714, 2630, 3526, 5482, 7549, 11450, 15830.]

Next we examine the adaptive refinement process more closely. In Figs. \[fig:exam2i-recon-iter1e3\] and \[fig:exam2i-recon-iter1e2\], we show the meshes $\mathcal{T}_k$ during the iteration and the corresponding recoveries $\sigma_k$ for Example \[exam2\](i) at two noise levels $\epsilon = \text{1e-3}$ and $\epsilon=\text{1e-2}$, respectively. On the coarse mesh $\cT_0$, the recovery has very large errors and can only identify one component, and thus fails to correctly identify the number of inclusions, due to the severe under-resolution of both state and conductivity. Nonetheless, Algorithm \[alg\_afem\_eit\] can correctly recover the two components with reasonable accuracy after several adaptive loops, and accordingly, the mesh around the support of the recovery is gradually refined, with the accuracy of the recovery improving steadily. In particular, the inclusion locations stabilize after several loops, and thus coarsening of the mesh seems unnecessary. Throughout, the refinement occurs mainly in the regions around the electrode edges and the internal interface, which is clearly observed for both noise levels. This is attributed to the separate marking strategy, which allows detecting different sources of singularities simultaneously. In Fig. \[fig:exam2i\_err-ind\], we display the evolution of the error indicators for Example \[exam2\](i) with $\epsilon=\text{1e-3}$. The estimators play different roles: $\eta_{k,1}^2$ and $\eta_{k,2}^2$ indicate the electrode edges during the first iterations and then also the internal interface, whereas $\eta_{k,3}^2$ concentrates on the internal interface throughout. Thus, $\eta_{k,1}^2$ and $\eta_{k,2}^2$ are most effective for resolving the state and adjoint, whereas $\eta_{k,3}^2$ is effective for detecting internal jumps of the conductivity. The magnitude of $\eta_{k,2}^2$ is much smaller than that of $\eta_{k,1}^2$, since the boundary data $U^\delta-U(\sigma_k)$ for the adjoint is much smaller than the input current $I$ for the state. Thus, a simple collective marking strategy (i.e., $\eta_k^2 =\eta_{k,1}^2 +\eta_{k,2}^2 + \eta_{k,3}^2$) may miss the correct singularity, due to their drastically different scalings. In contrast, the separate marking in can take care of the scaling automatically. 
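This scaling effect can be seen already on synthetic numbers. In the following sketch (illustrative values only, not computed estimators), a large share of the elements flagged by the small-magnitude estimator is never selected when the indicators are summed, while separate marking selects all of them by construction.

```python
# Synthetic check that a collective indicator can ignore a small-scale
# estimator: elements flagged by eta3 (small magnitude) are largely missed
# when the indicators are summed, but never by separate marking.
# All indicator values are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, theta = 1000, 0.7
eta1 = rng.random(n)              # O(1): dominated by state residuals
eta3 = 1e-4 * rng.random(n)       # O(1e-4): variational-inequality residuals

def bulk(v):                      # Doerfler bulk set for squared indicators v
    o = np.argsort(v)[::-1]
    m = np.searchsorted(np.cumsum(v[o]), theta * v.sum()) + 1
    return set(o[:m].tolist())

collective = bulk(eta1 + eta3)
separate = bulk(eta1) | bulk(eta3)
flagged = bulk(eta3)              # elements the third estimator asks to refine
print("missed by collective marking:", len(flagged - collective))
print("missed by separate marking:  ", len(flagged - separate))
```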
[Figure: adaptively refined meshes, with d.o.f. 81, 120, 176, 276, 378, 595, 816, 1324, 1862, 2884, 4004, 6250, 8690, 13374, 18770.]

[Figure: recoveries $\sigma_k$ at iterations $k=0,\ldots,9$.]

[Figure: (a) $\epsilon=\text{1e-3}$, $\tilde\alpha=\text{2e-2}$; (b) $\epsilon=\text{1e-2}$, $\tilde\alpha=\text{3e-2}$.]

In Fig. \[fig:exam2-efficiency\], we plot the $L^2(\Omega)$ and $L^1(\Omega)$ errors of the recoveries versus the d.o.f. $N$, where the recovery on the corresponding finest mesh is taken as the reference (since the recoveries by the adaptive and uniform refinements are slightly different; see Fig. \[fig:exam2i-recon\]). Due to the discontinuity of the sought-for conductivity, the $L^1(\Omega)$ norm is especially suitable for measuring the convergence. The convergence of the algorithm is clearly observed for both adaptive and uniform refinements. Further, for a fixed number of d.o.f., AFEM gives more accurate results than uniform refinement in both error metrics. These observations show the computational efficiency of the adaptive algorithm. Examples \[exam2\](ii) and (iii) are variations of Example \[exam2\](i), and the results are presented in Figs. \[fig:exam2ii-recon\]–\[fig:exam2iii-recon-iter-1e3\]. The proposed approach assumes a piecewise constant conductivity with known lower and upper bounds. Example \[exam2\](ii) does not fulfill this assumption, since the true conductivity $\sigma^\dag$ is not piecewise constant. Thus the algorithm can only produce a piecewise constant approximation to the exact one. Nonetheless, the inclusion support is reasonably identified. When the noise level $\epsilon$ increases from $\text{1e-3}$ to $\text{1e-2}$, the reconstruction accuracy deteriorates significantly; see Fig. \[fig:exam2ii-recon\]. Example \[exam2\](iii) involves high-contrast inclusions, which are well known to be numerically more challenging. This is clearly observed in Fig. \[fig:exam2iii-recon\], where the recovery accuracy is inferior, especially for the noise level $\epsilon=\text{1e-2}$. However, the adaptive refinement procedure works similarly well as in the preceding examples: the refinement occurs mainly around the electrode edges and the inclusion interface; see Figs. \[fig:exam2ii-recon-iter-1e3\] and \[fig:exam2iii-recon-iter-1e3\] for the details.

[Figure: (a) true conductivity; (b), (c) adaptive; (d), (e) uniform.]

[Figure: adaptively refined meshes, with d.o.f. 81, 120, 177, 276, 389, 606, 821, 1308, 1792, 2764, 3816, 5934, 8299, 12661, 17736.]

[Figure: (a) true conductivity; (b), (c) adaptive; (d), (e) uniform.]

[Figure: adaptively refined meshes, with d.o.f. 81, 111, 150, 231, 331, 535, 775, 1215, 1735, 2594, 3521, 5085, 7080, 10360, 14620.]

Now we consider one more challenging example with four inclusions. 
\[exam3\] The true conductivity $\sigma^\dag$ is given by $\sigma_0(x)+\sum_{i=1}^4\chi_{B_i}(x)$, where the circles $B_i$ are centered at $(0.6,\pm0.6)$ and $(-0.6,\pm0.6)$, all of radius 0.2, and the background conductivity is $\sigma_0(x)=1$.

The numerical results for Example \[exam3\] are given in Figs. \[fig:exam3-recon\]–\[fig:exam3-efficiency\]. The results are in excellent agreement with the observations from Example \[exam2\]: the algorithm converges steadily as the adaptive iteration proceeds, and with a low noise level, it can accurately recover all four inclusions, showing clearly the efficiency of the adaptive approach. The refinement is mainly around the electrode edges and the internal interface.

[Figure: (a) exact conductivity; (b), (c) adaptive; (d), (e) uniform.]

[Figure: adaptively refined meshes, with d.o.f. 81, 122, 171, 274, 389, 599, 844, 1339, 1894, 2904, 3999, 6133, 8582, 12883, 18008.]

[Figure: (a) $\epsilon=\text{1e-3}$, $\tilde\alpha=\text{2e-2}$; (b) $\epsilon=\text{1e-2}$, $\tilde\alpha=\text{3e-2}$.]

Proof of Theorem \[thm:conv\_alg\] {#sect:conv} ================================== The lengthy and technical proof is divided into two steps: Step 1 shows the convergence to an auxiliary minimization problem over a limiting admissible set in Section \[subsect:conv1\], and Step 2 shows that the solution of the auxiliary problem satisfies the necessary optimality system in Section \[subsect:conv2\]. The overall proof strategy is similar to [@JinXuZou:2016], and hence we omit arguments that carry over directly. Auxiliary convergence {#subsect:conv1} --------------------- Since the two sequences $\{\mathbb{H}_k\}_{k\geq0}$ and $\{\Atilde_k\}_{k\geq0}$ generated by Algorithm \[alg\_afem\_eit\] are nested, we may define $$\mathbb{H}_{\infty}:=\overline{\bigcup_{k\geq 0}\mathbb{H}_{k}}~ (\mbox{in}~\mathbb{H}\mbox{-norm})\quad\mbox{and}\quad \Atilde_{\infty}:=\overline{\bigcup_{k\geq 0}\Atilde_{k}}~(\mbox{in}~H^{1}(\Omega)\mbox{-norm}).$$ Clearly $\mathbb{H}_{\infty}$ is a closed subspace of $\mathbb{H}$. For the set $\Atilde_{\infty}$, we have the following result [@JinXuZou:2016 Lemma 4.1]. \[lem:convexclosed\] $\Atilde_{\infty}$ is a closed convex subset of $\Atilde$. Over the limiting set $\Atilde_{\infty}$, we define an auxiliary limiting minimization problem: $$\label{eqn:medopt} \min_{\sigma_{\infty}\in\Atilde_{\infty}}\left\{\J_{\varepsilon,\infty}(\sigma_{\infty}) = \tfrac{1}{2}\|U_{\infty}(\sigma_{\infty})-U^\delta\|^2 + \tfrac{\widetilde{\alpha}}{2}\mathcal{F}_\varepsilon(\sigma_\infty)\right\},$$ where $(u_{\infty},U_{\infty})\in \mathbb{H}_{\infty}$ satisfies $$\label{eqn:medpoly} a(\sigma_\infty,(u_\infty,U_\infty),(v,V)) = \langle I,V\rangle \quad \forall (v,V)\in\mathbb{H}_\infty.$$ By Lemma \[lem:normequiv\] and the Lax-Milgram theorem, problem is well-posed for any fixed $\sigma_\infty\in\Atilde_\infty$. The next result gives the existence of a minimizer to –. \[thm:existmedmin\] There exists at least one minimizer to problem –. Let $\{\sigma_k^\ast, (u_k^\ast, U_k^\ast)\}_{k\geq 0}$ be the sequence of discrete solutions given by Algorithm \[alg\_afem\_eit\]. 
Since $c_1\in \Atilde_k$ for all $k$, by , $\J_{\eps,k} (\sigma_k^\ast)\leq \J_{\eps,k}(c_1)\leq c$, and thus $\{\sigma_{k}^\ast\}_{k\geq 0}$ is uniformly bounded in $H^1(\Omega)$. By Lemma \[lem:convexclosed\] and Sobolev embedding, there exist a subsequence, denoted by $\{\sigma_{k_j}^\ast\}_{j\geq 0}$, and some $\sigma^\ast\in \Atilde_\infty$ such that $$\label{pf:medmin_01} \sigma_{k_j}^\ast \rightharpoonup \sigma^\ast\quad \mbox{weakly in}~H^1(\Om),\quad \sigma_{k_j}^\ast \to \sigma^\ast\quad \mbox{in}~L^2(\Om),\quad \sigma_{k_j}^\ast \to \sigma^\ast\quad \mbox{a.e. in}~\Om.$$ Next we introduce a discrete analogue of problem with $\sigma_\infty=\sigma^\ast$: find $(u_{k_j},U_{k_j})\in \mathbb{H}_{k_j}$ such that $$\label{pf:medmin_02} a(\sigma^\ast,(u_{k_j},U_{k_j}),(v,V)) = \langle I,V\rangle \quad \forall (v,V)\in\mathbb{H}_{k_j}.$$ By Lemma \[lem:normequiv\], Cea’s lemma and the construction of the space $\mathbb{H}_{\infty}$, the solution $(u^\ast_\infty,U_\infty^\ast)\in\mathbb{H}_\infty$ of with $\sigma_\infty=\sigma^\ast$ satisfies $$\label{pf:medmin_03} \|(u^\ast_\infty-u^\ast_{k_j},U^\ast_\infty-U^\ast_{k_j})\|_{\mathbb{H}}\leq c \inf_{(v,V)\in \mathbb{H}_{k_j}}\|(u^\ast_\infty-v,U^\ast_\infty-V)\|_{\mathbb{H}} \to 0.$$ Taking the test function $(v,V)=(u_{k_j}-u_{k_j}^\ast,U_{k_j}-U_{k_j}^\ast)\in\mathbb{H}_{k_j}$ in the first line of and and then applying the Cauchy-Schwarz inequality lead to $$\begin{aligned} &\quad a(\sigma_{k_j}^\ast,(u_{k_j}-u_{k_j}^\ast,U_{k_j}-U_{k_j}^\ast),(u_{k_j}-u_{k_j}^\ast,U_{k_j}-U_{k_j}^\ast))\\ & =((\sigma_{k_j}^\ast-\sigma^\ast)\nabla(u_{k_j}-u^\ast_{\infty}),\nabla(u_{k_j}-u_{k_j}^\ast)) +((\sigma_{k_j}^\ast-\sigma^\ast)\nabla u_{\infty}^\ast,\nabla(u_{k_j}-u_{k_j}^\ast))\\ &\leq (\|(\sigma_{k_j}^\ast-\sigma^\ast)\nabla(u_{k_j}-u^\ast_{\infty})\|_{L^2(\Omega)} + \|(\sigma_{k_j}^\ast-\sigma^\ast)\nabla u_{\infty}^\ast\|_{L^2(\Omega)} ) \|\nabla(u_{k_j}-u_{k_j}^\ast)\|_{L^2(\Omega)}.\end{aligned}$$ In view of , pointwise convergence in and Lebesgue’s dominated convergence theorem, $$\|(\sigma_{k_j}^\ast-\sigma^\ast)\nabla(u_{k_j}-u^\ast_{\infty})\|_{L^2(\Omega)}\leq c_1 \|\nabla(u_{k_j}-u^\ast_{\infty})\|_{L^2(\Omega)} \to 0,\quad \|(\sigma_{k_j}^\ast-\sigma^\ast)\nabla u_{\infty}^\ast\|_{L^2(\Omega)} \to 0.$$ This and Lemma \[lem:normequiv\] imply $\|(u_{k_j}-u_{k_j}^\ast,U_{k_j}-U_{k_j}^\ast)\|_{\mathbb{H}} \to 0.$ Then, and the triangle inequality imply $$\label{pf:medmin_04} \|(u_{k_j}^\ast-u_{\infty}^\ast,U_{k_j}^\ast-U_{\infty}^\ast)\|_{\mathbb{H}} \to 0.$$ Meanwhile, repeating the argument of Theorem \[thm:tikh-MM\] gives $$\label{pf:medmin_05} \int_\Om W(\sigma_{k_j}^\ast){\,{\rm d}x}\to \int_\Om W(\sigma^\ast){\,{\rm d}x}.$$ Next, we apply a density argument. For any $\sigma_{\infty}\in \Atilde_\infty$, by the construction of the set $\Atilde_\infty$, there exists a sequence $\{\sigma_k\}_{k\geq 0}\subset \bigcup_{k\geq0}\Atilde_{k}$ such that $\sigma_k\to\sigma_\infty$ in $H^1(\Om)$. Repeating the preceding argument gives $\|U(\sigma_{k})-U^\delta\|^2\to\|U(\sigma_\infty)-U^\delta\|^2$ and $\int_{\Om} W(\sigma_k){\,{\rm d}x}\to \int_\Om W(\sigma_{\infty}){\,{\rm d}x}$.
Now , the weak lower semicontinuity of the $H^1(\Omega)$-norm, and the minimizing property of $\sigma_{k}^\ast$ to $\J_{\eps,k}$ over the set $\Atilde_k$ imply $$\begin{aligned} \label{eqn:lsc-med} \J_{\eps,\infty}(\sigma^\ast) &\leq \liminf_{j\to\infty}\J_{\eps,k_j}(\sigma^\ast_{k_j})\leq \limsup_{j\to\infty}\J_{\eps,k_j}(\sigma^\ast_{k_j})\nonumber\\ &\leq\limsup_{k\to\infty}\J_{\eps,k}(\sigma^\ast_{k})\leq \limsup_{k\to\infty}\J_{\eps,k}(\sigma_{k})=\J_{\eps,\infty}(\sigma_\infty)\quad\forall\sigma_\infty\in \Atilde_{\infty}. \end{aligned}$$ Since $\sigma^\ast\in\Atilde_{\infty}$, $\sigma_\infty^\ast:=\sigma^\ast$ is a minimizer of $\J_{\eps,\infty}$ over $\Atilde_{\infty}$. Further, we have the following auxiliary convergence. \[thm:conv\_medmin\] The sequence of discrete solutions $\{\sigma_{k}^{\ast},(u^{\ast}_{k},U_{k}^{\ast})\}_{k\geq0}$ to problem contains a subsequence $\{\sigma_{k_{j}}^{\ast},(u_{k_{j}}^{\ast},U_{k_{j}}^\ast)\}_{j\geq 0}$ convergent to a minimizer $(\sigma_{\infty}^{\ast},(u_{\infty}^{\ast},U_{\infty}^\ast))$ to problem –: $$\sigma_{k_{j}}^\ast\rightarrow\sigma_{\infty}^\ast\quad\mbox{ in}~H^1(\Omega), \quad \sigma_{k_{j}}^\ast\rightarrow\sigma_{\infty}^\ast\quad\mbox{a.e. in}~\Omega, \quad (u_{k_{j}}^\ast,U_{k_j}^*)\rightarrow (u_{\infty}^\ast,U_\infty^*)\quad\mbox{ in}~\mathbb{H}.$$ The convergence of $(u_{k_j}^*,U_{k_j}^*)$ was already proved in Theorem \[thm:existmedmin\]. Taking $\sigma_\infty=\sigma_\infty^\ast$ in gives $\lim_{j\to\infty}\J_{\eps,k_j}(\sigma_{k_j}^\ast) =\J_{\eps,\infty}(\sigma^\ast_\infty)$. By and , we have $\|\nabla\sigma_{k_j}^\ast \|^2_{L^2(\Om)}\to \|\nabla\sigma_{\infty}^\ast\|^2_{L^2(\Om)}$. Thus, the sequence $\{\sigma_{k_j}^\ast\}_{j\geq 0}$ converges to $\sigma_{\infty}^\ast$ in $H^1(\Om)$. Next we consider the convergence of the sequence $\{(p_k^\ast,P_k^\ast)\}_{k\geq 0}$. With a minimizer $(\sigma_\infty^\ast,(u^\ast_\infty, U^\ast_\infty))$ to problem , we define a limiting adjoint problem: find $(p^\ast_{\infty},P^\ast_{\infty})\in\mathbb{H}_\infty$ such that $$\label{eqn:medadj} a(\sigma_\infty^*,(p^*_\infty,P_\infty^*),(v,V)) = \langle U_\infty^*-U^\delta,V\rangle\quad \forall (v,V)\in\mathbb{H}_\infty.$$ By Lemma \[lem:normequiv\] and Lax-Milgram theorem, is uniquely solvable. We have the following convergence result for $(p_\infty^\ast, P_\infty^\ast)$. The proof is identical with [@JinXuZou:2016 Theorem 4.5], and hence omitted. \[thm:conv\_medadj\] Under the condition of Theorem \[thm:conv\_medmin\], the subsequence of adjoint solutions $\{(p_{k_{j}}^\ast,P_{k_{j}}^\ast)\}_{j\geq 0}$ generated by Algorithm \[alg\_afem\_eit\] converges to the solution $(p_{\infty}^\ast, P_{\infty}^\ast)$ of problem : $$\lim_{j\rightarrow\infty}\|(p_{k_j}^\ast-p_{\infty}^\ast,P_{k_j}^\ast-P_{\infty}^\ast)\|_{\mathbb{H}}=0.$$ Proof of Theorem \[thm:conv\_alg\] {#subsect:conv2} ---------------------------------- Theorem \[thm:conv\_alg\] follows directly by combining Theorems \[thm:conv\_medmin\]-\[thm:conv\_medadj\] in Section \[subsect:conv1\] and Theorems \[thm:vp\_mc\]-\[thm:gat\_mc\] below. The proof in this part relies on the marking condition . First, we show that the limit $(\sigma^\ast_{\infty},(u^\ast_{\infty},U^\ast_{\infty}),(p^\ast_{\infty},P^\ast_{\infty}))$ solves the variational equations in . 
\[thm:vp\_mc\] The solutions $(\sigma_\infty^*,u_\infty^*,U_\infty^*)$ and $(p_\infty^*,P_\infty^*)$ to problems - and satisfy $$\begin{aligned} a(\sigma^*_\infty,(u^\ast_\infty,U_\infty^\ast),(v,V)) &= \langle I, V\rangle\quad \forall (v,V)\in\mathbb{H},\\ a(\sigma^*_\infty,(p^\ast_\infty,P_\infty^\ast),(v,V)) &= \langle U_\infty^*-U^\delta, V\rangle\quad \forall (v,V)\in\mathbb{H}.\\ \end{aligned}$$ The proof is identical with [@JinXuZou:2016 Lemma 4.8], using Theorems \[thm:conv\_medmin\]-\[thm:conv\_medadj\], and hence we only give a brief sketch. By [@JinXuZou:2016 Lemma 3.5], for each $T\in\cT_k$ with its face $F$ (intersecting with $e_l$), there hold $$\begin{aligned} \eta_{k,1}^2(\sigma_{k}^\ast,u^\ast_{k},U^\ast_{k},T) &\leq c(\|\nabla u^\ast_{k}\|^2_{L^2(D_{T})}+h_{F}\|u^\ast_{k}-U^{\ast}_{k,l}\|^2_{L^2(F\cap e_{l})}),\\ \eta_{k,2}^2(\sigma_{k}^\ast,p^\ast_{k},P^\ast_{k},T)&\leq c(\|\nabla p^\ast_{k}\|^2_{L^2(D_{T})}+h_{F}\|p_k^\ast-P^{\ast}_{k,l}\|^2_{L^2(F\cap e_{l})}),\end{aligned}$$ where the notation $D_T$ is defined below. Then by the marking condition , [@JinXuZou:2016 Lemma 4.6] implies that for each convergent subsequence $\{\sigma^{\ast}_{k_j},(u^{\ast}_{k_j},U^{\ast}_{k_j}),(p^{\ast}_{k_j}, P^{\ast}_{k_j})\}_{j\geq0}$ from Theorems \[thm:conv\_medmin\] and \[thm:conv\_medadj\], there hold $$\begin{aligned} \lim_{j\rightarrow\infty}\max_{T\in\mathcal{M}_{k_j}^1}\eta_{k_{j},1}(\sigma_{k_j}^*,u_{k_j}^*,U_{k_j}^*,T)=0 \quad\mbox{and}\quad \lim_{j\rightarrow\infty}\max_{T\in\mathcal{M}_{k_j}^2}\eta_{k_{j},2}(\sigma_{k_j}^*,p_{k_j}^*,P_{k_j}^*,T)=0.\end{aligned}$$ Last, by [@JinXuZou:2016 Lemma 4.7] and Theorems \[thm:conv\_medmin\]-\[thm:conv\_medadj\], the argument of [@JinXuZou:2016 Lemma 4.8] completes the proof. \[rem:residual\_weakconv\] The argument of Theorem \[thm:vp\_mc\] dates back to [@Siebert:2011], and the main tools include the Galerkin orthogonality of the residual operator, the Lagrange and the Scott-Zhang interpolation operators [@Ciarlet:2002; @ScottZhang:1990], the marking condition and a density argument. Further, the error estimators $\eta_{k,1}(\sigma_k^*,u_k^*,U_k^*)$ and $\eta_{k,2}(\sigma_k^*,p_k^*,P_k^*)$ emerge in the proof and are then employed in the module *`ESTIMATE`* of Algorithm \[alg\_afem\_eit\]. Next we prove that the limit $(\sigma^\ast_{\infty},(u^\ast_{\infty},U^\ast_{\infty}), (p^\ast_{\infty}, P^\ast_{\infty}))$ satisfies the variational inequality in . The proof relies crucially on a constraint preserving interpolation operator. We denote by $D_{T}$ the union of elements in $\cT$ with a non-empty intersection with an element $T\in\cT$, and by $\omega_F$ the union of elements in $\cT$ sharing a common face/edge with $F\in\mathcal{F}_\cT$. Let $$\cT_{k}^{+}:=\bigcap_{l\geq k}\cT_{l},\quad \cT_{k}^{0}:=\cT_{k}\setminus\cT_{k}^{+},\quad \Omega_{k}^{+}:=\bigcup_{T\in\cT^{+}_{k}}D_{T},\quad \Omega_{k}^{0}:=\bigcup_{T\in\cT^{0}_{k}}D_{T}.$$ The set $\cT_{k}^{+}$ consists of all elements not refined after the $k$-th iteration, and all elements in $\cT_{k}^{0}$ are refined at least once after the $k$-th iteration. Clearly, $\cT_{l}^{+} \subset\cT_{k}^{+}$ for $l<k$. We also define a mesh-size function $h_{k}:\overline{\Omega}\rightarrow \mathbb{R}^{+}$ almost everywhere $$h_k(x) = \left\{\begin{array}{ll} h_T, & \quad x\in T^i,\\ h_F , & \quad x\in F^i, \end{array}\right.$$ where ${T}^i$ denotes the interior of an element $T\in\cT_{k}$, and ${F}^i$ the relative interior of an edge $F\in\mathcal{F}_{k}$. 
It has the following property [@Siebert:2011 Corollary 3.3]: $$\label{eqn:conv_zero_mesh} \lim_{k\rightarrow\infty}\|h_{k}\chi_{\Omega_k^0}\|_{L^\infty(\Omega)}=0.$$ The next result gives the limiting behaviour of the maximal error indicator $\eta_{k,3}$.

\[lem:estmarked\] Let $\{(\sigma^{\ast}_k,(u^{\ast}_k,U^{\ast}_k),(p^{\ast}_k,P^{\ast}_k))\}_{k\geq 0}$ be the sequence of discrete solutions generated by Algorithm \[alg\_afem\_eit\]. Then for each convergent subsequence $\{\sigma^{\ast}_{k_j},(u^{\ast}_{k_j},U^{\ast}_{k_j}),(p^{\ast}_{k_j},P^{\ast}_{k_j})\}_{j\geq0}$, there holds $$\lim_{j\rightarrow\infty}\max_{T\in\mathcal{M}_{k_j}^3}\eta_{k_{j},3}(\sigma_{k_j}^*,u_{k_j}^*,p_{k_j}^*,T)=0.$$

The inverse estimate and scaled trace theorem imply that for each $T\in\cT_k$ (with its face $F$) $$\begin{aligned} h_{T}^q\|\tfrac{\tilde\alpha}{2\varepsilon}W'(\sigma_k^*) - \nabla u^*_k\cdot\nabla p^*_k\|^q_{L^q(T)} & \leq ch_T^q\|\nabla u^*_k\cdot\nabla p^*_k\|_{L^q(T)}^q+ ch_T^q\|W'(\sigma_k^*)\|_{L^q(T)}^q \\ &\leq ch_T^{q}h_T^{d-dq}\|\nabla u^*_k\cdot\nabla p^*_k\|^q_{L^1(T)}+ch_T^q\|W'(\sigma_k^*)\|_{L^q(T)}^q,\\ \sum_{F\subset\partial T}h_F\|J_{F,2}(\sigma_k^*)\|_{L^q(F)}^q&\leq c\sum_{F\subset\partial T}h_Fh_{F}^{-1}\|\nabla\sigma_{k}^*\|^q_{L^q(\omega_F)}.\end{aligned}$$ With the choice $q=d/(d-1)$, combining these two estimates gives $$\begin{aligned} \label{eqn:stab-indicator} \eta_{k,3}^q(\sigma_{k}^\ast,u^\ast_{k},p^\ast_{k},T)&\leq c(\|\nabla u^*_k\cdot\nabla p^*_k\|^q_{L^1(T)}+h_T^q\|W'(\sigma_k^*)\|_{L^q(T)}^q+\|\nabla\sigma_{k}^\ast\|^q_{L^q(D_{T})}),\end{aligned}$$ where $c$ depends on $\widetilde{\alpha}$ and $\eps$ in $\F_\eps$. Next, for the subsequence $\{\sigma^{\ast}_{k_j},(u^{\ast}_{k_j},U^{\ast}_{k_j}),(p^{\ast}_{k_j},P^{\ast}_{k_j})\}_{j\geq0}$, let $\widetilde{T}_{j}^3 \in \mathcal{M}_{k_{j}}^3$ be the element with the largest error indicator $\eta_{k_j,3}(\sigma_{k_j}^*,u_{k_j}^*,p_{k_j}^*,T)$. Since $D_{\widetilde{T}_j^3}\subset\Omega_{k_j}^0$, implies $$\label{eqn:estmarked} |D_{\widetilde{T}^3_j}|\leq c\|h_{k_{j}}\|^d_{L^{\infty}(\Omega_{k_j}^0)}\rightarrow 0 \quad\mbox{as}~j\rightarrow\infty.$$ By , the Cauchy-Schwarz inequality and the triangle inequality, there holds $$\begin{aligned} \eta_{k_j,3}^q(\sigma^{\ast}_{k_j},&u^\ast_{k_j},p^\ast_{k_j},\widetilde{T}_{j}^3) \leq c(\|\nabla u^\ast_{k_j}\|_{L^2(\widetilde{T}_{j}^3)}^q\|\nabla p^\ast_{k_j}\|_{L^2(\widetilde{T}_{j}^3)}^q +h_{\widetilde{T}_{j}^3}^q\|W'(\sigma_{k_j}^*)\|_{L^q(\widetilde{T}_{j}^3)}^q+\|\nabla\sigma_{k_j}^\ast\|^q_{L^q(D_{\widetilde{T}_{j}^3})})\\ &\leq c\big((\|\nabla (u^\ast_{k_j}-u^\ast_{\infty})\|_{L^2(\Om)}^q+\|\nabla u^\ast_{\infty}\|_{L^2(\widetilde{T}_{j}^3)}^q)(\|\nabla (p^\ast_{k_j}-p^\ast_{\infty})\|_{L^2(\Om)}^q+\|\nabla p^\ast_{\infty}\|_{L^2(\widetilde{T}_{j}^3)}^q)\\ &\quad+h_{\widetilde{T}_j^3}^q(\|W'(\sigma_{k_j}^\ast)-W'(\sigma_\infty^\ast)\|^q_{L^q(\Om)}+\|W'(\sigma_\infty^\ast)\|^q_{L^q(\widetilde{T}_{j}^3)})\\ &\quad +(\|\nabla(\sigma_{k_j}^\ast-\sigma_{\infty}^\ast)\|^q_{L^q(\Om)} +\|\nabla\sigma_{\infty}^\ast\|^q_{L^q(D_{\widetilde{T}_{j}^3})})\big).\end{aligned}$$ By Theorems \[thm:conv\_medmin\] and \[thm:conv\_medadj\], Lebesgue’s dominated convergence theorem, the choice $q=d/(d-1)\leq 2$ and Hölder inequality, we obtain $\|W'(\sigma_{k_j}^\ast)-W'(\sigma_\infty^\ast)\|^q_{L^q(\Om)}\to 0$ and $\|\nabla(\sigma_{k_j}^\ast- \sigma_{\infty}^\ast)\|^q_{L^q(\Om)}\to 0$.
Then the absolute continuity of the norm $\|\cdot\|_{L^q(\Omega)}$ with respect to Lebesgue measure and complete the proof.

Due to a lack of Galerkin orthogonality for variational inequalities, we employ a local $L^r$-stable interpolation operator of Clément/Chen-Nochetto type. Let $ \mathcal{N}_k$ be the set of all interior nodes of $\cT_k$ and $\{\phi_{x}\}_{x\in\mathcal{N}_k} $ be the nodal basis functions in $V_k$. For each $x\in\mathcal{N}_k$, the support of $\phi_x$ is denoted by $\omega_x$, i.e., the union of all elements in $\cT_k$ with a non-empty intersection with $x$. Then we define $\Pi_k:L^1(\Om)\to V_k$ by $$\label{eqn:cn_int_def} \Pi_k v := \sum_{x\in\mathcal{N}_k} \frac{1}{|\omega_x|}\int_{\omega_x}v {\,{\rm d}x}\phi_x.$$ Clearly, $\Pi_k v\in \Atilde_k$ if $c_0 \leq v \leq c_1$ for a.e. $x\in\Om$. The definition is adapted from [@ChenNochetto:2000] (for elliptic obstacle problems) by replacing the maximal ball $\Delta_x\subset\omega_x$ centered at an interior node $x$ by $\omega_x$. $\Pi_k v$ satisfies the following properties; see Appendix \[app:int-oper\] for a proof.

\[lem:cn-int-oper\] For any $v\in W^{1,r}(\Omega)$, there hold for all $r\in[1, +\infty]$, any $T\in \cT_k$ and any $F\subset \partial T$, $$\begin{aligned} \|\Pi_k v\|_{L^r(T)} \leq c \|v\|_{L^r(D_T)}, \quad \|\nabla\Pi_k v\|_{L^r(T)} \leq c \|\nabla v\|_{L^r(D_T)},\\ \| v - \Pi_k v \|_{L^r(T)} \leq c h_T \| \nabla v \|_{L^r(D_T)}, \quad \| v - \Pi_k v\|_{L^r(F)} \leq c h_F^{1-1/r} \|\nabla v\|_{L^r(D_T)}.\end{aligned}$$

Last, we show that the limit $(\sigma^\ast_{\infty},(u^\ast_{\infty},U^\ast_{\infty}),(p^\ast_{\infty},P^\ast_{\infty}))$ satisfies the variational inequality in .

\[thm:gat\_mc\] The solutions $(\sigma_\infty^*,u_\infty^*,U_\infty^*)$ and $(p_\infty^*,P_\infty^*)$ to problems - and satisfy $$\widetilde{\alpha}\eps(\nabla\sigma^\ast_\infty,\nabla(\mu-\sigma^\ast_\infty)) + \tfrac{\widetilde{\alpha}}{2\eps}(W'(\sigma^\ast_\infty),\mu-\sigma^\ast_\infty) - (\nabla u_\infty^\ast,\nabla p_{\infty}^\ast(\mu-\sigma_{\infty}^\ast))\geq0\quad\forall\mu\in\Atilde.$$

The proof is lengthy, and we break it into five steps.

[**Step i.**]{} Derive a preliminary variational inequality. We relabel the subsequence $\{\sigma_{k_j}^\ast,(u_{k_j}^\ast,U_{k_j}^\ast),(p_{k_j}^\ast,P_{k_j}^\ast)\}_{j\geq0}$ in Theorems \[thm:conv\_medmin\] and \[thm:conv\_medadj\] as $\{\sigma_{k}^\ast,(u_{k}^\ast,U_{k}^\ast),(p_{k}^\ast,P_{k}^\ast)\}_{k\geq0}$. Let $I_k$ be the Lagrange interpolation operator on $V_k$, and let $\alpha' = \widetilde\alpha\varepsilon $ and $\alpha''=\frac{\widetilde\alpha}{2\varepsilon}$. For any $\mu\in\widetilde{\mathcal{A}}\cap C^\infty(\overline{\Omega})$, we have $I_{k}\mu\in\widetilde{\mathcal{A}}_{k}$, and we let $\nu=\mu-I_k\mu$.
Direct computation gives $$\begin{aligned} \alpha'&(\nabla\sigma^\ast_k,\nabla(\mu-\sigma^\ast_k)) +\alpha''(W'(\sigma^\ast_k),\mu-\sigma^\ast_k) -((\mu-\sigma_{k}^\ast)\nabla u_k^\ast,\nabla p_{k}^\ast )\nonumber\\ &=\alpha'(\nabla\sigma^\ast_k,\nabla(\mu-I_k\mu)) +\alpha''(W'(\sigma^\ast_k),\mu-I_k\mu) -((\mu-I_{k}\mu)\nabla u_k^\ast,\nabla p_{k}^\ast)\nonumber\\ &\qquad+\alpha'(\nabla\sigma^\ast_k,\nabla(I_k\mu-\sigma_k^\ast)) +\alpha''(W'(\sigma^\ast_k),I_k\mu-\sigma^\ast_k) -((I_{k}\mu-\sigma_k^\ast)\nabla u_k^\ast,\nabla p_{k}^\ast)\nonumber\\ & =\alpha'(\nabla\sigma^\ast_k,\nabla(\nu-\Pi_k\nu)) +\alpha''(W'(\sigma^\ast_k),\nu-\Pi_k\nu) -((\nu-\Pi_{k}\nu)\nabla u_k^\ast,\nabla p_{k}^\ast) \nonumber\\ &\qquad+\alpha'(\nabla\sigma^\ast_k,\nabla\Pi_{k}\nu) +\alpha''(W'(\sigma^\ast_k),\Pi_{k}\nu) -(\Pi_k\nu\nabla u_k^\ast,\nabla p_{k}^\ast)\nonumber\\ &\qquad+\alpha'(\nabla\sigma^\ast_k,\nabla(I_k\mu-\sigma_k^\ast)) +\alpha''(W'(\sigma^\ast_k),I_k\mu-\sigma^\ast_k) -((I_{k}\mu-\sigma_k^\ast)\nabla u_k^\ast,\nabla p_{k}^\ast)\nonumber\\ &\geq\left[\alpha'(\nabla\sigma^\ast_k,\nabla(\nu-\Pi_k\nu)) +\alpha''(W'(\sigma^\ast_k),\nu-\Pi_k\nu) -((\nu-\Pi_{k}\nu)\nabla u_k^\ast,\nabla p_{k}^\ast )\right]\nonumber\\ &\quad+\left[\alpha'(\nabla\sigma^\ast_k,\nabla\Pi_{k}\nu) +\alpha''(W'(\sigma^\ast_k),\Pi_{k}\nu) -(\Pi_k\nu\nabla u_k^\ast,\nabla p_{k}^\ast)\right]:={\rm I}+ {\rm II},\label{eqn:gat_mc_01}\end{aligned}$$ where the last inequality is due to the variational inequality in with $\mu_k=I_k\mu$. **Step ii.** Bound the ${\rm I}$. By elementwise integration by parts, Hölder inequality, the definition of the estimator $\eta_{k,3}$ and Lemma \[lem:cn-int-oper\] with $r=q'$ (with $q'$ being the conjugate exponent of $q$), $$\begin{aligned} |{\rm I}| &= \left| \sum_{T \in \cT_k} \int_T R_{T,2}(\sigma_{k}^\ast, u_k^\ast, p_k^\ast)(\nu-\Pi_k\nu){\,{\rm d}x}+ \sum_{F\in\mathcal{F}_k}\int_F J_{F,2}(\sigma_k^\ast)(\nu-\Pi_k\nu) \mathrm{d}s \right| \\ &\leq \sum_{T \in \cT_k}\Big(\|R_{T,2}(\sigma_{k}^\ast, u_k^\ast, p_k^\ast)\|_{L^q(T)}\|\nu-\Pi_k\nu\|_{L^{q'}(T)}+\sum_{F\subset\partial T}\| J_{F,2}(\sigma_k^\ast)\|_{L^{q}(F)}\|\nu-\Pi_k\nu\|_{L^{q'}(F)}\Big)\\ &\leq c\sum_{T \in \cT_k}\Big(h_T\|R_{T,2}(\sigma_{k}^\ast, u_k^\ast, p_k^\ast)\|_{L^q(T)}+\sum_{F\subset\partial T}h_F^{1/q}\| J_{F,2}(\sigma_k^\ast)\|_{L^{q}(F)}\Big)\|\nabla\nu\|_{L^{q'}(D_T)}\\ &\leq c\sum_{T\in\cT_{k}}\eta_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\|\nabla\nu\|_{L^{q'}(D_T)}.\end{aligned}$$ Thus, for any $k>l$, by (discrete) Hölder’s inequality and the finite overlapping property of the patches $D_T$, due to uniform shape regularity of the meshes $\mathcal{T}_k\in\mathbb{T}$, there holds $$\begin{aligned} |{\rm I}| &\leq c\big(\sum_{T\in\cT_{k}\setminus\cT_l^+}\eta_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\|\nabla\nu\|_{L^{q'}(D_T)} +\sum_{T\in\cT_l^+}\eta_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\|\nabla\nu\|_{L^{q'}(D_T)}\big)\\ &\leq c\Big(\big(\sum_{T\in\cT_{k}\setminus\cT^+_l}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q}\|\nabla(\mu-I_k\mu)\|_{L^{q'}(\Omega_l^0)}\\ &\qquad +\big(\sum_{T\in\cT_{l}^+}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q}\|\nabla(\mu-I_k\mu)\|_{L^{q'}(\Omega_l^+)}\Big).\end{aligned}$$ Since $W'(s)\in C^1[c_0,c_1]$, by the pointwise convergence of $\{\sigma_{k}^\ast\}_{k\geq0}$ in Theorem \[thm:conv\_medmin\] and Lebesgue’s dominated convergence theorem, we deduce $$\label{eqn:gat_mc_aux} W'(\sigma_{k}^\ast)\to W'(\sigma_\infty^\ast)\quad\mbox{in}~L^2(\Om).$$ Since 
$q=d/{(d-1)}\leq 2$, the sequence $\{W'(\sigma_k^\ast)\}_{k\geq0}$ is uniformly bounded in $L^q(\Omega)$. By Theorems \[thm:conv\_medmin\] and \[thm:conv\_medadj\], the sequences $\{\sigma_{k}^\ast\}_{k\geq0}$, $\{u_{k}^\ast\}_{k\geq0}$ and $\{p_{k}^\ast\}_{k\geq0}$ are uniformly bounded in $H^1(\Omega)$. Thus, and , and Hölder inequality give $$\begin{aligned} &\sum_{T\in\cT_{k}\setminus\cT^+_l}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\nonumber\\ \leq& c\Big(\|\nabla u_k^\ast\cdot\nabla p_k^\ast\|_{L^1(\Omega)}^{q-1}\sum_{T\in\cT_{k}\setminus\cT^+_l}\|\nabla u_k^\ast\cdot\nabla p_k^\ast\|_{L^1(T)} +\|h_{l}\|^{q}_{L^\infty(\Omega^{0}_l)}\|W'(\sigma_k^\ast)\|^q_{L^q(\Om)} +\|\nabla\sigma_{k}^\ast\|^q_{L^q(\Omega)}\Big)\nonumber\\ \leq & c\Big(\|\nabla u_{k}^\ast\|^q_{L^2(\Omega)}\|\nabla p_k^\ast\|^{q}_{L^2(\Omega)}+\|h_{l}\chi_{\Omega_l^0}\|^{q}_{L^\infty(\Omega)}\|W'(\sigma_k^\ast)\|^q_{L^q(\Om)} +\|\nabla\sigma_{k}^\ast\|^q_{L^2(\Omega)}\Big)\leq c.\label{eqn:bdd-eta}\end{aligned}$$ Then by the error estimate of $I_k$ [@Ciarlet:2002], $$|{\rm I}|\leq c\|h_{l}\chi_{\Omega^0_l}\|_{L^\infty(\Omega)}\|\mu\|_{W^{2,q'}(\Om)}+c\big(\sum_{T\in\cT_{l}^+}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q}\|\mu\|_{W^{2,q'}(\Om)}.$$ By , $c\|h_{l}\chi_{\Omega^0_l}\|_{L^\infty(\Omega)}\|\mu\|_{W^{2,q'}(\Om)}\to 0$ as $l\to\infty$. Since $\cT_l^+\subset\cT_{k}$ for $k>l$, implies $$\begin{aligned} \big(\sum_{T\in\cT^+_l}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q} &\leq{|\cT_{l}^+|}^{1/q}\max_{T\in\cT_l^+}\eta_{k,3}(\sigma_{k}^\ast,u_{k}^\ast,p_{k}^\ast,T)\leq{|\cT^+_l|}^{1/q}\max_{T\in\mathcal{M}_k^3}\eta_{k,3}(\sigma_{k}^\ast,u_{k}^\ast,p_{k}^\ast,T).\end{aligned}$$ By Lemma \[lem:estmarked\], for any small $\varepsilon>0$, we can choose $k_1>l_1$ for some large fixed $l_1$ such that whenever $k>k_{1}$, $$c({\sum_{T\in\cT_{l}^+}\eta^q_{k,3}}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T))^{1/q}\|\mu\|_{W^{2,q'}(\Omega)}<\varepsilon.$$ Consequently, $$\label{eqn:gat_mc_02} {\rm I} \rightarrow 0\quad\forall\mu\in\widetilde{\mathcal{A}}\cap C^\infty(\overline{\Om}).$$ **Step iii.** Bound the term ${\rm II}$. 
For the term $\rm II$, elementwise integration and Hölder inequality yield $$\begin{aligned} |{\rm II}|&= \left| \sum_{T \in \cT_k} \int_T R_{T,2}(\sigma_{k}^\ast, u_k^\ast, p_k^\ast)\Pi_k\nu{\,{\rm d}x}+ \sum_{F\in\mathcal{F}_k}\int_F J_{F,2}(\sigma_k^\ast)\Pi_k\nu \mathrm{d}s \right|\\ &\leq \sum_{T \in \cT_k}\Big(\|R_{T,2}(\sigma_{k}^\ast, u_k^\ast, p_k^\ast)\|_{L^q(T)}\|\Pi_k\nu\|_{L^{q'}(T)}+\sum_{F\subset\partial T}\| J_{F,2}(\sigma_k^\ast)\|_{L^{q}(F)}\|\Pi_k\nu\|_{L^{q'}(F)}\Big) \end{aligned}$$ By the scaled trace theorem, local inverse estimate, $L^{q'}$-stability of $\Pi_k$ in Lemma \[lem:cn-int-oper\], local quasi-uniformity and interpolation error estimate for $I_k$ [@Ciarlet:2002], we deduce that for $k>l$ $$\begin{aligned} |{\rm II}|& \leq c\sum_{T \in \cT_k}\Big(h_T\|R_{T,2}(\sigma_{k}^\ast, u_k^\ast, p_k^\ast)\|_{L^q(T)}h_T^{-1}\|\Pi_k\nu\|_{L^{q'}(T)}+\sum_{F\subset\partial T}h_F^{1/q}\| J_{F,2}(\sigma_k^\ast)\|_{L^{q}(F)}h_F^{-1/q-1/q'}\|\Pi_k\nu\|_{L^{q'}(T)}\Big) \\ &\leq c\sum_{T \in \cT_k}\Big(h_T\|R_{T,2}(\sigma_{k}^\ast, u_k^\ast, p_k^\ast)\|_{L^q(T)}+\sum_{F\subset\partial T}h_F^{1/q}\| J_{F,2}(\sigma_k^\ast)\|_{L^{q}(F)}\Big)h_T^{-1}\|\nu\|_{L^{q'}(D_T)}\\ &\leq c\sum_{T\in\cT_{k}}\eta_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)h_T^{-1}\|\mu-I_k\mu\|_{L^{q'}(D_T)}\\ &= c\big(\sum_{T\in\cT_{k}\setminus\cT_l^+}\eta_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)h_T\|\mu\|_{W^{2,q'}(D_T)}+\sum_{T\in\cT_l^+}\eta_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)h_T\|\mu\|_{W^{2,q'}(D_T)}\big)\\ &\leq c\|h_{l}\chi_{\Omega_l^0}\|_{L^\infty(\Omega)}\big(\sum_{T\in\cT_k\setminus\cT_{l}^+}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q}\|\mu\|_{W^{2,q'}(\Om)} +c\big(\sum_{T\in\cT_{l}^+}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q}\|\mu\|_{W^{2,q'}(\Om)}.\end{aligned}$$ Since $\big(\sum_{T\in\cT_k\setminus\cT_{l}^+}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q}\leq c$, cf. , there holds $$|{\rm II}|\leq c\|h_{l}\chi_{\Omega_l^0}\|_{L^\infty(\Omega)}\|\mu\|_{W^{2,q'}(\Om)}+c\big(\sum_{T\in\cT_{l}^+}\eta^q_{k,3}(\sigma_k^\ast,u_k^\ast,p_k^\ast,T)\big)^{1/q}\|\mu\|_{W^{2,q'}(\Om)}.$$ Now by repeating the argument for the term $\rm I$, we obtain $$\label{eqn:gat_mc_aux2} {\rm II} \rightarrow 0\quad\forall\mu\in\widetilde{\mathcal{A}}\cap C^\infty(\overline{\Om}).$$ **Step iv.** Take limit in preliminary variational inequality. 
Using and the $H^1(\Omega)$-convergence of $\{\sigma_{k}^\ast\}_{k\geq0}$ in Theorem \[thm:conv\_medmin\], we have for each $\mu\in\widetilde{\mathcal{A}}\cap C^\infty(\overline{\Om})$ $$\label{eqn:gat_mc_03} \alpha'(\nabla\sigma^\ast_k,\nabla(\mu-\sigma^\ast_k)) +\alpha''(W'(\sigma_{k}^\ast),\mu-\sigma^\ast_k) \rightarrow \alpha'(\nabla\sigma^\ast_\infty,\nabla(\mu-\sigma^\ast_\infty)) +\alpha''(W'(\sigma_{\infty}^\ast),\mu-\sigma^\ast_\infty).$$ Further, the uniform boundedness on $\{u_k^\ast\}_{k\geq0}$ in $H^1(\Omega)$ and the convergence of $\{p^\ast_{k}\}_{k\geq0}$ to $p_\infty^\ast$ in $H^1(\Omega)$ in Theorem \[thm:conv\_medadj\] yield $$|(\mu\nabla u_k^\ast,\nabla (p_{k}^\ast-p_\infty^\ast))|\leq c\|\nabla(p_{k}^\ast-p_\infty^\ast)\|_{L^2(\Omega)}\rightarrow 0.$$ This and Theorem \[thm:conv\_medmin\] imply $$\label{eqn:gat_mc_04} (\mu\nabla u_k^\ast,\nabla p_{k}^\ast )=(\mu \nabla u_k^\ast,\nabla (p_{k}^\ast-p_\infty^\ast))+ (\mu\nabla u_k^\ast,\nabla p_\infty^\ast) \rightarrow (\mu\nabla u^\ast_\infty,\nabla p_\infty^\ast )\quad\forall\mu\in\widetilde{\mathcal{A}}\cap C^\infty(\overline{\Om}).$$ In the splitting $$\begin{aligned} (\sigma_k^\ast\nabla u_{k}^\ast, \nabla p_k^\ast)-(\sigma_\infty^\ast\nabla u_{\infty}^\ast, \nabla p_\infty^\ast )&= (\sigma_k^\ast\nabla u_{k}^\ast, \nabla(p_k^\ast-p_\infty^\ast)) +((\sigma_k^\ast-\sigma_{\infty}^\ast)\nabla u_{k}^\ast, \nabla p_\infty^\ast)\\ &\quad+(\sigma_{\infty}^\ast\nabla(u_{k}^\ast-u_\infty^\ast),\nabla p_\infty^\ast),\end{aligned}$$ the arguments for directly yields $$|(\sigma_k^\ast\nabla u_{k}^\ast, \nabla(p_k^\ast-p_\infty^\ast) )|\rightarrow 0\quad\mbox{and}\quad |(\sigma_{\infty}^\ast\nabla(u_{k}^\ast-u_\infty^\ast),\nabla p_\infty^\ast)|\rightarrow 0.$$ The boundedness on $\{u_k^\ast\}_{k\geq0}$ in $H^1(\Omega)$, pointwise convergence of $\{\sigma_{k}^\ast\}_{k\geq0}$ of Theorem \[thm:conv\_medmin\] and Lebesgue’s dominated convergence theorem imply $$|((\sigma_k^\ast-\sigma_{\infty}^\ast)\nabla u_{k}^\ast, \nabla p_\infty^\ast)|\leq c\|(\sigma_k^\ast-\sigma_{\infty}^\ast)\nabla p_\infty^\ast \|_{L^2(\Omega)}\rightarrow 0.$$ Hence, there holds $$\label{eqn:gat_mc_05} (\sigma_k^\ast\nabla u_{k}^\ast, \nabla p_k^\ast) \rightarrow (\sigma_\infty^\ast\nabla u_{\infty}^\ast, \nabla p_\infty^\ast).$$ Now by passing both sides of to the limit and combining the estimates -, we obtain $$\alpha'(\nabla\sigma^\ast_\infty,\nabla(\mu-\sigma^\ast_\infty)) + \alpha''(W'(\sigma^\ast_\infty),\mu-\sigma^\ast_\infty) -(\nabla u^\ast_\infty,\nabla p_\infty^\ast(\mu-\sigma_\infty^\ast))_{L^2(\Omega)}\geq 0\quad\forall\mu\in\widetilde{\mathcal{A}}\cap C^\infty(\overline{\Om}).$$ **Step v.** Density argument. By the density of $C^\infty(\overline{\Omega})$ in $H^1(\Omega)$ and the construction via a standard mollifier [@Evans:2015], for any $\mu\in\widetilde{\mathcal{A}}$ there exists a sequence $\{\mu_n\} \subset\widetilde{\mathcal{A}}\cap C^\infty(\overline{\Om})$ such that $\|\mu_n-\mu\|_{H^1(\Omega)}\rightarrow 0$ as $n\rightarrow\infty$. Thus, $(\nabla\sigma^\ast_\infty,\nabla \mu_n )\rightarrow(\nabla\sigma^\ast_\infty,\nabla\mu )$, $(W'(\sigma_\infty^\ast),\mu_n)\to (W'(\sigma_\infty^\ast),\mu)$, and $(\mu_n\nabla u^\ast_\infty,\nabla p_\infty^\ast )\rightarrow(\mu\nabla u^\ast_\infty,\nabla p_\infty^\ast)$, after possibly passing to a subsequence. The desired result follows from the preceding two estimates. 
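As an implementation note, the quasi-interpolation operator \[eqn:cn\_int\_def\] used in the preceding steps is straightforward to realize for piecewise linear elements. The following is a minimal sketch under the assumption that the mesh is described by hypothetical arrays `points` and `triangles` and that $v$ is itself a piecewise linear function given by its nodal values; it is not part of Algorithm \[alg\_afem\_eit\].

```python
import numpy as np

def quasi_interpolant(points, triangles, v, interior_nodes):
    """Nodal coefficients of Pi_k v: each interior node x receives the mean
    of v over its patch omega_x (the union of elements containing x)."""
    p = points[triangles]                                # (num_elements, 3, 2)
    d1, d2 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]
    area = 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    # integral of a P1 function over T equals |T| times the mean of its vertex values
    elem_int = area * v[triangles].mean(axis=1)

    patch_area = np.zeros(len(points))
    patch_int = np.zeros(len(points))
    for e, tri in enumerate(triangles):
        for node in tri:
            patch_area[node] += area[e]
            patch_int[node] += elem_int[e]

    coeff = np.zeros(len(points))
    idx = np.asarray(interior_nodes)
    coeff[idx] = patch_int[idx] / patch_area[idx]        # patch averages of v
    return coeff
```

Each interior coefficient is a convex average of values of $v$ and therefore stays within $[c_0,c_1]$ whenever $v$ does, which is the mechanism behind the constraint-preservation property noted above.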
\[rmk:gat\_mc\] The computable quantity $\eta_{k,3}(\sigma_k^*,u_k^*,p_k^*,T)$ emerges naturally from the proof, i.e., the upper bounds on ${\rm I}$ and ${\rm II}$, which motivates its use as the *a posteriori* error estimator in Algorithm \[alg\_afem\_eit\].

Acknowledgements {#acknowledgements .unnumbered}
================

The authors are grateful to an anonymous referee and the board member for the constructive comments, which have significantly improved the presentation of the paper. The work of Y. Xu was partially supported by National Natural Science Foundation of China (11201307), Ministry of Education of China through Specialized Research Fund for the Doctoral Program of Higher Education (20123127120001) and Natural Science Foundation of Shanghai (17ZR1420800).

The solution of the variational inequality {#app:newton}
==========================================

Now we describe an iterative method for minimizing the energy functional $$\frac{\tilde\alpha \varepsilon }{2}\|\nabla \sigma\|_{L^2(\Omega)}^2 + \frac{\tilde\alpha}{2\varepsilon}\int_\Omega W(\sigma){\rm d}x + \frac12\|U(\sigma)-U^\delta\|^2.$$ Let $p(z)=(z-c_0)(z-c_1)$. Then one linearized approximation $p_L(z,z_k)$ reads (with $\delta z=z-z_k$) $$\begin{aligned} p_L(z,z_k) &= p(z_k) + p'(z_k)(z-z_k)\\ &= (z_k^2-(c_0+c_1)z_k+c_0c_1) + (2z_k-c_0-c_1)\delta z.\end{aligned}$$ Upon substituting the approximation $p_L(z,z_k)$ for $p(z)$ and linearizing the forward map $U(\sigma)$, we obtain the following surrogate energy functional (with $\delta\sigma=\sigma-\sigma_k$ being the increment and $\delta U= U^\delta-U(\sigma_k)$) $$\begin{aligned} \label{eqn:surrogate} \tfrac{\tilde\alpha \varepsilon }{2}\|\nabla(\sigma_k+\delta\sigma)\|_{L^2(\Omega)}^2 + \tfrac{\tilde\alpha}{2\varepsilon}\|p(\sigma_k)+p'(\sigma_k)\delta\sigma\|_{L^2(\Omega)}^2 + \tfrac12\|U'(\sigma_k)\delta\sigma-\delta U\|^2.\end{aligned}$$ The treatment of the double-well potential term $\int_\Omega W(\sigma){\rm d}x$ is in the spirit of the classical majorization-minimization algorithm in the following sense (see [@ZhangChenXu:2018] for a detailed derivation) $$\begin{aligned} \int_\Omega W(\sigma_k) {\rm d}x &= \int_\Omega p_L(\sigma_k,\sigma_k)^2{\rm d}x, \quad \nabla \int_\Omega W(\sigma_k) {\rm d}x = \nabla \int_\Omega p_L(\sigma_k,\sigma_k)^2{\rm d}x, \\ \quad\mbox{and} &\quad \nabla^2 \int_\Omega W(\sigma_k) {\rm d}x \leq \nabla^2 \int_\Omega p_L(\sigma_k,\sigma_k)^2{\rm d}x.\end{aligned}$$ This algorithm is known to have excellent numerical stability. Upon ignoring the box constraint on the conductivity $\sigma$, problem \[eqn:surrogate\] is to find $\delta\sigma \in H^1(\Omega)$ such that $$\begin{aligned} (U'(\sigma_k)^*U'(\sigma_k)\delta\sigma,\phi) + \tilde\alpha \varepsilon& (\nabla \delta\sigma,\nabla \phi) + \tfrac{\tilde\alpha}{\varepsilon} (p'(\sigma_k)^2\delta\sigma,\phi) \\ & = (U'(\sigma_k)^*\delta U, \phi)-\tfrac{\tilde\alpha}{\varepsilon}(p(\sigma_k)p'(\sigma_k),\phi)-\tilde\alpha\varepsilon(\nabla\sigma_k,\nabla\phi),\quad \forall \phi\in H^1(\Omega).\end{aligned}$$ This equation can be solved by an iterative method for the update $\delta\sigma$ (with the box constraint treated by a projection step). Note that $U'(\sigma_k)$ and $U'(\sigma_k)^*$ can be implemented in a matrix-free manner using the standard adjoint technique.
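To make the above concrete, the following is a minimal sketch of one such update step, assuming matrix-free callables for the linearized forward map and its adjoint together with assembled finite element stiffness and mass matrices; the names (`jac_mv`, `jac_rmv`, `K`, `M`) are placeholders rather than the interface of the code used for the reported experiments, and the weighted $L^2(\Omega)$ terms are approximated with the mass matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def mm_step(sigma_k, U_k, U_delta, jac_mv, jac_rmv, K, M, alpha, eps, c0, c1):
    """One linearized (majorization-minimization type) update for the
    phase-field regularized EIT functional.

    sigma_k : current nodal conductivity values
    U_k     : forward electrode data U(sigma_k); U_delta: measured data
    jac_mv  : x -> U'(sigma_k) x      (matrix-free Jacobian action)
    jac_rmv : y -> U'(sigma_k)^* y    (adjoint action)
    K, M    : FE stiffness and (lumped) mass matrices, scipy sparse
    """
    p = (sigma_k - c0) * (sigma_k - c1)      # double-well factor p(sigma_k)
    dp = 2.0 * sigma_k - c0 - c1             # p'(sigma_k)
    dU = U_delta - U_k

    def apply_A(x):
        # Gauss-Newton term + gradient penalty + linearized double-well term
        return (jac_rmv(jac_mv(x))
                + alpha * eps * (K @ x)
                + (alpha / eps) * (M @ (dp ** 2 * x)))

    n = sigma_k.size
    A = LinearOperator((n, n), matvec=apply_A, dtype=float)
    b = jac_rmv(dU) - (alpha / eps) * (M @ (p * dp)) - alpha * eps * (K @ sigma_k)
    delta, _ = cg(A, b)
    # box constraint handled by a simple projection step
    return np.clip(sigma_k + delta, c0, c1)
```

The projection at the end mirrors the treatment of the box constraint described above.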
In our experiment, we employ the conjugate gradient method to solve the resulting linear systems, preconditioned by the sparse matrix corresponding to $\tilde\alpha \varepsilon (\nabla \delta\sigma,\nabla \phi) + \frac{\tilde\alpha}{\varepsilon}(p'(\sigma_k)^2\delta\sigma,\phi)$. Proof of Lemma \[lem:cn-int-oper\] {#app:int-oper} ================================== The proof follows that in [@ChenNochetto:2000; @HildNicaise:2005]. By Hölder inequality and $h_T^d \leq |\omega_x|$ for each node $x\in T$, $$\left|\frac{1}{|\omega_x|}\int_{\omega_x}v{\,{\rm d}x}\right| \leq |\omega_x|^{-1/r}\|v\|_{L^r(\omega_x)}\leq h_T^{-d/r}\|v\|_{L^r(\omega_x)}.$$ The desired $L^r$-stability follows from the estimate $\|\phi_x\|_{L^r(T)}\leq c h_T^{d/r}$, by the local quasi-uniformity of the mesh. In view of the definition , $\Pi_k \zeta = \zeta$ for any $\zeta\in\mathbb{R}$. By local inverse estimate, the $L^r$-stability of $\Pi_k$, standard interpolation error estimate [@Ciarlet:2002] and local quasi-uniformity, $$\begin{aligned} \|\nabla \Pi_k v\|_{L^r(T)}&=\inf_{\zeta\in \mathbb{R}} \|\nabla \Pi_k (v - \zeta)\|_{L^r(T)} \leq c h_T^{-1}\inf_{\zeta\in \mathbb{R}} \| \Pi_k (v - \zeta) \|_{L^r(T)} \nonumber\\ &\leq c h_T^{-1}\inf_{\zeta\in \mathbb{R}} \| v - \zeta \|_{L^r(D_T)} \leq c h_T^{-1} \| v - \frac{1}{|D_T|}\int_{D_T} v {\,{\rm d}x}\|_{L^r(D_T)} \leq c \|\nabla v \|_{L^r(D_T)}.\label{pf:int_err1}\end{aligned}$$ Similarly, $$\label{pf:int_err2} \begin{aligned} \|v - \Pi_k v\|_{L^r(T)}&= \|v - \zeta - \Pi_k ( v - \zeta )\|_{L^r(T)} \\ &\leq c \inf_{\zeta \in \mathbb{R}} \| v - \zeta\|_{L^r(D_T)} \leq c h_T \|\nabla v \|_{L^r(D_T)}. \end{aligned}$$ By the scaled trace theorem, for any $F\subset \partial T$, there holds $$\|v - \Pi_k v \|_{L^r(F)} \leq c (h_F^{-1/r} \| v - \Pi_k v\|_{L^r(T)} + h_F^{1-1/r} \|\nabla ( v - \Pi_k v) \|_{L^r(T)}).$$ Then and complete the proof of the lemma. [^1]: Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK (b.jin@ucl.ac.uk, bangti.jin@gmail.com) [^2]: Department of Mathematics and Scientific Computing Key Laboratory of Shanghai Universities, Shanghai Normal University, Shanghai 200234, China. (yfxu@shnu.edu.cn, mayfxu@gmail.com)
---
abstract: 'We report the discovery of KMT-2018-BLG-1292Lb, a super-Jovian $M_{\rm planet} = 4.5\pm 1.3\,M_J$ planet orbiting an F or G dwarf $M_{\rm host} = 1.5\pm 0.4\,M_\odot$, which lies physically within ${\cal O}(10\,\pc)$ of the Galactic plane. The source star is a heavily extincted $A_I\sim 5.2$ luminous giant that has the lowest Galactic latitude, $b=-0.28^\circ$, of any planetary microlensing event. The relatively blue blended light is almost certainly either the host or its binary companion, with the first explanation being substantially more likely. This blend dominates the light at $I$ band and completely dominates at $R$ and $V$ bands. Hence, the lens system can be probed by follow-up observations immediately, i.e., long before the lens system and the source separate due to their relative proper motion. The system is well characterized despite the low cadence $\Gamma=0.15$–$0.20\,{\rm hr^{-1}}$ of observations and short viewing windows near the end of the bulge season. This suggests that optical microlensing planet searches can be extended to the Galactic plane at relatively modest cost.'
title: 'KMT-2018-BLG-1292: A Super-Jovian Microlens Planet in the Galactic Plane'
---

[Introduction]{} \[sec:intro\]
==============================

As a rule, optical microlensing searches heavily disfavor regions of high extinction and, as a result, systematically avoid the Galactic plane. For example, prior to the start of OGLE-IV (the fourth phase of the Optical Gravitational Lensing Experiment, @ogleref) in 2010, all but a small fraction of Galactic-bulge microlensing observations were restricted to the southern bulge despite the fact that the stellar content of the lines of sight toward the northern and southern bulge is extremely similar. With its larger format camera, OGLE-IV began systematically covering the northern bulge, but mainly at very low cadence. Hence, it remained the case that the great majority of observations were toward the southern bulge.

However, @poleski16 showed that the microlensing event rate is basically proportional to the product of the surface density of clump stars and the surface density of stars below some magnitude limit (in the principal survey band), e.g., $I<20$; the two numbers being proxies for the column densities of lenses and sources, respectively[^1]. Guided in part by this work, the Korea Microlensing Telescope Network (KMTNet, @kmtnet) devised an observing strategy that much more heavily favored the northern bulge, which accounts for about 37% of the area covered and 24% of all the observations. Nevertheless, even with this more flexible attitude toward high-extinction fields, KMTNet still followed previous practice in systematically avoiding the Galactic plane. See Figure 12 of @eventfinder.

Indeed, there is an additional reason for avoiding fields with high or very high extinction. That is, even if the high stellar-lens column densities near the plane partially compensate for the lower column density of sources, it remains the case that events, particularly planetary and binary events, in very high extinction fields are more difficult to interpret. Very often these events have caustic crossings from which one can usually measure $\rho=\theta_*/\theta_\e$, i.e., the ratio of the angular radius of the source to the Einstein radius. Then, one can usually determine $\theta_*$ from the offset of the source relative to the red clump in color and magnitude [@ob03262].
However, the color measurement required for this technique is only possible if the event is detected in a second band, which is usually $V$ band in most microlensing surveys. But $V$-band observations rarely yield usable results in very high-extinction fields. Hence, one must either take special measures to observe the event in a redder band (e.g., $H$) or one must estimate $\theta_*$ without benefit of a color measurement, which inevitably substantially increases the error in $\theta_*$ (and so $\theta_\e = \theta_*/\rho$).

As a result of the almost complete absence of optical microlensing observations toward the Galactic plane, there is essentially no experience with how these theoretical concerns translate into practical difficulties, and similarly no practical approaches to overcoming these difficulties. This is unfortunate because the Galactic plane could potentially provide important complementary information to more standard fields in terms of understanding the microlensing event rate and Galactic distribution of planets. While this shortcoming is widely recognized, the main orientation of researchers in the field has been to await infrared microlensing surveys. @gould95 advocated a “K-band microlensing \[survey\] of the inner galaxy”. Although his focus was on regions projected close to the Galactic center, the same approach could be applied to any high-extinction region, in particular the Galactic plane. In fact, PRIME, a 1.8m wide-field telescope with a 1.3 deg$^2$ camera to be installed at SAAO in South Africa, will be the first to conduct a completely dedicated IR microlensing survey (T. Sumi 2019, private communication). While the exact survey strategy has not yet been decided, PRIME will certainly focus on heavily extincted regions toward the inner Galaxy. The VISTA Variables in the Via Lactea (VVV; @vvv-survey1 [@vvv-survey2]) Microlensing Survey [@vvv-ulens1; @vvv-ulens2] has already conducted wide-field IR observations covering a $(20.4^\circ \times 1.1^\circ)$ rectangle of the Galactic plane spanning 2010-2015. They discovered 630 microlensing events. However, given their low cadence (ranging from 73 to 104 epochs over 6 years), they were not sensitive to planetary deviations. In addition, @vvv-ulens3 used VVV near-IR photometry to search for microlensing events in fields along the Galactic minor axis, ranging from $b = -3.7^\circ$ to $b = 4^\circ$, covering a total area of $\sim 11.5\,{\rm deg}^2$. They found $N=238$ new microlensing events in total, $N=74$ of which have bulge red clump (RC) giant sources. They found a strong increase in the number of microlensing events with decreasing Galactic latitude toward the plane, both in the total number of events and in the RC subsample, in particular, an order of magnitude more events at $b = 0$ than at $|b| = 2$ along the Galactic minor axis. This gradient is much steeper than predicted by models that had in principle been tuned to explain the observations from the optical surveys farther from the plane.

@ukirt-5events conducted a survey of high-extinction microlensing fields (Figure 1 of @ukirt-5events and Figure 1 of @ub17001), which had substantially higher cadence despite the relatively short viewing window from the 3.8m UKIRT telescope in Hawaii. This yielded the first infrared detection of a microlensing planet, UKIRT-2017-BLG-001Lb, which lies projected just $0.33^\circ$ from the Galactic plane and $0.35^\circ$ from the Galactic center [@ub17001]. Both values were by far the smallest for any microlensing planet up to that point.
They estimated the extinction at $A_K=1.68$, which corresponds approximately to $A_I\simeq 7 A_K = 11.8$. This high extinction value might lead one to think that such planets are beyond the reach of optical surveys. In fact, KMTNet routinely monitors substantial areas of very high extinction simply because its cameras are so large that these are “inadvertently” covered while observing neighboring regions of lower extinction and high stellar density. For example, KMT-2018-BLG-0073[^2] lies at $(l,b)=(+2.32,+0.27)$ and has $A_K=1.3$. This raises the possibility that optical surveys could in fact probe very high extinction regions as well, albeit restricted to monitoring exceptionally luminous sources or very highly magnified events.

Here we report the discovery of the planet KMT-2018-BLG-1292Lb, which at Galactic coordinates $(l,b)=(-5.23,-0.28)$ is the closest to the Galactic plane of any microlensing planet to date. The planetary perturbation is well characterized despite the fact that it occurred near the end of the season when it could be observed only about three hours per night from each site and that it lies in KMTNet’s lowest cadence field. Thus, this detection in the face of these moderately adverse conditions suggests that optical surveys could contribute to the study of Galactic-plane planetary microlensing at relatively modest cost.

[Observations]{} \[sec:obs\]
============================

KMT-2018-BLG-1292 is at (RA,Dec) = (17:33:42.62,$-33$:31:14.41) corresponding to $(l,b)=(-5.23,-0.28)$. It was discovered by applying the KMTNet event-finder algorithm [@eventfinder] to the full season of 2018 KMTNet data, which were taken from three identical 1.6m telescopes equipped with $(2^\circ \times 2^\circ)$ cameras in Chile (KMTC), South Africa (KMTS), and Australia (KMTA). The event lies in KMT field BLG13, which was observed in the $I$ band at cadences of $\Gamma=0.2\,{\rm hr}^{-1}$ from KMTC and $\Gamma=0.15\,{\rm hr}^{-1}$ from KMTS and KMTA. One out of every ten $I$-band observations was matched by an observation in the $V$ band. However, the $V$-band light curve is not useful due to high extinction.

The event was initially classified as “clear microlensing” based on the relatively rough DIA pipeline photometry [@alard98; @wozniak2000], but planetary features were not obvious. The possible planetary anomaly was noted on 5 January 2019, when the data were routinely re-reduced using the KMTNet pySIS [@albrow09] pipeline as part of the event-verification process. The first modeling was carried out almost immediately, on 8 January 2019. This confirmed the planetary nature, thus triggering final tender-loving care (TLC) reductions. But, in addition, it also made clear that the event might still be ongoing after the bulge had passed behind the Sun.

This led KMTNet to take two measures to obtain additional data. First, KMTNet began observing BLG13 from KMTC on 2 February, which was 17 days before the start of its general bulge observations. This was made possible by the fact that KMT-2018-BLG-1292 lies near the western edge of the bulge fields and so can be observed earlier in the season than most fields, given the pointing restrictions due to the telescope design. Second, KMTNet contacted C. Kochanek for special permission to obtain nine epochs of observations (17 pointings) from 31 January 2019 to 8 February 2019 on the dual channel (optical/infrared) ANDICAM camera [@depoy03] on the 1.3m SMARTS telescope in Chile.
The primary objective of these observations was to obtain $H$-band data, which could yield an $I-H$ color, provided that the event remained magnified at these late dates. As mentioned above, it was already realized that the KMT $V$-band data would not yield useful source-color information. However, because the source turned out to be a low-amplitude variable (see Section \[sec:var\]) while the magnification at the first ANDICAM $H$-band observation was low, $A\sim 1.1$, the $(I-H)$ color measurement from these data was significantly impacted by systematic uncertainties. Fortunately, the VVVX survey [@vvvx-survey] obtained seven $K_s$-band data points on the rising part of the light curve, including three with magnifications $A=1.47$–1.58. While these are, of course, also affected by systematics from source variability, the impact is a factor $\sim 5$ times smaller. Hence, in the end, we use these VVV survey data to measure the source color. [Light Curve Analysis]{} \[sec:anal\] ===================================== [Source and Baseline Variability]{} \[sec:var\] ----------------------------------------------- The light curve exhibits low-level (few percent) variability, including roughly periodic variations with period $P\sim 13\,$days. This level of variation is far too small to have important implications for deriving basic model parameters, but could in principle affect subtle higher-order effects, in particular the microlens parallax. For clarity of exposition, we therefore initially ignore this variability when exploring static models (Section \[sec:static\]), and then use these to frame the investigation of the variability. We then account for its impact on the microlensing parameters (and their uncertainties) after introducing higher-order effects into the modeling in Section \[sec:par\]. [Static Model]{} \[sec:static\] ------------------------------- Figure \[fig:lc\] shows the KMT data and best-fit model for KMT-2018-BLG-1292. With the exception of a strong anomaly lasting $\delta t\simeq 6\,$days, the 2018 data take the form of the rising half of a standard @pac86 single-lens single-source (1L1S) curve. The early initiation of 2019 observations, discussed in Section \[sec:obs\], then capture the extreme falling wing of the same Paczyński profile. We therefore begin by searching for static binary (2L1S) models, which are characterized by seven non-linear parameters: $(t_0,u_0,t_\e,q,s,\alpha,\rho)$. The first three are the standard 1L1S Paczyński parameters, i.e., the time of lens-source closest approach, the impact parameter (in units of the Einstein radius $\theta_\e$), and the Einstein radius crossing time. The next three characterize the planet, i.e., the planet-host mass ratio, the magnitude of the planet-host separation (in units of $\theta_\e$), and the orientation of this separation relative to the lens-source relative proper-motion $\bmu_\rel$. The last, $\rho\equiv \theta_*/\theta_\e$, is the normalized source radius. We first conduct a grid search over $(s,q)$, in which these two parameters are held fixed while all others are allowed to vary in a Markov chain Monte Carlo (MCMC). The Paczyński parameters are seeded at values derived from a 1L1S fit (with the anomaly removed), and $\alpha$ is seeded at six values drawn uniformly around the unit circle. Given the very high extinction toward this field $A_I\simeq 7 A_K = 5.2$ and the relatively bright baseline flux $I_{\rm base}\sim 18.2$, the source is very likely to be a giant. 
In view of this, we seed the normalized source radius at $\rho=0.005$. This procedure yields only one local minimum. We then allow all seven parameters to vary and obtain the result shown as the first model in Table \[tab:ulens\]. The only somewhat surprising element of this analysis is that $\rho$ is measured reasonably well, with $\sim 15\%$ precision. This is unexpected because one does not necessarily expect to measure $\rho$ with such sparse sampling, roughly one point per day. However, from the solution, the source-radius crossing time is $t_*\equiv \rho t_\e = 9.4\,$hrs, so that the diameter crossing time is almost one day. Moreover, as shown by the caustic geometry in Figure \[fig:caustic\], the source actually runs almost tangent to caustic, which means that all six data points are affected by the caustic (and so finite-source effects). Hence, the relatively good measurement of $\rho$ is partly due to a generic characteristic of giant-star sources (which in turn are much more likely for optical microlensing searches in extincted fields) and partly due to a chance alignment of the source trajectory with the caustic. We note that UKIRT-2017-BLG-001 [@ub17001] had a similarly good ($\sim 10\%$) $\rho$ measurement with similar (1 day) cadence[^3], and for similar reasons: large source, whose detection was favored by heavy extinction, and consequently long $t_*$ ($\sim 16\,$ hrs). [Parallax Models]{} \[sec:par\] ------------------------------- We next attempt to measure the microlens parallax vector [@gould92; @gould00], $$\bpi_\e \equiv {\pi_\rel\over\theta_\e}{\bmu_\rel\over\mu_\rel}, \qquad \theta_\e^2 \equiv \kappa M\pi_\rel, \qquad \kappa \equiv {4G\over c^2\au}\simeq 8.1\,{\mas\over M_\odot}. \label{eqn:piethetae}$$ where, $M$ is the lens mass, $\bmu_\rel$ is the instantaneous geocentric lens-source relative proper motion, and $\pi_\rel$ is the lens-source relative parallax. Because the parallax effect due to Earth’s annual motion is quite subtle, such a measurement can be affected by source variability. Hence we must simultaneously model this variability together with the microlens parallax in order to assess its impact on both the best estimate and uncertainty of $\bpi_\e$. ### [Significant Parallax Constraints Are Expected]{} \[sec:expect\] The relatively long timescale, $t_\e\simeq 67\,$days, of the standard solution in Table \[tab:ulens\] suggests that it may be possible to measure or strongly constrain $\bpi_\e$. In addition to the relatively long timescale, the presence of sharply defined peaks (from the anomaly) tend to improve microlens parallax measurements [@angould]. Finally, while it would be relatively difficult to measure $\bpi_\e$ from 2018 data alone (because these contain only the rising part of the light curve), the 2019 data on the extreme falling wing add significant constraints to this measurement. We therefore add two parameters to the modeling $(\pi_{\e,N},\pi_{\e,E})$, i.e., the components of $\bpi_\e$ in equatorial coordinates. Because parallax effects, which are due to Earth’s orbital motion, can be mimicked in part by orbital motion of the lens system [@mb09387; @ob09020], one should always include lens motion, at least initially, when incorporating parallax into the fit. We model this with two parameters, $\bgamma\equiv ((ds/dt)/s,d\alpha/dt)$, where $ds/dt$ is the instantaneous rate of change in separation and $d\alpha/dt$ is the instantaneous rate of change of the orientation of the binary axis. 
Note that all “instantaneous” quantities $(\bmu,\bgamma)$ are defined at time $t=t_0$. However, we find that these two additional parameters are not significantly correlated with the parallax and are also not significantly constrained by the fit. Hence, we remove them from the fit. ### [Accounting for Variability]{} \[sec:account\] As mentioned in Section \[sec:var\], the source shows low-level variations in the standard-model residuals. We will show in Section \[sec:cmd\] that the source is a luminous red giant, so source variability would not be unexpected. These variations do not significantly affect the static model (and so were ignored up to this point) but could affect the parallax measurement, which depends on fairly subtle distortions of the light curve relative to the one defined by a static geometry. We therefore simultaneously fit for this variability together with the nine other non-linear parameters describing the 2L1S parallax solution. This will allow us, in particular, to determine whether the parallax parameters $(\pi_{\e,N},\pi_{\e,E})$ are correlated with the variability parameters. We consider models that incorporate variability into an “effective magnification” $$A_\eff(t) = A(t;t_0,u_0,t_\e,q,s,\alpha,\rho,\pi_{\e,N},\pi_{\e,E}) \biggl[1 + \sum_{i=1}^{N_{\rm per}}a_i\sin\biggl({2\pi t\over P_i} +\phi_i \biggr)\biggr], \label{eqn:var}$$ where $(a_i,P_i,\phi_i)$ are the amplitude, period, and phase of each of the $N_{\rm per}$ wave forms that are included. We search for initial values of the wave-form parameters by first applying Equation (\[eqn:var\]) to static models with the microlensing parameters seeded at the best fit non-variation model. We set $N_{\rm per}=1$ and find the three wave-form parameters. We then set $N_{\rm per}=2$ and seed the previous $(7+3)=10$ non-linear parameters at the $N_{\rm per}=1$ solution in order to find the next three. In principle this procedure could be repeated, but we find no additional significant periodic variations. We seeded the first component with $P_1=11\,$days based on our by-eye estimate of the periodic variations. Somewhat surprisingly, this fit converged to $P_1\sim 70\,$days. Hence, we seeded the second component again with $P_2=11\,$days, which converged to $P_2 \simeq 13\,$days. We show this $N_{\rm per}=2$ standard model in Table \[tab:ulens\] next to the model without periodic variation. As anticipated in Section \[sec:static\], the introduction of periodic components has virtually no effect on the standard microlensing parameter estimates, although the fit is improved by $\Delta\chi^2=27$ for six degrees of freedom (dof). These values served as benchmarks for the next phase of simultaneously fitting for parallax and periodic variations, in which the parallax fits could in principle become coupled to long-term variations. We seed the $N_{\rm per}=1$ parallax fits with a variety of periods, but these always converge to $P_1\sim 63\,$days. We then seed $P_2=13\,$days, which then converges to a similar value. Adding more wave forms does not significantly improve the fit. ### [Parallax Model Results]{} \[sec:parres\] Table \[tab:ulens\] shows the final results, i.e., for nine microlensing parameters plus six periodic-variation parameters. As usual, we test for the “ecliptic degeneracy”, which approximately takes $(u_0,\alpha,\pi_{\e,N}) \rightarrow -(u_0,\alpha,\pi_{\e,N})$ [@ob09020] and present this solution as well in Table \[tab:ulens\]. 
In addition, in Table \[tab:evolve\], we show the evolution of key microlens parameters as additional period terms are introduced. In fact, neither the microlens parallax nor the other key microlens parameters change significantly as a result of incorporating periodic variability into the fits. Because both $\pi_\e$ and $\rho$ are measured, one can infer the lens mass and lens-source relative parallax via $$M = {\theta_\e\over\kappa \pi_\e}, \qquad \pi_\rel = \theta_\e\pi_\e, \label{eqn:massdist}$$ provided that the angular source size $\theta_*$ (and so $\theta_\e = \theta_*/\rho$) can be determined from the color-magnitude diagram (CMD).

[Color-Magnitude Diagram]{} \[sec:cmd\]
=======================================

There are two challenges to applying the standard procedure [@ob03262] of putting the source star on an instrumental CMD in order to determine $\theta_*$. Both challenges derive from the fact that the event lies very close to the Galactic plane. First, the extinction is high, which implies that the $V$-band data, which are routinely taken, will not yield an accurate source color. Fortunately, there are $K_s$ data from the VVVX survey taken when the event was sufficiently magnified to measure the $K_s$ source flux. The second issue is more fundamental. The upper panel in Figure \[fig:cmd\] shows an $I$ versus $(I-K)$ CMD, where the $I$-band data come from pyDIA reductions of the field stars within a $2^\prime\times 2^\prime$ square centered on the event and the $K$-band data come from the VVV catalog. The position of the “baseline object” (magenta) is derived from the field-star photometry of these two surveys, while the position of the source star (blue) is derived from the $f_S$ measurements from the model fit to the light curves. The position of the blended light is shown as an open circle because, while its $I$-band magnitude is measured from the fit, its $K$-band flux is too small to be reliably determined. Hence its position is estimated from the $I$ versus $(V-I)$ CMD, which is described immediately below. The centroid of the red clump is shown in red. The lower panel of Figure \[fig:cmd\] shows the same quantities for the $I$ versus $(V-I)$ CMD. It is included to facilitate analysis of the properties of the blend, which is discussed further below. In this case, the source (blue) and clump centroid (red) are shown as open symbols because neither can be reliably determined from the data and so are estimates rather than measurements.

The source lies $\Delta(I-K,I)=(+0.70,-0.63)$ redward and brighter than the clump. We first interpret this position under the assumption that the source suffers similar extinction as the clump itself. In this case, the source is a very red, luminous giant, $[(I-K)_0,M_I]\simeq (2.1,-0.7)$, which would explain why it is a low-amplitude semi-regular variable. Adopting the assumption that the source suffers the same extinction as the clump, together with the intrinsic clump position $[(V-I),I]_{0,\rm cl} = (1.06,14.66)$ from @bensby13 and @nataf13, as well as the color-color relations of @bb88, we obtain $[(V-K),K]_0 = (3.90,11.87)$. Then using the color/surface-brightness relation of @groenewegen04 $$\log (\theta_*/\muas) = 3.286 - 0.2\,K_0 + 0.039(V-K)_0, \label{eqn:csb}$$ we obtain $$\theta_* = 11.59\pm 1.00\,\muas. \label{eqn:thetastareval}$$ The error bar in Equation (\[eqn:thetastareval\]) is determined as follows.
First, while the formal error $\Delta(I-K)$ (from fitting the $I$ and $K$ light curves to the model and centroiding the clump) is only $\sim 0.05\,$mag, we assign a total error $\sigma[\Delta(I-K)]= 0.11\,$mag (i.e., adding 0.1 mag in quadrature). We do so because the source is variable, and this variation may have a different phase and amplitude in $I$ (where it is measured) than $K$. Hence, we determine $I-K$ by fitting both light curves to a standard model without periodic wave-forms and account for the unknown form of the variation with this error term. This error directly propagates to errors of 0.28 mag in $(V-K)_0$ and 0.11 mag in $K_0$, which are perfectly anti-correlated, and so add constructively via Equation (\[eqn:thetastareval\]) to $0.2\times 0.11 + 0.039\times 0.28 = 0.0329\,$dex. Finally, there is a statistically independent error in $\Delta I$ of 0.09 mag, which comes from a 0.07 mag error in centroiding the clump and a 0.05 mag error from fitting the model. This yields an additional error in Equation (\[eqn:thetastareval\]) of $0.2\times 0.09= 0.018\,$dex, which is added in quadrature to obtain the final result. We consider the assumption underlying Equation (\[eqn:thetastareval\]) that the source suffers the same extinction as the clump to be plausible because there is a well-defined clump, meaning that there is a strong overdensity of stars at the bar. Hence, it is quite reasonable that the source would lie in this overdensity. However, because the line of sight passes through the bar only about 45 pc below the Galactic plane, it is also possible that the source lies in front of, or behind, the bar. For example, the source star for UKIRT-2017-BLG-001Lb, the only other microlensing planet that was discovered so close to the Galactic plane, was found to lie in the far disk [@ub17001]. From the standpoint of determining $\theta_*$, the distance to the source does not enter directly because only the apparent magnitude and color enter into Equation (\[eqn:csb\]). But the distance does enter indirectly because if the source lies farther or closer than the clump, then it suffers more or less extinction. In most microlensing events this issue is not important because the line of sight usually intersects the bulge well above (or below) the dust layer. We can parameterize the extra dust (or dust shortfall) relative to the clump by $\Delta A_K$. Then, from Equation (\[eqn:csb\]), the inferred change in $\theta_*$ for a given excess dust column is $${\Delta\log\theta_*\over \Delta A_K} = 0.2\biggl(0.195{E(V-K)\over A_K}-1\biggr) \rightarrow 0.23, \label{eqn:dustderiv}$$ where we have adopted $E(V-K)=11\,A_K$. The dust column to the clump has $A_K = 0.75$. The source cannot lie in front of substantially less dust than the clump because then it would be intrinsically both much redder and much less luminous than we derived above for the color and absolute magnitude. For example, if $\Delta A_K= -0.1$ and the source were at $D_S=6\,\kpc$, then $[(I-K)_0,M_I]\rightarrow (2.7,+0.9)$. Such low-luminosity, extremely red giants are very rare. By the same token, if $\Delta A_K = +0.1$ and $D_S=11\,\kpc$, then $[(I-K)_0,M_I]\rightarrow (1.5,-1.8)$. This is a marginally plausible combination, although higher values of $A_K$ would imply giants that are bluer than the clump but several magnitudes brighter. We adopt a $1\,\sigma$ uncertainty $\sigma(A_K) = 0.05$, and hence a fractional error $\sigma(\ln\theta_*) = 0.05\cdot 0.23\,\ln 10 = 2.6\%$.
This uncertainty is actually small compared to the 8.6% error in Equation (\[eqn:thetastareval\]). Finally we adopt an error of 9.0% by adding these two errors in quadrature. (We will provide some evidence in Section \[sec:pietest\] that the source is actually in the bar.) Combining the value of $\theta_*$ from Equation (\[eqn:thetastareval\]) with the average of the two virtually identical values of $\rho$ in Table \[tab:ulens\] (but using the larger error), we obtain $$\theta_\e = {\theta_*\over \rho} = 1.72\pm 0.34\,\mas \qquad \mu_\rel = {\theta_\e\over t_\e} = 10.7 \pm 2.0\,\masyr \label{eqn:thetaemu}$$ Together with the parallax measurement $\pi_\e\sim 0.125$, this result for $\theta_\e$ implies that the lens mass and relative parallax are $M\sim 1.7\,M_\odot$ and $\pi_\rel\simeq 0.22\,\mas$, and so $D_L\sim 3.0\,\kpc$. In fact, because the fractional errors on both $\theta_\e$ and $\pi_\e$ are relatively large, these estimates will require a more careful treatment. However, from the present perspective the main point to note is that these values make the blended light seen in Figure \[fig:cmd\] a plausible candidate for the lens. [Blend = Lens?]{} \[sec:blend\] =============================== We therefore begin by gathering the available information about the blend. [Astrometry: Blend is Either The Lens or Its Companion]{} \[sec:astrometry\] ---------------------------------------------------------------------------- We first measure the astrometric offset between the “baseline object” and the source, initially finding $\Delta\theta=60\,\mas$ (0.15 pixels), with the source lying almost due west of the “baseline object”. This offset substantially exceeds the formal measurement error ($\sim 8\,\mas$) based on the standard error of the mean of seven near-peak measurements, as well as our estimate of $\sim 15\,\mas$ for the astrometric error of the “baseline object”. However, such an offset could easily be induced by differential refraction. That is, the source position is determined from difference images formed by subtracting the template from images near peak, i.e., late in the season when the telescope is always pointed toward the west, whereas the template is formed from images taken over the season (and in any case, the source contributes less than half the light to these images). Moreover, the image alignments are dominated by foreground main-sequence stars because these are the brightest in $I$ band. This contrasts strongly with the situation for typical microlensing events for which the majority of bright stars are bulge giants. Hence, the color offset between the reference-frame stars and the source is about $\Delta (I-K)\sim 4$. This means that the mean wavelength of source photons passing through the $I$-band filter is close to the red edge of this band pass, while the mean wavelength of reference-frame photons is closer to the middle. As the effective width of KMT $I$ band is about 160 nm, the wavelength offset between the two should be about $\Delta\lambda\sim 50\,$nm. Because blue light has a higher index of refraction than red light, it appears relatively displaced toward the zenith. Stated otherwise, the red light is displaced in the direction of the telescope pointing, i.e., west. To quantify this argument, we first review the expected displacement starting from Snell’s Law[^4] ($n=\sin i/\sin r^\prime$), where $n$ is the index of refraction, $i$ is the angle of incidence, and $r^\prime$ is the angle of refraction. 
We then quantitatively evaluate the astrometric data within this formalism. The angular displacement $\delta(i)$ of the source should obey $$\delta(i) = r^\prime_{\rm source} -r^\prime_{\rm frame} \simeq {d r^\prime \over d\lambda}\Delta\lambda \simeq {d \sin r^\prime \over d\lambda}{\Delta\lambda\over \cos i} \simeq -\tan i {d n\over d\lambda}\Delta\lambda . \label{eqn:snell}$$ Figure \[fig:snell\] shows the seven measurements of the $x$ (east-west) coordinate of the source position in pixels versus $\tan i$ in black and the “baseline object” position in red. The line is a simple regression without outlier removal. The scatter about this line is $\sigma=10\,\mas$ (0.025 pixels). The $y$ intercept is the extrapolation of the observed trend to the zenith. The offset from the “baseline object” is only $16\,\mas$ (0.04 pixels), i.e., of order the error in measuring its position on the template. The offset in the other (north-south) coordinate (which is not significantly affected by differential refraction) is likewise $16\,\mas$. We note that the slope of the line is $d\theta/d\tan i = (2.56\pm 0.54)\times 10^{-7}\,$radians. Substituting[^5] $dn/d\lambda = -6.17\times 10^{-9}\,{\rm nm}^{-1}$, into Equation (\[eqn:snell\]) yields $$\Delta\lambda = \lambda_{\rm source} - \lambda_{\rm frame} =(41\pm 9)\,{\rm nm}. \label{eqn:deltalambda}$$ The close proximity of the baseline object with the source implies that the excess light is almost certainly associated with the event, i.e., it is either the lens itself or a companion to the lens or to the source. That is, the surface density of stars brighter in $I$ than the blend is only $90\,{\rm arcmin}^{-2}$. Hence, the chance of a random alignment of such a star with the source within $25\,\mas$ is only $\sim 5\times 10^{-5}$. However, the blend is far too blue to be a companion to the source, which would require that it be behind the same $E(V-I)\sim 4$ column of dust. [Is the Blend a Companion to the Lens?]{} \[sec:dm91\] ------------------------------------------------------ Thus, the blend must be either the lens or a companion to the lens. To evaluate the relative probability of these two options, we should consider the matter from the standpoint of the blend, which is definitely in the lens system whether it is the lens or not. There is a roughly 70% probability that the blend has a companion, and if it does, some probability that this companion to the blend is the lens. However, this conditional probability is actually quite low due to three factors. We express the arguments in terms of $Q\ga 1$, the mass ratio of the blend to the host-lens (viewed as companion to the blend) and $a_b$, the projected separation between them. For purposes of this argument, we assume that the lens is at $D_L\sim 3\,\kpc$, but the final result depends only weakly on this choice. First, $a_b<75\,\au$. Otherwise the astrometric offset between the source and the “baseline object” would be larger than observed. Second, the source must pass no closer than about 2.5 blend-Einstein-radii from the blend. Expressed quantitatively: $a_b> 2.5\,D_L\,\theta_\e Q^{1/2}$. Smaller separations can be divided into two cases. Case 1: $0.5\,D_L\,\theta_\e Q^{1/2}\la a_b < 2.5\,D_L\,\theta_\e Q^{1/2}$. In this case,the blend would give recognizable microlensing signatures to the light curve. Actually, this is a fairly conservative limit because such signatures will often be present even at larger separations. Case 2: $a_b \la 0.5\,D_L\,\theta_\e Q^{1/2}$. 
Such cases are possible, but the planet would then be a circumbinary planet rather than a planet of the companion to the blend, which would be required to make the blend a distinct source of light. Third, the cross section for lensing is lower for the blend’s putative companion than for the blend itself by $Q^{-1/2}$. We take account of all three factors using the binary statistics of @dm91 and plot the cumulative probability as a function of host to blend mass ratio in Figure \[fig:dm91\]. The total probability that the blend is a companion to the lens is only 6.6%. [Gaia Proper Motion of the “Baseline Object”]{} \[sec:gaia\] ------------------------------------------------------------ Regardless of whether the blend is the lens or a companion to the lens, the blend proper motion $\bmu_b$ is essentially the same as that of the lens. In principle, the two could differ due to orbital motion. However, we argued in Section \[sec:dm91\] that the projected separation is at least $a_b\ga 12 Q^{1/2}\,\au$, meaning that the velocity of the blend relative to the center of mass of the system is less than $5\,\kms$, which is small compared to the measurement errors in the problem. The proper motion of the “baseline object” has been measured by [*Gaia*]{} $$\bmu_\base(N,E) = (-3.0,+0.9)\pm (0.8,1.1)\,\masyr, \label{eqn:gaia}$$ with a correlation coefficient of 0.51. In fact, $\bmu_\base$ is the flux weighted proper motion of the blend and source in the [*Gaia*]{} band, $$\bmu_{\rm base} = (1-\eta)\bmu_B + \eta\bmu_S \rightarrow (1-\eta)\bmu_L + \eta\bmu_S \label{eqn:mubase}$$ where $\eta$ is the fraction of total [*Gaia*]{} flux due to the source. It may eventually be possible to measure $\eta$ directly from [*Gaia*]{} data because there are two somewhat magnified ($A\simeq 1.34$) epochs at ${\rm JD}^\prime = 8342.62$ and 8342.69 and one moderately magnified ($A\simeq 1.75$) epoch at 8364.62. Based on the reported photometric error and number of observations, we estimate that individual [*Gaia*]{} measurements of the “baseline object” have 2% precision. If so, [*Gaia*]{} will determine $\eta$ with fractional precision $\sigma(\eta)/\eta\simeq 0.022/\eta$. Pending release of [*Gaia*]{} individual-epoch photometry, we estimate $\eta$ by first noting that the blend is 0.32 mag brighter than the source, even in the $I$ band, and that only the blend will effectively contribute at shorter wavelengths where the [*Gaia*]{} passband peaks. We therefore estimate that the blend will contribute an equal number of photons at these shorter wavelengths, while the source will contribute almost nothing, which implies $\eta=0.27$. We can relate the [*Gaia*]{} proper motion to the heliocentric proper motions of the source and lens by writing $$\bmu_\hel \equiv \bmu_L - \bmu_S; \qquad \bmu_\hel = \bmu_\rel + {\pi_\rel\over \au} \bv_{\oplus,\perp}, \label{eqn:bmuhel}$$ where $\bv_{\oplus,\perp}(N,E) = (-3.9,-15.0)\,\kms$ is Earth’s velocity projected on the event at $t_0$. We can then simultaneously solve Equations (\[eqn:mubase\]) and (\[eqn:bmuhel\]) to obtain $$\bmu_L = \eta\bmu_\hel + \bmu_{\rm base}; \qquad \bmu_S = -(1-\eta)\bmu_\hel + \bmu_{\rm base}. \label{eqn:bmusol}$$ Next, we note that Equation (\[eqn:bmusol\]) depends only weakly on the somewhat uncertain $\pi_\rel$ via the $\bv_{\oplus,\perp}$ term in Equation (\[eqn:bmuhel\]). For example, if $\pi_\rel = 0.22\,\mas$, then this term is only $v_{\oplus,\perp}\pi_\rel/\au \sim 0.7\,\masyr$, which is quite small compared to $\mu_\rel$. 
Therefore, to simplify what follows, we evaluate $\bmu_\hel$ using this value. [A New Test of the $\bpi_\e$ Measurement]{} \[sec:pietest\] =========================================================== The [*Gaia*]{} measurement of the “baseline object” and the resulting Equation (\[eqn:bmusol\]) allow us to test the reliability of the parallax measurement. Such tests are always valuable, but especially so in the present case because the modeling of the source variability could introduce systematic errors into the parallax measurement. We have already conducted one test by showing in Table \[tab:evolve\] that $\bpi_\e$ does not significantly change as we introduce additional wave-form parameters. However, the opportunity for additional tests is certainly welcome, particularly because introducing $\bpi_\e$ only improves the fit by $\Delta\chi^2=13$. From a mathematical standpoint, the two degrees of freedom of $\bpi_\e$ can be equally well expressed in Cartesian $(\pi_{\e,N},\pi_{\e,E})$ or in polar $(\pi_\e,\phi_\pi)$ coordinates. Here, $\tan\phi_\pi\equiv \pi_{\e,E}/\pi_{\e,N}$, i.e., the position angle of $\bmu_\rel$ north through east. Cartesian coordinates are usually more convenient for light-curve modeling because their covariances are better behaved (but see @ob161045). However, from a physical standpoint, polar coordinates are more useful because the amplitude of $\bpi_\e$ contains all the information relevant to $M$ and $\pi_\rel$ (see Equation (\[eqn:massdist\])) while the direction contains none. In particular, a test of the measurement of $\phi_\pi$ that does not involve any significant assumption about $\pi_\e$ can give added confidence to the measurement of the latter. Figure \[fig:path\] illustrates such a test. It shows the source and lens proper motions as functions of $\phi_\pi$ in $15^\circ$ steps. The cardinal directions are marked in color and labeled. The error ellipses (shown for cardinal directions only) take account of both the [*Gaia*]{} proper motion error and the uncertainty in the magnitude of $\mu_\rel$ (at fixed direction). The cyan ellipses show the expected dispersions of Galactic-disk (left) and Galactic-bar (right) sources. Hence, it is expected that if the parallax solutions are correct, then at least one of them should yield $\phi_\pi$ that is reasonably consistent with one of these two cyan ellipses. Note that there are substantial sections of the source “circle of points” that would be inconsistent or only marginally consistent with these ellipses. The yellow line segments show the ranges of source (outer) and lens (inner) proper motions implied by the $1\,\sigma$ range of the $\phi_\pi$ measurements from the two ($u_0>0$ and $u_0<0$) solutions. The source proper motion derived from these solutions is clearly consistent with a Galactic bar source. This increases confidence that $\pi_\e$ is correctly measured within its quoted uncertainties as well. Finally, we note that in order to limit the complexity of Figure \[fig:path\], we have fixed both $\pi_\rel=0.22$ and $\eta=0.27$. We therefore now consider how this Figure would change for other values of these quantities. Changing $\pi_\rel$ by $\Delta\pi_\rel$ would displace the center of each “circle of points” very slightly, i.e., by $-(1-\eta)\Delta\pi_\rel \bv_{\oplus,\perp}\simeq (0.06,0.23)(\Delta\pi_\rel/0.1\,\mas)\masyr$ for the source and by $(-0.02,0.08)(\Delta\pi_\rel/0.1\,\mas)\masyr$ for the lens. The effect of such a shift on this figure would hardly be discernible. 
Changing $\eta$, for example from 0.27 to 0.22 or 0.32, would make the source “circle of points” larger or smaller by 7%. Again, such changes would hardly impact the argument given above. [Physical Parameters]{} \[sec:phys\] ==================================== While both $\theta_\e$ and $\pi_\e$ are measured, they have relatively large fractional errors: of order 20% and 25%, respectively. Hence, it is inappropriate to evaluate the physical parameters simply by algebraically propagating errors, using, for example, Equation (\[eqn:massdist\]). Instead, we evaluate all physical quantities by applying these (and other) algebraic equations to the output of the MCMC. The results are tabulated in Table \[tab:phys\] and illustrated in Figure \[fig:phys\]. Because the source proper motion is consistent with Galactic-bar (but not Galactic-disk) kinematics, we simply assign the source distance $D_S=9\,\kpc$. See Section \[sec:pietest\] and Figure \[fig:path\]. The errors are relatively large, but based on the microlensing data alone, the lens is likely to be an F or G star, with a super-Jovian planet. This result is supported by the fact that the blend (lens) lies near the “bottom edge” (alternatively “blue edge”) of the foreground main-sequence stars on the CMD (Figure \[fig:cmd\]). To understand the implications of this position, consider two stars of the same apparent color $(V-I)$, but which differ in reddening by $\Delta E(V-I)$ and in intrinsic color by $\Delta(V-I)_0$. Tautologically, $\Delta E(V-I) + \Delta(V-I)_0 = 0$. We then adopt estimates $\Delta A_I = 1.25\Delta E(V-I)$ and $\Delta M_I = 2.3\Delta(V-I)$. This leads to an estimate $$\Delta I = \Delta M_I + \Delta A_I + \Delta{\rm DM} = -0.84\Delta A_I + \Delta{\rm DM}, \label{eqn:dmod}$$ where $\Delta{\rm DM}$ is the difference in distance modulus. Now, $A_I$ is roughly linear in distance $A_I = 5.2\,{\rm mag}/(9\,\kpc) = 0.58\,{\rm mag\,kpc^{-1}}$, while DM is logarithmic, $d{\rm DM}/d D= (5/\ln 10)\,D^{-1}$. Hence, the derivatives of the two terms in Equation (\[eqn:dmod\]) are equal and opposite at $D_{\rm stationary}\simeq 4.45\,\kpc$. As the second derivative of Equation (\[eqn:dmod\]) is strictly negative, this stationary point is a maximum. That is, the bottom of the foreground track in the CMD corresponds roughly to stars at this distance, which implies that the lens/blend has $D_L\sim D_{\rm stationary}$, $A_{I,L}\sim 2.6$, and $M_{I,L}\sim 2.9$. This would be consistent with an $M\sim 1.5\,M_\odot$ main-sequence star, or perhaps a star of somewhat lower mass on the turn off (which is not captured by the simplified formalism of Equation (\[eqn:dmod\])). That is, this qualitative argument is broadly consistent with the results in Table \[tab:phys\]. We discuss how followup observations can improve the precision of these estimates in Section \[sec:followup\]. We note that at the distances indicated in Figure \[fig:phys\] (or by this more qualitative argument), the lens lies quite close to the Galactic plane, $$z_L = z_\odot\biggl(1-{D_L\over R_0}\biggr) + D_L\sin(b-b_{\rm sgrA*}) = -0.0060(D_L - 2.48\,\kpc), \label{eqn:zperp}$$ where $b_{\rm sgrA*}$ is the Galactic latitude of SgrA\*, $R_0$ is the Galactocentric distance, and where we have adopted $z_\odot = 15\,\pc$ for the height of the Sun above the Galactic plane. That is, if $D_L$ is within a kpc of 2.48 kpc, then the lens is within 6 pc of the Galactic plane.
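As a check on these numbers, the following short Python sketch (an illustration only; the quoted results come from propagating the full MCMC chain, not from central values) chains together Equations (\[eqn:csb\]), (\[eqn:thetaemu\]), (\[eqn:massdist\]), and (\[eqn:dmod\]) using the point estimates given in the text. The value of $\rho$ is a placeholder chosen to reproduce $\theta_\e$ from Equation (\[eqn:thetaemu\]); the measured value is in Table \[tab:ulens\].

```python
import numpy as np

kappa = 8.144                                              # mas / M_sun
# Source radius from Eq. (eqn:csb) with [(V-K), K]_0 = (3.90, 11.87)
theta_star = 10**(3.286 - 0.2 * 11.87 + 0.039 * 3.90)      # micro-arcsec, ~11.6
# Einstein radius; rho is a placeholder consistent with Eq. (eqn:thetaemu)
rho = 6.7e-3
theta_E = theta_star * 1e-3 / rho                          # mas, ~1.7
# Mass, relative parallax, and lens distance, Eq. (eqn:massdist)
pi_E = 0.125
M = theta_E / (kappa * pi_E)                               # M_sun, ~1.7
pi_rel = theta_E * pi_E                                    # mas, ~0.22
D_L = 1.0 / (pi_rel + 1.0 / 9.0)                           # kpc, for a source at D_S = 9 kpc
# "Stationary" distance where the two terms of Eq. (eqn:dmod) have equal and opposite slopes
D_stat = (5.0 / np.log(10.0)) / (0.84 * 0.58)              # kpc, ~4.45
# Height below the Galactic plane at that distance, Eq. (eqn:zperp)
z_L = -0.0060 * (D_stat - 2.48)                            # kpc
print(f"theta_* = {theta_star:.2f} uas, theta_E = {theta_E:.2f} mas, "
      f"M = {M:.2f} M_sun, pi_rel = {pi_rel:.3f} mas, D_L = {D_L:.2f} kpc, "
      f"D_stat = {D_stat:.2f} kpc, z_L = {1e3 * z_L:.1f} pc")
```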
[Discussion]{} \[sec:discuss\] ============================== [Lowest Galactic-Latitude Planet]{} \[sec:lowb\] ------------------------------------------------ At $b=-0.28$, KMT-2018-BLG-1292Lb is the lowest Galactic-latitude microlensing planet yet detected. Yet, KMTNet did not consciously set out to monitor the Galactic plane. Instead, it has a few fields, including BLG13, BLG14, BLG18, BLG38, and BLG02/BLG42, whose corners “inadvertently” cross the Galactic plane or come very close to it. See Figure \[fig:fields\]. This is a side effect of having a large-format square camera on an equatorial mount telescope (together with the fact that the Galactic plane is inclined by $\sim 30^\circ$ relative to north toward the Galactic center). Of these five fields, BLG13 has the lowest cadence ($\Gamma = 0.15$–$0.2\,{\rm hr}^{-1}$), with BLG14 and BLG18 being 5 times higher and BLG02/42 being 20 times higher. Nevertheless, despite this low-cadence (further aggravated by the fact that the anomaly occurred near the end of the season, when the Galactic bulge was visible for only a few hours per night) and the very high extinction $A_I\sim 5.2$, KMT-2018-BLG-1292Lb is reasonably well characterized, with measurements of both $\theta_\e$ and $\bpi_\e$. This leads us to assess the reason for this serendipitous success. The first point is that the source is very luminous and very red, which together made the event reasonably bright in spite of the high extinction. It also implies a large source radius, with a source-diameter crossing time of almost one day, $2 t_* = 19\,$hr. Hence, despite the low effective combined cadence from all three observatories $\Gamma\sim 1\,{\rm day}^{-1}$, the source profiles on the source plane nearly overlap as it transits the caustic. See Figure \[fig:caustic\]. Thus, although the actual trajectory fortuitously rides the edge of a caustic, even random trajectories through the caustic would have led to significant finite source effects for some measurements, and therefore to a measurement of $\theta_\e$. This large source size is not fortuitous: in high extinction fields, such large sources are the only ones that will give rise to detectable microlensing events in the optical, apart from a handful of very high magnification events. That is, although high-extinction fields necessarily greatly reduce the number of sources that can be probed for microlensing events, those that can shine through the dust can yield well-characterized events even with very low cadence. This means that optical surveys could in principle more systematically probe the Galactic plane for microlens planets at relatively low cost in observing time. Although Figure \[fig:fields\] is presented primarily to show current optical coverage of the Galactic plane and to illustrate the possibilities for future coverage, it also has more general implications for understanding past and possible future strategies for microlensing planet detection. We summarize these here. The colored circles in Figure \[fig:fields\] represent published microlensing planets discovered in 2003-2017, while the black squares show 2018 event locations that we assess as likely to yield future planet publications. The blue points, which are from 2003-2010, i.e., prior to OGLE-IV, are uniformly distributed over the southern bulge. By contrast (and restricting attention for the moment to the southern bulge) planet detections in all subsequent epochs are far more concentrated toward the regions near $(l,b)\sim (+1,-2.5)$. 
During 2003-2010, the cadence of the survey observations was typically too low to detect and characterize planets by themselves[^6]. Hence, most planets were discovered by a combination of follow-up observations (including survey auto-follow-up) and survey observations of events alerted by OGLE and/or MOA. The choice of these follow-up efforts was not strongly impacted by survey cadence, which in any case was relatively uniform. It is still slightly surprising that the planet detections do not more closely track the underlying event rate, which is higher toward the concentration center of later planet detections. As soon as the OGLE-IV survey started (green points 2011-2013), the overall detection rate increases by a factor 2.7, but the southern-bulge planets also immediately become more concentrated. This partly reflects that the OGLE and MOA surveys (together with the Wise survey, @shvartzvald16) were very capable of detecting planets without followup observations in their higher-cadence regions, which were near this concentration. But in addition, these higher-cadence regions began yielding vastly more alerted events and also better characterization of these events, which also tended to concentrate the targets for follow-up observations. Also notable in this period are the first three planets in the northern bulge, to which OGLE-IV devoted a few relatively high-cadence fields. In the next period (yellow points, 2014-2015), the surveys remained similar, but follow-up observations were sharply curtailed due to reduction of work by the Microlensing Follow-Up Network ($\mu$FUN, @gould10). The rate drops by 45%, but the main points to note are that the southern bulge discoveries become even more concentrated and there are no northern bulge discoveries. In particular, comparing 2003-2010 with 2011-2015, the dispersion in the $l$ direction in the southern bulge drops by more than a factor two, from $3.21^\circ\pm 0.50^\circ$ to $1.45^\circ\pm 0.20^\circ$. The magenta and black points together show the planets discovered during the three years when the KMT wide-area survey joined the ongoing OGLE and MOA surveys, which is also the first time that the KMT fields shown in the figure become relevant to the immediate discussion. There are several points to note. First, the rate of detection increases by a factor 2.7 relative to the previous two years (or by a factor 1.8 relative to the previous five years). Second, the southern bulge planets become somewhat less concentrated, but still tend to follow the KMT very-high-cadence (numbered in red) and high-cadence (numbered in magenta) fields. In fact, only four out of 24 planets in the southern bulge lie outside of these fields. This should be compared to the 22 blue (2003-2010) points, 11 (half) of which lie outside these fields. Finally, there are eight planets in the northern bulge, all in the four high cadence fields. This history seems to indicate that there is substantial potential for finding microlensing planets in low cadence fields by carrying out aggressive follow-up observations similar to those of the pre-OGLE-IV era. [Precise Lens Characterization From Spectroscopic Followup]{} \[sec:followup\] ------------------------------------------------------------------------------ As shown in Section \[sec:astrometry\], the blend is almost certainly either the lens or its companion and as shown in Section \[sec:dm91\], it is very likely to be the lens. See Figure \[fig:dm91\]. 
Hence, a medium-resolution spectrum of the blend would greatly clarify the nature of the lens in two ways. First, by spectrally typing the blend one could obtain a much better estimate of its mass. Second, if the mass turns out to be, e.g., $M\sim 1.5\,M_\odot$ in line with the results in Table \[tab:phys\], then this would further reduce the probability that the lens is a companion to the blend relative to the 6.6% probability that we derived in Section \[sec:dm91\]. This is because companions to the blend with mass ratio $Q^{-1}\la 0.5$ would then have masses $M\la 0.75\,M_\odot$, which are significantly disfavored by the results of Section \[sec:phys\]. Hence, of order half the probability allowed by Figure \[fig:dm91\] would be eliminated, which would further increase confidence that the blend (now spectrally typed) was the lens. Such a spectrum could be taken immediately. Of course, the source would remain in the aperture for many years, but it is unlikely to contribute much light in the $V$- and $R$-band ranges of the spectrum, as we discussed in Section \[sec:astrometry\]. In addition, the source spectrum is likely to be displaced by many tens of $\kms$ from that of the blend. We thank Christopher Kochanek for providing SMARTS ANDICAM $I/H$ data. AG was supported by AST-1516842 from the US NSF and by JPL grant 1500811. Work by CH was supported by grant 2017R1A4A1015178 of the National Research Foundation of Korea. This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) and the data were obtained at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia. We gratefully acknowledge the use of data from the ESO Public Survey program IDs 179.B-2002 and 198.B-2004 taken with the VISTA telescope, and data products from the Cambridge Astronomical Survey Unit (CASU). D.M. gratefully acknowledges support provided by the Ministry for the Economy, Development and Tourism, Programa Iniciativa Cientifica Milenio grant IC120009, awarded to the Millennium Institute of Astrophysics (MAS), by the BASAL Center for Astrophysics and Associated Technologies (CATA) through grant AFB-170002, and by project Fondecyt No. 1170121. R.K.S. acknowledges support from CNPq/Brazil through projects 308968/2016-6 and 421687/2016-9. J.A-G. acknowledges support by the Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to the Millennium Institute of Astrophysics (MAS). [99]{} Alard, C. & Lupton, R.H., 1998, , 503, 325 Albrow, M. D., Horne, K., Bramich, D. M., et al. 2009, , 397, 2099 An, J.H., & Gould, A. 2001, , 563, L111 Batista, V., Gould, A., Dieters, S. et al. , 529, 102 Bennett, D.P., Bond, I.A., Udalski, A., et al. 2008, , 684, 663 Bennett, D.P., Sumi, T., Bond, I.A., et al. 2012, , 757, 119 Bensby, T., Yee, J.C., Feltzing, S. et al. 2013, , 549A, 147 Bessell, M.S., & Brett, J.M. 1988, , 100, 1134 DePoy, D.L., Atwood, B., Belville, S.R., et al. 2003, SPIE 4841, 827 Duquennoy, A., & Mayor, M. 1991, , 248, 485 Gould, A. 1992, , 392, 442 Gould, A. 1995, , 446, L71 Gould, A. 2000, , 542, 785 Gould, A. & Loeb, A. 1992, , 396, 104 Gould, A., Dong, S., Gaudi, B.S. et al. 2010, , 720, 1073 Groenewegen, M.A.T., 2004, , 353, 903 Kim, S.-L., Lee, C.-U., Park, B.-G., et al. 2016, JKAS, 49, 37 Kim, D.-J., Kim, H.-W., Hwang, K.-H., et al., 2018a, , 155, 76 Koshimoto, N., Udalski, A., Sumi, T., et al. 2014, , 788, 128 Minniti, D., Lucas, P. W., Emerson, J. P., et al.
2010, New Astron., 15, 433 Minniti, D. 2018, in The Vatican Observatory, Castel Gandolfo: 80th Anniversary Celebration (ed. G. Gionti, S.J., & J.-B. Kikwaya Eluo, S.J). Astrophysics and Space Science Proceedings, 51, 63 Nataf, D.M., Gould, A., Fouqué, P. et al. 2013, , 769, 88 Navarro, M.G., Minniti, D., & Contreras-Ramos, R. 2017, , 851, L13 Navarro, M.G., Minniti, D., & Contreras-Ramos, R. 2018, , 865, L5 Navarro, M.G., Minniti, D., & Contreras-Ramos, R. 2019, submitted Paczyński, B. 1986, , 304, 1 Poleski, R., Skowron, J., Udalski, A., et al. 2014a, , 755, 42 Poleski, R. 2016, , 455, 3656 Rattenbury, N.J., Bennett, D.P., Sumi, T., et al. 2015, , 454, 946 Saito, R.K., Hempel, M., Minniti, D., et al. 2012, , 537, A107 Shin, I.-G., Yee, J.C., Skowron, J. et al. 2018, , 863, 23 Shvartzvald, Y., Maoz, D., Udalski, A. et al. 2016, , 457, 4089 Shvartzvald, Y., Bryden, G., Gould, A., et al. 2017, , 153, 61 Shvartzvald, Y., Calchi Novati, S., Gaudi, B.S., et al. 2018, , 857, 8 Skowron, J., Udalski, A., Gould, A. et al. 2011, , 738, 87 Suzuki, D., Udalski, A., Sumi, T., et al. 2014, , 780, 123 Udalski, A., Szymanski, M.K., & Szymanski, G., et al. 2015a, AcA, 65, 1 Woźniak, P. R. 2000, Acta Astron., 50, 421 Yoo, J., DePoy, D.L., Gal-Yam, A. et al. 2004, , 603, 139 (Tables \[tab:ulens\], \[tab:evolve\], and \[tab:phys\] appear here.) [^1]: His formula, derived from a fit to OGLE data, is actually slightly more complicated. [^2]: http://kmtnet.kasi.re.kr/ulens/event/2018/view.php?event=KMT-2018-BLG-0073 [^3]: Formally, the cadence was $\Gamma = 3\,{\rm day}^{-1}$ compared to an average of $\Gamma \sim 1\,{\rm day}^{-1}$ for KMT-2018-BLG-1292. However, these three points were confined to a few hours (see Figure 1 of @ub17001), so the gaps in the data were similar. [^4]: Actually due to Ibn Sahl, circa 984 C.E. [^5]: From $n-1 = 0.05792105/(238.0185 - (\lambda/\mu{\rm m})^{-2}) + 0.0016917/(57.362-(\lambda/\mu{\rm m})^{-2})$, https://refractiveindex.info/?shelf=other&book=air&page=Ciddor . [^6]: However, note that even in this period, six of the 22 planetary events were detected and characterized in pure survey mode: MOA-2007-BLG-192, MOA-bin-1, MOA-2008-BLG-379, OGLE-2008-BLG-092, OGLE-2008-BLG-355, MOA-2010-BLG-353 [@mb07192; @moabin1; @mb08379; @ob08092; @ob08355; @mb10353].
--- abstract: 'Starting from the stochastic thermodynamics description of two coupled underdamped Brownian particles, we showcase and compare three different coarse-graining schemes leading to an effective thermodynamic description for the first of the two particles: Marginalization over one particle, bipartite structure with information flows and the Hamiltonian of mean force formalism. In the limit of time-scale separation where the second particle locally equilibrates, the effective thermodynamics resulting from the first and third approach is shown to capture the full thermodynamics and to coincide with each other. In the bipartite approach, the slow part does not, in general, allow for an exact thermodynamic description as the entropic exchange between the particles is ignored. Physically, the second particle effectively becomes part of the heat reservoir. In the limit where the second particle becomes heavy and thus deterministic, the effective thermodynamics of the first two coarse-graining methods coincides with the full one. The Hamiltonian of mean force formalism however is shown to be incompatible with that limit. Physically, the second particle becomes a work source. These theoretical results are illustrated using an exactly solvable harmonic model.' author: - Tim Herpich - Kamran Shayanfard - Massimiliano Esposito bibliography: - 'bibliography.bib' title: Effective thermodynamics of two interacting underdamped Brownian particles --- Introduction {#sec:introduction} ============ Over the past two decades, stochastic thermodynamics established the tools to formulate thermodynamics for small systems subjected to significant fluctuations and driven far from equilibrium [@seifert2012rpp; @broeck2015physica; @sekimoto2010; @zhang2012pr; @ge2012pr; @jarzynski2010ar; @rao2018njp]. This theory has been successful in various contexts, *e.g.* Brownian particles [@ciliberto2017prx; @proesmans2016prx], electronic systems [@pekkola2013rmp], chemical reaction networks [@rao2016prx; @rao2018jcp], active matter [@cates2017prx; @eichhorn2019prx] and information processing [@parrondo2015np]. In a nutshell, stochastic thermodynamics consistently builds a thermodynamic structure on top of a stochastic process described by master equations [@vandenbroeck2010pre] or Fokker-Planck equations [@vandenbroeck2010pre2], implicitly assuming that the traced out degrees of freedom always stay at equilibrium. In this paper we want to address two apparently distinct questions within the framework of underdamped Fokker-Planck dynamics. First, we want to shed light on the nature of heat and work by understanding how a subset of degrees of freedom from the system can start to behave as a thermal bath or a work source, respectively. For systems characterized by master equations, it was proven that if there is a time-scale separation between the slow and the fast degrees of freedom, the latter equilibrate with respect to the slow coordinates and represent an ideal heat reservoir the slow degrees of freedom are coupled with [@esposito2012pre]. Instead, the conditions under which a subset of degrees of freedom can generate a stochastic driving on the system energies that can be treated as a work source have been identified in Ref. [@verley2014njp]. However, the limit of a smooth deterministic driving of the energies requires a limit that only an underdamped Fokker-Planck equation can provide. We aim therefore at reconsidering these questions in this paper within the framework of underdamped Brownian dynamics. 
Secondly, we want to consider various coarse-graining schemes preserving thermodynamic consistency that have been proposed in the literature [@altaner2012prl; @aurell2012prl; @cuetara2011prb; @wachtel2018njp; @seifert2012prl; @herpich2018prx; @herpich2019pre; @shengentropy2019; @sekimoto2007pre; @parrondo2008pre; @seifert2013jcp; @polettini2017jsm; @speck2015njp; @parrondo2018nc; @kahlen2018jsm; @seifertjsm2018]. In particular, we want to focus on three different well-established approaches that have been considered for stochastic dynamics governed by master equations: First, the most straightforward approach where a subset of states is explicitly coarse-grained and the effective thermodynamics is defined for that reduced dynamics as one formally would for the full dynamics [@esposito2012pre; @bo2014jsp]. Next, an approach based on splitting the full system in two parts resulting in effective second laws for each parts which are modified by a term describing the transfer of mutual information between each parts. This approach provides a convenient framework to describe how a Maxwell demon [@leff1990] mechanism can produce an information flow that is consumed by the system to drive processes against their spontaneous direction [@esposito2014prx; @horowitz2014njp; @horowitz2015jstatmech; @hartrich2014jsm]. Finally, the so-called Hamiltonian of mean force approach which introduces a notion of energy for a system strongly coupled to its environment [@seifert2016prl; @jarzynski2017prx; @strasberg2017pre]. In this paper, we will consider these various coarse-grainings for underdamped Brownian particles and discuss how they are related. As we will see, far from being distinct, the question of the connection between the different coarse-graining schemes will provide us with a good framework to get insight into the nature of heat and work. To achieve these goals, we will consider two coupled underdamped particles as this model already contains the key ingredients to generalize to multiple underdamped particles. Besides interesting formal connections between entropic contributions appearing in the different coarse-graining schemes, we will find that the effective thermodynamics based on marginalization and the Hamiltonian of mean force become equivalent and capture the correct global thermodynamics in the limit of time-scale separation. In this limit, the second particle is so much faster than the first one that it instantaneously relaxes to a local equilibrium corresponding to the coordinates of the first particle. Conversely, the thermodynamics based on the slow part of the bipartite structure does not agree with the full thermodynamics. The mismatch corresponds to the entropic contribution due to the coupling of the second particle. Physically, the coarse-grained particle becomes part of the heat reservoir. Moreover, in the limit where one particle has an exceedingly large mass compared to the other one, we will find that the former becomes a work source acting on the latter. In that case, the effective thermodynamics emerging from the first two coarse-graining schemes, marginalization and bipartite structure, again captures the correct global thermodynamics (at least up to a trivial macroscopic friction term in the work source). In contrast, we will show that the Hamiltonian is incompatible with that limit. These theoretical predictions will be confirmed using an analytically tractable model made up of two linearly coupled harmonic oscillators. The plan of the paper is as follows. In Sec. 
\[sec:singletheory\] the stochastic thermodynamics for both a single underdamped particle and two interacting underdamped particles is formulated. Next, in Sec. \[sec:coarsegraining\] we formulate and compare the three different coarse-graining approaches - marginalization, bipartite perspective and Hamiltonian of mean force - for our underdamped two-particle system. The respective effective thermodynamic description is furthermore compared with the full one in the two aforementioned limits. As an example, we consider an analytically solvable model in Sec. \[sec:example\]. We conclude and provide an outlook on potential future work in Sec. \[sec:conclusion\]. Stochastic Thermodynamics ========================= Single Underdamped Particle {#sec:singletheory} --------------------------- We consider a particle of mass ${m}$ with the phase-space coordinate $\bm{{\Gamma}}=(\bm{{x}},\bm{{v}})^\top \in \mathbb{R}^6$, where $\bm{{x}} \in \mathbb{R}^3$ and $\bm{{v}} \in \mathbb{R}^3$ denote position and velocity of the particle, respectively. The particle moves in a time-dependent potential ${V}(\bm{{x}},t)$, hence its Hamiltonian reads $$\begin{aligned} \label{eq:singleenergydensity} {e}(\bm{{\Gamma}},t) = \frac{{m}}{2} \bm{{v}}^2 + {V}(\bm{{x}},t) .\end{aligned}$$ The particle is furthermore subjected to a generic force $\bm{{g}}(\bm{{\Gamma}},t)$. When the force is conservative, $\bm{{g}}(\bm{{x}},t) ~=~ - {\partial}_{\bm{{x}}} {V}(\bm{{x}},t)$, its associated potential is assumed to not contribute to the Hamiltonian. In order to discriminate $\bm{{g}}$ from the force $-{\partial}_{\bm{\bm{{x}}}} {V}$, the former is called exclusive and the latter referred to as inclusive force. This terminology has been used for instance in Ref. [@Jarzynski07b]. If the force $\bm{{g}}$ is nonconservative, it does not derive from a potential. For generality, and since it will be useful later, we assume that the force may be velocity-dependent, $\bm{{g}}(\bm{{\Gamma}},t)$. The system is coupled to a heat reservoir at inverse temperature ${\beta}$, giving rise to zero-mean delta-correlated Gaussian white noise $$\begin{aligned} \label{eq:noise} \langle \bm{{\eta}}_i(t) \rangle = 0, \quad \langle \bm{{\eta}}_i(t) \bm{{\eta}}_j(t') \rangle = 2 {\xi}\, {\beta}^{-1} \, \delta_{ij} \delta(t-t') ,\end{aligned}$$ for $i,j=1,2,3$. We denote by $\xi$ the friction the particle experiences and set $k_B \equiv 1$ in the following.
Then, the stochastic dynamics of the system is governed by the following Langevin equation $$\begin{aligned} \label{eq:singlelangevin} \begin{pmatrix} \dot{\bm{{x}}} \\ \dot{\bm{{v}}} \end{pmatrix} = \begin{pmatrix} \bm{{v}} \\ \tfrac{1}{{m}} \left[ -{\partial}_{\bm{{x}}} {V}(\bm{{x}},t) + \bm{{g}}(\bm{{\Gamma}},t) - {\xi}\, \bm{{v}} + \bm{{\eta}}(t) \right] \end{pmatrix},\end{aligned}$$ and the equivalent Fokker-Planck equation ruling the time evolution of the probability density ${P}(\bm{{\Gamma}},t)$ reads $$\begin{aligned} \label{eq:singlefokkerplanckone} {\partial}_t \, {P}= - \nabla \cdot ( \bm{\mu} {P}) + \nabla \cdot \big( \bm{D} \cdot \nabla {P}\big) ,\end{aligned}$$ with the drift and diffusion matrices $$\begin{aligned} \bm{\mu} &= \begin{pmatrix} \bm{{v}} \\ \tfrac{1}{{m}} \left[ -{\partial}_{\bm{{x}}} {V}(\bm{{x}},t) + \bm{{g}}(\bm{{\Gamma}},t) - {\xi}\, \bm{{v}} \right] \end{pmatrix} \\ \bm{D}_{ij} &= \frac{{\xi}\, \delta_{i j} }{{\beta}{m}^2} \sum\limits_{n=4}^6 \delta_{i n} \, ,\end{aligned}$$ and the nabla operator $\nabla \equiv ({\partial}_{\bm{{x}}},{\partial}_{\bm{{v}}})^{\top} $. The Fokker-Planck Eq. can be cast into a continuity equation $$\begin{aligned} \label{eq:singlefokkerplanck} {\partial}_t {P}= - \nabla \cdot {\bm{J}}= -\nabla \cdot \left( \bm{L}^{det} + \bm{L}^{diss} \right) {P}.\end{aligned}$$ Here, the probability current ${\bm{J}}$ is split into a deterministic contribution $$\begin{aligned} \label{eq:singlefokkerplanckdeterministiccurrent} \bm{L}^{det} = \begin{pmatrix} \bm{{v}} \\ \tfrac{1}{{m}} \left[ - {\partial}_{\bm{{x}}} {V}(\bm{{x}},t) + \bm{{g}}(\bm{{\Gamma}},t) \right] \end{pmatrix} ,\end{aligned}$$ and a dissipative one $$\begin{aligned} \label{eq:singlefokkerplanckdissipativecurrent} \bm{L}^{diss} = - \frac{{\xi}}{{m}^2} \begin{pmatrix} 0 \\ {m}\bm{{v}} + {\beta}^{-1} \, {\partial}_{\bm{{v}}} \ln {P}\end{pmatrix}.\end{aligned}$$ The average energy of the particle is $$\begin{aligned} \label{eq:singleaverageeneergy} {E}= \int {\mathrm{d}}\bm{{\Gamma}} \; {e}\, {P},\end{aligned}$$ and its rate of change $$\begin{aligned} \label{eq:singlefirstlaw} d_t {E}= \dot{{Q}} + \dot{{W}}, \end{aligned}$$ can be decomposed into a work current $$\begin{aligned} \label{eq:singlework} \dot{{W}} = \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, {\partial}_t {e}+ \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, \bm{{g}} \cdot \bm{{v}} ,\end{aligned}$$ and into a heat current $$\begin{aligned} \label{eq:singleheat} \dot{{Q}} = \int {\mathrm{d}}\bm{{\Gamma}} \; {e}\, {\partial}_t {P}- \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, \bm{{g}} \cdot \bm{{v}} .\end{aligned}$$ Eq. constitutes the first law of thermodynamics ensuring energy conservation [@sekimoto1997ptps]. Using Eq. , the heat current can be written as follows $$\begin{aligned} \label{eq:singleheatcurrent} \dot{{Q}} = - {\xi}\int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, \left(\bm{{v}} + \frac{1}{{\beta}{m}} {\partial}_{\bm{{v}}} \ln {P}\right) \bm{{v}} \, .\end{aligned}$$ The nonequilibrium system entropy associated with the particle at $\bm{{\Gamma}}$ is defined as [@seifert2005prl] $$\begin{aligned} \label{eq:singleentropydensity} {s}(\bm{{\Gamma}}) = -\ln {P},\end{aligned}$$ where the ensemble average coincides with the Shannon entropy $$\begin{aligned} \label{eq:singleentropy} {S}= - \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, \ln {P}. 
\end{aligned}$$ Its time-derivative $$\begin{aligned} {\mathrm{d}}_t {S}&= \int {\mathrm{d}}\bm{{\Gamma}} \, [ \nabla \cdot \bm{L}^{diss} ] \, {P}+ \dot{{I}}_F = {\beta}\dot{{Q}} + \dot{{\Sigma}} + \dot{{I}}_F , \label{eq:singleentropybalance}\end{aligned}$$ can be split into the entropy flow from the bath to the system, ${\beta}\dot{{Q}}$, and the entropy production rate $$\begin{aligned} \label{eq:singlesecondlaw} \dot{{\Sigma}} = {\beta}\, {\xi}\int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, \left(\bm{{v}} + \frac{1}{{\beta}{m}} {\partial}_{\bm{{v}}} \ln {P}\right)^2 \geq 0 ,\end{aligned}$$ whose nonnegativity constitutes the second law of thermodynamics. Since it will be useful later, we introduced the notation $$\begin{aligned} \label{eq:singleinformationforce} \dot{{I}}_F \equiv \frac{1}{{m}} \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, {\partial}_{\bm{{v}}} \cdot \bm{{g}} .\end{aligned}$$ Defining the nonequilibrium free-energy density ${f}(\bm{{\Gamma}}) ~=~ {e}(\bm{{\Gamma}}) - {\beta}^{-1} {s}(\bm{{\Gamma}})$, one has for the average nonequilibrium free energy $$\begin{aligned} \label{eq:singlefreenergy} {F}= \int {\mathrm{d}}\bm{{\Gamma}} \, {P}\; {f}= {E}- {\beta}^{-1} {S}.\end{aligned}$$ Eq. allows us to rewrite the work and heat current in Eqs. and as $$\begin{aligned} \label{eq:singleheatworkcurrent} \begin{aligned} \dot{{W}} &= \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, {\partial}_t {f}+ \int {\mathrm{d}}\bm{{\Gamma}} {P}\, \bm{{g}} \cdot \bm{{v}} \\ \dot{{Q}} &= d_t ({F}+ {\beta}^{-1} {S}) - \dot{{W}} , \end{aligned}\end{aligned}$$ and the entropy production rate in Eq. as $$\begin{aligned} \label{eq:singleentropyproduction} \dot{{\Sigma}} = {\beta}( \dot{{W}} - {\mathrm{d}}_t {F}) - \dot{{I}}_F \, \geq 0 .\end{aligned}$$ The additional term $ \dot{{I}}_F $ in Eqs. and illustrates that the presence of the velocity-dependent nonconservative force $\bm{{g}}$ modifies the thermodynamics as noted in Refs. [@quian2004prl; @quian2007pre]. Special Cases {#sec:singlespecialcases} ------------- ### Standard Stochastic Thermodynamics Owing to the velocity-dependence of ${g}(\bm{{\Gamma}},t$), Eqs. and constitute a generalized entropy balance and a generalized second law, respectively. The standard thermodynamic formulation $$\begin{aligned} \label{eq:singlestandardentropybalancesecondlaw} {\mathrm{d}}_t {S}= {\beta}\dot{{Q}} + \dot{{\Sigma}}, \quad \quad {T}\dot{{\Sigma}} = \dot{{W}} - {\mathrm{d}}_t {F}\geq 0 , \end{aligned}$$ is recovered for velocity-independent or nonconservative Lorentz forces, that is forces that are orthogonal to the velocity, ${\partial}_{\bm{\bm{{v}}}} \cdot \bm{{g}} = 0 $. In one dimension, this is only true for velocity-independent forces $ {\partial}_{{v}} \, {g}= 0$. ### Deterministic Limit The dynamics is deterministic if ${\xi}=0$, which physically corresponds to a decoupling of the particle from the thermal reservoir. According to Eq. , one has $\dot{{Q}}=0$ and ${\mathrm{d}}_t {E}= {\mathrm{d}}_t {W}$. It follows furthermore from Eq. that $\dot{{\Sigma}}=0$, hence it holds, using Eq. , that $$\begin{aligned} \label{eq:singleentropybalancedeterministic} {\mathrm{d}}_t {S}= \frac{1}{{m}} \int {\mathrm{d}}\bm{{\Gamma}} \, {P}\; {\partial}_{\bm{{v}}} \cdot \bm{{g}}.\end{aligned}$$ Again, if $\bm{{g}}$ is velocity-independent or a Lorentz force, the deterministic dynamics becomes Hamiltonian and the rate of entropy change is identically zero, ${\mathrm{d}}_t {S}= 0$. In this case the second law is a triviality. 
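As a numerical illustration of the bookkeeping above (this is a one-dimensional sketch, not code from this work), one can integrate the Langevin dynamics with an Euler-Maruyama scheme and verify the first law for a static harmonic potential ${V}(x)=kx^2/2$ with $\bm{{g}}=0$, in which case $\dot{{W}}=0$ and hence ${\mathrm{d}}_t {E}= \dot{{Q}}$. Integrating the ${\partial}_{{v}}\ln{P}$ term of Eq. \[eq:singleheatcurrent\] by parts gives the one-dimensional ensemble estimator $\dot{{Q}}=-{\xi}\big(\langle {v}^2\rangle - 1/({\beta}{m})\big)$, which requires only the sampled velocities; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, window = 100_000, 1e-3, 500          # ensemble size, time step, steps per check
m, xi, beta, k = 1.0, 1.0, 1.0, 1.0
x = rng.normal(0.0, 2.0, N)                 # start away from equilibrium: too broad ...
v = rng.normal(0.0, 2.0, N)                 # ... and too hot

energy = lambda: np.mean(0.5 * m * v**2 + 0.5 * k * x**2)

for w in range(4):
    E0, Q = energy(), 0.0
    for _ in range(window):
        # ensemble heat current, accumulated over the window
        Q += -xi * (np.mean(v**2) - 1.0 / (beta * m)) * dt
        noise = rng.normal(0.0, np.sqrt(2.0 * xi * dt / beta), N)
        v += (-k * x - xi * v) * dt / m + noise / m   # Euler-Maruyama step
        x += v * dt
    print(f"window {w}:  Delta<E> = {energy() - E0:+.4f}   integrated Qdot = {Q:+.4f}")
```

The two columns should agree up to discretization and sampling error; adding a velocity-independent $\bm{{g}}$ would simply add the corresponding work term to the balance.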
### Heavy Particle Finally, we consider the limit where the mass of the particle diverges, ${m}\to \infty$. We suppose that the conservative force scales with the mass, [*i.e.* ]{}$ \mathit{O}({\partial}_{{x}_i} {V}/ {m}) = 1 \, \forall i$, to avoid the trivial case of a particle in a flat potential. If ${\xi}$ and $\bm{{g}}$ are finite, so that ${\xi}/{m}\to 0$ and $\bm{{g}}/{m}\to 0$, one finds using Eqs. , and that $$\begin{aligned} \label{eq:singlesecondlawheavyparticle} \dot{{\Sigma}} = - {\beta}\dot{{Q}} = {\beta}{\xi}\bm{{v}}^2_t \geq 0, \quad {\mathrm{d}}_t {S}= 0,\end{aligned}$$ where $\bm{{v}}_t$ is the solution of the deterministic Eqs. $$\begin{aligned} \label{eq:singleequationofmotionheavyparticle} {\mathrm{d}}_t \bm{{x}}_t = \bm{{v}}_t, \quad {\mathrm{d}}_t \bm{{v}}_t = - \frac{1}{{m}} \left. {\partial}_{\bm{{x}}} {V}(\bm{{x}},t) \right|_{\bm{{x}} = \bm{{x}}_t} .\end{aligned}$$ According to Eq. , the heavy particle corresponds to the limit of macroscopic friction. Two Coupled Underdamped Particles {#sec:twotheory} --------------------------------- We now consider two particles labeled by $i=1,2$ of mass ${m}_i$ with the phase-space coordinate $\bm{{\Gamma}}_i = (\bm{{x}}_i,\bm{{v}}_i)^\top$, as depicted in Fig. \[fig:modelschematics\]. The particles move in a time-dependent potential $$\begin{aligned} \label{eq:twopotential} {V}(\bm{{x}}_1,\bm{{x}}_2,t) = {V}_1(\bm{{x}}_1,t) + {V}_2(\bm{{x}}_2,t) + {V}^{int}_{12} (\bm{{x}}_1,\bm{{x}}_2,t),\end{aligned}$$ that contains the interaction potential ${V}^{int}_{12} (\bm{{x}}_1,\bm{{x}}_2,t)$ and the Hamiltonian therefore reads $$\begin{aligned} \label{eq:twoenergydensity} \begin{aligned} {e}(\bm{{\Gamma}},t) &= \frac{{m}_1}{2} \bm{{v}}_1^2 + \frac{{m}_2}{2} \bm{{v}}_2^2 + {V}(\bm{{x}}_1,\bm{{x}}_2,t) \\ &= \sum_i {e}_i(\bm{{\Gamma}}_i,t) + {V}^{int}_{12} (\bm{{x}}_1,\bm{{x}}_2,t) \, , \end{aligned} \end{aligned}$$ where we denote the bare Hamiltonian of each particle by $ {e}_i(\bm{{\Gamma}}_i,t) = {m}_i \bm{{v}}_i^2 /2 + {V}_i(\bm{{x}}_i,t)$ with $i$=1,2. Moreover, we assume that both particles are subjected to velocity-independent nonconservative forces $\bm{{g}}_i(\bm{{x}}_i,t)$ [^1]. ![On the left, schematics of the two underdamped and via ${V}^{int}_{12}$ interacting particles $\bm{1}$ and $\bm{2}$ that are in contact with heat reservoirs at inverse temperatures ${\beta}_1$ and ${\beta}_2$, respectively, are illustrated. It is furthermore assumed that both particles are subjected to nonconservative forces $\bm{{g}}_i(\bm{{x}}_i)$. The right depicts the coarse-grained description of solely the first particle in the presence of an additional nonconservative force $\bm{{g}}^{(1)}(\bm{{\Gamma}}_1)$ that encodes the interaction with the second particle. \[fig:modelschematics\] ](schematics.pdf) Each of the particles is connected to a heat reservoir at inverse temperature ${\beta}_i$ giving rise to uncorrelated zero-mean Gaussian white noise $$\begin{aligned} \label{eq:twolangevinnoise} \langle \bm{{\eta}}^{(i)}_j(t) \rangle = 0, \quad \langle \bm{{\eta}}_{j}^{(i)}(t) \bm{{\eta}}_{j'}^{(i)}(t') \rangle = 2 \, {\xi}_i {\beta}_i^{-1} \delta_{j,j'} \, \delta(t-t') ,\end{aligned}$$ where ${\xi}_i$ refers to the friction the particle $i$ experiences. The stochastic dynamics of the two-body system is ruled by the following Langevin equation $$\begin{aligned} \label{eq:twolangevinequation} \begin{pmatrix} \dot{\bm{{x}}}_i \\ \dot{\bm{{v}}}_i \end{pmatrix} \!\!=\! 
\begin{pmatrix} \bm{{v}}_i \\ \!\frac{1}{{m}_i} \![ -{\partial}_{\bm{{x}}_i} {V}(\bm{{x}}_1,\bm{{x}}_2,t) \!+\! \bm{{g}}_i(\bm{{x}}_i,t) \!-\! {\xi}_i \bm{{v}}_i \!+\! \bm{{\eta}}^{(i)}(t) ] \! \end{pmatrix} \! ,\end{aligned}$$ and the equivalent Fokker-Planck equation governing the time evolution of the probability density $ {P}(\bm{{\Gamma}},t) $ reads $$\begin{aligned} \label{eq:twofokkerplanckequation} {\partial}_t {P}= - \nabla \cdot {\bm{J}}= - \nabla \cdot \left( \bm{L}^{det} + \bm{L}^{diss} \right) {P},\end{aligned}$$ with $ \nabla = ( {\partial}_{\bm{{x}}_1} , {\partial}_{\bm{{v}}_1} , {\partial}_{\bm{{x}}_2} , {\partial}_{\bm{{v}}_2} )^\top$. The probability current ${\bm{J}}$ can be split into a deterministic part $$\begin{aligned} \label{eq:twoprobabilitycurrentconservative} \bm{L}^{det} = \begin{pmatrix} \bm{{v}}_1 \\ \frac{1}{{m}_1} \left[ - {\partial}_{\bm{{x}}_1} {V}(\bm{{x}}_1,\bm{{x}}_2,t) + \bm{{g}}_1(\bm{{x}}_1,t) \right] \\ \bm{{v}}_2 \\ \frac{1}{{m}_2} \left[ - {\partial}_{\bm{{x}}_2} {V}(\bm{{x}}_1,\bm{{x}}_2,t) + \bm{{g}}_2(\bm{{x}}_2,t) \right] \end{pmatrix} ,\end{aligned}$$ and a dissipative one $$\begin{aligned} \label{eq:twoprobabilitycurrentdissipative} \bm{L}^{diss}= \begin{pmatrix} 0 \\ \frac{-{\xi}_1}{{m}_1^2} ( {m}_1 \bm{{v}}_1 + {\beta}_1^{-1} {\partial}_{\bm{{v}}_1} \ln {P}) \\ 0 \\ \frac{-{\xi}_2}{{m}_2^2} ( {m}_2 \bm{{v}}_2 + {\beta}_2^{-1} {\partial}_{\bm{{v}}_2} \ln {P}) \end{pmatrix}.\end{aligned}$$ The average energy of the system is $$\begin{aligned} \label{eq:twoaverageenergy} {E}= \! \int {\mathrm{d}}\bm{{\Gamma}} \, {e}\, {P},\end{aligned}$$ and the first law of thermodynamics reads $$\begin{aligned} \label{eq:twofirstlaw} {\mathrm{d}}_t {E}= \dot{{Q}} + \dot{{W}} , \end{aligned}$$ with the heat and work current $$\begin{aligned} \label{eq:twoheatdefintion} \dot{{Q}} &= \int {\mathrm{d}}\bm{{\Gamma}} \, {e}\, \dot{{P}} - \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, ( \bm{{g}}_1 \cdot \bm{{v}}_1 + \bm{{g}}_2 \cdot \bm{{v}}_2 ) \\ \label{eq:twowork} \dot{{W}} &= \int {\mathrm{d}}\bm{{\Gamma}} \, \dot{{e}} \, {P}+ \int {\mathrm{d}}\bm{{\Gamma}} \; {P}\, ( \bm{{g}}_1 \cdot \bm{{v}}_1 + \bm{{g}}_2 \cdot \bm{{v}}_2 ) . \end{aligned}$$ Using the Fokker-Planck Eq. , we can write the heat current in terms of additive contributions, $$\begin{aligned} \label{eq:twoheat} \dot{{Q}} = \sum_{i=1}^2 \dot{q}^{(i)}, \; \dot{q}^{(i)} = -{\xi}_i \int {\mathrm{d}}\bm{{\Gamma}} {P}\left( \bm{{v}}_i + \frac{1}{{\beta}_i {m}_i} {\partial}_{\bm{{v}}_i} \ln {P}\right) \bm{{v}}_i .\end{aligned}$$ Like in the single-particle case , the nonequilibrium system entropy is defined as $$\begin{aligned} \label{eq:twoaverageentropy} {S}= - \int {\mathrm{d}}\bm{{\Gamma}} \, {P}\ln {P}, \end{aligned}$$ and the entropy balance is thus given by $$\begin{aligned} \label{eq:twoentropybalance} {\mathrm{d}}_t {S}= \sum_{i=1}^2 {\beta}_i \, \dot{q}^{(i)} + \dot{{\Sigma}} , \end{aligned}$$ where the non-negative entropy production rate $$\begin{aligned} \label{eq:twoentropyproduction} \dot{{\Sigma}} \!=\! \sum_{i=1}^2 \dot{\sigma}^{(i)}, \; \dot{\sigma}^{(i)} \!=\! {\beta}_i \, {\xi}_i \!\! \int \!\! {\mathrm{d}}\bm{{\Gamma}} \, {P}\!\left( \bm{{v}}_i \!+\! \frac{1}{{\beta}_i {m}_i } {\partial}_{\bm{{v}}_i} \ln {P}\! \right)^2 \!\! \geq \! 0 ,\end{aligned}$$ constitutes the second law of thermodynamics. In fact, Eq. formulates a stronger statement: the additive contributions $\dot{\sigma}^{(i)}$ are separately non-negative. 
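Before turning to the coarse-graining, it is instructive to check this two-reservoir bookkeeping numerically. The sketch below (an illustration with arbitrary parameters, not the harmonic model analyzed in Sec. \[sec:example\]) integrates two linearly coupled one-dimensional particles attached to baths at different temperatures. Integrating the ${\partial}_{{v}_i}\ln {P}$ terms of Eq. \[eq:twoheat\] by parts gives the estimators $\dot{q}^{(i)}=-{\xi}_i\big(\langle {v}_i^2\rangle - 1/({\beta}_i {m}_i)\big)$; in the steady state the two currents should sum to zero, heat should flow from the hot to the cold reservoir, and the entropy production rate implied by Eq. \[eq:twoentropybalance\] should be positive.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, burn, steps = 20_000, 1e-3, 10_000, 10_000
m1, m2, xi1, xi2 = 1.0, 1.0, 1.0, 1.0
T1, T2 = 2.0, 1.0                              # bath temperatures (k_B = 1)
k, kint = 1.0, 0.5                             # trap and interaction spring constants
x1, v1 = rng.normal(0, 1, N), rng.normal(0, 1, N)
x2, v2 = rng.normal(0, 1, N), rng.normal(0, 1, N)

q1 = q2 = 0.0
for step in range(burn + steps):
    # forces from V = k (x1^2 + x2^2)/2 + kint (x1 - x2)^2 / 2
    f1 = -k * x1 - kint * (x1 - x2)
    f2 = -k * x2 - kint * (x2 - x1)
    v1 += (f1 - xi1 * v1) * dt / m1 + rng.normal(0, np.sqrt(2 * xi1 * T1 * dt), N) / m1
    v2 += (f2 - xi2 * v2) * dt / m2 + rng.normal(0, np.sqrt(2 * xi2 * T2 * dt), N) / m2
    x1 += v1 * dt
    x2 += v2 * dt
    if step >= burn:                           # accumulate steady-state heat currents
        q1 += -xi1 * (np.mean(v1**2) - T1 / m1) / steps
        q2 += -xi2 * (np.mean(v2**2) - T2 / m2) / steps

sigma = -(q1 / T1 + q2 / T2)                   # steady-state entropy production rate
print(f"qdot_1 = {q1:+.4f}   qdot_2 = {q2:+.4f}   sum = {q1 + q2:+.4f}   Sigma = {sigma:+.4f}")
```

With $T_1>T_2$ one finds $\dot q^{(1)}>0$ and $\dot q^{(2)}\simeq-\dot q^{(1)}$, so that $\dot{{\Sigma}}\simeq\dot q^{(1)}(1/T_2-1/T_1)\geq 0$, consistent with the second law stated above.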
Coarse graining {#sec:coarsegraining} =============== Effective Dynamics ------------------ We now shift our attention to the first particle alone. This formally amounts to integrating the Fokker-Planck Eq. over the coordinates of the second particle $\bm{{\Gamma}}_2 ~=~ (\bm{{x}}_2,\bm{{v}}_2)$ such that we obtain the marginalized probability distribution of particle one, $ {P}_1 \equiv \int {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}$, that satisfies the following effective Fokker-Planck equation $$\begin{aligned} \label{eq:twocoarsegrainedfokkerplanckequation} {\partial}_t {P}_1 &= - \nabla_1 \cdot {\bm{J}}_1 = - \nabla_1 \cdot \left( \bm{L}^{det}_1 + \bm{L}^{diss}_1 \right) {P}_1 ,\end{aligned}$$ with $ \nabla_1 = ( {\partial}_{\bm{{x}}_1} , {\partial}_{\bm{{v}}_1} )^\top$. The marginal probability current ${\bm{J}}_1$ can be split into a deterministic part $$\begin{aligned} \label{eq:twoprobabilitycurrentconservative} \bm{L}^{det}_1 = \begin{pmatrix} \bm{{v}}_1 \\ \frac{1}{{m}_1} \left[ - {\partial}_{\bm{{x}}_1} {V}_1(\bm{{x}}_1,t) + \bm{{g}}_1(\bm{{x}}_1,t) + \bm{{g}}^{(1)}(\bm{{\Gamma}}_1,t) \right] \end{pmatrix} ,\end{aligned}$$ and a dissipative one $$\begin{aligned} \label{eq:twoprobabilitycurrentdissipative} \bm{L}^{diss}_1 = \begin{pmatrix} 0 \\ \frac{-{\xi}_1}{{m}_1^2} ( {m}_1 \bm{{v}}_1 + {\beta}_1^{-1} {\partial}_{\bm{{v}}_1} \ln {P}_1 ) \end{pmatrix} .\end{aligned}$$ By comparison with the exact single-particle Fokker-Planck Eq. , we note that the coarse-graining of the second particle encodes the interaction between the two particles in the effective and nonconservative force imposed on particle one $$\begin{aligned} \label{eq:twonoconservativeforce} \bm{{g}}^{(1)}(\bm{{\Gamma}}_1,t) = - \int {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}_{2|1}(\bm{{\Gamma}},t) \, {\partial}_{\bm{{x}}_1} {V}^{int}_{12} (\bm{{x}}_1,\bm{{x}}_2,t).\end{aligned}$$ We note that the evolution Eq. is not closed since $\bm{{g}}^{(1)}$ depends on ${P}_{2|1}$. Thus, solving the effective Fokker-Planck Eq. is as difficult as treating the full Fokker-Planck Eq. . Effective Thermodynamics {#sec:coarsegrainingthermo} ------------------------ ### Marginalization In the following, we attempt to formulate a consistent thermodynamic description for this reduced dynamics. Naively, it is tempting to use the single-particle expressions of Sec. \[sec:singletheory\] as an educated guess for the reduced dynamics. In this case, the naive entropy balance reads $$\begin{aligned} \label{eq:twocoarsegrainedentropybalanceexclusivenaive} {\mathrm{d}}_t {S}_{\bm{1}} = {\beta}_1 \dot{q}^{(1)} + \dot{{\Sigma}}^{(1)} + \dot{{I}}_F^{(1)} \, , \end{aligned}$$ where we use the notation from Eq. , $$\begin{aligned} \label{eq:twocoarsegrainedinformationforce} \dot{{I}}_F^{(1)} \equiv \frac{1}{{m}_1} \int {\mathrm{d}}\bm{{\Gamma}}_1 \, {P}_1 \, {\partial}_{\bm{{v}}_1} \cdot \bm{{g}}^{(1)} ,\end{aligned}$$ and denote the single-particle Shannon entropy by $$\begin{aligned} \label{eq:singleparticleshannonentropy} {S}_{\bm{1}} = - \int {\mathrm{d}}\bm{{\Gamma}}_1 \, {P}_1 \, \ln {P}_{1} ,\end{aligned}$$ which implies for the non-negative effective entropy production rate $$\begin{aligned} \label{eq:twocoarsegrainedentropyproductionexclusive} \dot{{\Sigma}}^{(1)} = {\beta}_1 {\xi}_1 \! \int {\mathrm{d}}\bm{{\Gamma}}_1 \, {P}_1 \!
\left( \bm{{v}}_1 + \frac{1}{{\beta}_1 {m}_1} {\partial}_{\bm{{v}}_1} \ln {P}_1 \right)^2 \geq 0 .\end{aligned}$$ For reasons that will become clear soon, we however define the effective entropy balance as follows, $$\begin{aligned} \label{eq:twocoarsegrainedentropybalanceexclusive} {\mathrm{d}}_t {S}= {\beta}_1 \dot{{Q}}^{(1)} + \dot{{\Sigma}}^{(1)} + \dot{{I}}_F^{(1)} \, , \end{aligned}$$ where the effective heat $$\begin{aligned} \label{eq:twocoarsegrainedheatexclusive} \dot{{Q}}^{(1)} = \dot{q}^{(1)} + {\beta}_1^{-1} {S}_{\bm{2}|\bm{1}} ,\end{aligned}$$ is supplemented by the conditional Shannon entropy $$\begin{aligned} \label{eq:conditionalshannonentropy} {S}_{\bm{2}|\bm{1}} = {S}- {S}_{\bm{1}} = - \int {\mathrm{d}}\bm{{\Gamma}}_1 \, {P}_1 \, \int {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}_{2|1} \, \ln {P}_{2|1} .\end{aligned}$$ The difference between the full and effective heat current can be written as $$\begin{aligned} \label{eq:twoheatdifferenceexclusive} \dot{{Q}} - \dot{q}^{(1)} = \dot{q}^{(2)} - {\beta}_1^{-1} {S}_{\bm{2}|\bm{1}} .\end{aligned}$$ Moreover, the difference between the full and the effective entropy production rate is given by $$\begin{aligned} \label{eq:twoentropyproductiondifferenceexclusive} \dot{{\Sigma}} - \dot{{\Sigma}}^{(1)} = \int {\mathrm{d}}\bm{{\Gamma}}_1 \, {P}_1 \, \mathbb{\dot{{\Sigma}}}_1 ,\end{aligned}$$ with the internal entropy production rate kernel $$\begin{aligned} \label{eq:twoentropyproductiondifferenceexclusivenotation} \mathbb{\dot{{\Sigma}}}_1 = \mathbb{\dot{{\Sigma}}}_1' + \mathbb{\dot{{\Sigma}}}_1'' ,\end{aligned}$$ that can be split in the following two non-negative contributions $$\begin{aligned} \label{eq:twoentropyproductiondifferenceexclusivenotationone} \mathbb{\dot{{\Sigma}}}_1' &= {\beta}_2 \, {\xi}_2 \! \int \! {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}_{2|1} \! \left( \bm{{v}}_2 \!+\!\! \frac{1}{{\beta}_2 \, {m}_2} {\partial}_{\bm{{v}}_2} \ln {P}_{2|1} \! \right)^2 \geq 0 \\ \mathbb{\dot{{\Sigma}}}_1'' &= \frac{{\xi}_1}{{\beta}_1 {m}_1^2} \int {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}_{2|1} \big( {\partial}_{\bm{{v}}_1} \ln {P}_{2|1} \big)^2 \geq 0 . \label{eq:twoentropyproductiondifferenceexclusivenotationtwo}\end{aligned}$$ The first contribution $\mathbb{\dot{{\Sigma}}}_1'$ is the entropy production rate of the second particle if the coordinates of the first one are fixed, see Eq. . Conversely, the second contribution $\mathbb{\dot{{\Sigma}}}_1''$ can be viewed as a contribution to the entropy production rate due to the correlation of the particles as we will see in Eq. . An equivalent decomposition to Eq. for Markovian master equations was found in Ref. [@esposito2012pre]. From the last two equations we deduce that the effective entropy production (rate) always underestimates the physical one $$\begin{aligned} \label{eq:entropyproductionraterelationpartial} \dot{{\Sigma}} \geq \dot{{\Sigma}}^{(1)} .\end{aligned}$$ It is important to note that at this general level it is impossible to fully capture the full thermodynamics solely in terms of properties of the reduced dynamics. The missing contributions require knowledge about the conditional probability ${P}_{2|1}$. ### Bipartite System A second approach to formulate an effective thermodynamics is provided by a bipartite system where the two-particle system is split into two single-particle subsystems. The effective entropic expressions in both subsystems are defined in the same formal way as one would for a single particle. 
Subsequently, the sum of the effective entropy balances in both subsystems is compared with the full one of the two-particle system in order to identify the so-called information flows exchanged between the subsystems. Physically, a bipartite system provides a simple and convenient representation of a Maxwell’s demon since the thermodynamic cost of the latter becomes fully accessible [@esposito2014prx; @horowitz2015jstatmech; @hartrich2014jsm]. Mathematically, the bipartite structure identifies the non-additive contributions of the full thermodynamic quantities for the two particles. We first note that the additive contributions to the two-particle heat current can be rewritten in terms of marginalized probabilities only as follows $$\begin{aligned} \label{eq:twoheatrewritten} \dot{q}^{(i)} = -{\xi}_i \int {\mathrm{d}}\bm{{\Gamma}}_i {P}_i \left( \bm{{v}}_i + \frac{1}{{\beta}_i {m}_i} {\partial}_{\bm{{v}}_i} \ln {P}_i \right) \bm{{v}}_i ,\end{aligned}$$ where the marginal probability ${P}_2$ is obtained analogously to ${P}_1$, that is by marginalizing the two-point probability ${P}$ over $\bm{{\Gamma}}_1$. Using the last Eq. along with Eqs. and , we see that the following relation holds, $$\begin{aligned} \label{eq:twoheatadditivehamiltonian} \dot{q}^{(i)} = \int {\mathrm{d}}\bm{{\Gamma}}_i \, {e}_i \, \dot{{P}_i} - \int {\mathrm{d}}\bm{{\Gamma}}_i \; {P}_i \; \bm{{v}}_i \cdot \left( \bm{{g}}_i + \bm{{g}}^{(i)} \right) ,\end{aligned}$$ with the nonconservative force $\bm{{g}}^{(2)}$ $$\begin{aligned} \label{eq:twonoconservativeforcetwo} \bm{{g}}^{(2)}(\bm{{\Gamma}}_2,t) = - \int {\mathrm{d}}\bm{{\Gamma}}_{1} \, {P}_{1|2}(\bm{{\Gamma}},t) \; {\partial}_{\bm{{x}}_2} {V}^{int}_{12} (\bm{{x}}_1,\bm{{x}}_2,t) .\end{aligned}$$ Conversely, the additive contributions $\dot{\sigma}^{(i)}$ to the entropy production rate in Eq. cannot be represented by marginal distributions only. Therefore, the entropy-balance equations for the subsystems of the bipartite system cannot be expressed in terms of their associated degrees of freedom only. We proceed by deriving the non-additive contribution to the entropy and identifying it as the information flow. To this end, we first define the relative entropy (or Kullback-Leibler divergence) as a statistical measure of the distance between the distributions ${P}$ and ${P}_1 {P}_2$ as follows $$\begin{aligned} \label{eq:mutualinformation} {I}= D[{P}\, || \, {P}_1 {P}_2] = \int {\mathrm{d}}\bm{{\Gamma}} \, {P}\ln \frac{{P}}{{P}_1 {P}_2} \geq 0 , \end{aligned}$$ whose non-negativity readily follows from the inequality $\ln {P}\leq {P}-1$. From Eqs. and it follows that the relative entropy is the non-additive part of the two-particle system entropy, [*i.e.* ]{}$$\begin{aligned} \label{eq:mutualinformationtwo} {I}= {S}_{\bm{1}} + {S}_{\bm{2}} - {S}.\end{aligned}$$ Physically, this quantity corresponds to the mutual information, a measure of correlations that quantifies how much one system knows about the other. If ${I}$ is large, the two systems are highly correlated, whereas small values of ${I}$ imply that the two systems know little about each other.
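For Gaussian states, such as those appearing in the exactly solvable example of Sec. \[sec:example\], the mutual information can be evaluated in closed form from the phase-space covariance matrix via the standard identity for Gaussian differential entropies. The short Python sketch below is purely illustrative; the covariance entries are assumed numbers, not results from the text.

```python
import numpy as np

def gaussian_mutual_information(cov, idx_1, idx_2):
    """I = S_1 + S_2 - S for a joint Gaussian; S = (1/2) ln[(2*pi*e)^d det(cov)],
    and the (2*pi*e)^d prefactors cancel in this combination."""
    c1 = cov[np.ix_(idx_1, idx_1)]
    c2 = cov[np.ix_(idx_2, idx_2)]
    return 0.5 * (np.linalg.slogdet(c1)[1]
                  + np.linalg.slogdet(c2)[1]
                  - np.linalg.slogdet(cov)[1])

# Phase-space ordering (x1, v1, x2, v2); weak positional and velocity correlations.
cov = np.array([[1.0, 0.0, 0.3, 0.0],
                [0.0, 1.0, 0.0, 0.1],
                [0.3, 0.0, 1.0, 0.0],
                [0.0, 0.1, 0.0, 1.0]])
print(gaussian_mutual_information(cov, [0, 1], [2, 3]))   # ~0.05
```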
The time-derivative of the mutual information $$\begin{aligned} \label{eq:mutualinformationderivative} {\mathrm{d}}_t {I}= \dot{{I}}^{(2 \rightarrow 1)} + \dot{{I}}^{(1 \rightarrow 2)} ,\end{aligned}$$ can be split into two directional information flows $$\begin{aligned} \label{eq:mutualinformationderivativeone} \dot{{I}}^{(2 \rightarrow 1)} &= \int {\mathrm{d}}\bm{{\Gamma}}_1 {P}_1 \; \left( \frac{1}{{m}_1} {\partial}_{\bm{{v}}_1} \cdot \bm{{g}}^{(1)} - \mathbb{\dot{{\Sigma}}}_1'' \right) \\ \dot{{I}}^{(1 \rightarrow 2)} &= \int {\mathrm{d}}\bm{{\Gamma}}_2 {P}_2 \; \left( \frac{1}{{m}_2} {\partial}_{\bm{{v}}_2} \cdot \bm{{g}}^{(2)} - \mathbb{\dot{{\Sigma}}}_2'' \right) , \label{eq:mutualinformationderivativetwo}\end{aligned}$$ where we used Eqs. and in the first equation. In the second equation we used Eq. and introduced the integral kernel specifying the difference between the full and the effective entropy production rate for the second particle, $$\begin{aligned} \label{eq:twoentropyproductiondifferenceexclusivetwo} \dot{{\Sigma}} - \dot{{\Sigma}}^{(2)} = \int {\mathrm{d}}\bm{{\Gamma}}_{2} \, {P}_{2} \, \mathbb{\dot{{\Sigma}}}_{2} = \int {\mathrm{d}}\bm{{\Gamma}}_{2} \, {P}_{2} ( \mathbb{\dot{{\Sigma}}}_{2}' + \mathbb{\dot{{\Sigma}}}_{2}'') , \end{aligned}$$ with $$\begin{aligned} \label{eq:twoentropyproductiondifferenceexclusivenotationonetwo} \mathbb{\dot{{\Sigma}}}_{2}' &= {\beta}_1 \, {\xi}_1 \int {\mathrm{d}}\bm{{\Gamma}}_{1} \, {P}_{1|2} \left( \bm{{v}}_1 + \frac{1}{{\beta}_1 \, {m}_1} {\partial}_{\bm{{v}}_1} \ln {P}_{1|2} \right)^2 \geq 0 \\ \mathbb{\dot{{\Sigma}}}_{2}'' &= \frac{{\xi}_2}{{\beta}_2 {m}_2^2} \int {\mathrm{d}}\bm{{\Gamma}}_{1} \, {P}_{1|2} \big( {\partial}_{\bm{{v}}_2} \ln {P}_{1|2} \big)^2 \geq 0 . \label{eq:twoentropyproductiondifferenceexclusivenotationtwotwo}\end{aligned}$$ The directional information flows can be interpreted as follows: When $\dot{{I}}^{(i \rightarrow j)} > 0$, the dynamics of particle $j$ increases the mutual information and thus the correlations between the two particles. In other words, $j$ is learning about $i$ and vice versa. Conversely, $\dot{{I}}^{(i \rightarrow j)} < 0$ corresponds to decreasing correlations between the two particles due to the evolution of particle $j$, which can be interpreted as either information erasure or the conversion of information into energy [@esposito2014prx]. We furthermore point out that a positive directional information flow indicates that its force contribution $$\begin{aligned} \label{eq:mutualinformationforce} \dot{{I}}^{(i \rightarrow j)}_F &\equiv \frac{1}{{m}_j} \int {\mathrm{d}}\bm{{\Gamma}}_j \, {P}_j \; {\partial}_{\bm{{v}}_j} \cdot \bm{{g}}^{(j)} , \intertext{dominates its entropic part} \dot{{I}}^{(i \rightarrow j)}_{{S}} &\equiv - \int {\mathrm{d}}\bm{{\Gamma}}_j \, {P}_j \; \mathbb{\dot{{\Sigma}}}_j'' \label{eq:mutualinformationentropy} ,\end{aligned}$$ since the latter is non-positive according to Eq. . Various other interpretations of these mutual information flows have been discussed in the literature [@schreiber2000measuring; @allahverdyan2009thermodynamic; @liang2005information; @majda2007information; @barato2013pre; @seifert2017prl]. An inspection of Eq. 
reveals that the force contribution of the information flow, $\dot{{I}}^{(i \rightarrow j)}_F$, is the additional term that enters in the effective entropy balance due to the velocity-dependent nonconservative force $\bm{{g}}^{(j)}$, $$\begin{aligned} {\mathrm{d}}_t {S}_{\bm{j}} = {\beta}_j \dot{q}^{(j)} + \dot{{\Sigma}}^{(j)} + \dot{{I}}^{(i \rightarrow j)}_F . \label{eq:twocoarsegrainedentropybalancemutualinformationtwo} \end{aligned}$$ Using Eq. , we furthermore find that the difference between the effective and the additive contribution to the two-particle entropy production rate corresponds to the entropic part of the information flow, $$\begin{aligned} \label{eq:twoentropyproductionrewritten} \dot{{I}}^{(i \rightarrow j)}_{{S}} = \dot{{\Sigma}}^{(j)} - \dot{\sigma}^{(j)} .\end{aligned}$$ The last two equations stipulate the following effective entropy balance equation for particle $j$, $$\begin{aligned} \label{eq:twocoarsegrainedentropybalancemutualinformation} {\mathrm{d}}_t {S}_{\bm{j}} = {\beta}_j \dot{q}^{(j)} + \dot{\sigma}^{(j)} + \dot{{I}}^{(i \rightarrow j) } .\end{aligned}$$ It is important to note that Eq. states that the directional information flows are the non-additive quantities entering in the effective entropy balance. We emphasize that Eq. is the underdamped Fokker-Planck analogue of the result found for master equations in Ref. [@esposito2014prx]. Moreover, using Eqs. and , it holds that $$\begin{aligned} \int {\mathrm{d}}\bm{{\Gamma}}_i \, {P}_i \, \dot{\mathbb{{\Sigma}}}_i' = \dot{\sigma}^{(j)} = \dot{{\Sigma}}^{(j)} - \dot{{I}}^{(i \rightarrow j)}_{{S}} ,\end{aligned}$$ which because of Eq. implies that $$\begin{aligned} \label{eq:entropyproductioncomparison} \dot{{\Sigma}} = \dot{{\Sigma}}^{(1)} + \dot{{\Sigma}}^{(2)} - \dot{{I}}^{(2 \rightarrow 1)}_{{S}} - \dot{{I}}^{(1 \rightarrow 2)}_{{S}},\end{aligned}$$ An identical result for bipartite master equations was found in Ref. [@esposito2014jstatmech] and recently for the more general case of systems undergoing a quantum dynamics formulated in terms of a density matrix, where the generator is additive with respect to the reservoirs [@esposito2019prl]. ### Hamiltonian of Mean Force Finally, we present a third approach to define an effective thermodynamics for the reduced dynamics of particle $\bm{1}$ in Fig. \[fig:modelschematics\], where we set ${\beta}_{1,2} = {\beta}$ and $\bm{{g}}_{2}=0$. For reasons that will become clear soon, we furthermore consider an explicitly time-independent bare Hamiltonian of the second particle $ {\partial}_t {e}_2 = {\partial}_t {V}_2 = 0$. As we will see, for this approach only a specific class of initial conditions can be considered. The key concept is the so-called Hamiltonian of mean force (HMF), originally utilized in equilibrium thermostatics [@kirkwood1935jcp], which defines an effective energy for particle $\bm{1}$ that accounts for the strong coupling [@jarzynski2017prx] to the second particle $\bm{2}$. Using it, this approach attempts to overcome the problem identified in the context of Eq. that there is *a priori* no systematic way to embed the global energetics into the reduced dynamics. The marginal of the global (Gibbs) equilibrium distribution over the second particle can be expressed as $$\begin{aligned} \label{eq:marginal1HMF} {P}_1^{hmf} \!=\!\! \int \!\! {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}^{eq} \!=\!\! \int \!\! {\mathrm{d}}\bm{{\Gamma}}_2 \, {\mathrm{e}}^{-{\beta}({e}- {F}^{eq} ) } \!=\! 
{\mathrm{e}}^{- {\beta}({H}^{hmf} - {F}_{hmf}^{eq} ) } ,\end{aligned}$$ where we introduced the effective free energy ${F}_{hmf}^{eq}$ of particle one, which is defined as the difference between the full equilibrium free energy $$\begin{aligned} \label{eq:equilibriumfreeenergy} {F}^{eq} = - \frac{1}{{\beta}} \, \ln \int {\mathrm{d}}\bm{{\Gamma}} \; {\mathrm{e}}^{-{\beta}{e}} ,\end{aligned}$$ and that of the second particle $$\begin{aligned} \label{eq:equilibriumfreeenergytwo} {F}^{eq}_{\bm{2}} = - \frac{1}{{\beta}} \, \ln \int {\mathrm{d}}\bm{{\Gamma}}_2 \; {\mathrm{e}}^{-{\beta}{e}_2 } ,\end{aligned}$$ that is ${F}_{hmf}^{eq} = {F}^{eq} - {F}_{\bm{2}}^{eq}$. Consequently, the HMF is defined as $$\begin{aligned} \label{eq:hmfdefinition} {H}^{hmf} \equiv {e}_1 - {\beta}^{-1} \ln \langle {\mathrm{e}}^{- {\beta}{V}^{int}_{12} } \rangle^{eq}_{\bm{2}} .\end{aligned}$$ We denote by $\langle \cdot \rangle^{eq}_{\bm{2}}$ and $\langle \cdot \rangle^{eq}$ an ensemble average over the equilibrium distribution of particle two, ${P}_2^{eq} ~=~ \exp[-{\beta}({e}_2 - {F}^{eq}_{\bm{2}})]$, and over the global equilibrium distribution, respectively. The conditional equilibrium distribution ${P}_{2|1}^{eq}$ is obtained by dividing the global (Gibbs) equilibrium distribution by the marginal one in Eq. $$\begin{aligned} \label{eq:equilibriumconditionalprobability} {P}_{2|1}^{eq} = \frac{{P}^{eq}}{{P}_1^{hmf}} = {\mathrm{e}}^{- {\beta}\left( {e}- {F}_{\bm{2}|1}^{eq} \right) } ,\end{aligned}$$ where the free-energy landscape of particle one for a conditionally equilibrated particle two is $$\begin{aligned} \label{eq:freeenergylandscape} {F}_{\bm{2}|1}^{eq} = {e}_1 - {\beta}^{-1} \ln \langle {\mathrm{e}}^{- {\beta}{V}^{int}_{12} } \rangle^{eq}_{\bm{2}} + {F}_{\bm{2}}^{eq} = {H}^{hmf} + {F}_{\bm{2}}^{eq} .\end{aligned}$$ It is noteworthy that ${F}_{\bm{2}|1}^{eq}$ is parametrically time-dependent, whereas ${F}_{\bm{2}}^{eq}$ has no time-dependence due to the choice of a time-independent Hamiltonian ${e}_2$. Eq. shows that up to ${F}_{\bm{2}}^{eq}$, the HMF is equal to the free energy that the locally equilibrated second particle generates for given coordinates of the first particle. Furthermore, we note the standard equilibrium identities $$\begin{aligned} {F}^{eq}_{\bm{2}|1} &= {E}^{eq}_{\bm{2}|1} -\! {\beta}^{-1} {S}^{eq}_{\bm{2}|1} ,\label{eq:conditionalfreeenergykernelfast} \\ {E}^{eq}_{\bm{2}|1} &= {\partial}_{{\beta}} ({\beta}{F}_{\bm{2}|1}^{eq} ) = \int {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}_{2|1}^{eq} \, {e}\label{eq:conditionalenergykernelfast} \\ {S}^{eq}_{\bm{2}|\bm{1}} &= {\beta}^2 {\partial}_{{\beta}} {F}_{\bm{2}|1}^{eq} = - \int {\mathrm{d}}\bm{{\Gamma}}_2 \, {P}_{2|1}^{eq} \, \ln {P}_{2|1}^{eq} \label{eq:conditionalshannonentropykernelfast} ,\end{aligned}$$ which, using Eq. , can be rewritten as $$\begin{aligned} {E}^{eq}_{\bm{2}|1} &= {\partial}_{{\beta}} \big[ {\beta}\big( {H}^{hmf} + {F}_{\bm{2}}^{eq} \big) \big] \label{eq:conditionalenergykernelfastrewritten} \\ {S}^{eq}_{\bm{2}|\bm{1}} &= {\beta}^2 \, {\partial}_{{\beta}} \big( {H}^{hmf} + {F}_{\bm{2}}^{eq} \big) \label{eq:conditionalshannonentropykernelfastrewritten} .\end{aligned}$$ Inspired by [@seifert2016prl], we employ the HMF and its derived quantities in Eqs.
and and average them over arbitrary *nonequilibrium* probabilities for particle one, [*i.e.* ]{}$$\begin{aligned} \label{eq:hmffirstlaw} {E}^{hmf} (t) = \langle {\partial}_{{\beta}} ( {\beta}\, {H}^{hmf} ) \rangle (t) ,\end{aligned}$$ and $$\begin{aligned} \label{eq:hmfentropy} {S}^{hmf}(t) &\equiv {S}_{\bm{1}}(t) + {\beta}^{2} \langle {\partial}_{{\beta}} \, {H}^{hmf} \rangle (t) ,\end{aligned}$$ where $\langle \cdot \rangle(t)$ refers to an ensemble average over a generic *nonequilibrium* distribution ${P}(t)$. We note that the definition of the entropy also includes the single-particle Shannon entropy of particle one in addition to the contribution that stems from the HMF. Choosing a definition of work that coincides with the global one , $$\begin{aligned} \label{eq:hmfwork} {W}^{hmf}(t) \equiv \int\limits_0^t {\mathrm{d}}t' \, \Big[ \, \langle \dot{{e}} \rangle (t') + \Big( \int {\mathrm{d}}\bm{{\Gamma}}_1 \; {P}_1 \; \bm{{v}}_1 \cdot \bm{{g}}_{1} \Big)(t') \Big] , \end{aligned}$$ the first law of thermodynamics imposes the following definition for heat $$\begin{aligned} \label{eq:hmfheatrewritten} {Q}^{hmf}(t) \! = \! - {W}(t) \!+\! \langle {\partial}_{{\beta}} ( {\beta}\, {H}^{hmf} ) \rangle (t) \!-\! \langle {\partial}_{{\beta}} ( {\beta}\, {H}^{hmf} ) \rangle (0) .\end{aligned}$$ Defining the nonequilibrium free energy to be of the same form as in the standard equilibrium case , $$\begin{aligned} \label{eq:hmffreeenergy} {F}^{hmf}(t) = {E}^{hmf}(t) - \frac{{S}^{hmf}(t)}{{\beta}} = \langle {H}^{hmf} \rangle (t) - \frac{{S}_{\bm{1}}(t)}{{\beta}} ,\end{aligned}$$ we can rewrite the entropy balance $$\begin{aligned} \label{eq:hmfsecondlaw} \Delta {S}^{hmf}(t) = {\beta}{Q}^{hmf}(t) + {\Sigma}^{hmf}(t) , \end{aligned}$$ in the form of a second law of thermodynamics as follows $$\begin{aligned} \label{eq:hmffreeenergysecondlaw} {\Sigma}^{hmf}(t) = {\beta}\big[ {W}(t) - \Delta {F}^{hmf}(t) \big] \geq 0. \end{aligned}$$ In order to prove the non-negativity of this definition for the entropy production [@seifert2016prl; @strasberg2017pre], an initial condition of the form $$\begin{aligned} \label{eq:hmfconditionalprobability} {P}(0) = {P}_{1}(0) \, {P}_{2|1}^{eq} = {P}_{1}(0) \, {\mathrm{e}}^{- {\beta}\left( {e}- {H}^{hmf} - {F}_{\bm{2}}^{eq} \right) } ,\end{aligned}$$ is required. Indeed, using Eqs. and , we have $$\begin{aligned} {\Sigma}^{hmf}(t) - {\Sigma}(t) = {\beta}\left( \Delta {F}- \Delta {F}^{hmf}(t) \right) .\end{aligned}$$ Due to the special choice for the initial condition , Eqs. and are valid at $t=0$ so that $$\begin{aligned} \label{eq:hmffreeenergytss} {F}(0) - {F}^{hmf}(0) = {F}_{\bm{2}}^{eq} . \end{aligned}$$ At later times, Eqs. and are no longer valid and we need to resort to the definitions and to obtain $$\begin{aligned} {F}(t) \!-\! {F}^{hmf}(t) \!=\! \langle {e}\rangle(t) \!-\! \langle {H}^{hmf} \! \rangle(t) \!+\! {\beta}^{-1} [ {S}_{\bm{1}}(t) \!-\! 
{S}(t) ] .\end{aligned}$$ Since the HMF can also be expressed as $$\begin{aligned} \langle {H}^{hmf} \rangle(t) = \langle {e}\rangle(t) + {\beta}^{-1} \langle \ln {P}_{2|1}^{eq} \rangle - {F}_{\bm{2}}^{eq} ,\end{aligned}$$ we have $$\begin{aligned} {F}(t) - {F}^{hmf}(t) = {F}_{\bm{2}}^{eq} + {\beta}^{-1} \Big\langle \ln \frac{{P}(t)}{{P}_{2|1}^{eq} \, {P}_{1}(t)} \Big\rangle , \end{aligned}$$ and finally arrive at $$\begin{aligned} \label{eq:entropyproductionraterelationpartialtwo} {\Sigma}^{hmf}(t) - {\Sigma}(t) = D[{P}(t) \, || \, {P}_{2|1}^{eq} {P}_{1}(t) ] \geq 0 .\end{aligned}$$ Thus, the entropy production based on the HMF always overestimates the global two-particle entropy production which, because of Eq. , proves the inequality in Eq. . Furthermore, with Eq. we obtain the following hierarchies of inequalities $$\begin{aligned} \label{eq:entropyproductionraterelationfull} {\Sigma}^{hmf}(t) \geq {\Sigma}(t) \geq {\Sigma}^{(1)}(t) ,\end{aligned}$$ where the equality signs hold in the limit of time-scale separation, as will be shown further below. The last equation is the Fokker-Planck analogue of the result found for master equations in Ref. [@strasberg2017pre]. This reference also identifies the conditions under which the *rate* of the entropy production is non-negative. Limiting Cases {#sec:twospecialcases} -------------- As already pointed out above, the effective Fokker-Planck Eq. is, in general, not closed because of the dependence on the conditional probability ${P}_{2|1}$. With the results of the preceding section at hand, we now study the three different coarse-graining schemes for two limiting cases in which the effective Fokker-Planck equation becomes closed and thus analytically tractable. ### Fast-Dynamics Limit: The Heat Reservoir {#sec:twofastparticle} First, we assume a time-scale separation (TSS) between the stochastic dynamics of the two particles where particle two evolves much faster than particle one. Hence for fixed coordinates of the first particle, the second generically relaxes towards a nonequilibrium steady state and the stationary conditional probability ${P}_{2|1}^{tss}$ can be determined by solving the fast dynamics for fixed $\bm{{\Gamma}}_1$. As a consequence, the effective Fokker-Planck Eq. becomes closed and the effective thermodynamics follows from replacing ${P}_{2|1}$ by ${P}_{2|1}^{tss}$ in all expressions in Sec. \[sec:coarsegrainingthermo\]. However, this effective thermodynamics naturally does not match with the full one, as we would neglect hidden degrees of freedom that are out-of-equilibrium. The latter equilibrate only if $\bm{{g}}_2 =0$ and ${\beta}_{1,2}={\beta}$, that is when the second particle instantaneously equilibrates with respect to each value of the slow coordinates of particle one. Then, the conditional probability is given at any time by the Gibbs distribution [@esposito2012pre] $$\begin{aligned} \label{eq:conditionalprobabilityfastparticle} {P}_{2|1}^{tss}(\bm{{x}}_1,\bm{{\Gamma}}_2) \equiv {P}_{2|1}^{eq}(\bm{{x}}_1,\bm{{\Gamma}}_2) = {\mathrm{e}}^{- {\beta}\left( {e}- {F}_{\bm{2}|1}^{eq} \right) } .\end{aligned}$$ As a result, the effective force $\bm{{g}}^{(1)}$ in Eq. , becomes a velocity-independent force that derives from an effective potential so that $$\begin{aligned} \label{eq:equilibriumconservativeforce} \left. \big[ - {\partial}_{\bm{{x}}_1} {V}_1 + \bm{{g}}^{{(1)}} \big] \right|_{tss} = -{\partial}_{\bm{{x}}_1} {F}_{\bm{2}|1}^{eq} ,\end{aligned}$$ where the notation $\left. 
Z \right|_{tss}$ corresponds to the conditional probability ${P}_{2|1}$ in the expression $Z$ being substituted by the equilibrium one in Eq. . Hence in the limit of TSS and local equilibrium, the particle is subjected to the effective potential given by the free-energy landscape of the first particle, $ {F}_{\bm{2}|1}^{eq} $.\ #### Marginalization. {#marginalization. .unnumbered} Substituting Eq. into Eq. and accounting for probability conservation, we get $$\begin{aligned} \label{eq:conditionalshannonentropyfast} \frac{\left. {\mathrm{d}}_t {S}_{\bm{2}|\bm{1}} \right|_{tss} }{{\beta}} &= \int {\mathrm{d}}\bm{{\Gamma}} \, \dot{{P}}_1(t) \, {P}_{2|1}^{eq} \left( {e}- {F}_{\bm{2}|1}^{eq} \right) \\ &= \int {\mathrm{d}}\bm{{\Gamma}} \, \dot{{P}}_1(t) \, {P}_{2|1}^{eq} \, {e}- \int {\mathrm{d}}\bm{{\Gamma}}_1 \, \dot{{P}}_1(t) \, {F}_{\bm{2}|1}^{eq} .\end{aligned}$$ With Eqs. and , we note the relation $$\begin{aligned} \left. \dot{q}^{(1)} \right|_{tss} = \int {\mathrm{d}}\bm{{\Gamma}}_1 \, \dot{{P}}_1(t) \, {F}_{\bm{2}|1}^{eq} ,\end{aligned}$$ from which along with Eq. follows that $$\begin{aligned} \label{eq:heatcurrentcoincidencefast} \left. \dot{{Q}}^{(1)} \right|_{tss} = \left. \dot{q}^{(1)} \right|_{tss} + \left. {\beta}^{-1} {\mathrm{d}}_t {S}_{\bm{2}|\bm{1}} \right|_{tss} = \left. \dot{{Q}} \right|_{tss} ,\end{aligned}$$ hence clarifying why the effective heat was defined to contain the conditional Shannon entropy. We have therefore proven that in the limit of TSS the effective and the global heat current coincide and the first law of thermodynamics remains formally the same as in Eq. , $$\begin{aligned} \label{eq:firstlawfastparticle} \left. {\mathrm{d}}_t {E}\right|_{tss} = \left. \dot{{Q}}^{(1)} \right|_{tss} + \left. \dot{{W}} \right|_{tss} = \left. \dot{{Q}} \right|_{tss} + \left. \dot{{W}} \right|_{tss} . \end{aligned}$$ Furthermore, in the limit of TSS, the time-dependence of all quantities stems only from the dynamics of particle one and the parametric time-dependence of the Hamiltonian. Equation proves that the second law of thermodynamics formally also remains the same as in Eq. , $$\begin{aligned} \label{eq:secondlawfastparticle} \left. \dot{{\Sigma}}^{(1)} \right|_{tss} &= \left. {\mathrm{d}}_t {S}\right|_{tss} - \left. {\beta}\dot{{Q}}^{(1)} \right|_{tss} \\ &= \left. {\mathrm{d}}_t {S}\right|_{tss} - \left. {\beta}\dot{{Q}} \right|_{tss} = \left. \dot{{\Sigma}} \right|_{tss} \geq 0 .\end{aligned}$$ Hence in the limit of TSS, the full thermodynamics of the two particles can be described solely by the reduced dynamics of a single particle that is subjected to the potential $ {F}_{\bm{2}|1}^{eq} $. Physically, the second particle can be viewed as being part of the heat reservoir the first particle is coupled to.\ #### Bipartite System. {#bipartite-system. .unnumbered} Furthermore, substituting into Eqs. , and , gives a vanishing directional information flow from the fast to the slow particle, $$\begin{aligned} \label{eq:informationflowfastparticle} \left. \dot{{I}}^{(2 \rightarrow 1)}_F \right|_{tss} = \left. \dot{{I}}^{(2 \rightarrow 1)}_S \right|_{tss} = \left. \dot{{I}}^{(2 \rightarrow 1)} \right|_{tss} = 0. \end{aligned}$$ This means that in the limit of TSS the information flow is completely asymmetric, $ \left. {\mathrm{d}}_t \, {I}\right|_{tss} = \left. {\mathrm{d}}_t \, {I}^{(1 \rightarrow 2)} \right|_{tss} $. From the last equation follows that the additive and effective entropy production rate agrees with the global one , $$\begin{aligned} \left. 
\dot{\sigma}^{(1)} \right|_{tss} = \left. \dot{{\Sigma}}^{(1)} \right|_{tss}= \left. \dot{{\Sigma}} \right|_{tss} ,\end{aligned}$$ which in turn implies that $\left. \dot{\sigma}^{(2)} \right|_{tss} = 0$. Though, there is a mismatch between the effective entropy balance of the slow particle and the full entropy balance given by the conditional Shannon entropy, $$\begin{aligned} \label{eq:twocoarsegrainedentropybalancemutualinformationfast} \left. {\mathrm{d}}_t {S}_{\bm{1}} \right|_{tss} = \left. {\beta}\dot{q}^{(1)} \right|_{tss} + \left. \dot{\sigma}^{(1)} \right|_{tss} = \left. {\mathrm{d}}_t {S}\right|_{tss} - \left. {\mathrm{d}}_t {S}_{\bm{2}|\bm{1}} \right|_{tss} .\end{aligned}$$ Moreover, the effective entropy balance of the second particle reads $$\begin{aligned} \label{eq:twocoarsegrainedentropybalancemutualinformationfasttwo} \left. {\mathrm{d}}_t {S}_{\bm{2}} \right|_{tss} = {\beta}\, \left. {\mathrm{d}}_t q^{(2)} \right|_{tss} + \left. {\mathrm{d}}_t {I}^{(1 \rightarrow 2)} \right|_{tss} ,\end{aligned}$$ that can be rewritten as $$\begin{aligned} \label{eq:twocoarsegrainedentropybalancemutualinformationfasttworewritten} \left. {\mathrm{d}}_t {S}_{\bm{2}} \right|_{tss} - \left. {\mathrm{d}}_t {S}_{\bm{2}|\bm{1}} \right|_{tss} = \left. {\mathrm{d}}_t {I}^{(1 \rightarrow 2)} \right|_{tss} .\end{aligned}$$ Equation stipulates that the information flow $\left. {\mathrm{d}}_t {I}\right|_{tss} ~=~ \left. {\mathrm{d}}_t {I}^{(1 \rightarrow 2)} \right|_{tss} $ from the slow to the fast particle does, in general, not vanish. This is physically plausible since the particles are still correlated. The information flow $\left. {\mathrm{d}}_t {I}^{(1 \rightarrow 2)} \right|_{tss} $ reflects time-varying correlations between the two particles due to the change of the probability distribution of both out-of-equilibrium particles. Consequently, the information flow is zero for a global equilibrium state characterized by ${P}^{eq} = {P}_{2|1}^{eq} \, {P}_1^{eq} $.\ #### Hamiltonian of Mean Force. {#hamiltonian-of-mean-force. .unnumbered} We now turn to the HMF formalism in the limit of TSS and local equilibrium, ${\beta}_{1,2}={\beta}$ and $\bm{{g}}_2=0$. Further, as done above in the introduction of the HMF formalism, we assume that the bare Hamiltonian of the second particle is time-independent, $ {\partial}_t \, {e}_2 = 0$. Because of Eq. , the requirement of an initial equilibrium conditional probability distribution is fulfilled at all times $t$. Hence Eqs. and are valid at any time $t$ and a comparison with Eqs. and , respectively, shows that $$\begin{aligned} \label{eq:hmfenergytss} \left. {E}^{hmf}(t) \right|_{tss} &= \left. {E}(t) \right|_{tss} - {\partial}_{{\beta}} ({\beta}\, {F}^{eq}_{\bm{2}}) \\ \left. {S}^{hmf} (t) \right|_{tss} &= \left. {S}(t) \right|_{tss} - {\beta}^2 \, {\partial}_{{\beta}} \, {F}^{eq}_{\bm{2}} \label{eq:hmfentropytss} .\end{aligned}$$ This explains the choice of a time-independent Hamiltonian ${e}_2$, since in this case ${F}_{\bm{2}}^{eq}$ has no time-dependence. As a result, the HMF definitions of the corresponding *currents* coincide with the global ones, $$\begin{aligned} \label{eq:hmfenergycurrenttss} \left. {\mathrm{d}}_t {E}^{hmf} (t) \right|_{tss} &= \left. {\mathrm{d}}_t {E}(t) \right|_{tss} \\ \left. {\mathrm{d}}_t {S}^{hmf} (t) \right|_{tss} &= \left. {\mathrm{d}}_t {S}(t) \right|_{tss} \label{eq:hmfentropycurrenttss} . 
\end{aligned}$$ Moreover, we conclude that an agreement of the definitions for the time-integrated quantities would be achieved in the limit of TSS, if the HMF was defined as ${H}^{hmf^*} ~\equiv~ {F}_{\bm{2}|1}^{eq} $ which corresponds to the definition ${F}_{hmf}^{eq^*} ~\equiv~ {F}^{eq} $. In this case, the equivalence of definitions would still be true for a time-dependent Hamiltonian ${e}_2$. By construction, the definitions of work agree \[cf. Eqs. and \], thus it follows from Eq. that the definitions of heat *current* also coincide $$\begin{aligned} \label{eq:twoentropyproductionequivalancetsspartial} \left. \dot{{Q}}^{hmf}(t) \right|_{tss} = \left. {\mathrm{d}}_t {E}^{hmf}(t) \right|_{tss} - \left. \dot{{W}}(t) \right|_{tss} = \left. \dot{{Q}}(t) \right|_{tss} . \end{aligned}$$ Since according to Eqs. and the entropy production *rates* are also identical, $$\begin{aligned} \label{eq:twoentropyproductionequivalancetsspartialtwo} \left. \dot{{\Sigma}}^{hmf}(t) \right|_{tss} = \left. {\mathrm{d}}_t {S}^{hmf}(t) \right|_{tss} - \left. {\beta}\dot{{Q}}^{hmf}(t) \right|_{tss} = \left. \dot{{\Sigma}}(t) \right|_{tss} , \end{aligned}$$ we find that at the differential level the Hamiltonian of mean-force formalism captures the full thermodynamics in the limit of TSS. Furthermore, we have proven that in the limit of TSS all definitions of the entropy production rate in Eqs. , and are equivalent, [*i.e.* ]{} $$\begin{aligned} \left. \dot{{\Sigma}}(t) \right|_{tss} = \left. \dot{{\Sigma}}^{(1)}(t) \right|_{tss} = \left. \dot{\sigma}^{(1)}(t) \right|_{tss} = \left. \dot{{\Sigma}}^{hmf}(t) \right|_{tss} .\end{aligned}$$ Together with Eq. , this proves the equality signs in Eq. in the limit of TSS. This constitutes our first main result: In the limit of TSS and local equilibrium, the effective thermodynamic descriptions resulting from marginalization and the HMF formalism fully capture the full thermodynamics. In contrast, the effective bipartite description does not match with the full thermodynamics since it neglects the correlations between the two particles. ### Large-Mass Limit: The Work Source {#sec:twoheavyparticle} We proceed by studying the limit of a diverging mass of the second particle, ${m}_2 \to \infty$, that has already been discussed in Sec. \[sec:singlespecialcases\]. Again, in order to avoid any triviality we assume that the potentials scale with the mass ${m}_2$ as follows: $ \mathit{O}({\partial}_{\bm{{x}}_{2_i}} {V}_2/ {m}_2) = 1 \, \forall i$ while $ {\partial}_{\bm{{x}}_{2_i}} {V}^{int} / {m}_2 \to 0 \, \forall i$ as ${m}_2 \to \infty$. Because of the infinite mass of particle two its motion occurs deterministically such that we can neglect the influence of particle one. Consequently, the marginal probabilities become statistically independent and the conditional distribution reads $$\begin{aligned} \label{eq:conditionalprobabilityheavyparticle} {P}_{2|1}^{det}(\bm{{\Gamma}_2},t) = {P}_2^{det}(\bm{{\Gamma}}_2,t) = \delta(\bm{{x}}_2 - \bm{{x}}_t) \, \delta(\bm{{v}}_2 - \bm{{v}}_t) ,\end{aligned}$$ for all times $t$ including the initial time $t=0$. Here, $\bm{{x}}_t$ and $\bm{{v}}_t$ are the solutions of the deterministic equations of motion . As a result, the effective force becomes conservative, $$\begin{aligned} \label{eq:nonconservativeforceheavyparticle} \left. \bm{{g}}^{(1)}(\bm{{x}}_1,t) \right|_{det} = - {\partial}_{\bm{{x}}_1} \left. {V}^{int}_{12}(\bm{{x}}_1,\bm{{x}}_2,t) \right|_{\bm{{x}}_2 = \bm{{x}}_t} ,\end{aligned}$$ where the notation $\left. 
Z \right|_{tss}$ corresponds to the conditional probability ${P}_{2|1}$ in the expression $Z$ being substituted by the delta-correlated one in Eq. . Thus, we are dealing with a closed effective Fokker-Planck Eq. for the light particle one that is externally driven by the deterministic motion of the heavy second particle.\ #### Marginalization. {#marginalization.-1 .unnumbered} Since the marginal probabilities are statistically independent, the conditional Shannon entropy vanishes, $$\begin{aligned} \label{eq:conditionalshannonentropyheavy} \left. {S}_{\bm{2}|\bm{1}} \right|_{det} = 0\end{aligned}$$ such that the definition of the effective heat reduces to the naive one , $ \left. \dot{{Q}}^{(1)} \right|_{det} = \left. \dot{q}^{(1)} \right|_{det} $. Therefore, by inserting Eq. into Eq. , we get $$\begin{aligned} \label{eq:heatheavyparticle} \left. \dot{{Q}} \right|_{det} - \left. \dot{q}^{(1)} \right|_{det} = \left. \dot{q}^{(2)} \right|_{det} = - {\xi}_2 \, \bm{{v}}_t^2 .\end{aligned}$$ Thus, the first law of thermodynamics remains - up to a macroscopic frictional term related to the heavy particle - formally the same as in Eq. , $$\begin{aligned} \left. {\mathrm{d}}_t {E}\right|_{det} = \left. \dot{q}^{(1)} \right|_{det} + \left. \dot{{W}} \right|_{det} - {\xi}_2 \, \bm{{v}}_t^2 = \left. \dot{{Q}} \right|_{det} + \left. \dot{{W}} \right|_{det} .\end{aligned}$$ Here, the difference is that the time-dependence of all quantities comes from the dynamical time-dependence of particle one alone, the parametric time-dependence of the Hamiltonian and from the deterministic trajectory of the second particle $(\bm{{x}}_t,\bm{{v}}_t)$. Further, Eq. implies that the definitions for the single-particle Shannon entropy and the full system entropy agree , $$\begin{aligned} \left. {\mathrm{d}}_t {S}_{\bm{1}} \right|_{det} = \left. {\mathrm{d}}_t {S}\right|_{det} , \end{aligned}$$ which, in turn, proves that the effective second law of thermodynamics - up to a macroscopic frictional term of the heavy particle - formally also remains the same as in Eq. , $$\begin{aligned} \label{eq:secondlawheavyparticle} \left. \dot{{\Sigma}}^{(1)} \right|_{det} = \left. {\mathrm{d}}_t {S}_{\bm{1}} \right|_{det} - \left. {\beta}_1 \dot{q}^{(1)} \right|_{det} = \left. \dot{\tilde{{\Sigma}}} \right|_{det} ,\end{aligned}$$ where $ \left. \dot{\tilde{{\Sigma}}} \right|_{det} = \left. \dot{{\Sigma}} \right|_{det} - {\beta}_2 \, {\xi}_2 \, \bm{{v}}_t^2 $. The effective thermodynamic description for the two particles therefore reduces, up to a simple macroscopic term, to the standard one of a single particle that is subjected to an external driving. Consequently, the physical interpretation of this limit is that the second particle represents a work source that modulates the energy landscape of the first particle according to a protocol $(\bm{{x}}_t,\bm{{v}}_t)$. If the deterministic particle is furthermore Hamiltonian, ${\xi}_2 =0$, the work source is non-dissipative and the effective description coincides with the full one.\ #### Bipartite System. {#bipartite-system.-1 .unnumbered} Owing to the statistical independence of the marginal distributions, the mutual information and thus the information flow is identically zero, $$\begin{aligned} \label{eq:informationrateheavyparticle} \left. {I}\right|_{det} = \left. \dot{{I}}^{2\to1} \right|_{det} = \left. \dot{{I}}^{1 \to 2} \right|_{det} = 0 .\end{aligned}$$ As a result, the effective entropy balance of the light particle coincides with the full one, $$\begin{aligned} \left. 
{\mathrm{d}}_t {S}_{\bm{1}} \right|_{det} \! = \! \left. {\beta}_1 \dot{q}^{(1)} \right|_{det} \!\! + \! \left. \dot{\sigma}^{(1)} \right|_{det} \!=\! \left. \dot{{Q}} \right|_{det} \!\!+\! \left. \dot{{\Sigma}} \right|_{det} \!\!= \left. {\mathrm{d}}_t {S}\right|_{det} ,\end{aligned}$$ while the corresponding effective entropy balance equation for the heavy particle takes the simple macroscopic form $$\begin{aligned} \left. {\beta}_2 \, \dot{q}^{(2)} \right|_{det} = \left. - \dot{\sigma}^{(2)} \right|_{det} = - {\beta}_2 \, {\xi}_2 \, \bm{{v}}_t^2 .\end{aligned}$$\ #### Hamiltonian of Mean Force. {#hamiltonian-of-mean-force.-1 .unnumbered} The large-mass limit represents a special case of systems away from TSS. Yet, the assumption of a conditional Gibbs state is inconsistent with the independent single-particle distributions . Therefore, the HMF formalism and the deterministic limit are incompatible. We can therefore summarize our second main result: In the deterministic limit, the effective thermodynamics of the first two coarse-graining schemes - marginalization and bipartite structure - are, up to a simple macroscopic frictional term, equivalent to the full thermodynamics. In contrast, the HMF formalism is incompatible with the deterministic limit. In fact, the HMF thermodynamics only matches with the full one in the limit of TSS. This is not surprising since the HMF definitions \[cf. Eqs. and \] are motivated by equilibrium thermostatics. Notably, in the TSS limit there is a completely asymmetric information flow from the slow to the fast particle, while in the deterministic limit all information flows vanish. Two Linearly Coupled Harmonic Oscillators {#sec:example} ========================================= Full Solution ------------- In this section, the results derived above are illustrated for an analytically solvable example. For this purpose, we consider an isothermal version of the setup in Fig. \[fig:modelschematics\] in one dimension. Moreover, the Hamiltonian is assumed to be time-independent, $$\begin{aligned} \label{eq:examplepotential} {V}({x}_1,{x}_2) = (k_1 {x}_1^2)/2 + (k_2 {x}_2^2)/2 + {\beta}({x}_1 {x}_2) ,\end{aligned}$$ and the nonconservative forces $\bm{{g}}_i$ are set to zero. Consequently, there is no work done on or by the two-particle system, ${\mathrm{d}}_t {E}= {\mathrm{d}}_t {Q}$. The Fokker-Planck Eq. reads $$\begin{aligned} \label{eq:examplefokkerplanck} {\partial}_t \, {P}= - \nabla \cdot ( \bm{\gamma} \cdot \bm{{\Gamma}} {P}) + \nabla^{\top} \cdot \big( \bm{D} \cdot \nabla {P}\big) ,\end{aligned}$$ with $\bm{{\Gamma}} = ({x}_1,{v}_1,{x}_2,{v}_2)^{\top}$ and $ \nabla ~\equiv ~ ({\partial}_{{x}_1},{\partial}_{{v}_1},{\partial}_{{x}_2},{\partial}_{{v}_2})^{\top} $. The constant drift coefficient and diffusion matrix read, respectively, $$\begin{aligned} \bm{\gamma} &= \begin{pmatrix} 0 & 1 & 0 & 0 \\ -\frac{k_1}{{m}_1} & -\frac{{\xi}_1}{{m}_1} & -\frac{{\beta}}{{m}_1} & 0 \\ 0 & 0 & 0 & 1 \\ -\frac{{\beta}}{{m}_2 } & 0 & -\frac{k_2}{{m}_2} & -\frac{{\xi}_2}{{m}_2} \end{pmatrix}\end{aligned}$$ $$\begin{aligned} \bm{D} &= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \frac{ {\xi}_1 }{{\beta}{m}_1^2} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{ {\xi}_2 }{{\beta}{m}_2^2} \end{pmatrix} .\end{aligned}$$ This partial differential equation is supplemented by the initial condition ${P}(0) = \delta(\bm{{\Gamma}} - \bm{{\Gamma}}(0) )$.
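As a numerical counterpart to the closed-form Gaussian solution quoted next, the first and second moments of this linear Fokker-Planck equation can be propagated directly, using $ {\mathrm{d}}_t \langle \bm{{\Gamma}} \rangle = \bm{\gamma} \cdot \langle \bm{{\Gamma}} \rangle $ and the Lyapunov equation ${\mathrm{d}}_t \bm{{\Upsilon}} = \bm{\gamma} \bm{{\Upsilon}} + \bm{{\Upsilon}} \bm{\gamma}^{\top} + 2 \bm{D}$; this provides a quick cross-check of the expressions for $\dot{{Q}}$ and $\dot{{\Sigma}}$ given below. The following Python sketch is illustrative only: the parameter values are assumptions, the delta-peaked initial condition is regularized by a small covariance, and the bilinear coupling constant of the potential above is written as `c` to keep it apart from the inverse temperature.

```python
import numpy as np

# Assumed illustrative parameters; c denotes the bilinear coupling constant.
m1, m2, xi1, xi2 = 1.0, 5.0, 0.8, 0.5
k1, k2, c, beta = 1.0, 2.0, 0.3, 1.0

gamma = np.array([[0., 1., 0., 0.],
                  [-k1/m1, -xi1/m1, -c/m1, 0.],
                  [0., 0., 0., 1.],
                  [-c/m2, 0., -k2/m2, -xi2/m2]])
D = np.diag([0., xi1/(beta*m1**2), 0., xi2/(beta*m2**2)])

# Moment equations: d<G>/dt = gamma <G>,  dU/dt = gamma U + U gamma^T + 2 D
mean = np.array([2., 1., 0., 0.])
U = 1e-8 * np.eye(4)                 # regularized delta-peak initial condition
dt, T = 1e-4, 5.0
for _ in range(int(T/dt)):
    mean = mean + dt * (gamma @ mean)
    U = U + dt * (gamma @ U + U @ gamma.T + 2*D)

# Heat current and entropy production rate from the Gaussian-state formulas below
xis, ms, vv = np.array([xi1, xi2]), np.array([m1, m2]), [1, 3]
Uinv = np.linalg.inv(U)
Q_dot = np.sum(-xis*(U[vv, vv] + mean[vv]**2) + xis/(beta*ms))
Sigma_dot = np.sum(beta*xis*(U[vv, vv] + mean[vv]**2) - 2*xis/ms
                   + xis/(beta*ms**2)*Uinv[vv, vv])
print(Q_dot, Sigma_dot)              # both decay toward zero as the system relaxes
```

At equal reservoir temperatures both quantities relax toward zero as the isothermal system approaches equilibrium, consistent with the absence of work noted above.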
The solution of this Fokker-Planck equation is given by a Gaussian [@risken] $$\begin{aligned} \label{eq:examplefokkerplancksolution} {P}= \frac{1}{ (2\pi)^2 \, \sqrt{\det \bm{{\Upsilon}}} } \exp \Big[ \!\! - \! \frac{1}{2} ( \bm{{\Gamma}} \!-\! \langle \bm{{\Gamma}} \rangle )^{\top} \! \cdot \! \bm{{\Upsilon}}^{-1} \! \cdot \! ( \bm{{\Gamma}} \! - \! \langle \bm{{\Gamma}} \rangle ) \Big] ,\end{aligned}$$ where the average values of the coordinates are determined as follows $$\begin{aligned} \label{eq:examplefokkerplancksolutionnotationone} \langle \bm{{\Gamma}} \rangle (t) = {\mathrm{e}}^{ \bm{\gamma} t} \cdot \bm{{\Gamma}}(0) ,\end{aligned}$$ and the covariance matrix is calculated as $$\begin{aligned} \label{eq:examplefokkerplancksolutionnotationtwo} \begin{aligned} \bm{{\Upsilon}}_{kl}(t) & \equiv 2 \sum_{i,j} \frac{1 - {\mathrm{e}}^{ -( \lambda_i +\lambda_j ) t }}{ \lambda_i + \lambda_j } \, C_{ij} \, u_i^{(k)} u_j^{(l)} . \end{aligned}\end{aligned}$$ Here, we introduced the transformation matrix $$\begin{aligned} \bm{C} = \bm{V} \cdot \bm{D} \cdot \bm{V}^{\top}, \quad \bm{V} = \left(\bm{v}^{(1)},\bm{v}^{(2)},\bm{v}^{(3)},\bm{v}^{(4)}\right) ,\end{aligned}$$ where $\lambda_i$ and $\bm{u}^{(i)}$ ($\bm{v}^{(i)}$) denote the $i$th eigenvalue and right (left) eigenvector of the drift coefficient matrix $\bm{\gamma}$, respectively, [*i.e.* ]{}$$\begin{aligned} \begin{aligned} \bm{\gamma} \cdot \bm{u}^{(i)} &= \lambda_i \, \bm{u}^{(i)} \\ \bm{v}^{(i)} \cdot \bm{\gamma} &= \lambda_i \, \bm{v}^{(i)} , \end{aligned}\end{aligned}$$ such that the left and right eigenvectors of $\bm{\gamma}$ constitute an orthonormal dual basis, $\bm{v}^{(i)} \cdot \bm{u}^{(j)} = \delta_{ij}$. Substituting Eq. into Eqs. and , we obtain for the heat current and the entropy production rate [$$\begin{aligned} \label{eq:exampleheatcurrent} \dot{{Q}} &= \sum_{i=1}^{2} \left[ - {\xi}_i \Big( \bm{{\Upsilon}}_{2i,2i} + \langle\bm{{\Gamma}}_{2i} \rangle^2 \Big) + \frac{{\xi}_i }{{\beta}{m}_i} \right] = \sum_{i=1}^{2} \dot{q}^{(i)} \\ \dot{{\Sigma}} &= \sum_{i=1}^{2} \left[ {\beta}\, {\xi}_i \Big( \bm{{\Upsilon}}_{2i,2i} + \! \langle\bm{{\Gamma}}_{2i} \rangle^2 \Big) \! - \! 2\frac{{\xi}_i}{{m}_i} \!+\! \frac{{\xi}_i }{{\beta}{m}_i^2} \bm{{\Upsilon}}^{-1}_{2i,2i} \right] \!=\! \sum_{i=1}^{2} \dot{\sigma}^{(i)} , \label{eq:exampleentropyproductioncurrent}\end{aligned}$$ ]{} and because of Eq. $$\begin{aligned} {\mathrm{d}}_t {S}(t) = \sum_{i=1}^{2} \left( \frac{{\xi}_i }{{\beta}\, {m}_i^2} \bm{{\Upsilon}}^{-1}_{2i,2i} - \frac{{\xi}_i}{{m}_i}\right) .\end{aligned}$$ In the following, the distribution for particle one ${P}_1(t)$ is needed. The latter is readily determined by marginalizing Eq.
over the coordinates $\bm{{\Gamma}}_2$ of the second particle, $$\begin{aligned} \label{eq:examplefokkerplancksolutionmarginalized} {P}_1 = \frac{1}{2\pi \sqrt{\det \tilde{\bm{{\Upsilon}}}} } \; \exp \left[ - \frac{1}{2} ( \tilde{\bm{{\Gamma}}} - \langle \tilde{\bm{{\Gamma}}} \rangle )^{\top} \cdot \tilde{\bm{{\Upsilon}}}^{-1} \cdot ( \tilde{\bm{{\Gamma}}} - \langle \tilde{\bm{{\Gamma}}} \rangle ) \right] \!,\end{aligned}$$ with $\tilde{\bm{{\Gamma}}} = ({x}_1,{v}_1)^{\top} $ and the inverse of the marginalized covariance matrix $\tilde{\bm{{\Upsilon}}}$ that is given by $$\begin{aligned} \begin{aligned} \tilde{\bm{{\Upsilon}}}^{-1}_{11} &= \frac{1}{ \left( \bm{{\Upsilon}}^{-1}_{34} \right)^2 - \bm{{\Upsilon}}^{-1}_{33} \bm{{\Upsilon}}^{-1}_{44} } \left[ \left( \bm{{\Upsilon}}^{-1}_{14} \right)^2 \bm{{\Upsilon}}^{-1}_{33} - 2 \bm{{\Upsilon}}^{-1}_{13} \bm{{\Upsilon}}^{-1}_{14} \bm{{\Upsilon}}^{-1}_{34} + \bm{{\Upsilon}}^{-1}_{11} \left( \bm{{\Upsilon}}^{-1}_{34} \right)^2 + \left( \bm{{\Upsilon}}^{-1}_{13} \right)^2 \bm{{\Upsilon}}^{-1}_{44} - \bm{{\Upsilon}}^{-1}_{11} \bm{{\Upsilon}}^{-1}_{33} \bm{{\Upsilon}}^{-1}_{44} \right] \\\ \tilde{\bm{{\Upsilon}}}^{-1}_{12} &= \frac{1}{ \left( \bm{{\Upsilon}}^{-1}_{34} \right)^2 \!\!-\!\! \bm{{\Upsilon}}^{-1}_{33} \bm{{\Upsilon}}^{-1}_{44} } \!\! \left[ \! \bm{{\Upsilon}}^{-1}_{14} \bm{{\Upsilon}}^{-1}_{24} \bm{{\Upsilon}}^{-1}_{33} \!\!-\!\! \bm{{\Upsilon}}^{-1}_{14} \bm{{\Upsilon}}^{-1}_{23} \bm{{\Upsilon}}^{-1}_{34} \!\!-\!\! \bm{{\Upsilon}}^{-1}_{13} \bm{{\Upsilon}}^{-1}_{24} \bm{{\Upsilon}}^{-1}_{34} \!\!+\!\! \bm{{\Upsilon}}^{-1}_{12} \left( \bm{{\Upsilon}}^{-1}_{34} \right)^2 \!\!+\!\! \bm{{\Upsilon}}^{-1}_{13} \bm{{\Upsilon}}^{-1}_{23} \bm{{\Upsilon}}^{-1}_{44} \!\!-\!\! \bm{{\Upsilon}}^{-1}_{12} \bm{{\Upsilon}}^{-1}_{33} \bm{{\Upsilon}}^{-1}_{44} \right] \\ \tilde{\bm{{\Upsilon}}}^{-1}_{22} &= \frac{1}{ \left( \bm{{\Upsilon}}^{-1}_{34} \right)^2 - \bm{{\Upsilon}}^{-1}_{33} \bm{{\Upsilon}}^{-1}_{44} } \left[ \left( \bm{{\Upsilon}}^{-1}_{24} \right)^2 \bm{{\Upsilon}}^{-1}_{33} - 2 \bm{{\Upsilon}}^{-1}_{23} \bm{{\Upsilon}}^{-1}_{24} \bm{{\Upsilon}}^{-1}_{34} + \bm{{\Upsilon}}^{-1}_{22} \left( \bm{{\Upsilon}}^{-1}_{34} \right)^2 + \left( \bm{{\Upsilon}}^{-1}_{23} \right)^2 \bm{{\Upsilon}}^{-1}_{44} - \bm{{\Upsilon}}^{-1}_{22} \bm{{\Upsilon}}^{-1}_{33} \bm{{\Upsilon}}^{-1}_{44} \right] . \end{aligned}\end{aligned}$$ Inserting Eq. into Eqs. and , gives the force contribution to the information flow from particle two to one $$\begin{aligned} \label{eq:examplemutualinformationforce} \dot{{I}}^{(2 \to 1)}_F &= - \frac{{\beta}}{{m}_1} \Big( \tilde{\bm{{\Upsilon}}}_{12}^{-1} \, \bm{{\Upsilon}}_{13} + \tilde{\bm{{\Upsilon}}}_{22}^{-1} \, \bm{{\Upsilon}}_{23} \Big) ,\end{aligned}$$ which can be seen by noting that $$\begin{aligned} - \frac{{\beta}}{{m}_1} \int {\mathrm{d}}\bm{{\Gamma}} \, {P}_1 \, {x}_2 \, {\partial}_{{v}_1} {P}_{2|1} = \frac{{\beta}}{{m}_1} \int {\mathrm{d}}\bm{{\Gamma}} \, {P}\, {x}_2 \, {\partial}_{{v}_1} \ln {P}_1 .\end{aligned}$$ Moreover, from Eq. follows for the effective entropy production rate $$\begin{aligned} \dot{{\Sigma}}^{(1)} \!&=\! {\beta}\, {\xi}_1 \Big( \! \tilde{\bm{{\Upsilon}}}_{22} \!+\! \langle\bm{{\Gamma}}_{2} \rangle^2 \! \Big) \!-\! 2\frac{{\xi}_1}{{m}_1} \!+\! \frac{{\xi}_1 }{{\beta}{m}_1^2} \tilde{\bm{{\Upsilon}}}^{-1}_{22} , \label{eq:exampleentropyproductioncurrenteffective}\end{aligned}$$ from which via Eqs. 
and we get the entropic contribution to the information flow $$\begin{aligned} \label{eq:examplemutualinformationentropy} \dot{{I}}^{(2 \to 1)}_S \! &= \! {\beta}\, {\xi}_1 \Big( \tilde{\bm{{\Upsilon}}}_{22} \!-\! \bm{{\Upsilon}}_{22} \Big) \!+\! \frac{{\xi}_1 }{{\beta}{m}_1^2} \Big( \tilde{\bm{{\Upsilon}}}^{-1}_{22} \!-\! \bm{{\Upsilon}}^{-1}_{22} \Big) . \end{aligned}$$ Combining the last three equations with Eqs. , and , yields $$\begin{aligned} \dot{{I}}^{(2 \to 1)} &= {\beta}\, {\xi}_1 \Big( \tilde{\bm{{\Upsilon}}}_{22} - \bm{{\Upsilon}}_{22} \Big) + \frac{{\xi}_1 }{{\beta}{m}_1^2} \Big( \tilde{\bm{{\Upsilon}}}^{-1}_{22} - \bm{{\Upsilon}}^{-1}_{22} \Big) - \frac{{\beta}}{{m}_1} \Big( \tilde{\bm{{\Upsilon}}}_{12}^{-1} \, \bm{{\Upsilon}}_{13} + \tilde{\bm{{\Upsilon}}}_{22}^{-1} \, \bm{{\Upsilon}}_{23} \Big) \\ {\mathrm{d}}_t \, {S}_{\bm{2}|\bm{1}} &= \frac{{\xi}_2}{{\beta}{m}_2^2} \bm{{\Upsilon}}^{-1}_{44} - \frac{{\xi}_2}{{m}_2} - \frac{{\beta}}{{m}_1} \Big( \tilde{\bm{{\Upsilon}}}_{12}^{-1} \, \bm{{\Upsilon}}_{13} + \tilde{\bm{{\Upsilon}}}_{22}^{-1} \, \bm{{\Upsilon}}_{23} \Big) - {\beta}\, {\xi}_1 \Big( \tilde{\bm{{\Upsilon}}}_{22} - \bm{{\Upsilon}}_{22} \Big) - \frac{{\xi}_1 }{{\beta}{m}_1^2} \Big( \tilde{\bm{{\Upsilon}}}^{-1}_{22} - \bm{{\Upsilon}}^{-1}_{22} \Big) \\ \dot{{Q}}^{(1)} &= \frac{{\xi}_2}{{\beta}{m}_2^2} \bm{{\Upsilon}}^{-1}_{44} + \frac{{\xi}_1}{{m}_1} - \frac{{\xi}_2}{{m}_2} - \frac{{\beta}}{{m}_1} \Big( \tilde{\bm{{\Upsilon}}}_{12}^{-1} \, \bm{{\Upsilon}}_{13} + \tilde{\bm{{\Upsilon}}}_{22}^{-1} \, \bm{{\Upsilon}}_{23} \Big) - {\beta}\, {\xi}_1 \Big( \tilde{\bm{{\Upsilon}}}_{22} + \langle \bm{{\Gamma}}_{2} \rangle^2 \Big) - \frac{{\xi}_1 }{{\beta}{m}_1^2} \Big( \tilde{\bm{{\Upsilon}}}^{-1}_{22} - \bm{{\Upsilon}}^{-1}_{22} \Big) .\end{aligned}$$ Fast-Dynamics Limit ------------------- Since $\bm{{g}}_2 = 0$, the limit of TSS implies that the second particle is at local equilibrium conditioned on the coordinates of particle one. Within TSS, the effective force reads $$\begin{aligned} \label{eq:exampleequilibriumconservativeforce} {g}^{(1)} = {\beta}^2 \frac{{x}_1}{k_2} , \end{aligned}$$ and closes the effective Fokker-Planck Eq. , $$\begin{aligned} {\partial}_t {P}_1 &= - \nabla_1 \cdot \left[ \left( \bm{\gamma}_1 \cdot \tilde{\bm{{\Gamma}}} \right) {P}_1 \right] + \nabla_1 \cdot \big( \bm{D}_1 \cdot \nabla_1 {P}_1 \big) , \label{eq:examplefastfokkerplanckequationvectorial}\end{aligned}$$ with $ \nabla_1 ~\equiv ~ ({\partial}_{{x}_1},{\partial}_{{v}_1})^{\top} $. The drift coefficient and the diffusion matrix read $$\begin{aligned} \bm{\gamma}_1 \!=\! \begin{pmatrix} 0 & 1 \\ -\frac{k_1 }{{m}_1} - \frac{ {\beta}^2 }{k_2 \, {m}_1} & -\frac{{\xi}_1}{{m}_1} \\ \end{pmatrix} \!\! , \quad \bm{D}_1 \!=\! \begin{pmatrix} 0 & 0 \\ 0 & \frac{ {\xi}_1 }{{\beta}{m}_1^2} , \; \end{pmatrix} .\end{aligned}$$ This Fokker-Planck equation implies that we are dealing with a bivariate Ornstein-Uhlenbeck process, thus its solution is given by a bivariate Gaussian [@risken] $$\begin{aligned} \label{eq:examplefokkerplancksolutionfast} {P}_1 = \frac{1}{ 2\pi \, \sqrt{\det \tilde{\bm{{\Upsilon}}}} } \; {\mathrm{e}}^{ -\frac{1}{2} \left( \tilde{\bm{{\Gamma}}} - \langle \tilde{\bm{{\Gamma}}} \rangle \right)^{\top} \cdot \, \tilde{\bm{{\Upsilon}}}^{-1} \cdot \, \left( \tilde{\bm{{\Gamma}}} - \langle \tilde{\bm{{\Gamma}}} \rangle \right) } \, ,\end{aligned}$$ where the covariance matrix $\tilde{\bm{{\Upsilon}}}$ is specified by Eq. 
and the averages of the coordinates $\tilde{\bm{{\Gamma}}}$ are determined as follows $$\begin{aligned} \langle \tilde{\bm{{\Gamma}}} \rangle (t) = {\mathrm{e}}^{\bm{\gamma}_1 t} \cdot \tilde{\bm{{\Gamma}}}(0) .\end{aligned}$$ In the remainder of this subsection, we employ the numerical values ${\xi}_1= 0.8$, ${\beta}=0.05$, $k_1 = 1, {m}_1=1$, while we consider three different spring constants $k_2$, masses ${m}_2$, and friction coefficients ${\xi}_2$: $(k_2 = 15, {m}_{2,a}=5 \,,\, {\xi}_{2,a}=0.75)$, $(k_2 = 25,{m}_{2,b}=7.5 \,,\, {\xi}_{2,b}=0.25)$ and $(k_2 = 50,{m}_{2,c}=10 \,,\, {\xi}_{2,c}=0.1)$. This choice of parameters corresponds to an increasing separation of the time-scales between the different stochastic dynamics of the two particles. In the order $a-b-c$, the second particle approaches equilibrium conditioned on the coordinates of the first particle. Since the interaction potential scales linearly in the inverse temperature \[Eq. \], we chose a relatively small value for ${\beta}$ to implement a weak-coupling condition between the first and second particle - a crucial prerequisite for the second particle to behave like an ideal heat reservoir [@breuer2006; @lindenberg1990]. As $k_2$ and ${m}_2$ increase and ${\xi}_2$ decreases, the relaxation time-scale of the second particle further shrinks, hence the time-scales of the particles' dynamics start to separate, as desired. Moreover, we prepare the initial condition ${P}_1(0) = \delta(\tilde{\bm{{\Gamma}}} - \tilde{\bm{{\Gamma}}}(0) )$ with $ \tilde{\bm{{\Gamma}}}(0) = (2,1)^{\top} $. Fig. \[fig:examplefastheatcurrententropyproduction\] depicts in a) the difference between the global $\dot{{Q}}$ and effective heat current $\dot{{Q}}^{(1)}$ and in b) the scaled difference between the global $\dot{{\Sigma}}$ and effective entropy production rate $\dot{{\Sigma}}^{(1)}$ as a function of time $t$. We observe that both the effective heat current and entropy production rate converge to the corresponding full quantities in the limit of TSS. The overall system remains out-of-equilibrium as reflected by finite (effective) heat currents and (effective) entropy production rates of the first particle. Since the corresponding single-particle definition for the heat, $\dot{q}^{(1)}$, does not agree with the definition of the effective one \[not shown in a)\], it follows that the time-derivative of the conditional Shannon entropy, ${\mathrm{d}}_t {S}_{\bm{2}|\bm{1}}$, remains finite in the limit of TSS. We furthermore note that the effective heat current and entropy production rate are in agreement with the time-derivative of the heat and entropy production using the HMF formalism. Moreover, Fig. \[fig:examplefastheatcurrententropyproduction\] c) shows that the directional information flow $\dot{{I}}^{(2 \to 1)}$ vanishes in the limit of TSS. This in turn implies that the additive contribution $\dot{\sigma}^{(2)}$ to the full entropy production rate becomes zero, while the inverse information flow $\dot{{I}}^{(1 \to 2)}$ remains finite. It furthermore follows from the nonpositivity of $\dot{{I}}^{(2 \to 1)}$ that the non-positive entropic contribution $\dot{{I}}^{(2 \to 1)}_S$ dominates over the non-negative force contribution $\dot{{I}}^{(2 \to 1)}_F$. ![Difference between the full $\dot{{Q}}$ and effective heat current $\dot{{Q}}^{(1)}$ in a) and between the scaled full ${\beta}\, \dot{{\Sigma}}$ and scaled effective entropy production rate ${\beta}\, \dot{{\Sigma}}^{(1)}$ in b) as a function of time $t$.
The information flow $\dot{{I}}^{(2 \to 1)}$ is depicted in c). Moreover, the effective quantities based on the HMF are overlaid in Figs. a) and b). \[fig:examplefastheatcurrententropyproduction\] ](thermodynamicsfast.pdf) Large-Mass Limit ---------------- In the large-${m}_2$ limit, the effective force reads $$\begin{aligned} \label{eq:exampleheavynonconservativeforce} {g}^{(1)} = \left. - {\beta}\, {x}_2 \right|_{{x}_2 = {x}_t} ,\end{aligned}$$ and closes the effective Fokker-Planck Eq. , $$\begin{aligned} {\partial}_t {P}_1 &= - \nabla_1 \cdot \left[ \left( \bm{\gamma}_1 \cdot \tilde{\bm{{\Gamma}}} + \bm{{g}}^{(1)} \right) {P}_1 \right] + \nabla_1 \cdot \big( \bm{D}_1 \cdot \nabla_1 {P}_1 \big) . \label{eq:exampleheavyfokkerplanckequationvectorial}\end{aligned}$$ The constant drift coefficient, the scaled effective force vector and the diffusion matrix read $$\begin{aligned} \bm{\gamma}_1 = \begin{pmatrix} 0 & 1 \\ -\frac{k_1}{{m}_1} & -\frac{{\xi}_1}{{m}_1} \\ \end{pmatrix} , \; \bm{{g}}^{(1)} = \begin{pmatrix} 0 \\ - \frac{{\beta}x_t}{{m}_1 } \end{pmatrix} , \; \bm{D}_1 = \begin{pmatrix} 0 & 0 \\ 0 & \frac{ {\xi}_1 }{{\beta}{m}_1^2} \end{pmatrix} .\end{aligned}$$ This partial differential equation is supplemented by the initial condition ${P}_1(0) = \delta(\tilde{\bm{{\Gamma}}} - \tilde{\bm{{\Gamma}}}(0))$ with $\tilde{\bm{{\Gamma}}}(0) = (2,1)^{\top} $. The averages are determined as follows $$\begin{aligned} \label{eq:examplefokkerplancksolutionnotationoneheavy} \langle \tilde{\bm{{\Gamma}}} \rangle (t) = {\mathrm{e}}^{\bm{\gamma}_1 t} \cdot \tilde{\bm{{\Gamma}}}(0) + \int_0^t {\mathrm{e}}^{\bm{\gamma}_1 (t-t')} \cdot \bm{{g}}^{(1)}(t') \, {\mathrm{d}}t' ,\end{aligned}$$ while the coordinates (${x}_t,{v}_t$) of the second particle follow the solution of the deterministic equation of motion , $$\begin{aligned} \label{eq:exampleheavydeterministicsolution} \begin{aligned} {x}_t &= 2 \cos \left( \frac{k_2}{{m}_2} t \right) + \frac{{m}_2}{k_2} \sin \left( \frac{k_2}{{m}_2} t \right) \\ {v}_t &= \cos \left( \frac{k_2}{{m}_2} t \right) - 2 \frac{k_2}{{m}_2} \sin \left( \frac{k_2}{{m}_2} t \right) , \end{aligned}\end{aligned}$$ for the initial condition as chosen above. In the following, we employ the numerical values ${\xi}_1= 0.3$, ${\xi}_2 = 1.5$, ${\beta}=1$, $k_1 = 4$, ${m}_1=1$, while we consider three different masses ${m}_2$ and spring constants $k_2$ such that their ratio remains constant: $({m}_{2,a}=4 \,,\, k_{2,a}=3.8)$, $({m}_{2,b}=40 \,,\, k_{2,b}=38)$ and $({m}_{2,c}=400 \,,\, k_{2,c}=380)$. Keeping the ratio of ${m}_2$ and $k_2$ fixed across the parameter sets $a$, $b$ and $c$ leaves the deterministic trajectory of the second particle invariant according to Eq. . Fig. \[fig:examplepositionvariance\] depicts the variances $\bm{{\Upsilon}}_{11}$ and $\bm{{\Upsilon}}_{33}$ of the positional variables ${x}_1$ and ${x}_2$ in panels a) and b), respectively. As expected, the fluctuations of the first particle do not exhibit striking qualitative changes since the variance of the second particle vanishes with growing mass ${m}_2$. We verify that $$\begin{aligned} \bm{{\Upsilon}}_{ij} = 0 , \quad \forall \, ij \neq \lbrace 11,12,21,22 \rbrace \,,\end{aligned}$$ which confirms that the second particle behaves deterministically in the large-${m}_2$ limit as prescribed by the equations of motion .
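The convergence of the effective description can also be explored numerically. The following minimal sketch, which is not part of the original analysis, propagates the first two moments of the effective bivariate Ornstein-Uhlenbeck process of the fast-dynamics limit, using the standard moment equations ${\mathrm{d}}_t \langle \tilde{\bm{{\Gamma}}} \rangle = \bm{\gamma}_1 \cdot \langle \tilde{\bm{{\Gamma}}} \rangle$ and ${\mathrm{d}}_t \tilde{\bm{{\Upsilon}}} = \bm{\gamma}_1 \tilde{\bm{{\Upsilon}}} + \tilde{\bm{{\Upsilon}}} \bm{\gamma}_1^{\top} + 2 \bm{D}_1$, for parameter set $a$ of the fast-dynamics example; the integration time and tolerances are arbitrary choices. In the large-${m}_2$ limit the same covariance equation applies, and only the deterministic drive $\bm{{g}}^{(1)}(t)$ has to be added to the mean equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter set a of the fast-dynamics example
xi1, beta, k1, m1, k2 = 0.8, 0.05, 1.0, 1.0, 15.0

# Drift and diffusion matrices of the effective Fokker-Planck equation
G = np.array([[0.0, 1.0],
              [-k1/m1 - beta**2/(k2*m1), -xi1/m1]])
D = np.array([[0.0, 0.0],
              [0.0, xi1/(beta*m1**2)]])

def moments(t, y):
    # d<Gamma>/dt = G <Gamma>,  dU/dt = G U + U G^T + 2 D
    mean, cov = y[:2], y[2:].reshape(2, 2)
    return np.concatenate([G @ mean, (G @ cov + cov @ G.T + 2.0*D).ravel()])

# Delta-peaked initial condition at (x_1, v_1) = (2, 1), i.e. zero covariance
y0 = np.concatenate([[2.0, 1.0], np.zeros(4)])
sol = solve_ivp(moments, (0.0, 50.0), y0, rtol=1e-8)

cov_final = sol.y[2:, -1].reshape(2, 2)
print("covariance at t = 50:\n", cov_final)
print("equipartition value 1/(beta*m1) =", 1.0/(beta*m1))
```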
![Variance $\bm{{\Upsilon}}_{11}$ in a) and $\bm{{\Upsilon}}_{33}$ in b) of the positional degrees of freedom ${x}_1$ and ${x}_2$, respectively, as a function of time $t$. \[fig:examplepositionvariance\] ](varianceheavy.pdf) Next, Fig. \[fig:exampleheavyheatcurrent\] a) shows that the effective heat current, $\dot{{Q}}^{(1)}$, converges to the full one, $\dot{{Q}}$, minus the macroscopic dissipation of the heavy particle, $ {\xi}_2 {v}_t^2$, as ${m}_2$ increases. ![Difference between the full $\dot{{Q}}$ and effective $\dot{{Q}}^{(1)}$ heat current in a). Moreover, the heat current associated with the heavy particle, $\dot{q}^{(2)} = - {\xi}_2 \, {v}_t^2$, as well as the difference between $\dot{{Q}}$ and $\dot{q}^{(1)}$ are overlaid. Fig. b) is analogous to a) but depicting entropy production rates. Information flow $\dot{{I}}^{(2 \to 1)}$ is shown in panel c). \[fig:exampleheavyheatcurrent\] ](thermodynamicsheavy.pdf) This macroscopic term is naturally non-negative and periodic with the frequency $k_2/{m}_2$ due to the choice of a harmonic potential . Furthermore, Fig. \[fig:exampleheavyheatcurrent\] b) illustrates the convergence of the effective entropy production $\dot{{\Sigma}}^{(1)}$ to the full one, $\dot{{\Sigma}}$, plus the macroscopic dissipation of the heavy particle with increasing ${m}_2$. Since the single-particle definitions for the heat current, $\dot{q}^{(1)}$, and the entropy production rate, $\dot{\sigma}^{(1)}$, also converge to the full quantities, respectively, it follows that the time-derivative of the conditional Shannon entropy, ${\mathrm{d}}_t {S}_{\bm{2}|\bm{1}}$, and the information flow from the light to the heavy particle, $\dot{{I}}^{(1\to2)}$, vanish as ${m}_2$ grows. Finally, in Fig. \[fig:exampleheavyheatcurrent\] c) the directional information flow from the heavy to the light particle $\dot{{I}}^{(2\to1)}$ is shown to decrease in modulus with increasing ${m}_2$. It is interesting to note that this decaying directional flow becomes negative if ${m}_2$ is sufficiently large. This means that the non-positive entropic contribution converges to zero at a slower rate than the force contribution does. Conclusion {#sec:conclusion} ========== In this work, we presented three coarse-graining approaches for the thermodynamics of two interacting underdamped Brownian particles: the observation of only one particle while the other one has been coarse-grained, the partitioning of the two-body system into two single-particle systems exchanging information flows, and the Hamiltonian of mean force formalism. We demonstrated that the effective thermodynamics of the first and third approaches is equivalent to the correct global thermodynamics in the limit of time-scale separation between the two particles, where the faster evolving particle equilibrates with respect to the coordinates of the more slowly evolving particle. Conversely, we observed a mismatch between the effective and full thermodynamics in the bipartite case, since the entropic contribution due to the coupling of the two particles is not taken into account. Physically, in this limit the faster evolving particle becomes part of the heat reservoir to which the other particle is coupled. In contrast, if one particle becomes deterministic because of an exceedingly large mass compared to the other particle’s mass, it acts as an additional work source on the lighter particle.
In this case, the effective thermodynamics of the first two of the aforementioned three approaches agrees, up to a simple macroscopic term related to the dissipation of the work source, with the correct global one. The Hamiltonian of mean force formalism, however, was shown to be incompatible with the large-mass limit. In fact, the same is true for any physical regime outside the time-scale separation limit. This reflects that the Hamiltonian of mean force formalism was originally motivated by and employed in equilibrium thermostatics. These theoretical predictions were confirmed via an analytically tractable model made up of two linearly coupled harmonic oscillators. We remark that the generalization to an arbitrary many-body system, where particles one and two are replaced by two subsets of interacting particles, is straightforward. Since the findings for systems with arbitrarily many bodies are identical to the results for the two-body setup reported above, an explicit presentation of the former is omitted. We leave the study of effective fluctuating thermodynamics in underdamped systems for future work. In this context, it would also be interesting to explore whether other coarse-graining schemes that were applied to jump processes, for instance as proposed in Refs. [@polettini2017prl; @polettini2019jsp], can also be utilized in underdamped Fokker-Planck systems. These studies could give rise to new strategies for systematically and thermodynamically consistently coarse-graining many-body systems. Acknowledgments {#acknowledgments .unnumbered} =============== We gratefully acknowledge funding by the National Research Fund, Luxembourg (AFR PhD Grant 2016, No. 11271777 and ATTRACT Project No. FNR/A11/02) and by the European Research Council (Project NanoThermo, ERC-2015-CoG Agreement No. 681456). [^1]: For velocity-dependent nonconservative forces, *e.g.* magnetic forces, the following procedure is analogous to the case of velocity-independent forces. The only formal modifications are the additional terms, $\sum_i \! 1/{m}_i \! \int \! {\mathrm{d}}\bm{{\Gamma}}_i {P}_i {\partial}_{\bm{{v}}_i} \cdot \bm{{g}}_i$, that appear in the entropy balance equation , cf. Eq. .
--- author: - 'Nastassia Grimm,' - Jaiyul Yoo title: Jacobi Mapping Approach for a Precise Cosmological Weak Lensing Formalism --- Introduction ============ The potential of cosmological weak lensing, the deflection of light from distant sources by the large-scale structures of the universe, as a powerful cosmological probe was already recognized by theorists over half a century ago (see [@Gunn; @Miralda1; @Miralda2; @Kaiser] for early work). However, a few more decades had to pass until our observational tools were sufficiently developed to measure these extremely subtle effects. In 2000, the measurement of a cosmic shear signal was reported by four independent groups ([@FirstDetection1; @FirstDetection2; @FirstDetection3; @FirstDetection4]). These first observations immediately sparked great interest within the scientific community. Numerous improved observations followed soon, and cosmological weak lensing has established itself as one of the most successful and promising research fields in cosmology (see e.g. [@BartelmannSchneider; @ReviewMunshi; @ReviewRefregier; @ReviewKilbinger] for reviews). With the next generation of weak lensing surveys, referred to as stage IV, this research field will reach its next important milestone: The ground-based observatory Large Synoptic Survey Telescope (LSST; [@LSST]), and the space-based missions Wide Field Infrared Survey Telescope (WFIRST; [@WFIRST]) and Euclid [@Euclid] will together cover a large fraction of the sky and measure the shape of roughly a billion galaxies with unprecedented precision. These future observations are expected to play a major role in understanding mysteries of the universe such as the nature of dark energy. However, with the high precision and the vast amount of data provided by these surveys, the scientific community is confronted with the challenging task of accounting for all sources of uncertainties to avoid false conclusions. A review of systematics in cosmic shear measurements and their theoretical interpretation which need to be brought under control to ensure the credibility of potential new findings can be found in [@Mandelbaum]. In addition to these well-known issues, there are even more fundamental problems in the theoretical framework for cosmological weak lensing, as was pointed out recently in [@newpaper]. The standard formalism used to describe cosmological weak lensing effects yields gauge-dependent results for the observables, i.e. the convergence, the cosmic shear and the rotation. As different gauge-choices are physically indistinguishable, observable quantities have to be gauge-invariant. This discrepancy clearly shows that the standard cosmological weak lensing formalism fails to correctly account for all relativistic effects contributing to the lensing observables, and thus, conclusions based on this formalism might not be accurate. To some extent, flaws in the standard weak lensing formalism have already been known before. Cosmological studies measuring the magnification effect of weak lensing in fact do not measure the convergence $\kappa$, which characterizes the magnification in the standard formalism, but the distortion in the luminosity distance $\delta D$. The convergence itself is neither a gauge-invariant nor an observable quantity. One of several possibilities to calculate the luminosity distance in a perturbed FLRW universe is to apply its relation to the Jacobi map, which was first done by C. Bonvin, R. Durrer and M. A. Gasparini in 2006 [@Bonvin]. 
Although, since then, this method has been applied in various other works (see e.g. [@Bonvin2; @CMBLensing; @SecondOrderShear; @UTLD; @Clarkson; @Clarkson2; @Yamauchi]), a proof of its gauge-invariance has not been performed so far. In this work, we will prove that the Jacobi mapping formalism indeed yields a gauge-invariant expression for the distortion in the luminosity distance. To this end, we keep all ten degrees of freedom of the perturbed FLRW-metric and work with gauge-invariant perturbation quantities. Furthermore, we will show that the Jacobi mapping approach can be used to calculate gauge-invariant quantities for the cosmic shear and the lensing rotation, in close analogy to the calculation of the gauge-invariant distortion in the luminosity distance $\delta D$, which replaces the convergence $\kappa$ of the standard formalism. The idea to extend the Jacobi mapping approach to determine the cosmic shear components in addition to the luminosity distance was already applied in [@CMBLensing], where linear order expressions coinciding with the results of the standard formalism were obtained. Moreover, this method was applied in [@SecondOrderShear] for second order calculations of the shear components. However, the calculations in these papers have been performed only in the Newtonian gauge for scalar modes. When tensor modes are included, we will see that the Jacobi mapping approach yields results for the cosmic shear and the rotation which are in disagreement with the standard formalism. This paper is structured as follows: In Section \[Subsection:Metric\]–\[Subsection:wavevector\], we introduce basic notations and discuss fundamental concepts such as the local orthonormal tetrads. These tetrads are the link between the global frame described by a FLRW-metric through which the light propagates and the local frames of the source and the observer. They are, as we will discuss, vitally important to correctly describe weak lensing effects and avoid gauge dependencies. In Section \[Subsection:SF\], we will briefly summarize the standard weak lensing formalism and its gauge issues. Section \[Section:JM\] is the main part of this work, dedicated to establishing a precise cosmological weak lensing formalism based on the Jacobi map. We will describe how the physical lensing observables can be calculated to arbitrary order by introducing an accurate definition of the distortion matrix, ensuring that all steps of the Jacobi mapping formalism are rigorously justified. Furthermore, we will explicitly perform the linear-order calculations for the lensing observables including all scalar, vector and tensor modes and compare our results to those of the standard formalism. Finally, we will summarize and conclude our results in Section \[Conclusion\]. Preliminaries {#Preliminaries} ============= In this section, we first introduce in Section \[Subsection:Metric\] our convention for the perturbed FLRW metric and the gauge-invariant variables for the metric perturbations that will be used in this paper. In Section \[Subsection:wavevector\], we introduce the local orthonormal tetrad basis, and describe how it is used to obtain the expression of the photon wavevector in FLRW coordinates from its observed value in the local observer frame. We also introduce the conformally transformed metric, which is used to significantly reduce the complexity of the Jacobi mapping formalism in Section \[Section:JM\].
Finally, in Section \[Subsection:SF\] we briefly review the standard weak lensing formalism and its gauge-issues which were pointed out in [@newpaper]. Perturbed FLRW metric and gauge-invariant variables {#Subsection:Metric} --------------------------------------------------- The calculations in this paper are performed in a perturbed FLRW metric, $$\begin{aligned} \mathrm ds^2&=g_{\mu\nu}\mathrm{d}x^\mu\mathrm dx^\nu \nonumber \\ &=-a^2(\tau)(1+2\mathcal{A})\mathrm d\tau^2-2a^2(\tau)\mathcal{B}_\alpha\mathrm d\tau\mathrm dx^\alpha+a^2(\tau)\left(\delta_{\alpha\beta}+2\mathcal{C}_{\alpha\beta}\right)\mathrm dx^\alpha \mathrm dx^\beta\,,\end{aligned}$$ where $\tau$ is the conformal time and $a(\tau)$ is the expansion scale factor. Note that we use $\mu,\nu,\rho,\dots$ to represent the 4-dimensional spacetime indices, and $\alpha,\beta,\gamma,\dots$ to represent the 3-dimensional spatial indices. The metric perturbations can be decomposed into scalar, vector and tensor perturbations: $$\begin{aligned} \mathcal{A}=\alpha,\qquad \mathcal{B}_\alpha=\beta_{,\alpha}+B_\alpha, \qquad \mathcal{C}_{\alpha\beta}=\varphi\delta_{\alpha\beta}+\gamma_{,\alpha\beta}+C_{(\alpha,\beta)}+C_{\alpha\beta}\,, \label{MetricDecomposition}\end{aligned}$$ where the vector perturbations $B_\alpha$, $C_\alpha$ are divergenceless and the tensor perturbation $C_{\alpha\beta}$ is trace-free and divergenceless. In a perturbed FLRW universe, the 4-velocity $u^\mu$ of timelike flows $(u_\mu u^\mu=-1)$ also differs from its background value $\bar u^\mu=1/a\,(1,\,0,\,0,\,0)$: $$\begin{aligned} u^\mu\equiv\frac{1}{a}\left(1-\mathcal{A},\,\mathcal{U}^\alpha\right)\,,\qquad\mathcal{U}^\alpha\equiv-U^{,\alpha}+U^\alpha\,,\qquad \mathcal{U}_\alpha\equiv\delta_{\alpha\beta}\mathcal{U}^\beta\,,\end{aligned}$$ where the vector perturbation $U^\alpha$ is divergenceless. Under a gauge-transformation induced by the coordinate transformation $x^a\mapsto x^a+\xi^a$, where $\xi^a=(T,\,\mathcal{L}^\alpha)$ and $\mathcal{L}^\alpha\equiv L^{,\alpha}+L^\alpha$, the perturbation quantities transform as: $$\begin{aligned} &\alpha\mapsto\alpha-T'-\mathcal{H}T\,,\qquad\beta\mapsto\beta-T+L'\,,\qquad\varphi\mapsto\varphi-\mathcal{H}T\,,\qquad \gamma\mapsto\gamma-L\,,\nonumber\\ &B_\alpha\mapsto B_\alpha+L'_\alpha\,,\qquad C_\alpha\mapsto C_\alpha-L_\alpha\,,\qquad U\mapsto U-L'\,,\qquad\mathcal{U}_\alpha\mapsto\mathcal{U}_\alpha+L'_\alpha\,.\end{aligned}$$ From this, we infer that the following combinations of the perturbation variables are gauge-invariant: $$\begin{aligned} &\alpha_\chi=\alpha-\frac{1}{a}\chi',\qquad \varphi_\chi=\varphi-H\chi,\qquad v_\chi=v-\frac{1}{a}\chi\,, \nonumber \\ &\Psi_\alpha=B_\alpha+C'_\alpha,\qquad v_\alpha=U_\alpha-B_\alpha,\qquad V_\alpha=-{v_\chi}_{,\alpha}+v_\alpha\,, \label{gmetric}\end{aligned}$$ where $\chi\equiv a(\beta+\gamma')$ is the *scalar shear* which transforms as $\chi\mapsto\chi-aT$, and $v\equiv U+\beta$ is the *scalar velocity* which transforms as $v\mapsto v-T$. For future reference, we also define $\mathcal{G}^\alpha=\gamma^{,\alpha}+C^\alpha$, which transforms as $\mathcal{G}^\alpha\mapsto\mathcal{G}^\alpha-\mathcal{L}^\alpha$. The weak gravitational lensing quantities derived in this paper will be written fully in terms of the gauge-invariant variables given in and the gauge term $\mathcal{G}^\alpha$. Their gauge-transformation properties will thus be immediately evident. 
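As an explicit illustration of how these combinations arise, the transformation rules listed above, together with the conformal Hubble rate $\mathcal{H}\equiv a'/a=aH$ and the transformation $\chi\mapsto\chi-aT$, immediately yield the invariance of, e.g., $\varphi_\chi$ and $\alpha_\chi$: $$\begin{aligned} \varphi_\chi\,\mapsto\,\left(\varphi-\mathcal{H}T\right)-H\left(\chi-aT\right)=\varphi_\chi\,,\qquad \alpha_\chi\,\mapsto\,\left(\alpha-T'-\mathcal{H}T\right)-\frac{1}{a}\left(\chi'-a'T-aT'\right)=\alpha_\chi\,.\end{aligned}$$ The remaining combinations can be checked in the same way.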
Orthonormal tetrads and perturbations of the photon wavevector {#Subsection:wavevector} -------------------------------------------------------------- Photons emitted by some distant light source travel through the universe on null geodesics, defined by the geodesic equation $k^\mu {k^\nu}_{;\mu}=0$ and the null condition $k^\mu k_\mu=0$, where the semicolon denotes the covariant derivative with respect to the metric $g_{\mu\nu}$. The tangent vector $k^\mu$ is given by $k^\mu\equiv \mathrm dx^\mu(\Lambda)/\mathrm d\Lambda$, where $\Lambda$ is an affine parameter of the photon path. When these photons reach our observer position at some affine parameter $\Lambda_o$, we measure the photon wavevector $k^a(\Lambda_o)=\omega_o(1,\,-n^i)$, where $\omega_o$ is the angular frequency and $n^i$ is the observed photon direction. However, this measurement is performed in our local rest frame (or *local Lorentz frame*), described by the Minkowski metric $\eta_{ab}$, and not in the global frame described by the perturbed FLRW metric $g_{\mu\nu}$. Note that, to distinguish them from the global FLRW coordinates, we use latin indices for the components in the local frame, where $a,b,c,\dots$ represent the 4-dimensional spacetime components and $i,j,k,\dots$ the 3-dimensional space components. The relation between the global frame and the local Lorentz frame of an observer with velocity $u^\mu$ is described by the orthonormal tetrads $e^\mu_a$ which transform the global into the local metric, $$\begin{aligned} \eta_{ab}=g_{\mu\nu}e_a^\mu e_b^\nu\,. \label{tetrads1}\end{aligned}$$ Additionally, we require that the timelike tetrad $e_0^\mu$ coincides with the observer 4-velocity, $e_0^\mu=u^\mu$. A vector $A^a$ (e.g. the photon wavevector $k^a$) measured in the local observer frame can thus be transformed to FLRW coordinates as $$\begin{aligned} A^\mu=A^a e^\mu_a\,.\end{aligned}$$ This transformation property is of fundamental importance to properly describe cosmological weak lensing observables. The concepts of the size and shape of an object, which are affected by gravitational lensing, are defined not in the global spacetime manifold described by the FLRW metric $g_{\mu\nu}$, but in the local Lorentz frame of the source. Hence, we need to transform the quantities of interest into the local Lorentz frame. The property in equation does not uniquely define the tetrad basis. As described in [@newpaper], we need to additionally take into account that the tetrads $e_i^\mu$ are four vectors in the FLRW frame, i.e. transform as vectors under a coordinate transformation. Hence, using the notation of Section \[Subsection:Metric\], their gauge-transformation property is given by $$\begin{aligned} e_i^0\mapsto e_i^0+\frac{1}{a}\delta_i^\alpha T_{,\alpha}\,,\qquad e_i^\alpha\mapsto e_i^\alpha+\frac{1}{a}\delta^\beta_i{L^\alpha}_{,\beta}+\frac{1}{a}\delta^\beta_i{L^{,\alpha}}_\beta+H\delta_i^\alpha T \label{tetrads2}\,.\end{aligned}$$ The properties in equations and are fulfilled by $$\begin{aligned} e_i^\mu=\frac{1}{a}\left(\delta_i^\beta\left(\mathcal{U}_\beta-\mathcal{B}_\beta\right),\,\delta_i^\alpha-\delta_i^\beta{p^\alpha}_\beta\right)\,,\qquad {p^\alpha}_\beta\equiv\varphi\delta^\alpha_\beta+{\mathcal{G}^\alpha}_{,\beta}+{C}^\alpha_\beta+{\epsilon^\alpha}_{\beta j}\Omega^j\,, \label{tetrads}\end{aligned}$$ where $\Omega^j$ denotes the spatial orientation of the local frame. 
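To linear order, one can verify directly that this choice indeed satisfies the defining property of the tetrads: for the purely spatial components, $$\begin{aligned} g_{\mu\nu}e_i^\mu e_j^\nu=\delta_i^\alpha\delta_j^\beta\left(\delta_{\alpha\beta}+2\mathcal{C}_{\alpha\beta}-2p_{(\alpha\beta)}\right)+\mathcal{O}(2)=\delta_{ij}+\mathcal{O}(2)\,,\end{aligned}$$ since the symmetric part $p_{(\alpha\beta)}=\varphi\delta_{\alpha\beta}+\gamma_{,\alpha\beta}+C_{(\alpha,\beta)}+C_{\alpha\beta}$ coincides with $\mathcal{C}_{\alpha\beta}$, while the rotation term drops out of the symmetrization; the mixed components $g_{\mu\nu}e_0^\mu e_i^\nu$ vanish analogously because the contributions of $\mathcal{U}_\alpha$ and $\mathcal{B}_\alpha$ cancel.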
Note that because of the gauge-transformation property stated in equation , the antisymmetric component $p_{[\alpha\beta]}=C_{[\alpha,\beta]}-{\epsilon^\alpha}_{\beta j}\Omega^j$ of the tetrads is non-vanishing. By applying the previous equation, we can now derive the expression for the photon wavevector $k_o^\mu$ in FLRW coordinates, $$\begin{aligned} k_o^\mu=\left(k^ae^\mu_a\right)_o=\frac{\omega_o}{a_o}\left(1-\mathcal{A}-n^i\delta_i^\beta\left(\mathcal{U}_\beta-\mathcal{B}_\beta\right),\,-n^i\delta_i^\alpha+\mathcal{U}^\alpha+n^i\delta_i^\beta {p^\alpha}_\beta\right)_o\,, \label{wavevectorFLRW}\end{aligned}$$ where we require that the photon wavevector $k^a e^\mu_a$ is equal to the tangent vector $k^\mu=\mathrm dx^\mu(\Lambda)/\mathrm d\Lambda$, which uniquely fixes the parameter $\Lambda$ along the geodesic. We emphasize that the relation given in equation  is valid only at the observer position since the photon direction $n^i(\Lambda)$ measured by a comoving observer at an affine parameter $\Lambda\neq\Lambda_o$ will differ from $n^i$ by a first-order quantity. To simplify further calculations, it is useful to express $k^\mu$ in the conformally transformed metric $\hat g_{\mu\nu}$ defined by $a^2\hat g_{\mu\nu}=g_{\mu\nu}$. As described e.g. in [@Wald], null geodesics are invariant under conformal transformations. The photon path $x^\mu(\Lambda)$ is thus unaffected. However, the affine parameter $\Lambda$ is transformed into another affine parameter $\lambda$, $\mathrm d\Lambda/\mathrm d\lambda=\mathbb{C}a^2$, where the constant factor $\mathbb{C}$ is so far unspecified. The conformally transformed wavevector at the observer position $\hat k_o^\mu$ is given by $$\begin{aligned} \hat k_o^0=\left[2\mathbb{C}\pi\nu a\,\left(1-\mathcal{A}-n^\beta\left(\mathcal{U}_\beta-\mathcal{B}_\beta\right)\right)\right]_o\,,\quad\hat k_o^\alpha=\left[2\mathbb{C}\pi\nu a\,\left(-n^\alpha+\mathcal{U}^\alpha+n^\beta{p^\alpha}_\beta\right)\right]_o\,,\end{aligned}$$ where $2\pi\nu\equiv\omega$ and $n^\alpha\equiv n^i\delta^\alpha_i$. We fix the factor $\mathbb{C}$ by requiring that at some point of the photon path specified by the affine parameter $\Lambda_p$ we have $2\mathbb{C}\pi\nu_pa_p=1$. Since the factor $2\pi\nu a$ is constant in an unperturbed FLRW universe, this choice of $\mathbb{C}$ enables us to define the perturbation variable $\widehat{\Delta\nu}$ as $2\mathbb{C}\pi\nu a\equiv 1+\widehat{\Delta\nu}$, with the normalization $\widehat{\Delta\nu}_p=0$. The perturbation variables $\delta\nu$ and $\delta n^\alpha$ for the photon wavevector can be defined as $$\begin{aligned} \hat k^\mu\equiv (1+\delta\nu,\,-n^\alpha-\delta n^\alpha)\,,\end{aligned}$$ where $$\begin{aligned} \delta\nu_o=\left(\widehat{\Delta\nu}-\mathcal{A}-n^\alpha\left(\mathcal{U}_\alpha-\mathcal{B}_\alpha\right)\right)_o\,,\qquad\delta n_o^\alpha=\left(-n^\alpha\widehat{\Delta\nu}-\mathcal{U}^\alpha+n^\beta{p^\alpha}_\beta\right)_o\,. \label{deltanu}\end{aligned}$$ The wavevector $\hat k^\mu$ differs from $(1,\,-n^\alpha)$ by a first-order quantity at any point of the light path, i.e. the quantities $\delta\nu$ and $\delta n^\alpha$ are well-defined. Equation , which is valid only at the observer position, serves as the boundary condition.
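For completeness, we note where the overall factor $2\mathbb{C}\pi\nu a$ originates: since the photon path is unchanged by the conformal transformation, the two tangent vectors are related by the reparametrization alone, $$\begin{aligned} \hat k^\mu=\frac{\mathrm dx^\mu}{\mathrm d\lambda}=\frac{\mathrm d\Lambda}{\mathrm d\lambda}\,\frac{\mathrm dx^\mu}{\mathrm d\Lambda}=\mathbb{C}a^2\,k^\mu\,,\end{aligned}$$ so that multiplying the expression for $k_o^\mu$ given above by $\mathbb{C}a_o^2$ and using $\omega=2\pi\nu$ reproduces the expressions for $\hat k_o^0$ and $\hat k_o^\alpha$.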
Standard weak lensing formalism {#Subsection:SF} ------------------------------- The observed source position $\bar{x}_s^\mu$ of a light source at redshift $z$ and angular direction $(\theta,\,\phi)$ is given by $$\begin{aligned} \bar{x}_s^\mu=\left(\bar\tau_z,\,\bar{r}_z\sin\theta\,\cos\phi,\,\bar{r}_z\sin\theta\,\sin\phi,\,\bar{r}_z\cos\theta\right)\,,\end{aligned}$$ where the temporal and radial coordinates are related to the redshift by $$\begin{aligned} \bar\tau_o-\bar\tau_z=\bar r_z=\int_0^z\frac{\mathrm dz'}{H(z')}\,,\end{aligned}$$ and the photon path is parametrized by the conformal time, $\lambda_z=\bar\tau_z-\bar\tau_o=-\bar{r}_z$. In our real universe, however, the geodesic on which the photons travel differs from the perfectly straight path due to gravitational lensing, which causes a distortion $\delta x^\mu_s$ in the source position: $$\begin{aligned} x_s^\mu=\bar x^\mu_s+\delta x^\mu_s\equiv&\big(\bar\tau_z+\Delta\tau,\,(\bar{r}_z+\delta r)\sin(\theta+\delta\theta)\cos(\phi+\delta\phi), \nonumber \\ &\phantom{(\bar\tau_z+\Delta\tau,\,}(\bar{r}_z+\delta r)\sin(\theta+\delta\theta)\sin(\phi+\delta\phi),\,(\bar{r}_z+\delta r)\cos(\theta+\delta\theta)\big)\,.\end{aligned}$$ For future reference, also note that the affine parameter $\lambda_s\equiv\lambda_z+\Delta\lambda_s$ and the redshift of the source are distorted, where we define the redshift distortion $\delta z$ as $$\begin{aligned} a_s\equiv\frac{1+\delta z}{1+z}\,. \label{defdeltaz}\end{aligned}$$ To linear order in perturbations, the spatial source position can be written as $$\begin{aligned} x^\alpha_s=(\bar{r}_z+\delta r)n^\alpha+\bar{r}_z\,\delta\theta\,\theta^\alpha+\bar{r}_z\sin\theta\,\delta\phi\,\phi^\alpha\,,\end{aligned}$$ where $n^\alpha\equiv n^i\delta^\alpha_i$ is the photon direction measured in the local rest frame of an observer at $\Lambda_o$, and $\theta^\alpha\equiv\theta^i\delta^\alpha_i$ and $\phi^\alpha\equiv\phi^i\delta^\alpha_i$ are two directions orthonormal to it, $$\begin{aligned} n^\alpha=\begin{pmatrix}\sin\theta\cos\phi\\\sin\theta\sin\phi\\\cos\theta\end{pmatrix}\,,\qquad\theta^\alpha=\begin{pmatrix}\cos\theta\cos\phi\\\cos\theta\sin\phi\\-\sin\theta\end{pmatrix}\,,\qquad\phi^\alpha=\begin{pmatrix}-\sin\phi\\\cos\phi\\0\end{pmatrix}\,.\end{aligned}$$ In this paper, the quantities $n^\alpha$, $\theta^\alpha$ and $\phi^\alpha$ always refer to the directions defined at $\Lambda_o$ unless another affine parameter is specified, i.e. $n^\alpha\equiv n^\alpha(\Lambda_o)$, $\theta^\alpha\equiv\theta^\alpha(\Lambda_o)$ and $\phi^\alpha\equiv\phi^\alpha(\Lambda_o)$. The distortion of the photon path affects not only the source position, but also the observed size and shape of the image. This effect is quantified by the *distortion matrix* $\mathbb{D}^I{_J}$, $I=1,2$, which is also referred to as the *amplification matrix*. In most literature on gravitational lensing, ${\mathbb{D}^I}_J$ is defined as the $2\times 2$-dimensional projection of the Jacobian matrix of the map $\bar x_s^\alpha\mapsto x_s^\alpha$ onto the plane orthogonal to the observed photon direction $n^\alpha$, hence $$\begin{aligned} {\mathbb{D}^I}_J\equiv\frac{\partial\beta^I}{\partial\theta^J}\,, \label{DefDIJwrong}\end{aligned}$$ where $\theta^I=(\theta,\,\phi)$ and $\beta^I=(\theta+\delta\theta,\,\phi+\delta\phi)$.
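As a purely illustrative aside, the background distance–redshift relation quoted above can be evaluated numerically once an expansion history is assumed; the following sketch uses a flat $\Lambda$CDM $H(z)$ with parameter values that are arbitrary choices (not taken from this work) and restores the factor of $c$ explicitly.

```python
import numpy as np
from scipy.integrate import quad

# Assumed flat LCDM background (illustrative values only)
H0 = 70.0            # km/s/Mpc
c = 299792.458       # km/s
Om, OL = 0.3, 0.7

def H(z):
    return H0*np.sqrt(Om*(1.0 + z)**3 + OL)

def comoving_distance(z):
    """bar{r}_z = int_0^z c dz'/H(z'), returned in Mpc."""
    val, _ = quad(lambda zp: c/H(zp), 0.0, z)
    return val

for z in (0.5, 1.0, 2.0):
    print(f"z = {z}:  bar r_z = {comoving_distance(z):.1f} Mpc")
```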
For the geometrical interpretation of the distortion matrix, consider two light rays coming from the same source observed at an infinitesimally small angular separation $\Delta\Phi^I=(\Delta\theta,\,\sin\theta\,\Delta\phi)$ and redshift $z$. The observed spatial separation of the source is, hence, given by $\bar\xi^I_s=\bar a_s\bar r_z\Delta\Phi^I$. The Jacobian matrix of a map relates infinitesimal separations to each other, i.e. we can write the distortion matrix $\mathbb{D}^I{_J}$ as $$\begin{aligned} \begin{pmatrix}\xi^\alpha_s\bar a_s\theta_\alpha\\\xi^\alpha_s\bar a_s\phi_\alpha\end{pmatrix}=\mathbb{D}^I{_J}\bar\xi^J_s\,, \label{DIJstandard}\end{aligned}$$ where $\xi_s^\alpha=x^\alpha_s(\theta+\Delta\theta,\,\phi+\Delta\phi)-x^\alpha_s(\theta,\,\phi)$ is the separation of the FLRW coordinates at the source position. Note that in the standard formalism, the redshift distortion $\delta z$ and the distortion $\delta r$ of the radial coordinate are ignored. The distortion matrix can be decomposed into a trace, a traceless symmetric component and a traceless anti-symmetric component, $$\begin{aligned} {\mathbb{D}^I}_J\equiv\begin{pmatrix}1-\kappa&0\\0&1-\kappa\end{pmatrix}-\begin{pmatrix}\gamma_1&\gamma_2\\\gamma_2&-\gamma_1\end{pmatrix}-\begin{pmatrix}0&\omega\\-\omega&0\end{pmatrix}\,,\end{aligned}$$ where $\kappa$ is the *convergence* quantifying the magnification of the image, $\gamma_1$ and $\gamma_2$ are the *shear components* quantifying a distortion in shape, and $\omega$ is the *rotation*. Following the definition of the distortion matrix given in equation , these quantities can be inferred from the expressions for $\delta\theta$ and $\delta\phi$. The true source position $x^\alpha_s$ and hence the angular distortions $\delta\theta$ and $\delta\phi$ can be obtained by integrating the photon wavevector $\hat k^\mu=\mathrm dx^\mu(\lambda)/\mathrm d\lambda$ from the observer position to the source position at the perturbed affine parameter $\lambda_s\equiv\lambda_z+\Delta\lambda_s$. In most weak lensing literature, these calculations are performed considering only scalar perturbations and applying a certain gauge, most commonly the Newtonian gauge. To test the standard weak lensing formalism for gauge-invariance, Yoo et al. have calculated the expressions for $\kappa$, $\gamma_1$, $\gamma_2$ and $\omega$ resulting from the definition without fixing a gauge condition and also including vector and tensor modes [@newpaper]. 
They obtained $$\begin{aligned} \gamma_1=&\frac{1}{2}\left(\phi_\alpha\phi_\beta-\theta_\alpha\theta_\beta\right)\left[C^{\alpha\beta}_o-\mathcal{G}^{\alpha,\beta}_s-\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\frac{\partial}{\partial x_\beta}\left(\Psi^\alpha+2C^{\alpha\gamma}n_\gamma\right)\right] \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{2\bar{r}_z\bar{r}}\right)\left(\frac{\partial^2}{\partial\theta^2}-\cot\theta\frac{\partial}{\partial\theta}-\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right)\left(\alpha_\chi-\varphi_\chi-n^\beta\Psi_\beta-n^\beta n^\gamma C_{\beta\gamma}\right) \label{gamma1}\end{aligned}$$ and $$\begin{aligned} \gamma_2=&\frac{1}{2}\left(\theta_\alpha\phi_\beta+\theta_\beta\phi_\alpha\right)\left[-C^{\alpha\beta}_o+\mathcal{G}^{\alpha,\beta}_s+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\frac{\partial}{\partial x_\beta}\left(\Psi^\alpha+2C^{\alpha\gamma}n_\gamma\right)\right] \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{\bar{r}_z\bar{r}}\right)\frac{\partial}{\partial\theta}\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}\left(\alpha_\chi-\varphi_\chi-n^\beta\Psi_\beta-n^\beta n^\gamma C_{\beta\gamma}\right)\right] \label{gamma2}\end{aligned}$$ for the shear components. For the convergence, the standard formalism yields $$\begin{aligned} \kappa=&\left(\frac{3}{2}C^{\alpha\beta}n_\beta-V^\alpha\right)_on_\alpha-\frac{n_\alpha\mathcal{G}_s^\alpha}{\bar{r}_z}+\frac{\widehat\nabla_\alpha\mathcal{G}_s^\alpha}{2\bar{r}_z}+\frac{n_\alpha\left(\delta x^\alpha+\mathcal{G}^\alpha\right)_o}{\bar{r}_z} \nonumber \\ &-\int_0^{\bar{r}_z}\frac{\mathrm d\bar{r}}{\bar{r}}\,\left(n_\alpha-\frac{\widehat\nabla_\alpha}{2}\right)\left(\Psi^\alpha+2C^\alpha_\beta n^\beta\right) \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{2\bar{r}_z\bar{r}}\right)\widehat{\nabla}^2\left(\alpha_\chi-\varphi_\chi-n^\beta\Psi_\beta-n^\beta n^\gamma C_{\beta\gamma}\right)\,, \label{kappa}\end{aligned}$$ where $\widehat\nabla_\alpha$ and $\widehat\nabla^2$ denote the angular gradient and the angular Laplace operator (see Appendix \[Subsection:SphericalCoordinates\]). Finally, the result for the rotation is $$\begin{aligned} \omega=\Omega^n_o+\frac{1}{2}\left(\theta_\alpha\phi_\beta-\phi_\alpha\theta_\beta\right)\left(\mathcal{G}_s^{\alpha,\beta}+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\frac{\partial}{\partial x_\beta}\left(\Psi^\alpha+2C^\alpha_\gamma n^\gamma\right)\right)\,, \label{omega}\end{aligned}$$ where $\Omega^n_o=n_i\Omega^i_o$ at the observer position. These expressions are arranged such that the gauge-invariant terms are isolated from the gauge-dependent terms, i.e. those with $\mathcal{G}^\alpha_s$. From this, it is immediately evident that none of the expressions obtained for $\gamma_1$, $\gamma_2$, $\kappa$ and $\omega$ in the standard formalism are gauge-invariant. This is a blatant contradiction to the fact that observable quantities such as the cosmic shear can not depend on physically indistinguishable gauge choices. Therefore, the standard formalism does not capture all physical effects contributing to the cosmic shear and is unsuitable for high-precision cosmological studies. The gauge-dependence of the expression for $\omega$ is also problematic. Several groups have argued that, although the magnitude of this effect is under debate, the rotation contributes to the B-modes of the CMB polarization power spectrum [@CMBPolarization; @Beppe; @Fabbian; @Takahashi]. 
Furthermore, it was proposed that the rotation could be measured in galaxy surveys by including the polarization information of the galaxy radio emission [@Polarization]. Therefore, although it has not been measured yet, the lensing rotation is a physical observable and its gauge issues need to be clarified. Yoo et al. concluded that the rotation is in fact fully vanishing for scalar, vector and tensor modes to linear order [@newpaper]. We will confirm this result in the next section. For the convergence $\kappa$, it has already been pointed out that it is neither an observable nor a gauge-invariant quantity. Magnification effects in weak lensing are in fact quantified by the distortion in the luminosity distance $\delta D$, which is a gauge-invariant quantity closely related to the convergence [@GalaxyClustering]. One of several possibilities to calculate $\delta D$ is to apply its relation to the Jacobi map presented in [@Schneider]. However, this method has so far not been tested for gauge-invariance. In the next section, we will show that the Jacobi mapping approach indeed yields a gauge-invariant expression for the distortion in the luminosity distance and that, similarly, it can be applied to calculate gauge-invariant expressions for the shear components and the rotation. Jacobi Mapping Approach for Cosmological Weak Lensing {#Section:JM} ===================================================== In this section, we prove that the Jacobi mapping approach provides a precise and gauge-invariant cosmological weak lensing formalism which accounts for all relativistic effects. In Section \[Subsection:JMFormalism\]–\[Subsection:JMFormalism2\] we thoroughly investigate the fully non-linear Jacobi mapping approach, paying particular attention to the subtleties of the formalism such as the discrepancy between the comoving velocity of the source and the comoving velocity of the observer parallel-transported to the source position. In Section \[Subsection:JMlinear\]–\[Subsection:Rotation\], we present the linear order results of the Jacobi mapping approach for the cosmological weak lensing quantities including not only scalar, but also vector and tensor modes. In particular, Section \[Subsection:Rotation\] contains a discussion of the lensing rotation, which, somewhat surprisingly, is fully vanishing to linear order. The results presented here are written fully in terms of gauge-invariant variables, which makes this work to the best of our knowledge the first proof of gauge-invariance of the Jacobi-mapping approach. Furthermore, our results coincide with those in [@newpaper], where the cosmological weak lensing quantities were derived with another method based on solving the geodesic equation for the two nearby light rays instead of solving the geodesic deviation equation. Jacobi Mapping Formalism {#Subsection:JMFormalism} ------------------------ Consider two nearby light rays that are emitted from an infinitesimally extended source at an affine parameter $\Lambda_s$ and converge at the position of an observer at an affine parameter $\Lambda_o$, where they are observed to have a small angular separation $\Delta\Phi^I=(\Delta\theta,\,\sin\theta\Delta\phi)$. The vector $\xi^\mu(\Lambda)$ describes the physical separation of these rays evaluated at an affine parameter $\Lambda$. 
In a perfectly homogeneous universe described by the FLRW metric the photons travel on straight paths, and the separation $\bar\xi_s^\mu$ at the source position can be inferred from the observed angular separation $\Delta\Phi^I$ and the observed redshift $z$, i.e. $$\begin{aligned} \bar\xi^\mu_s=\bar a_s\bar{r}_z\,\left(0,\,\Delta\theta\,\theta^\alpha+\sin\theta\Delta\phi\,\phi^\alpha\right)\,,\end{aligned}$$ where the scale factor $\bar a_s$ and the radial coordinate $\bar{r}_z$ are determined by the observed redshift $z$. For a general spacetime metric, the propagation of $\xi^\mu(\Lambda)$ along the light path is described by the *geodesic deviation equation*, $$\begin{aligned} \frac{\mathrm D^2\xi^\mu(\Lambda)}{\mathrm d\Lambda^2}={R^\mu}_{\nu\rho\sigma}k^\nu k^\rho \xi^\sigma\,, \label{geodesicdeviation}\end{aligned}$$ where $R^\mu{_{\nu\rho\sigma}}$ is the Riemann tensor. To determine an expression for $\xi^\mu_s$ it is useful to introduce the *Jacobi map* ${\mathcal{J}^\mu}_\nu(\Lambda)$ which relates the separation $\xi^\mu(\Lambda)$ at some affine parameter $\Lambda$ to the initial value $\dot\xi^\mu_o$, $$\begin{aligned} \xi^\mu(\Lambda)\equiv{\mathcal{J}^\mu}_\nu(\Lambda)\dot\xi_o^\nu\,,\qquad\dot\xi^\mu_o\equiv\left.\frac{\mathrm D}{\mathrm d\Lambda}\xi^\mu(\Lambda)\right\vert_{\Lambda_o}\,.\end{aligned}$$ A propagation equation for ${\mathcal{J}^\mu}_\nu$ can be obtained straight-forwardly from the geodesic deviation equation : $$\begin{aligned} \frac{\mathrm D^2{\mathcal{J}^\mu}_\nu(\Lambda)}{\mathrm d\Lambda^2}=\left({R^\mu}_{\rho\sigma\tau}k^\rho k^\sigma\right){\mathcal{J}^\tau}_\nu(\Lambda)\,. \label{JacMap44}\end{aligned}$$ The derivative $\dot\xi^\mu_o$ is used as an initial condition since the photon geodesics meet at the observer, i.e. $\xi^\mu_o=0$, while $\dot\xi^\mu_o$ is non-zero and related to the observed angular separation as (see e.g. [@UTLD]) $$\begin{aligned} \dot\xi^\mu_o=-\omega_o\,(0,\,\Delta\theta\,\theta^\alpha+\sin\theta\Delta\phi\,\phi^\alpha)\,.\end{aligned}$$ Solving the propagation equation for ${\mathcal{J}^\mu}_\nu$ would provide us with an expression for $\xi^\mu_s$. However, the vector $\xi_s^\mu$ connects two events on the global spacetime manifold described by the FLRW metric. Following the discussion in Section \[Subsection:wavevector\], we need to transform $\xi_s^\mu$ to a local Lorentz frame to quantify cosmological weak lensing effects. First, note that, with the choice of parametrization specified by the condition that the photon wavevector $k^\mu=e^\mu_a k^a$ is equal to the tangent vector $\mathrm dx^\mu(\Lambda)/\mathrm d\Lambda$, the vectors $\xi^\mu_s\equiv\xi^\mu(\Lambda_s)$ and $\dot\xi^\mu_o$ live in the 2-dimensional planes which are orthogonal to the photon wavevector and to the 4-velocity at the source and the observer, respectively (see e.g. [@Schneider]). In particular, this means that $\xi_s^\mu$ connects two events which are simultaneous from the point of view of the comoving observer at the source position, i.e. in its local Lorentz frame the time component $\xi_s^t$ vanishes, $$\begin{aligned} \xi_s^t=\xi^\mu(\Lambda_s){u_\mu}(\Lambda_s)=\xi^\mu(\Lambda_s)e_\mu^t(\Lambda_s)=0\,.\end{aligned}$$ Furthermore, the spatial vector $\xi_s^i$ is orthogonal to the photon direction $n_s^i$, where $k^a_s=\omega_s\,(1,\,-n_s^i)$ in local Lorentz coordinates. 
Hence, the separation vector $\xi_s^a=(0,\,\xi_s^i)$ is in fact a 2-dimensional object, determined by the components $$\begin{aligned} \xi^I_s\equiv\xi^\mu_se^i_\mu(\Lambda_s)\Phi^I_i(\Lambda_s)\,,\qquad I=1,2, \label{defxiI}\end{aligned}$$ where $\Phi^i_I(\Lambda_s)=(\theta^i_s,\,\phi^i_s)$ denotes two directions orthonormal to $n^i_s$. Analogously, the vector $\dot\xi^a_o$ in the observer’s rest frame is determined by the components $$\begin{aligned} \dot\xi^I_o\equiv\dot\xi^\mu_o e^i_\mu(\Lambda_o)\Phi^I_i=\dot\xi^i_o\Phi^I_i\,,\qquad I=1,2\,.\end{aligned}$$ Following these considerations, we want to define a $2\times2$-dimensional Jacobi map ${\mathfrak{D}^I}_J(\Lambda)$ as a map between 2-dimensional vectors. For that, we first define the tetrad basis $[e_I]^\mu(\Lambda_o)$, $I=1,2$, as the orthonormal tetrads which specify the 2-dimensional hypersurface orthogonal to the photon propagation direction and the 4-velocity at the observer. It is obtained by contracting the spatial tetrads ${e_i^\mu}(\Lambda_o)$ with the directions $\theta^i$ and $\phi^i$: $$\begin{aligned} [e_1]^\mu(\Lambda_o)\equiv{e_i^\mu}(\Lambda_o)\theta^i\,,\qquad [e_2]^\mu(\Lambda_o)\equiv{e_i^\mu}(\Lambda_o)\phi^i\,. \label{deftetrads}\end{aligned}$$ This defines the tetrad basis $[e_I]^\mu(\Lambda_o)$, $I=1,2$, uniquely up to the rotation $\Omega_o^j$ of the local frame which corresponds to a rotation of the orthonormal basis $(n^i,\,\theta^i,\,\phi^i)$. Having specified these basis vectors in our local frame, we can set $\Omega^j_o=0$ as a gauge condition, but we choose to keep it in general. As we discuss in Section \[Subsection:Rotation\], only the relation of the rotation $\Omega^j_s$, i.e. the rotation of the local frame at the source position, to $\Omega^j_o$ is relevant, not the value of $\Omega^j_o$ itself. The tetrad basis $[e_I]^\mu(\Lambda)$ at any other affine parameter $\Lambda$ is obtained from its values at the observer position $\Lambda_o$ by parallel transport along the photon path: $$\begin{aligned} \frac{\mathrm D[e_I]^\mu(\Lambda)}{\mathrm d\Lambda}=\frac{\mathrm D\tilde e_i^\mu(\Lambda)}{\mathrm d\Lambda}=0\,,\qquad I=1,2\,. \label{paralleltransport}\end{aligned}$$ The parallel-transported tetrads $\tilde e^\mu_i(\Lambda)$, fulfilling the property $[e_I]^\mu(\Lambda)=\tilde e_i^\mu\Phi^I_i$, specify the local plane of an observer with velocity $\tilde u^\mu(\Lambda)$, i.e. the comoving velocity $u^\mu_o$ at affine parameter $\Lambda_o$ parallel-transported to the affine parameter $\Lambda$. In general, the tetrads $\tilde e^\mu_i(\Lambda)$ differ from the tetrads $e^\mu_I(\Lambda)$ given in equation , which specify the local frame of an observer with comoving velocity $u^\mu(\Lambda)$. The $2\times2$-dimensional Jacobi map ${\mathfrak{D}^I}_J\equiv\mathcal{J}^\mu{_\nu}[e^I]_\mu[e_J]^\nu$ describes the relation between the projected initial separation $\dot\xi^I_o$ and the projected separation $\tilde\xi^I$ at $\Lambda$, i.e.: $$\begin{aligned} \tilde\xi^I\equiv\xi^\mu[e^I]_\mu\,,\qquad\tilde\xi^I={\mathfrak{D}^I}_J(\Lambda)\dot\xi^J_o\,, \qquad \dot\xi^I_o=\dot\xi^\mu_o[e^I]_\mu=\left.\frac{\mathrm d}{\mathrm d\Lambda}\tilde\xi^I\right\vert_{\Lambda_o}\,. \label{defDIJprov}\end{aligned}$$ The quantity $\tilde\xi^I_s$ apparently differs from the physical separation $\xi^I_s$ at the source. 
To obtain $\tilde\xi^I_s$, the separation vector $\xi^\mu_s$ is contracted with the tetrads $[e^I]_\mu(\Lambda_s)$, which specify the two-dimensional plane orthogonal to the photon wavevector $k^\mu_s$ and the velocity $\tilde u^\mu_s$ (note that the photon wavevector is parallel-transported along the photon path, i.e. $\tilde k^\mu_s=k^\mu_s$). However, the vectors $e^i_\mu(\Lambda_s)\Phi^I_i(\Lambda_s)$, which are contracted with $\xi^\mu_s$ to yield the 2-dimensional physical separation $\xi^I_s$ at the source, specify the plane orthogonal to the photon wavevector $k^\mu_s$ and the velocity $u^\mu_s$. The velocities $\tilde u^\mu_s$ and $u^\mu_s$ are in general not the same, and their respective local Lorentz frames are related to each other by a Lorentz boost. Nevertheless, the quantities $\tilde\xi^I_s$ and $\xi^I_s$ coincide, since the effect of the Lorentz boost is fully absorbed in the transformation of the observed photon direction $n^i_s$, and the quantities orthogonal to it remain unaffected [@CMBLensing; @Ermis], which is thoroughly proved in Appendix \[Appendix:Lorentz\]. Therefore, we can rewrite equation  as: $$\begin{aligned} \xi^I={\mathfrak{D}^I}_J(\Lambda)\dot\xi^J_o\,, \qquad \dot\xi^I_o=\dot\xi^\mu_o[e^I]_\mu=\left.\frac{\mathrm d}{\mathrm d\Lambda}\xi^I\right\vert_{\Lambda_o}\,. \end{aligned}$$ Using the parallel-transported tetrads $[e_I]^\mu(\Lambda_s)=\tilde e_i^\mu(\Lambda_s)\Phi^i_I$ to define the Jacobi map has two advantages over using the projected tetrads $e_i^\mu(\Lambda_s)\Phi^i_I(\Lambda_s)$ appearing in the definition of $\xi^I_s$ given in equation . First of all, the parallel transport property enables us to rewrite the geodesic deviation equation into the following propagation equation for $\mathfrak{D}^I{_J}(\Lambda)$: $$\begin{aligned} \frac{\mathrm d^2}{\mathrm d\Lambda^2}\mathfrak{D}^I{_J}=-\mathfrak{R}^I_K\mathfrak{D}^K{_J}\,,\qquad\mbox{where}\qquad\mathfrak{R}^I_J\equiv \left({R^\mu}_{\nu\rho\sigma}k^\nu k^\sigma\right)[e^I]_\mu[e_J]^\rho\,. \label{JacMap}\end{aligned}$$ Secondly, the tetrad basis at the observer is only defined up to the spatial rotations $\Omega^i_o$ as we have mentioned above. Analogously, the projected tetrads $e^i_\mu(\Lambda_s)\Phi^i_I(\Lambda_s)$ are only defined up to the spatial rotations $\Omega^i_s$. Parallel transport, however, uniquely defines the tetrad basis $[e_I]^\mu(\Lambda_s)$ once the basis $[e_I]^\mu(\Lambda_o)$ at the observer position is specified, i.e. it fixes the relation between $\Omega^i_s$ and $\Omega^i_o$. Therefore, parallel transport is vitally important for an unambiguous and physically meaningful definition of the lensing rotation, as we will further discuss in Section \[Subsection:Rotation\]. The term $\mathfrak{R}^I_J$, which describes how the Jacobi map changes along the photon path, can be decomposed into a trace $\mathfrak{R}$ and a traceless symmetric component $(\mathfrak{E}_1,\,\mathfrak{E}_2)$: $$\begin{aligned} \mathfrak{R}^I_J=\begin{pmatrix}\mathfrak{R}/2+\mathfrak{E}_1&\mathfrak{E}_2\\ \mathfrak{E}_2&\mathfrak{R}/2-\mathfrak{E}_1\end{pmatrix}\,.\end{aligned}$$ Note that $\mathfrak{R}^I_J$ has no anti-symmetric component, which follows immediately from its definition given in and the fact that $R_{\mu\nu\rho\sigma}=R_{\rho\sigma\mu\nu}$.
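Written out explicitly, with the index $I$ lowered trivially, the pair symmetry of the Riemann tensor and a relabeling of the dummy indices give $$\begin{aligned} \mathfrak{R}_{IJ}=R_{\mu\nu\rho\sigma}\,k^\nu k^\sigma\,[e_I]^\mu[e_J]^\rho=R_{\rho\sigma\mu\nu}\,k^\nu k^\sigma\,[e_I]^\mu[e_J]^\rho=R_{\mu\nu\rho\sigma}\,k^\sigma k^\nu\,[e_J]^\mu[e_I]^\rho=\mathfrak{R}_{JI}\,.\end{aligned}$$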
Furthermore, the Riemann tensor $R^\mu{_{\nu\rho\sigma}}$ can be decomposed as $$\begin{aligned} R^\mu{_{\nu\rho\sigma}}=C^\mu{_{\nu\rho\sigma}}-\frac{1}{2}\left(g^\mu{_{\nu\rho\tau}}R^\tau{_\sigma}+g^\mu{_{\nu\tau\sigma}}R^\tau{_\rho}\right)-\frac{R}{6}g^\mu{_{\nu\rho\sigma}}\,,\quad g_{\mu\nu\rho\sigma}\equiv g_{\mu\rho}g_{\nu\sigma}-g_{\mu\sigma}g_{\nu\rho}\,,\end{aligned}$$ where the *Weyl tensor* $C_{\mu\nu\rho\sigma}$ is the tracefree part of the Riemann tensor, i.e. any contraction over two of its indices vanishes. Consequently, we can write the trace $\mathfrak{R}$ as $$\begin{aligned} \mathfrak{R}=R^\mu{_{\nu\rho\sigma}}k^\nu k^\sigma[e^I]_\mu[e_I]^\rho=\frac{1}{2}R_{\nu\sigma}k^\nu k^\sigma\left([e^I]_\mu[e_I]^\mu\right)=R_{\nu\sigma}k^\nu k^\sigma\,, \label{mathfrakR}\end{aligned}$$ and the traceless symmetric component as: $$\begin{aligned} \mathfrak{E}^I_J\equiv\begin{pmatrix}\mathfrak{E}_1&\mathfrak{E}_2\\\mathfrak{E}_2&-\mathfrak{E}_1\end{pmatrix}=C^\mu{_{\nu\rho\sigma}}k^\nu k^\sigma[e^I]_\mu[e_J]^\rho\,.\end{aligned}$$ Distortion matrix and conformally transformed Jacobi map {#Subsection:JMFormalism2} -------------------------------------------------------- Solving equation would yield an expression for the Jacobi map $\mathfrak{D}^I{_J}$ and thus for the separation $\xi^I_s$ at the source position. Consequently, this would provide us with an expression for the distortion matrix $\check{\mathbb{D}}^I{_J}$, which we define as the relation between $\xi^I_s$ and its background quantity $\bar\xi^I_s$, $$\begin{aligned} \xi_s^I\equiv\check{\mathbb{D}}^I{_J}\bar\xi^J_s\,. \label{DefDIJJM}\end{aligned}$$ This definition is closely analogous to the definition of the distortion matrix $\mathbb{D}^I{_J}$ in the standard formalism, which, as stated in equation , also relates spatial separations to each other. However, these definitions are not equivalent. The separation $\xi^I_s$ used to define $\check{\mathbb{D}}^I{_J}$ is given by $\xi^I_s=\xi^\mu_s e^i_\mu (\Lambda_s)\Phi^I_i(\Lambda_s)$, while the distortion matrix $\mathbb{D}^I{_J}$ relates the background separation $\bar\xi^I_s=\bar a_s\bar{r}_z\,\Delta\Phi^I$ to the quantities $\xi^\alpha_s\bar a_s\Phi^I_\alpha$. From the relation $$\begin{aligned} \xi^\alpha_s\bar a_s\Phi^I_\alpha=\xi^\mu_s\bar e^i_\mu(\Lambda_s)\Phi^I_i(\Lambda_s)\,,\qquad\mbox{where}\qquad\bar e^i_\mu(\Lambda_s)=\bar a_s\,(0,\,\delta^i_\alpha)\,,\end{aligned}$$ we can see that the standard formalism neglects the perturbations of the local tetrads, i.e. the separation vector defined in global coordinates is not transformed correctly into the local rest frame of the source. This inconsistency manifests itself in the appearance of gauge-dependent terms evaluated at the source position in the expressions – for the lensing observables. Indeed, we will see that in the Jacobi mapping formalism these terms do not appear. From the relations $\dot\xi^I_o=-\omega_o\,\Delta\Phi^I$ and $\bar\xi_s^I=\bar a_s\bar r_z\,\Delta\Phi^I$, we infer that the relation between the Jacobi map ${\mathfrak{D}^I}_J$ and the distortion matrix ${\check{\mathbb{D}}^I}{_J}$ is given by $$\begin{aligned} {\check{\mathbb{D}}^I}{_J}=-\frac{\omega_o}{\bar{a}_s\bar{r}_z}{\mathfrak{D}^I}_J\,.\end{aligned}$$ Solving the evolution equation for the Jacobi map would thus immediately provide us with an expression for the distortion matrix, and therefore for the convergence, shear and rotation. However, the evolution equation is a system of second-order differential equations and thus difficult to solve.
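We remark in passing that, for a numerically specified optical tidal matrix along the line of sight, the second-order system for $\mathfrak{D}^I{_J}$ can always be integrated directly; the analytical evaluation, however, benefits from the reformulation introduced below. The following sketch is purely illustrative: the profile chosen for $\mathfrak{R}^I_J$ is an arbitrary toy model, not derived from any metric perturbations, while the initial conditions $\mathfrak{D}^I{_J}(\Lambda_o)=0$ and $\mathrm{d}\mathfrak{D}^I{_J}/\mathrm{d}\Lambda\vert_{\Lambda_o}=\delta^I_J$ follow from the definition of the Jacobi map.

```python
import numpy as np
from scipy.integrate import solve_ivp

def R_IJ(lam):
    # Toy optical tidal matrix: a localized Ricci-focusing bump plus a small
    # Weyl (shear-like) part; all numbers are illustrative assumptions only.
    bump = np.exp(-(lam - 0.5)**2 / 0.02)
    focus, shear = 1e-2*bump, 4e-3*bump
    return np.array([[0.5*focus + shear, 0.0],
                     [0.0,               0.5*focus - shear]])

def rhs(lam, y):
    # d^2 D / d lam^2 = -R(lam) D, rewritten as a first-order system
    D, dD = y[:4].reshape(2, 2), y[4:].reshape(2, 2)
    return np.concatenate([dD.ravel(), (-R_IJ(lam) @ D).ravel()])

# Initial conditions at the observer: D = 0, dD/d lam = identity
y0 = np.concatenate([np.zeros(4), np.eye(2).ravel()])
lam_s = 1.0
sol = solve_ivp(rhs, (0.0, lam_s), y0, rtol=1e-9, atol=1e-12)

D_s = sol.y[:4, -1].reshape(2, 2)
print("Jacobi map at the source:\n", D_s)
print("unperturbed value lam_s * identity:\n", lam_s*np.eye(2))
```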
A further simplification of the Jacobi mapping formalism seems necessary, which can be achieved by expressing it in the conformally transformed metric $\hat{g}_{\mu\nu}$. The conformally transformed tetrads are defined as $[\hat e_I]^\mu\equiv a[e_I]^\mu$, and $\hat\xi^I$ is defined such that $\xi^\mu=\hat\xi^I[\hat e_I]^\mu$ holds true, i.e. $\hat\xi^I\equiv\xi^I/a$. The conformally transformed Jacobi map $\hat{\mathfrak{D}}^I{_J}$ is then defined by: $$\begin{aligned} \hat\xi^I\equiv\hat{\mathfrak{D}}^I{_J}\dot{\hat{\xi}}^J_o\,,\qquad\mbox{where}\qquad\dot{\hat\xi}^J_o\equiv\left.\frac{\mathrm d}{\mathrm d\lambda}\hat\xi^J\right\vert_{\lambda_o}\,.\end{aligned}$$ As shown in [@UTLD], the propagation equation for $\hat{\mathfrak{D}}^I{_J}$ is analogous to the one for $\mathfrak{D}^I{_J}$, $$\begin{aligned} \frac{\mathrm d^2}{\mathrm d\lambda^2}\hat{\mathfrak{D}}^I{_J}=-\hat{\mathfrak{R}}^I_K\hat{\mathfrak{D}}^K{_J}\,,\qquad\mbox{where}\qquad \hat{\mathfrak{R}}^I_J\equiv \left({\hat R^\mu}{_{\nu\rho\sigma}}\hat k^\nu \hat k^\sigma\right)[\hat e^I]_\mu[\hat e_J]^\rho\,, \label{evolutionJacMap}\end{aligned}$$ and the initial condition is given by $\dot{\hat\xi}^I_o=-(1+\widehat{\Delta\nu}_o)\,\Delta\Phi^I$. Hence, the separation $\xi^I_s$ at the source is related to the observed angular separation $\Delta\Phi^I$ and the Jacobi map $\hat{\mathfrak{D}}^I{_J}$ as $$\begin{aligned} \xi_s^I=a_s\hat\xi^I(\lambda)=-a_s(1+\widehat{\Delta\nu}_o)\hat{\mathfrak{D}}^I{_J}(\lambda)\,\Delta\Phi^J\,. \label{separation}\end{aligned}$$ Combined with the relation $\bar\xi^I_s=-\lambda_z\bar a_s\Delta\Phi^I$ for the background separation vector, we obtain the relation $$\begin{aligned} \check{\mathbb{D}}^I{_J}=\frac{1}{\lambda_z}\left(1+\widehat{\Delta\nu}_o\right)\left(1+\delta z\right)\hat{\mathfrak{D}}^I{_J} \label{RelDIJ}\end{aligned}$$ between the distortion matrix $\check{\mathbb{D}}^I{_J}$ and the conformally transformed Jacobi map $\hat{\mathfrak{D}}^I{_J}$. To solve the system of differential equations for the Jacobi map $\hat{\mathfrak{D}}^I{_J}$ given in , note that the term $\hat{\mathfrak{R}}^I_J$ vanishes in the background, since the conformally transformed metric $\hat{g}_{\mu\nu}$ is equal to the flat Minkowski metric up to perturbations. Therefore, the $n$-th order solution $\hat{\mathfrak{D}}^I{_J}{^{(n)}}$ of the Jacobi map is subject to the differential equation $$\begin{aligned} \frac{\mathrm d^2}{\mathrm d\lambda^2}\hat{\mathfrak{D}}^I_J{^{(n)}}=-\left(\sum_{1\le m\le n-1}\hat{\mathfrak{R}}^I_K{^{(m)}}\hat{\mathfrak{D}}^K_J{^{(n-m)}}+\hat{\mathfrak{R}}^I_K{^{(n)}}\hat{\mathfrak{D}}^K_J{^{(0)}}\right)\,, \label{JacMapGeneral}\end{aligned}$$ which can be solved by integrating twice provided that we already determined the solution of the Jacobi map $\hat{\mathfrak{D}}^I{_J}$ up to the $(n-1)$-th order and the expression for $\hat{\mathfrak{R}}^I{_J}$ up to the $n$-th order. Linear order distortion matrix {#Subsection:JMlinear} ------------------------------ In this subsection, we calculate the Jacobi map $\hat{\mathfrak{D}}^I{_J}$ and hence the distortion matrix $\check{\mathbb{D}}^I{_J}$ up to linear order. All quantities in this subsection are calculated up to a term $\mathcal{O}(2)$, which we will omit in the subsequent equations. 
The general evolution equation now takes the simple form: $$\begin{aligned} \frac{\mathrm d^2}{\mathrm d\lambda^2}\hat{\mathfrak{D}}^I{_J}=-\hat{\mathfrak{R}}^I_K\hat{\mathfrak{D}}^K{_J}{^{(0)}}\,.\end{aligned}$$ This can be solved by integrating twice and applying the background relation $\hat{{\mathfrak{D}}}^I{_J}{^{(0)}}(\lambda)=\lambda\delta^I_J$. We obtain: $$\begin{aligned} \hat{\mathfrak{D}}^I{_J}(\lambda_s)=\lambda_s\delta^I_J+\delta\hat{\mathfrak{D}}^I{_J}=\lambda_s\delta^I_J-\lambda_s \int_0^{\lambda_s}\mathrm d\lambda\,\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\hat{\mathfrak{R}}^I_J(\lambda)\,. \label{solutionDIJ}\end{aligned}$$ By substituting the solution for the Jacobi map $\hat{\mathfrak{D}}^I{_J}$ into the equation  we obtain the following expression for the distortion matrix: $$\begin{aligned} \check{\mathbb{D}}^I_J=\left(1+\delta z+\widehat{\Delta\nu}_o-\frac{\Delta\lambda_s}{\bar{r}_z}\right)\left(\delta^I_J-\int_0^{\lambda_s}\mathrm d\lambda\,\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\hat{\mathfrak{R}}^I_J(\lambda)\right)\,. \label{DeformationMatrix}\end{aligned}$$ From this equation and the fact that $\hat{\mathfrak{R}}^I_J$ is fully symmetric, it is immediately evident that the anti-symmetric component $\check\omega$ of the distortion matrix $\check{\mathbb{D}}^I_J$ is vanishing, which confirms the result of [@newpaper]. This might seem surprising to some readers, as it was believed that the vector and tensor modes would yield a non-vanishing linear order lensing rotation [@Dai]. Therefore, the vanishing rotation deserves some more discussion, given in Section \[Subsection:Rotation\]. Being symmetric, the distortion matrix can be split into its trace and a tracefree symmetric matrix: $$\begin{aligned} \check{\mathbb{D}}^I_J=\begin{pmatrix}1-\check\kappa&0\\0&1-\check\kappa\end{pmatrix}-\begin{pmatrix}\check\gamma_1&\check\gamma_2\\\check\gamma_2&-\check\gamma_1\end{pmatrix}\,,\end{aligned}$$ where $\check\kappa$ is given by $$\begin{aligned} \check\kappa=-\frac{1}{2}\left(\check{\mathbb{D}}^1_1+\check{\mathbb{D}}^2_2\right)+1=-\delta z-\widehat{\Delta\nu}_o+\frac{\Delta\lambda_s}{\bar{r}_z}+\int_0^{\lambda_s}\mathrm d\lambda\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\frac{1}{2}\left(\hat{\mathfrak{R}}^1_1+\hat{\mathfrak{R}}^2_2\right)\,, \label{kappaJM}\end{aligned}$$ and the trace-free symmetric components $(\check\gamma_1,\,\check\gamma_2)$ are given by $$\begin{aligned} \check\gamma_1=-\frac{1}{2}\left(\check{\mathbb{D}}^1_1-\check{\mathbb{D}}^2_2\right)=\int_0^{\lambda_s}\mathrm d\lambda\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\frac{1}{2}\left(\hat{\mathfrak{R}}^1_1-\hat{\mathfrak{R}}^2_2\right) \label{gamma1JM}\end{aligned}$$ and $$\begin{aligned} \check\gamma_2=-\frac{1}{2}\left(\check{\mathbb{D}}^1_2+\check{\mathbb{D}}^2_1\right)=-\check{\mathbb{D}}^1_2=\int_0^{\lambda_s}\mathrm d\lambda\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\hat{\mathfrak{R}}^1_2\,. \label{gamma2JM}\end{aligned}$$ The first-order contribution $\check\kappa$ to the trace, which is the counterpart to the convergence $\kappa$ in the standard formalism, is determined by the trace $\hat{\mathfrak{R}}$ of the matrix $\hat{\mathfrak{R}}^I_J$ and thus by the Ricci tensor $\hat R^\mu{_\nu}$, as seen from equation . As described e.g. 
in [@Bonvin; @UTLD; @newpaper], $\check\kappa$ is in fact equal to the distortion $\delta D$ in the luminosity distance up to a negative sign, $\check\kappa=-\delta D$. The traceless symmetric components $(\check\gamma_1,\,\check\gamma_2)$, which replace the shear components $(\gamma_1,\,\gamma_2)$ of the standard formalism, are sourced by the Weyl tensor $\hat C^\mu{_{\nu\rho\sigma}}$. To obtain explicit expressions for the components of the distortion matrix in terms of metric perturbations, we first need to calculate $\hat{\mathfrak{R}}^I_J(\lambda)$ defined in equation . Since our calculations are to linear order and the Riemann tensor ${\hat R^\mu}{_{\nu\rho\sigma}}$ of the conformally transformed metric is vanishing in the background, we only need the background values of the tetrads and the photon wavevector. Hence, $\hat{\mathfrak{R}}^I_J(\lambda)$ is given by $$\begin{aligned} \hat{\mathfrak{R}}^I_J=&{\hat R^\alpha}{_{\mu\beta\nu}}\hat k^\mu\hat k^\nu\Phi^I_\alpha\Phi_J^\beta \nonumber \\ =&{\hat R^\alpha}{_{0\beta 0}}\Phi^I_\alpha\Phi_J^\beta+{\hat R^\alpha}{_{\gamma\beta\delta}}n^\gamma n^\delta\Phi^I_\alpha\Phi_J^\beta-{\hat R^\alpha}{_{\gamma\beta 0}}n^\gamma\Phi^I_\alpha\Phi_J^\beta-{\hat R^\alpha}{_{0\beta\gamma}}n^\gamma\Phi^I_\alpha\Phi_J^\beta\,. \label{RIJ}\end{aligned}$$ Now, we need to apply the expressions for ${\hat{R}^\mu}{_{\nu\rho\sigma}}$ to obtain the result for $\hat{\mathfrak{R}}^I_J$ in terms of metric perturbations. The components of the distortion matrix are then calculated by performing the integration in the equations –. This lengthy procedure is described in detail in Appendix \[Appendix:JacobiMap\]. We will only state the results here. Cosmic shear and distortion in the luminosity distance to first order {#Subsec:Shear} --------------------------------------------------------------------- From the expressions for the diagonal components $\check{\mathbb{D}}^1_1$ and $\check{\mathbb{D}}^2_2$ given in equation , we can readily compute the shear component $\check\gamma_1=(\check{\mathbb{D}}^2_2-\check{\mathbb{D}}^1_1)/2$: $$\begin{aligned} \check\gamma_1=&\frac{1}{2}\left(\phi_\alpha\phi_\beta-\theta_\alpha\theta_\beta\right)\left[C^{\alpha\beta}_o+C^{\alpha\beta}_s-\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\frac{\partial}{\partial x_\beta}\left(\Psi^\alpha+2C^{\alpha\gamma}n_\gamma\right)\right] \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{2\bar{r}_z\bar{r}}\right)\left(\frac{\partial^2}{\partial\theta^2}-\cot\theta\frac{\partial}{\partial\theta}-\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right)\left(\alpha_\chi-\varphi_\chi-n^\beta\Psi_\beta-n^\beta n^\gamma C_{\beta\gamma}\right)\,. \label{gamma1JMsolved}\end{aligned}$$ The expression for the off-diagonal elements $\check{\mathbb{D}}^1_2=\check{\mathbb{D}}^2_1$ of the distortion matrix is given in equation , and it is equal to the expression for $\check\gamma_2$ up to a negative sign, hence: $$\begin{aligned} \check\gamma_2=&\frac{1}{2}\left(\theta_\alpha\phi_\beta+\theta_\beta\phi_\alpha\right)\left[-C^{\alpha\beta}_o-C^{\alpha\beta}_s+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\frac{\partial}{\partial x_\beta}\left(\Psi^\alpha+2C^{\alpha\gamma}n_\gamma\right)\right] \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{\bar{r}_z\bar{r}}\right)\frac{\partial}{\partial\theta}\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}\left(\alpha_\chi-\varphi_\chi-n^\beta\Psi_\beta-n^\beta n^\gamma C_{\beta\gamma}\right)\right]\,. 
\label{gamma2JMsolved}\end{aligned}$$ It is immediately evident that the expressions for $\check\gamma_1$ and $\check\gamma_2$ are gauge-invariant, as the gauge-dependent terms appearing in the equations and for the shear component $\gamma_1$ and $\gamma_2$ in the standard formalism are now absent, and all remaining terms are written fully in terms of the gauge-invariant metric perturbations. Considering only scalar modes in the Newtonian gauge, the standard formalism and the Jacobi mapping approach yield the same expressions for the shear components. However, the standard formalism does not correctly account for the effect of primordial gravitational waves, as the expressions for $\check\gamma_1$ and $\check\gamma_2$ contain a term in $C^{\alpha\beta}_s$ which is absent in the expressions for $\gamma_1$ and $\gamma_2$. Our results are fully compatible with the expressions for the shear components in [@SchmidtJeong], where the presence of the term evaluated at the source position was first pointed out, and the recent results presented in [@newpaper]. While solving the integral in equation  for $\check{\mathbb{D}}^I_J$ immediately yields gauge-invariant results for $\check\gamma_1$ and $\check\gamma_2$, obtaining a gauge-invariant expression for the distortion in the luminosity distance $\delta D=-\check\kappa$ is slightly more complicated due to the contributions of the perturbation quantities $\delta z$, $\widehat{\Delta\nu}_o$ and $\Delta\lambda_o$. In Appendix \[Appendix:LumDist\], we solve the temporal part of the geodesic equation to relate the perturbation of the affine parameter $\Delta\lambda_s$ to the distortion of the radial coordinate $\delta r$, and we combine these calculations with the expression  for $\check\kappa$ resulting from the calculations of $\check{\mathbb{D}}^1_1$ and $\check{\mathbb{D}}^2_2$ in Appendix \[Appendix:JacobiMap\]. We obtain: $$\begin{aligned} \check\kappa=&-\frac{\delta r_\chi}{\bar{r}_z}-\delta z_\chi+\left(\frac{3}{2}n^\alpha n^\beta C_{\alpha\beta}-n_\alpha V^\alpha\right)_o-\left(\varphi_\chi-\frac{1}{2}n^\alpha n^\beta C_{\alpha\beta}\right)_s \nonumber \\ &-\int_0^{\bar{r}_z}\frac{\mathrm d\bar{r}}{\bar{r}}\,\left(n_\alpha-\frac{\hat\nabla_\alpha}{2}\right)\left(\Psi^\alpha+2C^\alpha_\beta n^\beta\right) \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{2\bar{r}_z\bar{r}}\right)\widehat{\nabla}^2\left(\alpha_\chi-\varphi_\chi-n^\beta\Psi_\beta-n^\beta n^\gamma C_{\beta\gamma}\right)\,, \label{checkkappa}\end{aligned}$$ where we have defined $\delta r_\chi\equiv \delta r+n_\alpha\mathcal{G}_s^\alpha$ and $\delta z_\chi\equiv \delta z+H_s\chi_s$. The source position $x^\alpha_s$ transforms as $x^\alpha_s\,\mapsto\, x^\alpha_s+\mathcal{L}_s^\alpha$ under a gauge-transformation. Thus, $\delta r$ transforms as $\delta r\,\mapsto\,\delta r+n_\alpha\mathcal{L}^\alpha_s$, which means that $\delta r_\chi$ is a gauge-invariant quantity. Furthermore, $\delta z_\chi$ is also gauge-invariant, since $\delta z$ transforms as $\delta z\,\mapsto \delta z+\mathcal{H}_sT_s$, which follows from the definition of $\delta z$ given in equation  and the fact that $a_s$ transforms as $a_s\,\mapsto\,a_s+a_s'T_s$. Hence, equation  is written fully in terms of gauge-invariant perturbation variables. 
Furthermore, by comparing this expression to the expression for $\kappa$ in the standard formalism, we can rewrite it as $$\begin{aligned} \check\kappa=\left(\kappa+\frac{n_\alpha\mathcal{G}^\alpha}{\bar{r}_z}-\frac{\hat\nabla_\alpha\mathcal{G}^\alpha}{2\bar{r}_z}\right)-\delta z_\chi-\frac{\delta r_\chi}{\bar{r}_z}+\left(\frac{1}{2}n^\alpha n^\beta C_{\alpha\beta}-\varphi_\chi\right)_s\,,\end{aligned}$$ which coincides with the results of [@GalaxyClustering] and [@newpaper]. Vanishing rotation in the Jacobi mapping formalism {#Subsection:Rotation} -------------------------------------------------- In Section \[Subsection:JMlinear\], we have seen that as a straight-forward consequence of the fact that $\hat{\mathfrak{R}}^I_J$ is symmetric, the lensing rotation $\check\omega$ is fully vanishing to linear order. In particular, the gauge-invariant vector and tensor contributions integrated along the line-of-sight appearing in equation  for the rotation $\omega$ in the standard formalism are now absent. These terms imply that the spatial separation vector $\xi^\alpha$ defined in the global FLRW frame is rotated along the photon path. However, since there is no global observer the rotation within the global frame bears no physical meaning. The vanishing lensing rotation $\check\omega=0$ in the Jacobi mapping formalism implies that the image is not rotated with respect to the parallel-transported tetrad basis, as we will discuss in more detail in the following. First, note that the spatial rotations $\Omega^j_o$ which we left unspecified in equation  yield the following contributions to the spatial components $[e_I]^\alpha(\Lambda_o)$ of the tetrads: $$\begin{aligned} &[e_1]^\alpha(\Lambda_o)\quad\ni\quad \theta^\beta\phi^j\epsilon^\alpha{_{\beta j}}\Omega^\phi_o+\theta^\beta n^j\epsilon^\alpha{_{\beta j}}\Omega^n_o\,,\nonumber \\ &[e_2]^\alpha(\Lambda_o)\quad\ni\quad \phi^\beta\theta^j\epsilon^\alpha{_{\beta j}}\Omega^\theta_o+\phi^\beta n^j\epsilon^\alpha{_{\beta j}}\Omega^n_o\,.\end{aligned}$$ While the terms in $\Omega^\phi_o$ and $\Omega^\theta_o$ have a vanishing contribution to the initial separation $\dot\xi^I_o$ since it is orthogonal to the observed photon direction, the terms in $\Omega^n_o$ contribute as $$\begin{aligned} \dot\xi^1_o\quad\ni\quad \theta^\beta\phi^\alpha n^j\epsilon^\alpha{_{\beta j}}\Omega^n_o=-\Omega^n_o\,,\qquad\dot\xi^2_o\quad\ni\quad -\theta^\beta\phi^\alpha n^j\epsilon^\alpha{_{\beta j}}\Omega^n_o=\Omega^n_o\,.\end{aligned}$$ Hence, the quantity $\Omega^n_o$ corresponds to a rotational degree of freedom of $\dot\xi^I_o$ within the plane orthogonal to the observed photon direction $n^i$. Similarly, constructing the projected tetrads $e^i_\mu(\Lambda_s)\Phi^I_i(\Lambda_s)$ at the source position would yield a rotational degree of freedom of the separation vector $\xi^I_s$. Consequently, if we constructed the basis at the source position instead of parallel-transporting it from the observer position, the anti-symmetric component of the distortion matrix would depend on the choice of $\Omega^i_s$ and $\Omega^i_o$, as can be seen in Section 5.1 of [@newpaper] where the term $(\Omega^n_s-\Omega^n_o)$ appears in the expression for the rotation. In the Jacobi mapping approach, however, the rotation is fully vanishing without any degree of freedom corresponding to $\Omega^i_o$ or $\Omega^i_s$, even though we have never explicitly specified any of these two quantities. 
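The numerical coefficients of the $\Omega^n_o$ terms above are fixed by the orientation of the triad $(n^i,\,\theta^i,\,\phi^i)$. As a quick cross-check (our own illustration, using the standard spherical unit vectors written out explicitly), the following snippet evaluates the corresponding $\epsilon$-contraction and confirms the coefficients $-1$ and $+1$ quoted for the contributions to $\dot\xi^1_o$ and $\dot\xi^2_o$.

```python
import numpy as np

def triad(theta, phi):
    """Spherical unit vectors n, e_theta, e_phi for an observed direction (theta, phi)."""
    n = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    e_theta = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
    e_phi = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return n, e_theta, e_phi

n, e_theta, e_phi = triad(0.7, 1.9)                 # arbitrary line of sight

# epsilon-contraction entering dot(xi)^1_o:  phi . (theta x n)
print(np.dot(e_phi, np.cross(e_theta, n)))          # -> -1.0  (coefficient of Omega^n_o)
# the contraction entering dot(xi)^2_o carries the opposite sign:
print(-np.dot(e_phi, np.cross(e_theta, n)))         # -> +1.0
```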
This is a consequence of the fact that in this formalism, parallel transport uniquely defines the tetrads $[e_I]^\mu(\Lambda_s)$ at the source position once the tetrads $[e_I]^\mu(\Lambda_o)$ are fixed, i.e. it uniquely determines the relation between $\Omega^i_o$ and $\Omega^i_s$. Indeed, the parallel transport property of the tetrads led to the propagation equation of the Jacobi map $\mathfrak{D}^I{_J}$ which includes the definition of the symmetric matrix $\mathfrak{R}^I_J$. The fact that we obtain a fully symmetric distortion matrix $\mathbb{D}^I_J$ is thus a consequence of using the parallel-transported tetrads to describe the separation $\xi^I_s$ at the source position. To define weak lensing quantities in a meaningful way, the size and shape of the source measured by an observer at affine parameter $\Lambda_o$ with respect to some observer basis should be compared to the size and shape measured by a (fictitious) observer at the source position $\Lambda_s$ with respect to the “same” observer basis. The notion of equality of (basis) vectors at different points of the spacetime manifold (such as the observer position and the source position at affine parameters $\Lambda_o$ and $\Lambda_s$) is defined by parallel transport. Hence, given a value of $\Omega^i_o$, any choice of $\Omega^i_s$ which differs from the one determined by parallel transport would yield an unphysical rotation sourced by the mismatch of the tetrad bases. We conclude that the antisymmetric component of the distortion matrix calculated in the Jacobi mapping approach is indeed the physically consistent result for the lensing rotation, i.e. it is, to linear order, fully vanishing for scalar, vector and tensor modes, which confirms the result of [@newpaper]. The advantage of the Jacobi mapping approach is that it naturally includes the parallel transport of the tetrad basis while in [@newpaper] it had to be explicitly calculated to correct the initially non-zero result. From the non-zero contribution in the standard formalism of the vector and tensor modes integrated along the line-of-sight, it follows that the image is rotated with respect to global FLRW coordinates, but not with respect to the tetrad basis. The result of the Jacobi mapping formalism implies that the tetrad basis rotates in exactly the same way with respect to global coordinates while being parallel-transported along the light geodesic, resulting in a vanishing rotation of the image with respect to the tetrad basis. A similar argument was already stated in [@Dai], where it was claimed that the B-modes induced by the rotation of the polarization vector (which follows the parallel transport equation as the tetrad basis) in the polarization spectrum of the CMB cancel out the B-modes induced by the rotation of the image which leaves the polarization unchanged. However, we strongly emphasize that the rotation of the tetrads (equal to the rotation of the polarization vector) and the rotation of the image with respect to the global coordinates are not measurable since there is no global observer. Therefore, it is incorrect to interpret these two rotations as two distinct physical effects which cancel each other out. The rotation of the image following the geodesic deviation equation with respect to the tetrad basis propagating according to the parallel transport equation comprises only one physical effect which vanishes to linear order. 
Conclusion {#Conclusion} ========== In this work, we have applied the Jacobi mapping approach to calculate the distortion in the luminosity distance $\delta D$, the cosmic shear components $\check\gamma_1$ and $\check\gamma_2$, and the rotation $\check\omega$ for all scalar, vector and tensor modes. Our results are written fully in terms of gauge-invariant perturbation quantities. With this explicit check of gauge-invariance, we have provided an important confirmation of the accuracy of the Jacobi mapping approach, one that was previously missing even though this method is already an established lensing formalism (see e.g. [@Bonvin; @Bonvin2; @CMBLensing; @SecondOrderShear; @UTLD; @Clarkson; @Clarkson2; @Yamauchi]). With the Jacobi mapping approach, we obtained results for the lensing observables $\delta D$, $\check\gamma_1$, $\check\gamma_2$ and $\check\omega$ which are not compatible with the expressions for $\kappa$, $\gamma_1$, $\gamma_2$ and $\omega$ in the standard weak lensing formalism reviewed in Section \[Subsection:SF\]. The discrepancy between the distortion in the luminosity distance $\delta D$, an observable and gauge-invariant quantity, and the convergence $\kappa$, which is neither observable nor gauge-invariant, is already well known (see e.g. [@Bonvin; @Bonvin2; @newpaper]). However, for the cosmic shear and the rotation, the disagreement between the Jacobi mapping formalism and the standard formalism is more surprising, as the standard formalism has so far been the most widely used method to compute these lensing observables. It was pointed out only recently in [@newpaper] that the results for $\gamma_1$, $\gamma_2$ and $\omega$ obtained in the standard formalism are in fact gauge-dependent. Therefore, this method is unsuitable for describing the cosmic shear and the lensing rotation, as these quantities are observable and consequently have to be gauge-invariant. The difference between the Jacobi mapping approach and the standard formalism is not limited to the presence or absence of gauge-dependent terms, but is far more drastic. In fact, we showed that the lensing rotation $\check\omega$ vanishes completely to linear order for scalar, vector and tensor perturbations. The non-vanishing integral terms appearing in the expression for the rotation in the standard formalism describe the rotation of the image with respect to the global coordinates, which has no physical meaning as there is no global observer. The physical lensing rotation $\check\omega$, however, describes the rotation of the image with respect to the parallel-transported tetrads which represent the local frames of the observer and the source. As parallel transport is naturally included in the Jacobi mapping approach, no explicit calculations of parallel-transported quantities are needed. In particular, the linear-order result $\check\omega=0$ is obtained immediately without any further calculations. This simplicity of the Jacobi mapping formalism makes it a viable option for higher-order calculations of the distortion matrix, where the antisymmetric component is not vanishing. We want to emphasize that our thorough investigation of the Jacobi mapping approach rigorously justifies all of its assumptions. In particular, we have shown that the discrepancy between the parallel-transported velocity $\tilde u^\mu_s$ and the source velocity $u^\mu_s$ does not affect the lensing observables.
We conclude that $\check\omega=0$ does not arise from inconsistencies within the Jacobi mapping formalism, but is indeed the correct result for the physical lensing rotation. Furthermore, our calculations show that the standard formalism cannot be used to describe the contribution of primordial gravitational waves to the cosmic shear. The results from the Jacobi mapping approach yield an additional term in $C_{\alpha\beta}$ evaluated at the source position in the expressions for the shear components $\check\gamma_1$ and $\check\gamma_2$. This additional contribution of tensor modes to the weak lensing observables should be carefully evaluated to avoid false conclusions from the data of upcoming high-precision observations. The effect of tensor modes on the shear power spectra, including the term evaluated at the source position, has already been computed in [@SchmidtJeong2] and [@Yamauchi]. These works can be extended by computing the auto- and cross-correlation functions of the convergence and the shear components, including the contributions of the monopole and dipole which have been ignored so far. The results presented in this paper perfectly complement the results of [@newpaper]. The method applied there is based on a different propagation equation. Here, we apply the Jacobi mapping approach based on the geodesic deviation equation, whereas the calculations in [@newpaper] are based on solving the geodesic equation for two infinitesimally separated light rays. The fact that these independent calculations yield coinciding results is, in addition to their gauge-invariance, a strong confirmation of the accuracy of both methods. We believe that if the additional relativistic effects in the lensing observables are taken into account, we can expect fascinating – and credible – results from upcoming high-precision lensing surveys. Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge useful discussions with Alexandre Refregier. We acknowledge support by the Swiss National Science Foundation. J.Y. is further supported by a Consolidator Grant of the European Research Council (ERC-2015-CoG grant 680886). Discrepancy between the Source Velocity and the parallel-transported Observer Velocity {#Appendix:Lorentz} ====================================================================================== As mentioned in Section \[Subsection:JMFormalism\], the source velocity $u_s^\mu$, which defines the local Lorentz frame of the source, differs in general from the velocity $\tilde u^\mu_s$, which is the velocity $u^\mu_o$ of the observer parallel-transported to the source position. Hence, the quantity $\tilde\xi^I_s$ defined in equation , which lives in the local Lorentz frame of an observer with velocity $\tilde u^\mu_s$, apparently differs from the physical separation $\xi^I_s$ defined in the local Lorentz frame of the source. However, we have claimed that the effect of the Lorentz boost, which relates these two local frames to each other, is fully absorbed in the transformation of the observed photon direction $n^i_s$, i.e. the quantity $\tilde\xi^I_s$ is in fact invariant under the Lorentz boost, so that $\xi_s^I=\tilde\xi^I_s$. Here, we thoroughly prove this statement.
First, recall that $\xi^I_s$ is defined as $$\begin{aligned} \xi^I_s=\xi^\mu_se^i_\mu(\Lambda_s)\Phi^I_i(\Lambda_s)\,,\end{aligned}$$ where $\xi^i_s=\xi^\mu_se^i_\mu(\Lambda_s)$ is the spatial separation in the rest frame of the source, and $\Phi^I_i(\Lambda_s)=(\theta^i_s,\,\phi^i_s)$ specifies two directions orthonormal to the photon direction $n^i_s$ measured in this frame. The quantity $\tilde\xi^I_s$, however, is given by $$\begin{aligned} \tilde\xi^I_s=\xi^\mu_s\tilde e^i_\mu(\Lambda_s)\tilde\Phi^I_i(\Lambda_s)\,, \label{tildexi}\end{aligned}$$ where $\tilde e^i_\mu(\Lambda_s)$ denotes the spatial tetrads $e^i_\mu(\Lambda_o)$ at the observer position, but parallel-transported to the source position. The directions $\tilde\Phi^I_i(\Lambda_s)=(\tilde\theta^i_s,\,\tilde\phi^i_s)$ specify the 2-dimensional plane orthogonal to the photon direction $\tilde n^i_s$ measured by an observer with velocity $\tilde u^\mu_s$. As the photon wavevector $k^\mu$, the velocity $\tilde u^\mu$ and the tetrad $\tilde e^\mu_i$ are parallel-transported along the photon path, we have $$\begin{aligned} \frac{\mathrm d}{\mathrm d\Lambda}\left(k^\mu \tilde u_\mu\right)=0 \,,\qquad \frac{\mathrm d}{\mathrm d\Lambda}\left(k^\mu\tilde e^i_\mu\right)=0\,,\qquad\frac{\mathrm d}{\mathrm d\Lambda}\left(\tilde u^\mu\tilde e^i_\mu\right)=0\,. \label{A2}\end{aligned}$$ This means that the observed photon direction $\tilde n^i_s$ determined by $$\begin{aligned} \tilde n^i=\left(\tilde u^\mu-\frac{1}{\tilde\omega}k^\mu\right)\tilde e^i_\mu\,,\qquad\mbox{where}\qquad \tilde \omega=-k^\mu\tilde u_\mu\,, \label{A3}\end{aligned}$$ does not change along the photon path, $\tilde n^i_s=n^i$. Consequently, the orthonormal directions $\tilde\Phi^I_i$ are also equal to the respective quantities at the observer position $\Lambda_o$, i.e. $\tilde\Phi^I_i(\Lambda_s)=\Phi^I_i$. Thus, equation  for $\tilde\xi^I_s$ indeed coincides with its definition stated in equation . Note that equations  and  imply that the frequency $\tilde\omega$ with respect to the observer moving with parallel-transported velocity $\tilde u^\mu$ does not change along the photon path, $\tilde \omega_s=\omega_o$. However, the frequency $\omega_o$ at the observer position appears redshifted with respect to the frequency $\omega_s$ emitted at the source. The quantity $\tilde\omega_s$ along with the quantities $\tilde n^i_s$ and $\tilde \xi^I_s$ bear no immediate physical meaning, as the velocity $\tilde u^\mu_s$ does not coincide with the source velocity $u^\mu_s$. By applying a Lorentz boost, these quantities can be transformed to match the frequency $\omega_s$, the photon direction $n^i_s$ and the physical separation $\xi^I_s$ in the rest frame of the source. To prove that $\xi^I_s=\tilde\xi^I_s$, which assigns a physical meaning to $\tilde\xi^I_s$, we need to determine how $\xi^i_s$ and $\Phi^I_i(\Lambda_s)$ transform under a Lorentz boost $\Lambda^a{_b}$. For that, first note that the photon wavevector $k^a_s=\omega_s(1,\,-n^i_s)$ transforms as $\tilde k^a_s=\Lambda^a{_b}k^b_s$, and hence the observed photon direction $n^i_s$ transforms as $$\begin{aligned} \tilde n^i_s=-\Lambda_\parallel^{-1}\left(\Lambda^i{_0}-\Lambda^i{_j}n^j_s\right)\,,\qquad\mbox{where}\qquad\Lambda_\parallel=\Lambda^0{_0}-\Lambda^0{_i}n^i_s\,.\end{aligned}$$ As a next step, we study the transformation properties of quantities perpendicular to $n^i_s$, such as the orthonormal directions $\theta^i_s$ and $\phi^i_s$. 
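The transformation law for the observed photon direction can be verified directly with an explicit boost matrix (written out in components further below). The following sketch, our own illustration with arbitrary numerical inputs, boosts a null vector $k^a_s=\omega_s(1,\,-n^i_s)$ and checks that the result is again of the form $\tilde\omega(1,\,-\tilde n^i)$ with unit $\tilde n^i$ given by the quoted expression.

```python
import numpy as np

def boost(v):
    """Standard Lorentz boost Lambda^a_b for 3-velocity v (c = 1, signature -+++)."""
    v = np.asarray(v, dtype=float)
    beta = np.linalg.norm(v)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    vhat = v / beta
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = -gamma * beta * vhat
    L[1:, 0] = -gamma * beta * vhat
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(vhat, vhat)
    return L

rng = np.random.default_rng(1)
n = rng.normal(size=3); n /= np.linalg.norm(n)        # observed photon direction n^i_s
v = rng.normal(size=3); v *= 0.4 / np.linalg.norm(v)  # relative velocity of the two frames
L = boost(v)

omega = 1.0
k = omega * np.concatenate([[1.0], -n])               # k^a_s = omega_s (1, -n^i_s)
k_tilde = L @ k                                       # tilde k^a_s = Lambda^a_b k^b_s

Lambda_par = L[0, 0] - L[0, 1:] @ n                   # Lambda_parallel = Lambda^0_0 - Lambda^0_i n^i_s
n_tilde = -(L[1:, 0] - L[1:, 1:] @ n) / Lambda_par    # direction formula quoted in the text

print(np.linalg.norm(n_tilde))                                                        # 1.0
print(np.allclose(k_tilde, omega * Lambda_par * np.concatenate([[1.0], -n_tilde])))   # True
```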
An arbitrary vector $A^a$ which is orthogonal to the photon wavevector, $A_ak^a_s=0$, can be decomposed as $$\begin{aligned} A^a=\left(-A_\parallel,\,A^i_\perp+n^i_sA_\parallel\right)\,,\qquad\mbox{where}\qquad A^i_\perp=P^{ij}A^j\,,\qquad P^{ij}\equiv\delta^{ij}-n^i_sn^j_s\,.\end{aligned}$$ After the Lorentz boost, the perpendicular component $\tilde A^i_\perp$ is given by $$\begin{aligned} \tilde A^i_\perp=\tilde A^i-\tilde n_s^i\tilde A_\parallel=\Lambda^i{_a}A^a-\Lambda_\parallel^{-1}\left(\Lambda^i{_0}-\Lambda^i{_j}n_s^j\right)\Lambda^0{_a}A^a\,.\end{aligned}$$ By writing the spatial component as $A^i=A^i_\perp+n^i_sA_\parallel=A^i_\perp-n_s^iA^0$, the expression for $\tilde A^i_\perp$ can be rewritten as $$\begin{aligned} \tilde A^i_\perp=\left(\Lambda_\perp\right)^i{_j} A^j_\perp\,,\qquad\mbox{where}\qquad \left(\Lambda_\perp\right)^i{_j}\equiv\Lambda^i{_j}-\Lambda_\parallel^{-1}\left(\Lambda^i{_0}-\Lambda^i{_k}n^k_s\right)\Lambda^0{_j}\,. \label{defLambdaperp}\end{aligned}$$ Hence, the perpendicular component of the Lorentz boosted vector $\tilde A^i$ is fully determined by the perpendicular component of $A^i$. Another property which we need in order to show that $\tilde\xi^I_s$ and $\xi^I_s$ are equal is: $$\begin{aligned} \tilde X^i_\perp\tilde Y^i_\perp=\left(\Lambda_\perp\right)^i_kX^k_\perp\left(\Lambda_\perp\right)^i_lY^l_\perp=X^i_\perp Y^i_\perp\,. \label{perpcomponents}\end{aligned}$$ To prove this, note that the coordinate transformation between two inertial frames moving with constant velocity $v^i$ with respect to each other is given by the boost matrix $\Lambda^a{_b}$ with components $$\begin{aligned} \Lambda^0{_0}=\gamma\,,\qquad\Lambda^0{_i}=-\gamma\beta\check v^i\,,\qquad\Lambda^i{_j}=\delta^i_j+(\gamma-1)\check v^i\check v_j\,,\end{aligned}$$ where $\beta\equiv v=\sqrt{v^i v_i}$, $\gamma\equiv 1/\sqrt{1-\beta^2}$ and $\check v^i\equiv v^i/v$ is the unit vector in the direction of the boost. Hence, the components of the boost matrix fulfil the properties $$\begin{aligned} &\Lambda^i{_0}\Lambda^i{_0}=-1+\left(\Lambda^0{_0}\right)^2=-1+\gamma^2\,,\qquad\Lambda^j{_0}\Lambda^j{_i}=\Lambda^0{_0}\Lambda^0{_i}=-\gamma^2\beta\check v_i\,,\nonumber \\ &\Lambda^k{_i}\Lambda^k{_j}=\delta_{ij}+\Lambda^0{_i}\Lambda^0{_j}=\delta_{ij}+\gamma^2\beta^2\check v_i\check v_j\,.\end{aligned}$$ Using these properties, one can show that $$\begin{aligned} \left(\Lambda_\perp\right)^i{_k}\left(\Lambda_\perp\right)^i{_l}=\delta_{kl}\end{aligned}$$ holds when contracted with components orthogonal to $n^i_s$ (which is sufficient here, since $\left(\Lambda_\perp\right)^i{_j}$ only ever acts on such perpendicular components), by applying the definition of $\left(\Lambda_\perp\right)^i{_k}$ given in  and calculating all terms explicitly. This proves equation . Having derived these general properties, we can now determine the relation between $\tilde\xi^i_s$ and $\xi^i_s$. According to the transformation property given in equation , the vectors $\theta^i_s$ and $\phi^i_s$ orthonormal to $n^i_s$ transform into the vectors $\tilde\theta^i_s=\theta^i$ and $\tilde\phi^i_s=\phi^i$ orthonormal to $\tilde n_s^i=n^i$ as $$\begin{aligned} \tilde\theta^i_s=\left(\Lambda_\perp\right)^i{_j}\theta^j_s\,,\qquad\tilde\phi^i_s=\left(\Lambda_\perp\right)^i{_j}\phi^j_s\,.\end{aligned}$$ Finally, equation  yields $$\begin{aligned} \tilde\xi^I_s=\left(\tilde\xi^i\tilde\Phi^I_i\right)_s=\left(\tilde\xi^i_\perp\tilde\Phi^I_i\right)_s=\left(\xi^i_\perp\Phi^I_i\right)_s=\left(\xi^i\Phi^I_i\right)_s=\xi^I_s\,,\end{aligned}$$ which completes our proof that the physical separation $\xi^I_s$ of the source is unaffected by a Lorentz boost.
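The defining property of $\left(\Lambda_\perp\right)^i{_j}$ can likewise be checked numerically. The sketch below (again our own illustration; the random inputs are arbitrary) constructs $\Lambda_\perp$ from its definition and verifies that inner products of components perpendicular to $n^i_s$ are preserved, while the unrestricted product $\left(\Lambda_\perp\right)^i{_k}\left(\Lambda_\perp\right)^i{_l}$ is in general not equal to $\delta_{kl}$ on the full three-dimensional space, in line with the qualification made above.

```python
import numpy as np

def boost(v):
    """Lorentz boost Lambda^a_b for 3-velocity v (c = 1)."""
    v = np.asarray(v, dtype=float)
    beta = np.linalg.norm(v)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    vhat = v / beta
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = -gamma * beta * vhat
    L[1:, 0] = -gamma * beta * vhat
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(vhat, vhat)
    return L

rng = np.random.default_rng(2)
n = rng.normal(size=3); n /= np.linalg.norm(n)            # photon direction n^i_s
v = rng.normal(size=3); v *= 0.5 / np.linalg.norm(v)       # boost velocity
L = boost(v)

Lambda_par = L[0, 0] - L[0, 1:] @ n
Lambda_perp = L[1:, 1:] - np.outer(L[1:, 0] - L[1:, 1:] @ n, L[0, 1:]) / Lambda_par

P = np.eye(3) - np.outer(n, n)                             # projector onto the plane orthogonal to n
X = P @ rng.normal(size=3)                                 # random perpendicular vectors
Y = P @ rng.normal(size=3)

# perpendicular inner products are preserved ...
print(np.isclose((Lambda_perp @ X) @ (Lambda_perp @ Y), X @ Y))        # True
# ... i.e. Lambda_perp^T Lambda_perp acts as the identity on the perpendicular plane,
print(np.allclose(P @ Lambda_perp.T @ Lambda_perp @ P, P))             # True
# ... although it is not the 3x3 identity matrix in general.
print(np.allclose(Lambda_perp.T @ Lambda_perp, np.eye(3)))             # False
```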
Therefore, the fact that the Jacobi mapping approach is based on the parallel transport of the tetrads, and hence of the velocity, does not lead to any inconsistencies. Calculations of the Distortion Matrix in the Jacobi Mapping Approach {#Appendix:JacobiMap} ==================================================================== In this section, we perform the calculations of the linear-order distortion matrix $\check{\mathbb{D}}^I_J$ which describes how an image is affected by cosmological weak lensing. We first calculate in Section \[Subsection:Riemann\] the expressions for the conformally transformed Riemann tensor $\hat{R}^\mu{_{\nu\sigma\rho}}$. Then, we state in Section \[Subsection:SphericalCoordinates\] some useful relations in spherical coordinates which will be applied extensively in the subsequent sections. In Section \[Subsection:D11D22\], we calculate the diagonal components $\check{\mathbb{D}}^1_1$ and $\check{\mathbb{D}}^2_2$ of the distortion matrix by first calculating $\hat{\mathfrak{R}}^1_1$ and $\hat{\mathfrak{R}}^2_2$ from the expressions for the Riemann tensor $\hat R^\mu{_{\nu\sigma\rho}}$ and then calculating the integral in equation . Finally, in Section \[Subsection:D12\], we perform these calculations for the off-diagonal components $\check{\mathbb{D}}^1_2=\check{\mathbb{D}}^2_1$. Riemann tensor in the conformally transformed metric {#Subsection:Riemann} ---------------------------------------------------- For the calculation of the distortion matrix $\check{\mathbb{D}}^I_J$, we need to know the components of the Riemann tensor ${\hat{R}^\mu}{_{\nu\sigma\rho}}$ in the conformally transformed metric $\hat{g}_{\mu\nu}$. For that, first recall that the Christoffel symbols $\hat\Gamma^\sigma{_{\mu\nu}}$ are related to the metric by $$\begin{aligned} \hat\Gamma^\sigma{_{\mu\nu}}=\frac{1}{2}\hat g^{\sigma\rho}\left(\hat g_{\nu\rho,\mu}+\hat g_{\mu\rho,\nu}-\hat g_{\mu\nu,\rho}\right)\,,\end{aligned}$$ which yields $$\begin{aligned} &\hat\Gamma^0{_{00}}=\mathcal{A}'\,,\qquad\hat\Gamma^0{_{0\alpha}}=\mathcal{A}_{,\alpha}\,,\qquad\hat\Gamma^0{_{\alpha\beta}}=\mathcal{B}_{(\alpha,\beta)}+\mathcal{C}'_{\alpha\beta}\,,\qquad\hat\Gamma^\alpha{_{00}}=\mathcal{A}^{,\alpha}-{\mathcal{B}^\alpha}'\,, \nonumber \\ &\hat\Gamma^\alpha{_{0\beta}}=\frac{1}{2}\left({\mathcal{B}_\beta}^{,\alpha}-{\mathcal{B}^\alpha}_{,\beta}\right)+{\mathcal{C}^\alpha_\beta}'\,,\qquad\hat\Gamma^\alpha{_{\beta\gamma}}=2{\mathcal{C}^\alpha}_{(\beta,\gamma)}-{\mathcal{C}_{\beta\gamma}}^{,\alpha}\,.\end{aligned}$$ The Riemann tensor ${\hat R^\mu}{_{\nu\rho\sigma}}$ is related to the Christoffel symbols by $$\begin{aligned} {\hat R^\mu{_{\nu\rho\sigma}}}=\hat\Gamma^\mu{_{\nu\sigma,\rho}}-\hat\Gamma^\mu{_{\rho\sigma,\nu}}+\hat\Gamma^\kappa{_{\nu\sigma}}\hat\Gamma^\mu{_{\kappa\rho}}-\hat\Gamma^\kappa{_{\rho\sigma}}\hat\Gamma^\mu{_{\kappa\nu}}\,.\end{aligned}$$ From this, we obtain that the Riemann tensor components which will appear in further calculations are given by $$\begin{aligned} \hat{R}^\alpha{_{0\beta0}}&={\mathcal{A}}^{,\alpha}{_\beta}-\frac{1}{2}\left({{\mathcal{B}}_\beta}{^{,\alpha}}+{{\mathcal{B}}^\alpha}{_{,\beta}}\right)'-{{\mathcal{C}}^\alpha_\beta}{''}\,, \nonumber \\ {\hat R^\alpha}{_{0\beta\gamma}}&=-{{\mathcal{B}}_{[\beta}}{^{,\alpha}}{_{\gamma]}}-2{{\mathcal{C}}^\alpha}{_{[\beta,\gamma]}}'\,, \nonumber \\ \hat{R}^\alpha{_{\beta\gamma0}}&=\frac{1}{2}\left({{\mathcal{B}}_\beta}{^{,\alpha}}-{{\mathcal{B}}^\alpha}
{_{,\beta}}\right)_{,\gamma}-{\mathcal{C}}^\alpha_{\gamma,\beta}{'}+{{\mathcal{C}}{_{\beta\gamma}}}{^{,\alpha}}{'}\,, \nonumber \\ {\hat R^\alpha}{_{\beta\gamma\delta}}&=2{\mathcal{C}}^\alpha{_{[\delta,\gamma]\beta}}+2{\mathcal{C}}_{\beta[\gamma,\delta]}{^\alpha}\,.\end{aligned}$$ The components $\hat R^0{_{\alpha0\beta}}$ and $\hat R^0{_{\alpha\beta\gamma}}$ are also non-vanishing, but will be of no importance for further calculations. Useful relations in spherical coordinates {#Subsection:SphericalCoordinates} ----------------------------------------- Here, we briefly review some basic properties of the spherical coordinates $(r,\,\theta,\,\phi)$ which are vitally important for the calculation of the distortion matrix. First, recall that we have defined two directions $\theta^\alpha$ and $\phi^\alpha$ orthogonal to the observed photon direction $n^\alpha$ in Section \[Subsection:SF\]. These three vectors provide a basis of orthonormal unit vectors in spherical coordinates. The gradient can be written in terms of these vectors as $$\begin{aligned} \nabla_\alpha=n_\alpha\frac{\partial}{\partial r}+\frac{1}{r}\left(\theta_\alpha\frac{\partial}{\partial\theta}+\frac{1}{\sin\theta}\phi_\alpha\frac{\partial}{\partial\phi}\right)\equiv n_\alpha\frac{\partial}{\partial r}+\frac{1}{r}\widehat\nabla_\alpha\,,\end{aligned}$$ and the Laplacian operator can be written as $$\begin{aligned} \Delta=&\frac{\partial^2}{\partial r^2}+\frac{2}{r}\frac{\partial}{\partial r}+\frac{1}{r^2}\left[\left(\cot\theta+\frac{\partial}{\partial\theta}\right)\frac{\partial}{\partial\theta}+\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right] \nonumber \\ \equiv&\frac{\partial^2}{\partial r^2}+\frac{2}{r}\frac{\partial}{\partial r}+\frac{1}{r^2}\widehat\nabla^2\,.\end{aligned}$$ From the expression of the gradient, it follows that given any scalar function $Y$, we have $$\begin{aligned} n_\alpha Y^{,\alpha}=\frac{\partial}{\partial r}Y\,,\qquad\theta_\alpha Y^{,\alpha}=\frac{1}{r}\frac{\partial}{\partial\theta}Y\,,\qquad\phi_\alpha Y^{,\alpha}=\frac{1}{r\sin\theta}\frac{\partial}{\partial\phi}Y\equiv\frac{1}{r}\check\partial_\phi Y\,, \label{angularderivatives}\end{aligned}$$ where we have defined $\check\partial_\phi\equiv\partial_\phi/\sin\theta$ for simplicity. Note that the operators $\partial_\theta$ and $\check\partial_\phi$ are not commuting, $\check\partial_\phi\partial_\theta\neq\partial_\theta\check\partial_\phi$. Furthermore, note that the non-vanishing derivatives of the unit vectors are given by $$\begin{aligned} &\frac{\partial}{\partial\theta}n_\alpha=\theta_\alpha\,,\qquad\frac{\partial}{\partial\phi}n_\alpha=\sin\theta\,\phi_\alpha\,,\qquad\frac{\partial}{\partial\theta}\theta_\alpha=-n_\alpha\,,\qquad\frac{\partial}{\partial\phi}\theta_\alpha=\cos\theta\,\phi_\alpha\,,\nonumber \\ &\frac{\partial}{\partial\theta}\phi_\alpha=0\,,\qquad\frac{\partial}{\partial\phi}\phi_\alpha=-\sin\theta\,n_\alpha-\cos\theta\,\theta_\alpha\,. \label{derivunitvectors}\end{aligned}$$ Diagonal components of the distortion matrix {#Subsection:D11D22} -------------------------------------------- Here we calculate the component $\check{\mathbb{D}}^2_2$ of the distortion matrix. The calculations of $\check{\mathbb{D}}^1_1$ are performed in a completely analogous way. 
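Before proceeding, note that the spherical-coordinate relations collected in the previous subsection are elementary but easy to get wrong by a sign, and they can be confirmed with a few lines of computer algebra. The following sympy sketch (our own addition; the symbol names are arbitrary) checks the orthonormality of the triad, the derivatives of the unit vectors listed in equation , and the non-commutativity of $\partial_\theta$ and $\check\partial_\phi$.

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)

# Cartesian components of the spherical unit vectors n^alpha, theta^alpha, phi^alpha
n   = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
eth = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
eph = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

def vanishes(vec):
    return all(sp.simplify(c) == 0 for c in vec)

# orthonormality of the triad
assert sp.simplify(n.dot(n)) == 1 and sp.simplify(eth.dot(eth)) == 1 and sp.simplify(eph.dot(eph)) == 1
assert sp.simplify(n.dot(eth)) == 0 and sp.simplify(n.dot(eph)) == 0 and sp.simplify(eth.dot(eph)) == 0

# non-vanishing derivatives of the unit vectors
assert vanishes(n.diff(th) - eth)
assert vanishes(n.diff(ph) - sp.sin(th)*eph)
assert vanishes(eth.diff(th) + n)
assert vanishes(eth.diff(ph) - sp.cos(th)*eph)
assert vanishes(eph.diff(th))
assert vanishes(eph.diff(ph) + sp.sin(th)*n + sp.cos(th)*eth)

# partial_theta and check-partial_phi = (1/sin theta) partial_phi do not commute:
# [partial_theta, check-partial_phi] f = -cot(theta) check-partial_phi f
f = sp.Function('f')(th, ph)
cpphi = lambda g: g.diff(ph) / sp.sin(th)
assert sp.simplify(cpphi(f).diff(th) - cpphi(f.diff(th)) + sp.cot(th)*cpphi(f)) == 0
```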
First, recall that $\hat{\mathfrak{R}}^2_2$ is defined as: $$\begin{aligned} \hat{\mathfrak{R}}^2_2=&{\hat R^\alpha}{_{\mu\gamma\nu}}\hat k^\mu\hat k^\nu\phi_\alpha\phi^\gamma \nonumber \\ =&{\hat R^\alpha}{_{0\gamma 0}}\phi_\alpha\phi^\gamma+{\hat R^\alpha}{_{\beta\gamma \delta}}n^\beta n^\delta\phi_\alpha\phi^\gamma-{\hat R^\alpha}{_{\beta\gamma 0}}n^\beta\phi_\alpha\phi^\gamma-{\hat R^\alpha}{_{0\gamma\delta}}n^\delta\phi_\alpha\phi^\gamma \nonumber \\ =&{\hat R^\alpha}{_{0\gamma 0}}\phi_\alpha\phi^\gamma+{\hat R^\alpha}{_{\beta\gamma \delta}}n^\beta n^\delta\phi_\alpha\phi^\gamma-2{\hat R^\alpha}{_{\beta\gamma 0}}n^\beta\phi_\alpha\phi^\gamma \label{R22}\,.\end{aligned}$$ We can now calculate $\hat{\mathfrak{R}}^2_2$ by inserting the expressions for the components of the Riemann tensor given in Section \[Subsection:Riemann\] and by applying the properties of the spherical coordinates given in and . The first and third terms on the right-hand side of equation are given by $$\begin{aligned} {\hat R^\alpha}{_{0\gamma 0}}\phi_\alpha\phi^\gamma =&\left( {\mathcal{A}}^{,\alpha}{_\beta}-\frac{1}{2}\left({{\mathcal{B}}_\beta}{^{,\alpha}}+{{\mathcal{B}}^\alpha}{_{,\beta}}\right)'-{{\mathcal{C}}^\alpha_\beta}{''}\right)\phi_\alpha\phi^\gamma \nonumber \\ =&\frac{1}{\bar{r}^2}\check\partial_\phi^2{\mathcal{A}}+\frac{1}{\bar{r}}\partial_r{\mathcal{A}}+\frac{\cot\theta}{\bar{r}^2}\partial_\theta{\mathcal{A}}-\frac{1}{\bar{r}}\phi^\alpha\check\partial_\phi{\mathcal{B}}'_\alpha-\mathcal{C}^\alpha_{\gamma}{''}\phi_\alpha\phi^\gamma\label{1stterm}\end{aligned}$$ and $$\begin{aligned} {\hat R^\alpha}{_{\beta\gamma 0}}n^\beta\phi_\alpha\phi^\gamma =&\left(\frac{1}{2}\left({{\mathcal{B}}_\beta}{^{,\alpha}}-{{\mathcal{B}}^\alpha} {_{,\beta}}\right)_{,\gamma}-{\mathcal{C}}^\alpha_{\gamma,\beta}{'}+{{\mathcal{C}}{_{\beta\gamma}}}{^{,\alpha}}{'}\right)n^\beta\phi_\alpha\phi^\gamma \nonumber \\ =&\frac{1}{2\bar{r}^2}n^\beta \check\partial_\phi^2{\mathcal{B}}_{\beta}+\frac{1}{2\bar{r}}n^\beta \partial_{\bar r}{\mathcal{B}}_{\beta}+\frac{\cot\theta}{2\bar{r}^2}n^\beta \partial_\theta{\mathcal{B}}_{\beta}+\frac{1}{2\bar{r}^2}\phi^\alpha\check\partial_\phi{\mathcal{B}}_\alpha \nonumber \\ &-\frac{1}{2\bar{r}} \phi^\alpha\check\partial_\phi\partial_{\bar{r}}{\mathcal{B}}_{\alpha}-\phi^\alpha\phi^\gamma\partial_{\bar{r}}{\mathcal{C}}'_{\alpha\gamma}+\frac{1}{\bar{r}}n^\beta\phi^\gamma\check\partial_\phi{\mathcal{C}}'_{\beta\gamma}\,. 
\label{3rdterm}\end{aligned}$$ To calculate the second term $$\begin{aligned} {\hat R^\alpha}{_{\beta\gamma \delta}}n^\beta n^\delta\phi_\alpha\phi^\gamma=&\left(2{\mathcal{C}}^\alpha{_{[\delta,\gamma]\beta}}+2{\mathcal{C}}_{\beta[\gamma,\delta]}{^\alpha}\right)\phi_\alpha\phi^\gamma n^\beta n^\delta \nonumber \\ =&\left(2{\mathcal{C}}^\alpha_{\beta,\delta\gamma}-{\mathcal{C}}^\alpha_{\gamma,\beta\delta}-{{\mathcal{C}}_{\beta\delta}}{^{,\alpha}}{_\gamma}\right)\phi_\alpha\phi^\gamma n^\beta n^\delta\,,\end{aligned}$$ note that $$\begin{aligned} {{\mathcal{C}}^\alpha}{_{\beta,\delta\gamma}}\phi_\alpha\phi^\gamma n^\beta n^\delta=\frac{1}{\bar{r}}\phi_\alpha n^\beta n^\delta\partial_\phi{\mathcal{C}}^\alpha_{\beta,\delta}=\frac{1}{\bar{r}}\phi_\alpha n^\beta\check\partial_\phi\partial_{\bar{r}}{\mathcal{C}}^\alpha_{\beta}-\frac{1}{\bar{r}^2}\phi_\alpha n^\beta\check\partial_\phi\mathcal{C}^\alpha_{\beta}\,,\end{aligned}$$ and that $$\begin{aligned} {{\mathcal{C}}_{\beta\delta}}{^{,\alpha}}{_\gamma}\phi_\alpha\phi^\gamma n^\beta n^\delta =&\frac{1}{\bar{r}}n^\beta n^\delta\partial_\phi\left({{\mathcal{C}}_{\beta\delta}}{^{,\alpha}}\phi_\alpha\right)-\frac{1}{\bar{r}}{{\mathcal{C}}_{\beta\delta}}{^{,\alpha}}n^\beta n^\delta\check\partial_\phi\phi_\alpha \nonumber \\ =&\frac{1}{\bar{r}^2}n^\beta n^\delta\check\partial_\phi^2{{\mathcal{C}}_{\beta\delta}}+\frac{1}{\bar{r}}n^\beta n^\delta\partial_{\bar{r}}{{\mathcal{C}}_{\beta\delta}}+\frac{\cot\theta}{\bar{r}^2}n^\beta n^\delta \partial_\theta{{\mathcal{C}}_{\beta\delta}}\,.\end{aligned}$$ Hence, we obtain for the second term on the right-hand side of equation : $$\begin{aligned} {\hat R^\alpha}{_{\beta\gamma \delta}}n^\beta n^\delta\phi_\alpha\phi^\gamma=&\frac{2}{\bar{r}}\phi_\alpha n^\beta \check\partial_\phi\partial_{\bar{r}}{\mathcal{C}}^\alpha_{\beta}-\frac{2}{\bar{r}^2}\phi_\alpha n^\beta \check\partial_\phi{\mathcal{C}}^\alpha_\beta-\phi_\alpha\phi^\gamma\partial_{\bar{r}}^2{\mathcal{C}}^\alpha_\gamma \nonumber \\ &-\frac{1}{\bar{r}^2}n^\beta n^\delta\check\partial_\phi^2{{\mathcal{C}}_{\beta\delta}}-\frac{1}{\bar{r}}n^\beta n^\delta \partial_{\bar{r}}{{\mathcal{C}}_{\beta\delta}}-\frac{\cot\theta}{\bar{r}^2}n^\beta n^\delta\partial_\theta{{\mathcal{C}}_{\beta\delta}}\,. 
\label{2ndterm2}\end{aligned}$$ By summing up the three different terms, we obtain the following expression for the component $\hat{\mathfrak{R}}^2_2$ of the distortion matrix: $$\begin{aligned} \hat{\mathfrak{R}}^2_2=&\frac{1}{\bar{r}^2}\check\partial_\phi^2{\mathcal{A}}+\frac{1}{\bar{r}}\partial_{\bar{r}}{\mathcal{A}}+\frac{\cot\theta}{\bar{r}^2}\partial_\theta{\mathcal{A}}-\frac{2}{\bar{r}}\phi_\alpha n^\beta \frac{\mathrm d}{\mathrm d\lambda}\check\partial_\phi\mathcal{C}^\alpha_{\beta}-\frac{2}{\bar{r}^2}\phi_\alpha n^\beta \check\partial_\phi{\mathcal{C}}^\alpha_{\beta} \nonumber \\ &-\phi_\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}{\mathcal{C}}^\alpha_\gamma-\frac{1}{\bar{r}^2}n^\beta n^\delta\check\partial_\phi^2{{\mathcal{C}}_{\beta\delta}}-\frac{1}{\bar{r}}n^\beta n^\delta \partial_{\bar{r}}{{\mathcal{C}}_{\beta\delta}}-\frac{\cot\theta}{\bar{r}^2}n^\beta n^\delta \partial_\theta{{\mathcal{C}}_{\beta\delta}} \nonumber \\ &-\frac{1}{\bar{r}^2}n^\beta\check\partial_\phi^2{\mathcal{B}}_{\beta}-\frac{1}{\bar{r}}n^\beta \partial_{\bar{r}}{\mathcal{B}}_{\beta}-\frac{\cot\theta}{\bar{r}^2}n^\beta \partial_\theta{\mathcal{B}}_{\beta}-\frac{1}{\bar{r}}\phi^\alpha\frac{\mathrm d}{\mathrm d\lambda}\check\partial_\phi{\mathcal{B}}_\alpha-\frac{1}{\bar{r}^2}\phi^\alpha\check\partial_\phi{\mathcal{B}}_\alpha\,, \label{R22computed}\end{aligned}$$ where we simplified the expression by using that $$\begin{aligned} \frac{\mathrm d^2}{\mathrm d\lambda^2}=\frac{\partial^2}{\partial\tau^2}-2\frac{\partial^2}{\partial\tau\partial\bar{r}}+\frac{\partial^2}{\partial\bar{r}^2}\,,\end{aligned}$$ since $\mathrm d/\mathrm d\lambda=\partial_\tau-n^\alpha\partial_\alpha$. However, since we are concerned with the gauge transformation properties of the weak lensing observables, this expression for $\hat{\mathfrak{R}}^2_2$ in terms of $\mathcal{A}$, $\mathcal{B}^\alpha$ and $\mathcal{C}^{\alpha\beta}$ is impractical. Instead, we want to express it using the gauge-invariant quantities defined in equation . For this, we first rewrite it in terms of the scalar, vector and tensor perturbations defined in equation . First, consider the contribution of scalar perturbation $\alpha$, $\beta$, $\varphi$ and $\gamma$ to the quantity $\hat{\mathfrak{R}}^2_2$. By inserting $\mathcal{B}^\alpha=\beta^{,\alpha}+B^\alpha$ into , we obtain for the terms in $\beta$: $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_\beta=&-\frac{1}{\bar{r}^2}n^\alpha \check\partial_\phi^2\beta_{,\alpha}-\frac{1}{\bar{r}}n^\alpha \check\partial_{\bar{r}}\beta_{,\alpha}-\frac{\cot\theta}{\bar{r}^2}n^\beta {\partial_\theta}\beta_{,\alpha}-\frac{1}{\bar{r}}\phi^\alpha\frac{\mathrm d}{\mathrm d\lambda}\check\partial_\phi\beta_{,\alpha}-\frac{1}{\bar{r}^2}\phi^\alpha{\check\partial_\phi}\beta_{,\alpha}\,. 
\label{termsinbeta}\end{aligned}$$ Using that $$\begin{aligned} n^\alpha{\check\partial_\phi^2}\beta_{,\alpha}={\check\partial_\phi^2}\partial_{\bar{r}}\beta-\beta_{,\alpha}{\check\partial_\phi^2}n^\beta-2{\check\partial_\phi}\beta_{,\alpha}{\check\partial_\phi}n^\alpha={\check\partial_\phi^2}\partial_{\bar{r}}\beta-\partial_{\bar{r}}\beta-\frac{\cot\theta}{\bar{r}}{\partial_\theta}\beta-2\frac{1}{\bar{r}}{\check\partial_\phi^2}\beta\,,\end{aligned}$$ and that $$\begin{aligned} \frac{\mathrm d}{\mathrm d\lambda}\left(\phi^\alpha{\check\partial_\phi}\beta_{,\alpha}\right)=&\frac{\mathrm d}{\mathrm d\lambda}\left(\frac{1}{\bar{r}}{\check\partial_\phi^2}\beta+\partial_{\bar{r}}\beta+\frac{\cot\theta}{\bar{r}}{\partial_\theta}\beta\right) \nonumber \\ =&\frac{1}{\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}{\check\partial_\phi^2}\beta+\frac{\mathrm d}{\mathrm d\lambda}\partial_{\bar{r}}\beta+\frac{\cot\theta}{\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}{\partial_\theta}\beta+\frac{1}{\bar{r}^2}{\check\partial_\phi^2}\beta+\frac{\cot\theta}{\bar{r}^2}{\partial_\theta}\beta\,,\end{aligned}$$ we can rewrite equation into $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_\beta=&-\frac{1}{\bar{r}^2}{\check\partial_\phi^2}\beta'-\frac{1}{\bar{r}}\partial_{\bar{r}}\beta'-\frac{\cot\theta}{\bar{r}^2}{\partial_\theta}\beta'\,.\end{aligned}$$ By inserting $\mathcal{C}_{\alpha\beta}=\delta_{\alpha\beta}\varphi+\gamma_{,\alpha\beta}+C_{(\alpha,\beta)}+C_{\alpha\beta}$ into equation , we obtain for the terms in $\gamma$: $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_\gamma=&-\frac{\mathrm d}{\mathrm d\lambda}\left(\frac{2}{\bar{r}}\phi^\alpha n^\beta {\check\partial_\phi}\gamma_{,\alpha\beta}\right)-\phi^\alpha\phi^\beta\frac{\mathrm d^2}{\mathrm d\lambda^2}\gamma_{,\alpha\beta} \nonumber \\ &-\frac{1}{\bar{r}^2} n^\beta n^\delta{\check\partial_\phi^2}\gamma_{,\beta\delta}-\frac{1}{\bar{r}} n^\beta n^\delta \partial_{\bar{r}}{\gamma_{,\beta\delta}}-\frac{\cot\theta}{\bar{r}^2}n^\beta n^\delta{\partial_\theta}{\gamma_{,\beta\delta}}\,. \label{R22gamma}\end{aligned}$$ To simplify the first term of this expression, we use that $$\begin{aligned} \phi^\alpha n^\beta{\check\partial_\phi}\gamma_{,\alpha\beta}=&\partial_{\bar{r}}\left(\phi^\alpha{\check\partial_\phi}\partial_{\bar{r}}\gamma_{,\alpha}\right)-\frac{1}{\bar{r}}\phi^\alpha{\check\partial_\phi}\gamma_{,\alpha} \nonumber \\ =&\frac{1}{\bar{r}}{\check\partial_\phi^2}\partial_{\bar{r}}\gamma-\frac{2}{\bar{r}^2}{\check\partial_\phi^2}\gamma+\partial_{\bar{r}}^2\gamma+\frac{\cot\theta}{\bar{r}}{\partial_\theta}\partial_{\bar{r}}\gamma-\frac{2\cot\theta}{\bar{r}^2}{\partial_\theta}\gamma-\frac{1}{\bar{r}}\partial_{\bar{r}}\gamma\,. 
\label{termsGamma1}\end{aligned}$$ For the third term, we use that $$\begin{aligned} {\check\partial_\phi^2}\left(n^\beta n^\delta\right)=-2 n^\beta n^\delta-\cot\theta\,\theta^\beta n^\delta-\cot\theta\,\theta^\delta n^\beta+2\phi^\beta\phi^\delta\,,\end{aligned}$$ and, hence, $$\begin{aligned} n^\beta n^\delta{\check\partial_\phi^2}\gamma_{,\beta\delta}=&{\check\partial_\phi^2}\partial_{\bar{r}}^2\gamma-\gamma_{,\beta\delta}{\check\partial_\phi^2}\left(n^\beta n^\delta\right)-2{\check\partial_\phi}\gamma_{,\beta\delta}{\check\partial_\phi}\left(n^\beta n^\delta\right) \nonumber \\ =&{\check\partial_\phi^2}\partial_{\bar{r}}^2\gamma+\frac{6}{\bar{r}^2}{\check\partial_\phi^2}\gamma+\frac{2}{\bar{r}}\partial_{\bar{r}}\gamma-2\partial_{\bar{r}}^2\gamma-\frac{2\cot\theta}{\bar{r}}{\partial_\theta}\partial_{\bar{r}}\gamma-\frac{4}{\bar{r}}{\check\partial_\phi^2}\partial_{\bar{r}}\gamma+\frac{4\cot\theta}{\bar{r}^2}{\partial_\theta}\gamma\,. \label{termsGamma2}\end{aligned}$$ By applying the relations given in equations and and also noting that $$\begin{aligned} \frac{\partial^2}{\partial\tau^2}=\frac{\partial^2}{\partial\bar{r}^2}+2\frac{\mathrm d}{\mathrm d\lambda}\frac{\partial}{\partial\bar{r}}+\frac{\mathrm d^2}{\mathrm d\lambda^2}\,,\end{aligned}$$ we can show that equation is equivalent to $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_\gamma=&-\frac{1}{\bar{r}^2}{\check\partial_\phi^2}\gamma''-\frac{1}{\bar{r}}\partial_{\bar{r}}\gamma''-\frac{\cot\theta}{\bar{r}^2}{\partial_\theta}\gamma''\,.\end{aligned}$$ The expressions for the terms in $\alpha$ and $\varphi$ follow straight-forwardly from the expressions for $\hat{\mathfrak{R}}^2_2$. We obtain for the scalar terms in $\hat{\mathfrak{R}}^2_2$: $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_{s}=&\frac{1}{\bar{r}^2}{\check\partial_\phi^2}\left(\alpha-\varphi-\beta'-\gamma''\right)+\frac{1}{\bar{r}}\partial_{\bar{r}}\left(\alpha-\varphi-\beta'-\gamma''\right) \nonumber \\ &+\frac{\cot\theta}{\bar{r}^2}{\partial_\theta}\left(\alpha-\varphi-\beta'-\gamma''\right)-\frac{\mathrm d^2}{\mathrm d\lambda^2}\varphi\,.\end{aligned}$$ Using the gauge-invariant variables defined in equation , we can rewrite this as $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_{s}=&\frac{1}{\bar{r}^2}{\check\partial_\phi^2}\left(\alpha_\chi-\varphi_\chi\right)-\frac{\mathrm d^2}{\mathrm d\lambda^2}\left(\alpha_\chi-\varphi_\chi\right)+\frac{1}{\bar{r}}\left(\alpha'_\chi-\varphi'_\chi\right) \nonumber \\ &+\frac{\cot\theta}{\bar{r}^2}{\partial_\theta}\left(\alpha_\chi-\varphi_\chi\right)-\frac{\mathrm d^2}{\mathrm d\lambda^2}\varphi_\chi-\frac{\mathrm d^2}{\mathrm d\lambda^2}\left(H\chi\right)\,.\end{aligned}$$ Next we consider the vector perturbation $B_\alpha$ and $C_\alpha$. For the terms in $C_\alpha$, we obtain: $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_{C_\alpha}=&-\frac{2}{\bar{r}}\phi_\alpha n^\beta \frac{\mathrm d}{\mathrm d\lambda}{\check\partial_\phi}{C^\alpha}_{,\beta}-\frac{2}{\bar{r}^2}\phi_\alpha n^\beta {\check\partial_\phi}{C^\alpha}_{,\beta}-\phi^\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}C_{\alpha, \gamma} \nonumber \\ &-\frac{1}{\bar{r}^2} n^\beta n^\delta{\check\partial_\phi^2}{C_{\beta,\delta}}-\frac{1}{\bar{r}} n^\beta n^\delta \partial_{\bar{r}}{C_{\beta,\delta}}-\frac{\cot\theta}{\bar{r}^2}n^\beta n^\delta {\partial_\theta}{C_{\beta,\delta}}\,. 
\label{termsCalpha}\end{aligned}$$ To simplify this expression, we need to rewrite all of the terms such that all derivatives are expressed in spherical coordinates. For example, for the fourth term we have: $$\begin{aligned} n^\beta n^\delta{ \check\partial_\phi^2}C_{\beta,\delta} =&n^\beta{\check\partial_\phi^2}\partial_{\bar{r}}C_\beta-\frac{2}{\bar{r}}n^\beta{\check\partial_\phi^2} C_{\beta}-n^\beta\partial_{\bar{r}}C_{\beta}-\frac{\cot\theta}{\bar{r}}n^\beta{\partial_\theta}C_{\beta}\,.\end{aligned}$$ Similar simplifications can be made for all other terms in the expression . We obtain: $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_{C_\alpha}=&-\frac{1}{\bar{r}}\phi_\alpha \frac{\mathrm d}{\mathrm d\lambda}{\check\partial_\phi}{C'^\alpha}-\frac{1}{\bar{r}^2}\phi_\alpha {\check\partial_\phi}{C'^\alpha}-\frac{1}{\bar{r}^2}n^\beta{\check\partial_\phi^2}C'_\beta-\frac{1}{\bar{r}} n^\beta \partial_{\bar{r}}C'_{\beta}-\frac{\cot\theta}{\bar{r}^2}n^\beta {\partial_\theta}C'_{\beta}\,.\end{aligned}$$ Hence, the contributions of the vector perturbations $C_\alpha$ and $B_\alpha$ to $\hat{\mathfrak{R}}^2_2$ is given by $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_{v}=&-\frac{1}{\bar{r}}\phi_\alpha \frac{\mathrm d}{\mathrm d\lambda}{\check\partial_\phi}\left(B^\alpha+{C'^\alpha}\right)-\frac{1}{\bar{r}^2}\phi_\alpha {\check\partial_\phi}\left(B^\alpha+{C'^\alpha}\right)-\frac{1}{\bar{r}^2}n^\beta{\check\partial_\phi^2}\left(B_\beta+C'_\beta\right) \nonumber \\ &-\frac{1}{\bar{r}} n^\beta \partial_{\bar{r}}\left(B_\beta+C'_{\beta}\right)-\frac{\cot\theta}{\bar{r}^2}n^\beta {\partial_\theta}\left(B_\beta+C'_{\beta}\right)\,,\end{aligned}$$ where the expression for the terms in $B_\alpha$ follows immediately from the expression for $\hat{\mathfrak{R}}^2_2$. 
Using the gauge-invariant quantity $\Psi^\alpha$ defined in , we can rewrite this as $$\begin{aligned} \left(\hat{\mathfrak{R}}^2_2\right)_{v}=&-\frac{1}{\bar{r}}\phi_\alpha \frac{\mathrm d}{\mathrm d\lambda}{\check\partial_\phi}\Psi^\alpha-\frac{1}{\bar{r}^2}\phi_\alpha {\check\partial_\phi}\Psi^\alpha-\frac{1}{\bar{r}^2}n^\beta{\check\partial_\phi^2}\Psi_\beta-\frac{1}{\bar{r}} n^\beta \partial_{\bar{r}}\Psi_\beta-\frac{\cot\theta}{\bar{r}^2}n^\beta {\partial_\theta}\Psi_\beta\,.\end{aligned}$$ Summing up the expressions for the scalar, vector and tensor perturbations to $\hat{\mathfrak{R}}^2_2$, we obtain the following expression, $$\begin{aligned} \hat{\mathfrak{R}}^2_2=&\frac{1}{\bar{r}^2}{\check\partial_\phi^2}(\alpha_\chi-\varphi_\chi)-\frac{1}{\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}(\alpha_\chi-\varphi_\chi)+\frac{1}{\bar{r}}(\alpha'_\chi-\varphi'_\chi)+\frac{\cot\theta}{\bar{r}^2}{\partial_\theta}(\alpha_\chi-\varphi_\chi)-\frac{\mathrm d^2}{\mathrm d\lambda^2}\varphi_\chi \nonumber \\ &-\frac{1}{\bar{r}^2}n^\beta{\check\partial_\phi^2}\Psi_\beta-\frac{1}{\bar{r}}n^\beta\partial_{\bar{r}}\Psi_\beta-\frac{\cot\theta}{\bar{r}^2}n^\beta{\partial_\theta}\Psi_\beta-\frac{1}{\bar{r}}\phi^\alpha\frac{\mathrm d}{\mathrm d\lambda}{\check\partial_\phi}\Psi_\alpha-\frac{1}{\bar{r}^2}\phi^\alpha{\check\partial_\phi}\Psi_\alpha \nonumber \\ &-\frac{2}{\bar{r}}\phi_\alpha n^\beta \frac{\mathrm d}{\mathrm d\lambda}{\check\partial_\phi}C^\alpha_{\beta}-\frac{2}{\bar{r}^2}\phi_\alpha n^\beta {\check\partial_\phi}C^\alpha_{\beta}-\phi_\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}C^\alpha_\gamma-\frac{1}{\bar{r}^2} n^\beta n^\delta{\check\partial_\phi^2}{C_{\beta\delta}} \nonumber \\ &-\frac{1}{\bar{r}} n^\beta n^\delta \partial_{\bar{r}}{C_{\beta\delta}}-\frac{\cot\theta}{\bar{r}^2}n^\beta n^\delta {\partial_\theta}{C_{\beta\delta}}-\frac{\mathrm d^2}{\mathrm d\lambda^2}(H\chi) \,, \label{R22result}\end{aligned}$$ where the terms in $C_{\alpha\beta}$ follow directly from the expression for $\hat{\mathfrak{R}}^2_2$. The calculations for the component $\hat{\mathfrak{R}}^1_1$ can be performed completely analogously. It is given by: $$\begin{aligned} \hat{\mathfrak{R}}^1_1=&\frac{1}{\bar{r}^2}{\partial_\theta^2}(\alpha_\chi-\varphi_\chi)-\frac{1}{\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}(\alpha_\chi-\varphi_\chi)+\frac{1}{\bar{r}}(\alpha'_\chi-\varphi'_\chi)-\frac{\mathrm d^2}{\mathrm d\lambda^2}\varphi_\chi-\frac{\mathrm d^2}{\mathrm d\lambda^2}(H\chi) \nonumber \\ &-\frac{1}{\bar{r}^2}n^\beta{\partial_\theta^2}\Psi_\beta-\frac{1}{\bar{r}}n^\beta\partial_{\bar{r}}\Psi_\beta-\frac{1}{\bar{r}}\theta^\alpha\frac{\mathrm d}{\mathrm d\lambda}{\partial_\theta}\Psi_\alpha-\frac{1}{\bar{r}^2}\theta^\alpha{\partial_\theta}\Psi_\alpha-\frac{2}{\bar{r}}\theta_\alpha n^\beta \frac{\mathrm d}{\mathrm d\lambda}{\partial_\theta}C^\alpha_\beta \nonumber \\ &-\frac{2}{\bar{r}^2}\theta_\alpha n^\beta {\partial_\theta}C^\alpha_{\beta}-\theta_\alpha\theta^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}C^\alpha_\gamma-\frac{1}{\bar{r}^2} n^\beta n^\delta{\partial_\theta^2}{C_{\beta\delta}}-\frac{1}{\bar{r}} n^\beta n^\delta \partial_{\bar{r}}{C_{\beta\delta}} \,. \label{R11result}\end{aligned}$$ Note that there is a gauge-dependent term, $\mathrm d^2/\mathrm d\lambda^2(H\chi)$, appearing in the expressions for $\hat{\mathfrak{R}}^1_1$ and $\hat{\mathfrak{R}}^2_2$. This term vanishes in the conformally transformed metric $\hat g_{\mu\nu}$ where $H=0$, but not in the full metric $g_{\mu\nu}$. 
Note that we introduced the conformally transformed metric to simplify the calculations along the photon geodesic, not because the physics would be invariant under the conformal transformation. Omitting the term $\mathrm d^2/\mathrm d\lambda^2(H\chi)$ would thus lead to an incorrect and gauge-dependent result for the trace of $\check{\mathbb{D}}^I_J$ as the physics described by the full metric $g_{\mu\nu}$ is not correctly accounted for. Now, we need to calculate the first-order distortion matrix which, as stated in equation , is given by: $$\begin{aligned} \frac{1}{\lambda_s}\delta\hat{\mathfrak{D}}^I_J=-\int_0^{\lambda_s}\mathrm d\lambda\,\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\hat{\mathfrak{R}}^I_J(\lambda)\,. \label{integral}\end{aligned}$$ For this, we apply the following integrals: $$\begin{aligned} -\int_0^{\lambda_s}\mathrm d\lambda\,\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\left(\frac{1}{\bar{r}}Y\right)=&-\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,(\bar{r}_z-\bar{r})Y\,, \nonumber \\ -\int_0^{\lambda_s}\mathrm d\lambda\,\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\left(\frac{1}{\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}Y\right)=&-Y_o+\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,Y\,, \nonumber \\ -\int_0^{\lambda_s}\mathrm d\lambda\,\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\lambda^2\left(\frac{\mathrm d^2}{\mathrm d\lambda^2}Y\right)=&-Y_s-Y_o+\frac{2}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,Y\,, \label{standardI3}\end{aligned}$$ where $Y$ is some generic scalar function. Any of the terms in equations and will take the form of one of the integrals in equation  when inserted into the integral in equation . We obtain the following expressions: $$\begin{aligned} \frac{1}{\lambda_s}\delta\hat{\mathfrak{D}}^2_2=&\left(\alpha_\chi+H\chi+\phi^\alpha\phi^\beta C_{\alpha\beta}\right)_o+\left(\varphi_\chi+H\chi+\phi^\alpha\phi^\beta C_{\alpha\beta}\right)_s \nonumber \\ &+\int_0^{\lambda_s}\mathrm d\lambda\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\left[-\left(\cot\theta\,\partial_\theta+\check\partial^2_\phi\right)(\alpha_\chi-\varphi_\chi)+\left(n^\alpha\check\partial^2_\phi+\cot\theta\, n^\alpha\partial_\theta+\phi^\alpha\check\partial_\phi\right)\Psi_\alpha\right] \nonumber \\ &+\int_0^{\lambda_s}\mathrm d\lambda\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\left(2\phi^\alpha n^\beta\check\partial_\phi+n^\alpha n^\beta\check\partial^2_\phi+\cot\theta\,n^\alpha n^\beta\partial_\theta\right)C_{\alpha\beta} \nonumber \\ &-\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\alpha_\chi-\varphi_\chi+2\varphi_\chi+2H\chi+\phi^\alpha\check\partial_\phi\Psi_\alpha+2\phi_\alpha n^\beta\check\partial_\phi C^\alpha_\beta+2\phi^\alpha\phi^\beta C_{\alpha\beta}\right) \nonumber \\ &+\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\bar{r}_z-\bar{r}\right)\left(\varphi'_\chi-\alpha'_\chi+n^\alpha\partial_{\bar{r}}\Psi_\alpha+n^\alpha n^\beta\partial_{\bar{r}}C_{\alpha\beta}\right)\,,\end{aligned}$$ and $$\begin{aligned} \frac{1}{\lambda_s}\delta\hat{\mathfrak{D}}^1_1=&\left(\alpha_\chi+H\chi+\theta^\alpha\theta^\beta C_{\alpha\beta}\right)_o+\left(\varphi_\chi+H\chi+\theta^\alpha\theta^\beta C_{\alpha\beta}\right)_s \nonumber \\ &+\int_0^{\lambda_s}\mathrm 
d\lambda\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\left[-\partial^2_\theta(\alpha_\chi-\varphi_\chi)+\left(n^\alpha\partial^2_\theta+\theta^\alpha\partial_\theta\right)\Psi_\alpha\right] \nonumber \\ &+\int_0^{\lambda_s}\mathrm d\lambda\left(\frac{\lambda_s-\lambda}{\lambda_s\lambda}\right)\left(2\theta^\alpha n^\beta\partial_\theta+n^\alpha n^\beta\partial^2_\theta\right)C_{\alpha\beta} \nonumber \\ &-\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\alpha_\chi-\varphi_\chi+2\varphi_\chi+2H\chi+\theta^\alpha\partial_\theta\Psi_\alpha+2\theta_\alpha n^\beta\partial_\theta C^\alpha_\beta+2\theta^\alpha\theta^\beta C_{\alpha\beta}\right) \nonumber \\ &+\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\bar{r}_z-\bar{r}\right)\left(\varphi'_\chi-\alpha'_\chi+n^\alpha\partial_{\bar{r}}\Psi_\alpha+n^\alpha n^\beta\partial_{\bar{r}}C_{\alpha\beta}\right)\,.\end{aligned}$$ Following equation , the components $\check{\mathbb{D}}^1_1$ and $\check{\mathbb{D}}^2_2$ of the distortion matrix are now determined by $$\begin{aligned} \check{\mathbb{D}}^1_1=1+\delta z+\widehat{\Delta\nu}_o-\frac{\Delta\lambda_z}{\bar r_z}+\frac{1}{\lambda_s}\delta\hat{\mathfrak{D}}^1_1\,,\qquad\check{\mathbb{D}}^2_2=1+\delta z+\widehat{\Delta\nu}_o-\frac{\Delta\lambda_z}{\bar r_z}+\frac{1}{\lambda_s}\delta\hat{\mathfrak{D}}^2_2\,. \label{D11D22}\end{aligned}$$ Off-diagonal components of the distortion matrix {#Subsection:D12} ------------------------------------------------ Here we calculate the off-diagonal components $\check{\mathbb{D}}^1_2$ and $\check{\mathbb{D}}^2_1$ of the distortion matrix. For this, we first calculate the source term $\hat{\mathfrak{R}}^I_J$, which is $$\begin{aligned} \hat{\mathfrak{R}}^1_2=\hat{\mathfrak{R}}^2_1\equiv&{\hat R^\alpha}{_{\mu\gamma \nu}}\hat k^\mu\hat k^\nu\theta_\alpha\phi^\gamma \nonumber \\ =&{\hat R^\alpha}{_{0\gamma 0}}\theta_\alpha\phi^\gamma+{\hat R^\alpha}{_{\beta\gamma \delta}}n^\beta n^\delta\theta_\alpha\phi^\gamma-{\hat R^\alpha}{_{\beta\gamma 0}}n^\beta\theta_\alpha\phi^\gamma-{\hat R^\alpha}{_{0\gamma\delta}}n^\delta\theta_\alpha \phi^\gamma\,. \label{R12Def}\end{aligned}$$ With the same techniques as in the calculation of $\hat{\mathfrak{R}}^2_2$, we will now compute the four different terms. 
To abbreviate the lengthy expressions in this section, we introduce the differential operator $$\begin{aligned} \mathrm{X}^\alpha\equiv\theta^\alpha\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}+\phi^\alpha\frac{\partial}{\partial\theta}\,,\end{aligned}$$ Inserting the expressions of the Riemann tensor given in Section \[Subsection:Riemann\], we obtain $$\begin{aligned} {\hat R^\alpha}{_{0\gamma 0}}\theta_\alpha\phi^\gamma =&\left({\mathcal{A}}^{,\alpha}{_\beta}-\frac{1}{2}\left({{\mathcal{B}}_\beta}{^{,\alpha}}+{{\mathcal{B}}^\alpha}{_{,\beta}}\right)'-{{\mathcal{C}}^\alpha_\beta}{''}\right)\theta_\alpha\phi^\gamma \nonumber \\ =&\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi{\mathcal{A}}-\frac{1}{2\bar{r}}\mathrm{X}_\alpha{{\mathcal{B}}^{\alpha}}'-{\mathcal{C}}_{\alpha\gamma}''\theta^\alpha\phi^\gamma \label{1sttermR12}\end{aligned}$$ for the first term on the right-hand side of equation , $$\begin{aligned} {\hat R^\alpha}{_{\beta\gamma \delta}}n^\beta n^\delta\theta_\alpha\phi^\gamma=&\left(2{\mathcal{C}}^\alpha{_{(\beta,\delta)\gamma}}-2{\mathcal{C}}^\alpha{_{(\beta,\gamma)\delta}}+{{\mathcal{C}}_{\beta\gamma}}{^{,\alpha}}{_\delta}-{{\mathcal{C}}_{\beta\delta}}{^{,\alpha}}{_\gamma}\right)\theta_\alpha\phi^\gamma n^\beta n^\delta \nonumber \\ =&\left({{\mathcal{C}}^\alpha}{_{\beta,\delta\gamma}}+{\mathcal{C}_{\beta\gamma}}{^{,\alpha}}{_\delta}\right)\theta_\alpha\phi^\gamma n^\beta n^\delta-\left({{\mathcal{C}}^\alpha}{_{\gamma,\beta\delta}}+{{\mathcal{C}}_{\beta\delta}}{^{,\alpha}}{_\gamma}\right)\theta_\alpha\phi^\gamma n^\beta n^\delta \nonumber \\ =&\frac{1}{\bar{r}}n^\beta\mathrm X^\alpha\partial_{\bar{r}}{\mathcal{C}}_{\alpha\beta}-\frac{1}{\bar{r}^2}n^\beta\mathrm X^\alpha{\mathcal{C}}{_{\alpha\beta}}-\theta^\alpha\phi^\gamma\partial^2_{\bar{r}}{\mathcal{C}}{_{\alpha\gamma}}-\frac{1}{\bar{r}^2}n^\beta n^\gamma\partial_\theta\check\partial_\phi{\mathcal{C}}{_{\beta\gamma}}\end{aligned}$$ for the second term, $$\begin{aligned} {\hat R^\alpha}{_{\beta\gamma 0}}n^\beta\theta_\alpha\phi^\gamma =&\left(\frac{1}{2}\left({{\mathcal{B}}_\beta}{^{,\alpha}}-{{\mathcal{B}}^\alpha} {_{,\beta}}\right)_{,\gamma}-\mathcal{C}^\alpha_{\gamma,\beta}{'}+{{\mathcal{C}}{_{\beta\gamma}}}{^{,\alpha}}{'}\right)n^\beta\theta_\alpha\phi^\gamma \nonumber \\ =&\frac{1}{2\bar{r}^2}n^\beta\partial_\theta\check\partial_\phi{\mathcal{B}}_\beta-\frac{1}{2\bar{r}}\theta_\alpha\check\partial_\phi\partial_{\bar{r}}{\mathcal{B}}^\alpha+\frac{1}{2\bar{r}^2}\theta_\alpha\check\partial_\phi{\mathcal{B}}^\alpha \nonumber \\ &-\theta^\alpha\phi^\gamma\partial_{\bar{r}}{\mathcal{C}}_{\alpha\gamma}'+\frac{1}{\bar{r}}n^\beta\phi^\gamma{\partial_\theta}{\mathcal{C}}'_{\beta\gamma}\end{aligned}$$ for the third term, and $$\begin{aligned} \hat R^\alpha{_{0\gamma\delta}}n^\delta \theta_\alpha\phi^\gamma=&\left(-{{\mathcal{B}}_{[\beta}}{^{,\alpha}}{_{\gamma]}}-2{{\mathcal{C}}^\alpha}{_{[\beta,\gamma]}}'\right)n^\delta \theta_\alpha\phi^\gamma \nonumber \\ =&-\frac{1}{2\bar{r}}\phi^\gamma{\partial_\theta}\partial_{\bar{r}}{\mathcal{B}}_\gamma+\frac{1}{2\bar{r}^2}\phi^\gamma{\partial_\theta}{\mathcal{B}}_\gamma+\frac{1}{2\bar{r}^2}n^\delta\partial_\theta\check\partial_\phi{\mathcal{B}}_\delta \nonumber \\ &-\theta^\alpha\phi^\gamma\partial_{\bar{r}}{\mathcal{C}}'_{\alpha\gamma}+\frac{1}{\bar{r}}\theta^\alpha n^\delta {\check\partial_\phi}{\mathcal{C}}'_{\alpha\delta} \label{4thtermR12}\end{aligned}$$ for the fourth term. 
Summing all these four terms, we obtain for $\hat{\mathfrak{R}}^1_2$: $$\begin{aligned} \hat{\mathfrak{R}}^1_2=&\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi{\mathcal{A}}-\frac{1}{2\bar{r}}\mathrm X_\alpha{\mathcal{B}}^{\alpha'}+\frac{1}{2\bar{r}}\mathrm X_\alpha\frac{\partial}{\partial\bar{r}}{\mathcal{B}}^\alpha-\frac{1}{2\bar{r}^2}\mathrm X_\alpha{\mathcal{B}}^\alpha-\frac{1}{\bar{r}^2}n^\alpha\partial_\theta\check\partial_\phi{\mathcal{B}}_\alpha\nonumber \\ &-\theta^\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}{\mathcal{C}}_{\alpha\gamma}-\frac{1}{\bar{r}}n^\beta\mathrm{X}^\alpha\frac{\mathrm d}{\mathrm d\lambda}{\mathcal{C}}_{\alpha\beta}-\frac{1}{\bar{r}^2}n^\beta\mathrm X^\alpha{\mathcal{C}}_{\alpha\beta}-n^\beta n^\gamma\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi{\mathcal{C}}_{\beta\gamma}\,. \label{R12}\end{aligned}$$ We now want to express this using the metric decompositions given in . As for the calculations of $\hat{\mathfrak{R}}^2_2$, we need to do some work to simplify the expressions for the terms in $\beta$, $\gamma$ and $C^\alpha$. For this, we apply the following relations: $$\begin{aligned} &\mathrm X_\alpha\nabla^\alpha=\frac{1}{\bar{r}}\mathrm X_\alpha\widehat\nabla^\alpha=\frac{2}{\bar{r}}\partial_\theta\check\partial_\phi\,,\qquad \mathrm X_\beta n^\beta=\theta^\alpha\phi^\beta+\theta^\beta\phi^\alpha\,,\nonumber \\ &\partial_\theta\check\partial_\phi n^\alpha=0\,,\qquad\partial_\theta\check\partial_\phi\left(n^\alpha n^\beta\right)=\theta^\alpha\phi^\beta+\theta^\beta\phi^\alpha\,. \label{helpfulrelations}\end{aligned}$$ First, we consider the scalar contributions to $\hat{\mathfrak{R}}^1_2$. By inserting $\mathcal{B}^\alpha=\beta^{,\alpha}+B^\alpha$ into the equation , we obtain for the terms in $\beta$: $$\begin{aligned} \left({\hat{\mathfrak{R}}^1_2}\right)_\beta=&-\frac{1}{2\bar{r}}\mathrm X_\alpha{\beta^{,\alpha}}'+\frac{1}{2\bar{r}}\mathrm X_\alpha\frac{\partial}{\partial\bar{r}}\beta^{,\alpha}-\frac{1}{2\bar{r}^2}\mathrm X_\alpha\beta^{,\alpha}-\frac{1}{\bar{r}^2}n^\alpha\partial_\theta\check\partial_\phi\beta^{,\alpha}=-\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi\beta'\,,\end{aligned}$$ where the second equality follows straight-forwardly from the relations . 
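As a quick sanity check (not part of the original derivation), the purely angular relations quoted above can be verified symbolically. The short SymPy sketch below assumes the standard spherical unit vectors $n^\alpha=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$, $\theta^\alpha=\partial_\theta n^\alpha$ and $\phi^\alpha=\sin^{-1}\theta\,\partial_\phi n^\alpha$, and reads $\check\partial_\phi$ as $\sin^{-1}\theta\,\partial_\phi$; these definitions are restated here as assumptions of the check.

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
n    = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
e_th = n.diff(th)                                 # theta^alpha
e_ph = n.diff(ph) / sp.sin(th)                    # phi^alpha
dphi = lambda f: sp.diff(f, ph) / sp.sin(th)      # "checked" phi-derivative

# partial_theta check-partial_phi n^alpha = 0
assert all(sp.simplify(sp.diff(dphi(n[a]), th)) == 0 for a in range(3))

# partial_theta check-partial_phi (n^alpha n^beta) = theta^alpha phi^beta + theta^beta phi^alpha
for a in range(3):
    for b in range(3):
        lhs = sp.diff(dphi(n[a] * n[b]), th)
        rhs = e_th[a] * e_ph[b] + e_th[b] * e_ph[a]
        assert sp.simplify(lhs - rhs) == 0
print("angular identities verified")
```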
Using $\mathcal{C}^{\alpha\beta}=\delta^{\alpha\beta}\varphi+\gamma^{,\alpha\beta}+C^{(\alpha,\beta)}+C^{\alpha\beta}$, we obtain for the terms in $\gamma$: $$\begin{aligned} \left({\hat{\mathfrak{R}}^1_2}\right)_\gamma=&-\theta^\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}\gamma_{,\alpha\gamma}-\frac{1}{\bar{r}}n^\beta\mathrm X^\alpha\frac{\mathrm d}{\mathrm d\lambda}\gamma_{,\alpha\beta}-\frac{1}{\bar{r}^2}n^\beta\mathrm X^\alpha\gamma_{,\alpha\beta}-n^\beta n^\gamma\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi\gamma_{,\beta\gamma} \nonumber \\ =&-\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi\gamma''\,, \end{aligned}$$ where for the second equality we used that $$\begin{aligned} n^\beta\mathrm X^\alpha\gamma_{,\alpha\beta}=&\frac{\partial}{\partial\bar{r}}\left(\frac{2}{\bar{r}}\partial_\theta\check\partial_\phi\gamma\right)-\gamma_{,\alpha\beta}\mathrm X^\alpha n^\beta=\frac{2}{\bar{r}}\frac{\partial}{\partial\bar{r}}\partial_\theta\check\partial_\phi\gamma-\frac{4}{\bar{r}^2}\partial_\theta\check\partial_\phi\gamma\,,\end{aligned}$$ and that $$\begin{aligned} n^\beta n^\gamma\partial_\theta\check\partial_\phi\gamma_{,\beta\gamma}=&\partial_\theta\check\partial_\phi\left(n^\beta n^\gamma\gamma_{,\beta\gamma}\right)-\partial_\theta\check\partial_\phi\left(n^\beta n^\gamma\right)\gamma_{,\beta\gamma}-2n^\beta\left(\theta_\alpha\check\partial_\phi+\phi_\alpha\partial_\theta\right)\gamma_{,\beta\gamma} \nonumber \\ =&\partial_\theta\check\partial_\phi\frac{\partial^2}{\partial\bar{r}^2}\gamma+\frac{6}{\bar{r}^2}\partial_\theta\check\partial_\phi\gamma-\frac{4}{\bar{r}}\frac{\partial}{\partial\bar{r}}\partial_\theta\check\partial_\phi\gamma\,.\end{aligned}$$ The terms in $\alpha$ and $\varphi$ follow straight-forwardly from the expression . For the contribution of all scalar perturbations to $\hat{\mathfrak{R}}^1_2$, we obtain: $$\begin{aligned} \left(\hat{\mathfrak{R}}^1_2\right)_s=\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi\left(\alpha-\varphi-\beta'-\gamma''\right)=\frac{1}{\bar{r}^2}\left(\alpha_\chi-\varphi_\chi\right)\,.\end{aligned}$$ Now, we consider the vector contributions to $\hat{\mathfrak{R}}^1_2$. For the terms in $C^\alpha$, we have $$\begin{aligned} \left({\hat{\mathfrak{R}}^1_2}\right)_{C^\alpha}=&-\theta^\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}C_{(\alpha,\gamma)}-\frac{1}{\bar{r}}n^\beta\mathrm X^\alpha\frac{\mathrm d}{\mathrm d\lambda}C_{(\alpha,\beta)}-\frac{1}{\bar{r}^2}n^\beta\mathrm X^\alpha C_{(\alpha,\beta)} \nonumber \\ &-n^\beta n^\gamma\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi C_{(\beta,\gamma)}\,. 
\end{aligned}$$ By applying the relations $$\begin{aligned} \theta^\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}C_{(\alpha,\gamma)}=&\theta^{(\alpha}\phi^{\gamma)}\frac{\mathrm d^2}{\mathrm d\lambda^2}C_{\alpha,\gamma}=\frac{1}{2}\frac{\mathrm d^2}{\mathrm d\lambda^2}\left(\frac{1}{\bar{r}}\mathrm X^\alpha C_\alpha\right) \nonumber \\ =&\frac{1}{2\bar{r}}\frac{\mathrm d^2}{\mathrm d\lambda^2}\left(\mathrm X^\alpha C_\alpha\right)+\frac{1}{\bar{r}^2}\frac{\mathrm d}{\mathrm d\lambda}\mathrm X^\alpha C_\alpha+\frac{1}{\bar{r}^3}\mathrm X^\alpha C_\alpha\,, \end{aligned}$$ and $$\begin{aligned} \frac{\mathrm d}{\mathrm d\lambda}\left(n^\beta\mathrm{X}^\alpha C_{\alpha,\beta}\right)=&\frac{\mathrm d}{\mathrm d\lambda}\frac{\partial}{\partial\bar{r}}\left(\mathrm{X}^\alpha C_\alpha\right)-\frac{\mathrm d}{\mathrm d\lambda}\left(C_{\alpha,\beta}\mathrm{X}^\alpha n^\beta\right) \nonumber \\ =&-\frac{\mathrm d^2}{\mathrm d\lambda^2}\left(\mathrm X^\alpha C_\alpha\right)+\frac{\mathrm d}{\mathrm d\lambda}\left(\mathrm X^\alpha C'_\alpha\right)-\frac{1}{\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}\left(\mathrm X^\alpha C_\alpha\right)-\frac{1}{\bar{r}^2}\left(\mathrm X^\alpha C_\alpha\right)\,, \end{aligned}$$ and $$\begin{aligned} \partial_\theta\check\partial_\phi\left(n^\gamma C_{\beta,\gamma}\right)=&n^\gamma\partial_\theta\check\partial_\phi C_{\beta,\gamma}+\mathrm X^\gamma C_{\beta,\gamma}=n^\gamma\partial_\theta\check\partial_\phi C_{\beta,\gamma}+\frac{2}{\bar{r}}\partial_\theta\check\partial_\phi C_\beta\,,\end{aligned}$$ we can simplify the expression for the terms in $C^\alpha$ to: $$\begin{aligned} \left(\hat{\mathfrak{R}}^1_2\right)_{C_\alpha}=&-\frac{1}{2\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}\left(\mathrm X^\alpha C'_\alpha\right)-\frac{1}{2\bar{r}^2}\mathrm X^\alpha C'_\alpha-\frac{1}{\bar{r}^2}n^\beta\partial_\theta\check\partial_\phi C'_\beta\,.\end{aligned}$$ Combining this with the terms in $B^\alpha$ which are obtained from expression without further simplifications, the total contribution of vector perturbations to $\hat{\mathfrak{R}}^1_2$ are given by: $$\begin{aligned} \left(\hat{\mathfrak{R}}^1_2\right)_v=&-\frac{1}{2\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}\left(\mathrm X^\alpha \Psi_\alpha\right)-\frac{1}{2\bar{r}^2}\mathrm X^\alpha \Psi_\alpha-\frac{1}{\bar{r}^2}n^\beta\partial_\theta\check\partial_\phi\Psi_\beta\,.\end{aligned}$$ Summing up the scalar, the vector and also the tensor contributions, we obtain for $\hat{\mathfrak{R}}^1_2$: $$\begin{aligned} \hat{\mathfrak{R}}^1_2=&\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi\left(\alpha_\chi-\varphi_\chi\right)-\frac{1}{2\bar{r}}\frac{\mathrm d}{\mathrm d\lambda}\mathrm X^\alpha\Psi_\alpha-\frac{1}{2\bar{r}^2}\mathrm X^\alpha\Psi_\alpha-\frac{1}{\bar{r}^2}n^\alpha\partial_\theta\check\partial_\phi\Psi_\alpha \nonumber \\ &-\theta^\alpha\phi^\gamma\frac{\mathrm d^2}{\mathrm d\lambda^2}C_{\alpha\gamma}-\frac{1}{\bar{r}}n^\beta\mathrm X^\alpha\frac{\mathrm d}{\mathrm d\lambda}C_{\alpha\beta}-\frac{1}{\bar{r}^2}n^\beta\mathrm X^\alpha C_{\alpha\beta}-n^\beta n^\gamma\frac{1}{\bar{r}^2}\partial_\theta\check\partial_\phi C_{\beta\gamma}\,.\end{aligned}$$ Now, we can calculate the components $\check{\mathbb{D}}^1_2=\check{\mathbb{D}}^2_1$ of the distortion matrix by performing the integration, applying the integrals given in equation , which yields $$\begin{aligned} \check{\mathbb{D}}^1_2=&\left(\theta^\alpha\phi^\beta C_{\alpha\beta}\right)_o+\left(\theta^\alpha\phi^\beta 
C_{\alpha\beta}\right)_s-\frac{1}{2\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\mathrm X^\alpha\left(\Psi_\alpha+2n^\beta C_{\alpha\beta}\right) \nonumber \\ &-\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{\bar{r}_z\bar{r}}\right)\left(\partial_\theta\check\partial_\phi(\alpha_\chi-\varphi_\chi)-n^\alpha\partial_\theta\check\partial_\phi\Psi_\alpha-n^\beta n^\gamma\partial_\theta\check\partial_\phi C_{\beta\gamma}\right) \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{\bar{r}_z\bar{r}}\right)\left(\frac{1}{2}\mathrm X^\alpha\Psi_\alpha+n^\beta\mathrm X^\alpha C_{\alpha\beta}\right)\,. \label{D12result}\end{aligned}$$ By applying the relations $$\begin{aligned} &\partial_\theta\check\partial_\phi\left(n^\alpha\Psi_\alpha\right)=n^\alpha\partial_\theta\check\partial_\phi\Psi_\alpha+\mathrm X_\alpha\Psi^\alpha\,,\nonumber \\ &\partial_\theta\check\partial_\phi\left(n^\alpha n^\beta C_{\alpha\beta}\right)=n^\alpha n^\beta\partial_\theta\check\partial_\phi C_{\alpha\beta}+2n^\alpha\mathrm X^\beta C_{\alpha\beta}+C_{\alpha\beta}\mathrm X^\alpha n^\beta\,,\end{aligned}$$ the expression for $\check{\mathbb{D}}^1_2$ can be rewritten into $$\begin{aligned} \check{\mathbb{D}}^1_2=&\left(\theta^\alpha\phi^\beta C_{\alpha\beta}\right)_o+\left(\theta^\alpha\phi^\beta C_{\alpha\beta}\right)_s-\int_0^{\bar{r}_z}\frac{\mathrm d\bar{r}}{2\bar{r}}\,\mathrm X^\alpha\left(\Psi_\alpha+2n^\beta C_{\alpha\beta}\right) \nonumber \\ &-\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{\bar{r}_z\bar{r}}\right)\partial_\theta\check\partial_\phi\left(\alpha_\chi-\varphi_\chi-n^\alpha\Psi_\alpha-n^\beta n^\gamma C_{\beta\gamma}\right)\,, \label{D12result}\end{aligned}$$ which, up to a negative sign, is equal to the expression for the shear component $\check\gamma_2$. Gauge-Invariant Expression for the Distortion in the Luminosity Distance {#Appendix:LumDist} ======================================================================== Equation  for $\check{\mathbb{D}}^1_1$ and $\check{\mathbb{D}}^2_2$ yields, combined with the equation  for $\delta D=-\check\kappa$, the following expression for $\check\kappa$: $$\begin{aligned} \check\kappa=&-\delta z-\widehat{\Delta\nu}_o+\frac{\Delta\lambda_s}{\bar{r}_z}-\left(\alpha_\chi+H\chi-\frac{1}{2}n^\alpha n^\beta C_{\alpha\beta}\right)_o-\left(\varphi_\chi+H\chi-\frac{1}{2}n^\alpha n^\beta C_{\alpha\beta}\right)_s \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{2\bar{r}_z\bar{r}}\right)\left(\widehat\nabla^2(\alpha_\chi-\varphi_\chi)-n^\alpha\widehat\nabla^2\Psi_\alpha-n^\alpha n^\beta\widehat\nabla^2 C_{\alpha\beta}\right) \nonumber \\ &+\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\alpha_\chi+\varphi_\chi+2H\chi+\frac{1}{2}\widehat\nabla^\alpha\Psi_\alpha+n^\beta\widehat\nabla^\alpha C_{\alpha\beta}-n^\alpha n^\beta C_{\alpha\beta}\right) \nonumber \\ &-\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,(\bar{r}_z-\bar{r})\left(\varphi'_\chi-\alpha'_\chi\right)\,. \label{kappaJMresult}\end{aligned}$$ Due to the contributions of the perturbation quantities $\delta z$, $\widehat{\Delta\nu}_o$ and $\Delta\lambda_s$, the gauge-transformation property of this expression is not immediately evident. To deal with these terms, we relate the distortion $\Delta\lambda_s$ of the affine parameter to the distortion $\delta r$ of the radial coordinate. 
First, note that integrating the spatial part of the equation $\mathrm dx^\alpha/\mathrm d\lambda=\hat k^\alpha$ yields $$\begin{aligned} x^\alpha_s=\int_0^{\lambda_z+\Delta\lambda_s}\mathrm d\lambda\,\left(-n^\alpha-\delta n^\alpha\right)=\delta x^\alpha_o+\bar{r}_z-\Delta\lambda_s+\int_0^{\bar{r}_z}\mathrm d\lambda\,\delta n^\alpha\,,\end{aligned}$$ which means that for $\delta r=x^\alpha_sn_\alpha-\bar{r}_z$, we have $$\begin{aligned} \delta r=\delta x^\alpha_on_\alpha-\Delta\lambda_s+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,n_\alpha\delta n^\alpha\,.\end{aligned}$$ Note that the photon wavevector fulfills the null condition $\hat k^\mu\hat k_\mu=0$, which, to first order, reads: $$\begin{aligned} 0=n_\alpha\delta n^\alpha-\delta\nu-\mathcal{A}+\mathcal{B}_\alpha n^\alpha+\mathcal{C}_{\alpha\beta}n^\alpha n^\beta\,.\end{aligned}$$ This enables us to write the equation for $\delta r$ as $$\begin{aligned} \delta r=\delta x^\alpha_on_\alpha-\Delta\lambda_s+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\delta\nu+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\mathcal{A}-\mathcal{B}_\alpha n^\alpha-\mathcal{C}_{\alpha\beta}n^\alpha n^\beta\right)\,.\end{aligned}$$ Now, we want to express the integral over $\delta\nu$ in terms of metric perturbations, for which we apply the temporal part of the geodesic equation, $$\begin{aligned} \frac{\mathrm d\hat k^a}{\mathrm d\lambda}=-\hat\Gamma^a{_{bc}}\hat k^b\hat k^c=-\hat\Gamma^a{_{bc}}\hat{\bar{k}}^b\hat{\bar{k}}^c\,,\end{aligned}$$ where for the second equality we used that the Christoffel symbols of the conformally transformed metric vanish in the background. The equation for $\delta r$ now reads $$\begin{aligned} \delta r=\delta x^\alpha_on_\alpha-\Delta\lambda_s+\bar{r}_z\delta\nu_o+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,(\bar{r}_z-\bar{r})\delta\hat\Gamma^0+\int_0^{\bar{r}_z}\mathrm d\bar{r}\left(\mathcal{A}-\mathcal{B}_\alpha n^\alpha-\mathcal{C}_{\alpha\beta}n^\alpha n^\beta\right)\,. 
\label{exdeltar}\end{aligned}$$ The quantity $\delta\hat\Gamma^0\equiv\hat\Gamma^0{_{\mu\nu}}\hat{\bar k}^\mu\hat{\bar k}^\nu$ can be calculated using the expressions for the Christoffel symbols given in Appendix \[Appendix:JacobiMap\], which yields $$\begin{aligned} \delta\hat\Gamma^0=&\mathcal{A}'-2n^\alpha\mathcal{A}_{,\alpha}+\left(\mathcal{B}_{(\alpha,\beta)}+\mathcal{C}'_{\alpha\beta}\right)n^\alpha n^\beta\,, \nonumber \\ =&2\frac{\mathrm d}{\mathrm d\lambda}\left(\alpha_\chi+H\chi\right)-\left(\alpha_\chi-\varphi_\chi\right)'+\frac{\partial}{\partial\bar{r}}\left(\Psi_{\alpha}n^\alpha+C_{\alpha\beta}n^\alpha n^\beta\right) \nonumber \\ &+\frac{\mathrm d}{\mathrm d\lambda}C_{\alpha\beta}n^\alpha n^\beta+\frac{\mathrm d^2}{\mathrm d\lambda^2}\left(\frac{\chi}{a}\right)\,.\end{aligned}$$ Furthermore, note that $$\begin{aligned} \mathcal{A}-\mathcal{B}_\alpha n^\alpha-\mathcal{C}_{\alpha\beta}n^\alpha n^\beta=(\alpha_\chi-\varphi_\chi)-\Psi_\alpha n^\alpha-C_{\alpha\beta}n^\alpha n^\beta+\frac{\mathrm d}{\mathrm d\lambda}\left(\frac{\chi}{a}+\mathcal{G}_\alpha n^\alpha\right)\,,\end{aligned}$$ which, combined with the expression for $\delta\hat\Gamma^0$, enables us to rewrite equation as $$\begin{aligned} \delta r_\chi=&-n_\alpha\mathcal{G}^\alpha_s-\Delta\lambda_s+n_\alpha\left(\delta x^\alpha+\mathcal{G}^\alpha\right)_o+\bar{r}_z\left(\delta\nu+2H\chi+2\alpha_\chi+C_{\alpha\beta}n^\alpha n^\beta+\frac{\mathrm d}{\mathrm d\lambda}\left(\frac{\chi}{a}\right)\right)_o \nonumber \\ &-\int_0^{\bar{r}_z}\mathrm d\bar{r}\left(\alpha_\chi+\varphi_\chi+\Psi_\alpha n^\alpha+2C_{\alpha\beta}n^\beta n^\alpha\right)-2\int_0^{\bar{r}_z}\mathrm d\bar{r}\,H\chi \nonumber \\ &-\int_0^{\bar{r}_z}\mathrm d\bar{r}\,(\bar{r}_z-\bar{r})\left[(\alpha_\chi-\varphi_\chi)'-\frac{\partial}{\partial\bar{r}}\left(\Psi_{\alpha}n^\alpha+C_{\alpha\beta}n^\alpha n^\beta\right)\right]\,. \label{Deltar}\end{aligned}$$ We can now use this equation to substitute $\Delta\lambda_s$ in the expression for $\check\kappa$, which yields $$\begin{aligned} \check\kappa=&-\frac{\delta r_\chi}{\bar{r}_z}+\frac{1}{\bar{r}_z}n_\alpha\left(\delta x^\alpha+\mathcal{G}^\alpha\right)_o-\delta z_\chi+\left(\frac{3}{2}n^\alpha n^\beta C_{\alpha\beta}-n_\alpha V^\alpha\right)_o-\left(\varphi_\chi-\frac{1}{2}n^\alpha n^\beta C_{\alpha\beta}\right)_s \nonumber \\ &-\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{2\bar{r}_z\bar{r}}\right)\left(\widehat\nabla^2(\alpha_\chi-\varphi_\chi)-n^\alpha\widehat\nabla^2\Psi_\alpha-n^\alpha n^\beta\widehat\nabla^2 C_{\alpha\beta}\right) \nonumber \\ &+\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\frac{\bar{r}_z-\bar{r}}{2\bar{r}_z}\right)\frac{\partial}{\partial\bar{r}}\left(\Psi_\alpha n^\alpha+2C_{\alpha\beta}n^\alpha n^\beta\right)+\frac{1}{2\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\,\left(\widehat\nabla^\alpha\Psi_\alpha+n_\beta\widehat\nabla^\alpha C_{\alpha\beta}\right) \nonumber \\ &-\frac{1}{\bar{r}_z}\int_0^{\bar{r}_z}\mathrm d\bar{r}\left(\Psi_\alpha n^\alpha+2C_{\alpha\beta}n^\beta n^\alpha\right)\,, \label{newhatkappa}\end{aligned}$$ where $\delta z_\chi=\delta z+H_s\chi_s$ and $\delta r_\chi=\delta r+n_\alpha\mathcal{G}^\alpha_s$ are gauge-invariant quantities as discussed in Section \[Subsec:Shear\]. Furthermore, we used the relation $$\begin{aligned} \widehat{\Delta\nu}_o=\left(\delta\nu+H\chi+\frac{\mathrm d}{\mathrm d\lambda}\left(\frac{\chi}{a}\right)+\alpha_\chi+n^\alpha V_\alpha\right)_o\,,\end{aligned}$$ which is a rewritten form of equation for $\delta\nu_o$. 
Finally, we can use the relations $$\begin{aligned} &\widehat\nabla^2\left(n^\alpha\Psi_\alpha\right)=-2n^\alpha\Psi_\alpha+n^\alpha\widehat\nabla^2\Psi_\alpha+2\widehat\nabla^\alpha\Psi_\alpha\,, \nonumber \\ &\widehat\nabla^2\left(n^\alpha n^\beta C_{\alpha\beta}\right)=-2n^\alpha n^\beta C_{\alpha\beta}+n^\alpha n^\beta\widehat\nabla^2C_{\alpha\beta}+4\widehat\nabla^\beta\left(C_{\alpha\beta} n^\alpha\right)\,, \nonumber \\ &\frac{\partial}{\partial\bar{r}}\left(\Psi_\alpha n^\alpha+2C_{\alpha\beta}n^\alpha n^\beta\right)=-\frac{1}{\bar{r}}\widehat\nabla^\alpha\left(\Psi_\alpha+2C_{\alpha\beta}n^\beta\right)-\frac{2}{\bar{r}}C_{\alpha\beta}n^\alpha n^\beta\,,\end{aligned}$$ to rewrite the expression for $\check\kappa$ into the expression .
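The first of the relations used above is again a purely angular statement and can be spot-checked symbolically. The SymPy sketch below treats $\Psi_\alpha$ as three arbitrary functions of $(\theta,\phi)$ and assumes that $\widehat\nabla^2$ acts on the Cartesian components as the unit-sphere Laplacian, with $\widehat\nabla^\alpha=\theta^\alpha\partial_\theta+\phi^\alpha\check\partial_\phi$; both definitions are assumptions made explicit for the check.

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
n    = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
e_th = n.diff(th)
e_ph = n.diff(ph) / sp.sin(th)
Psi  = [sp.Function(f'Psi{a}')(th, ph) for a in range(3)]      # arbitrary Psi_alpha(theta, phi)

# unit-sphere Laplacian acting on a scalar (each Cartesian component separately)
lap = lambda f: sp.diff(f, th, 2) + sp.cot(th)*sp.diff(f, th) + sp.diff(f, ph, 2)/sp.sin(th)**2

nPsi = sum(n[a]*Psi[a] for a in range(3))                       # n^alpha Psi_alpha
grad = sum(e_th[a]*sp.diff(Psi[a], th) + e_ph[a]*sp.diff(Psi[a], ph)/sp.sin(th)
           for a in range(3))                                   # nabla-hat^alpha Psi_alpha
rhs  = -2*nPsi + sum(n[a]*lap(Psi[a]) for a in range(3)) + 2*grad

assert sp.simplify(lap(nPsi) - rhs) == 0
print("relation for nabla-hat^2(n^alpha Psi_alpha) verified")
```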
--- abstract: 'The newsroom in the online ecosystem is difficult to untangle. With the prevalence of social media, interactions between journalists and individuals have become visible, but the lack of understanding of the inner workings of the information feedback loop in the public sphere leaves most journalists baffled. Can we provide an organized view that characterizes journalist behavior at the individual level, so as to understand the ecosystem better? To this end, I propose the Poisson Factorization Machine (PFM), a Bayesian analogue of matrix factorization that assumes a Poisson distribution for the generative process. The model generalizes recent studies on Poisson Matrix Factorization to account for temporal interactions, which involve a tensor-like structure, and for label information. Two inference procedures are designed: one based on batch variational EM, and a stochastic variational inference scheme that scales efficiently with data size. An important novelty in this note is that I show how to stack layers of PFM to introduce a deep architecture. This work discusses some potential results of applying the model and explains how the resulting latent factors may be useful for analyzing latent behaviors in data exploration.' author: - | Pau Perng-Hwa Kung\ Media Lab\ Massachusetts Institute of Technology\ Cambridge, MA 01239\ `pernghwa@media.mit.edu`\ title: 'Deep Poisson Factorization Machines: factor analysis for mapping behaviors in journalist ecosystem' --- Intuition and Introduction ========================== The digital newsroom is rapidly changing. With the introduction of social media, the new public sphere offers greater mutual visibility and diversity. However, a full understanding of how the new system operates is still out of reach. In particular, the problem is made more challenging by data sparsity, since individual records of responses to any particular news event are limited. We believe that developing an analytical framework that integrates data from different domains can significantly help us characterize the behavioral patterns of journalists and understand how they operate on social media across different news topics and events. Figure 1 outlines a possible data mapping. Ingesting data from feeds published by news organizations, mapping the corresponding journalists to social media, and observing their activities will enable a clearer mapping of individual behaviors onto news event progression. Based on this data mapping, we can incorporate the general response of ordinary users on social media to construct a data feedback loop. ![A sampled data view of a network of relational data spanning news data and social media data. Blue nodes denote tweets, green nodes denote journalists, yellow nodes denote news articles, and red nodes denote news organizations. Edges are data relations and interactions.[]{data-label="PMF_ex"}](network){width="40.00000%"} ![image](PMF_ex){width="100.00000%"} We are interested in aggregated behaviors as well as individual behaviors that interact with the ecosystem. A way to achieve this aim is to design a mathematical model that captures the latent processes characterizing individual behavior and supports feature analysis at the aggregate level. In this work, we propose a model built on top of probabilistic matrix factorization, called the Poisson Factorization Machine, to describe the latent factors in which behavioral features are embedded. In particular, we first create a data design matrix by inferring links between data instances from different domains.
Figure 2 illustrates such a representation, where each row is a news article published by a news organization. The columns denote the features associated with the story, which can include the topics covered in the article, the named entities mentioned, as well as the social media engagement associated with journalists’ Twitter accounts and interactions from general users. We then apply our Poisson Factorization Machine to learn a lower-dimensional embedding of the latent factors that map to these features. Using this framework, we can answer several key questions: **query-** given any data instance or any subset of features as a query, can we identify potentially related instances and features through the learned latent feature mapping? **factor analysis-** can we describe the learned latent factors in a meaningful way to better understand the relations between features? **prediction-** given a new data instance, can we use its latent embedding to predict the instance’s response for unobserved features (e.g., features that have not yet occurred and therefore cannot be recorded)? Our proposed factorization model offers several advantages. First, the model is based on a hierarchical generative structure with a Poisson likelihood, which provides better sharing of statistical strength among factors and is a better match for human behavioral data, which are discrete counts. Second, our model generalizes earlier work on Poisson Matrix Factorization by extending the model to include response variables, enabling a form of supervised guidance. We offer batch as well as online variational inference to accommodate large-scale datasets. In addition, we show that it is possible to stack units of PFM to form a deep architecture. Third, we further extend our model to incorporate multi-way interactions among features. This provides natural feature grouping for more interpretable feature exploration. \[submission\] Methodology =========== In this section, we describe the details of the proposed Poisson Factorization Machine: the formal definition of the model, followed by the inference and prediction procedures. Consider a set of observed news articles, where for each article we have mapped the authors to their respective Twitter handles and extracted a set of behavior patterns around the time of posting the article. We denote this data matrix by $\mathcal{X} \in \mathbb{R}^{N\times D}$, where each entry $x_i$ is a $D$-dimensional feature vector associated with that article. We also have response variables $\mathcal{Y} \in \mathbb{R}^{N\times C}$ with a total of $C$ response types. In this work, we set $C = 1$, but the reasoning easily applies to higher-dimensional cases. Imagine there are $K$ latent factors, indexed by $k \in \{1,\cdots,K\}$, in the matrix factorization model. We define the Poisson Factorization Machine with the following generative process: $$\begin{aligned} \theta_{i,k} &\sim Gamma(a, ac) \\ \beta_{k,d} &\sim Gamma(b, b) \\ x_{i,d} &\sim Poisson(\textstyle \sum_{k=1}^{K}{\theta_{i,k}\beta_{k,d}}) \\ y_{i,c} &\sim Gaussian(\theta_i^T\eta, \sigma) \end{aligned}$$ where $\beta_k \in \mathbb{R}_{+}^{D}$ is the $k$th latent factor mapping for the feature vector, and $\theta_i \in \mathbb{R}_{+}^{K}$ denotes the factor weights. The hyperparameters of the model are $a, b, c$, together with $\eta \in \mathbb{R}^{K\times C}$ and $\sigma \in \mathbb{R}^{C\times C}$. Here, $c$ is a scalar used to tune the weights. To generate the response variable, for each instance we use the corresponding factor weights as the input to a normal linear regression model. The graphical model is shown in the figure below.
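Before turning to inference, a minimal generative sketch of this model may help fix ideas. The NumPy snippet below samples from the process above; the sizes $N, D, K$ and the hyperparameter values are illustrative choices (not values used in this work), and $\sigma$ is treated as the response variance.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 500, 50, 10              # articles, features, latent factors (illustrative)
a, b, c = 0.3, 0.3, 1.0            # illustrative hyperparameters
eta     = rng.normal(size=K)       # regression weights for the response
sigma   = 0.5                      # response variance (assumed interpretation of sigma)

theta = rng.gamma(shape=a, scale=1.0 / (a * c), size=(N, K))   # theta_ik ~ Gamma(a, ac)
beta  = rng.gamma(shape=b, scale=1.0 / b,       size=(K, D))   # beta_kd  ~ Gamma(b, b)
X     = rng.poisson(theta @ beta)                              # x_id ~ Poisson(sum_k theta_ik beta_kd)
y     = rng.normal(theta @ eta, np.sqrt(sigma))                # y_i  ~ N(theta_i^T eta, sigma)

print(X.shape, y.shape, X.mean())
```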
In this work, we leave $a, b$ predefined, and update the other hyperparameters during model fitting. Notice the introduction of the auxiliary variable $z$. It is employed to facilitate posterior variational inference by allowing us to develop closed-form coordinate ascent updates for the parameters. More specifically, by the Superposition Theorem \[4\], the Poisson distribution governing $x$ can be decomposed into $K$ independent Poisson variables, $x_{id} = \sum_{k=1}^K z_{idk}$ where $z_{idk} \sim Poisson(\theta_{ik}\beta_{kd})$. Effectively, the data likelihood can be viewed as marginalizing over $z$ (if we denote all hidden parameters by $\pi$), but keeping $z$ unmarginalized maintains flexibility for the inference later on: $$P(y_{i,c}, x_{i,d}| \pi) = \displaystyle \sum_{z_{idk}}{P(y_{i,c},x_{i,d},z_{i,d,k}|\pi_{-z})}$$ The intuition for selecting a Poisson model comes from the statistical property that the Poisson distribution models count data, which is more suitable for sparse behavioral patterns than traditional Gaussian-based matrix factorization. Moreover, Poisson factorization copes well with data sparsity, since it penalizes zeros less strongly than a Gaussian distribution, which makes it desirable for weakly labeled data \[1\]. The Gamma priors follow the choice originally suggested in \[3\]\[5\]. Notice that the response variable need not be Gaussian and can vary depending on the nature of the input dataset. In fact, since our proposed learning and inference procedure is very similar to that in \[1\], our model generalizes to Generalized Linear Models, which include other distributions such as Poisson, Multinomial, etc. *[Figure: graphical model of the Poisson Factorization Machine. The hyperparameters $(a,ac)$ generate $\theta_{i,k}$ and $(b,b)$ generate $\beta_{k,d}$; $\theta_{i,k}$ and $\beta_{k,d}$ jointly generate $X_{i,d}$, while $\theta_{i,k}$ and $(\eta,\sigma)$ generate the response $Y$.]* Posterior Inference ------------------- For posterior inference, we adopt a variational EM algorithm, similar to the one employed in the Supervised Topic Model \[1\]. After presenting the derivation, we also present a stochastic variational inference analogue of the algorithm.
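Before deriving the updates, a one-line numerical illustration of the superposition property used above may be helpful: summing $K$ independent Poisson draws with rates $\theta_{ik}\beta_{kd}$ is distributionally equivalent to a single Poisson draw with the summed rate. The rates below are illustrative random values.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n  = 10, 100000
rates = rng.gamma(1.0, 1.0, size=K)                 # illustrative theta_k * beta_k values

as_sum    = rng.poisson(rates, size=(n, K)).sum(axis=1)   # sum of K independent Poissons
as_single = rng.poisson(rates.sum(), size=n)              # one Poisson with the summed rate

print(as_sum.mean(), as_single.mean())              # both close to rates.sum()
print(as_sum.var(),  as_single.var())               # Poisson: variance equals the mean
```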
First, we consider a fully factorized variational distribution for the parameters: $$q(\boldsymbol{\theta}, \boldsymbol{\beta}, \boldsymbol{z}) = \prod_{k=1}^K{\Big(\prod_{i=1}^N{q(\theta_{ik})\prod_{d=1}^D{q(z_{idk})}}\Big)\Big(\prod_{d=1}^D{q(\beta_{kd})}\Big)}$$ For the above variational distribution, we choose the following distributions to reflect the generative process in the Poisson Factorization Model: $$\begin{aligned} q(\theta_{i,k}) &= Gamma(\gamma_{ik}, \chi_{ik}) \\ q(\beta_{k,d}) &= Gamma(\nu_{kd}, \lambda_{kd}) \\ q(z_{i,k,d}) &= Multinomial(\overrightarrow{\phi_{id}})\end{aligned}$$ The log likelihood for the data features is calculated and bounded by: $$\begin{aligned} ln\ p(x_{1:N}|\pi) &= ln \int_{\theta}\int_{\beta}p(\theta, \beta, x, y | \pi_{-\theta,-\beta}) \frac{q(\theta,\beta)}{q(\theta, \beta)} \\ &\geq \mathbb{E_q}[ln\ p(\theta, \beta, x | \pi)] - \mathbb{E_q}[ln\ q(\theta, \beta)] \\ & = \mathbb{E_q}[ln\ p(\theta, \beta, x | \pi)] + H(q)\end{aligned}$$ where $H(q)$ is the entropy of the variational distribution $q$ and the resulting lower bound is our Evidence Lower Bound (ELBO). ### Batch inference The ELBO lower bounds the log-likelihood for all article items in the data. For the E-step in variational EM procedure, we run variational inference on the variational parameters. In the M-step, we optimize according to the ELBO with respect to model parameters $c$ (with $a$, $b$ held fixed), as well as regression parameters $\eta$ and $\sigma$. **Variational E-step** To calculate the updates, we try to first further break down $E_q[ln\ p(\theta, \beta, x | \pi)]$ and draw analogy to earlier works on Poisson Matrix Factorization \[2\]\[3\]\[5\]\[7\] to illustrate how we can directly leverage on previous results to derive coordinate ascent update rules for the proposed model. Notice that: $$\begin{gathered} \mathbb{E_q}[ln\ p(\theta, \beta, x | \pi)] = \mathbb{E_q}[ln\ p(\beta | b, b)] \\+ \mathbb{E_q}[ln\ p(\theta | a, ac)] + \sum_{n=1}^N {\mathbb{E_q}[ln\ p(z | x, \theta, \beta)]}\\ + \sum_{n=1}^N{\mathbb{E_q}[ln\ p(x | \theta, \beta)]+\mathbb{E_q}[ln\ p(y | \theta, \eta, \sigma)]} \end{gathered}$$ In fact, for variational parameters, when taking partial derivatives, the expression is identical to batch inference algorithm in ordinary Poisson Matrix Factorization, so we arrive at the following updates for $\theta_{i,k},\ \beta_{k,d}$: $$\overrightarrow{\phi_{idk}} := \frac{exp(\mathbb{E_q}[ln\ \theta_{ik}\beta_{kd}])}{\sum_{k=1}^K{exp(\mathbb{E_q}[ln\ \theta_{ik}\beta_{kd}])}}$$ $$\begin{aligned} \gamma_{i,k} &= a + \displaystyle \sum_{d=1}^D{x_{id}\phi_{idk}} \\ \chi_{i,k} &= ac + \displaystyle \sum_{d=1}^D{\mathbb{E_q}[\beta_{kd}]} \\ \nu_{k,d} &= b + \displaystyle \sum_{i=1}^N{x_{id}\phi_{idk}} \\ \lambda_{k,d} &= b + \displaystyle \sum_{i=1}^N{\mathbb{E_q}[\theta_{ik}]}\end{aligned}$$ Expectations for $\mathbb{E_q}[\theta_{ik}]$ and $\mathbb{E_q}[ln\ \theta_{ik}]$ are $\gamma_{ik}/\chi_{ik}$ and $\psi(\gamma_{ik})-ln\ \chi_{ik}$ respectively (where $\psi(\cdot)$ is the digamma function). **Variational M-step** In the M-step, we maximize the article-level ELBO with respect to $c$, $\eta$, and $\sigma$. 
We update $c$ according to $$\begin{aligned} c = \mathbb{E_q}[\overline{\theta} ] \ \ \ \Rightarrow \ \ \ c^{-1} = \frac{1}{NK}\sum_{i,k} \mathbb{E_q}[\theta_{ik}].\end{aligned}$$ For regression parameters, we take partial derivatives with respect to $\eta$ and $\sigma$, yielding for $\eta$: $$\begin{aligned} \frac{\partial \mathcal L}{\partial \eta} &= \frac{\partial}{\partial \eta}\Big(\frac{1}{\sigma}\Big)\displaystyle \sum_{i=1}^N \mathbb E[\eta^T \theta_i]y_i - \mathbb E[\frac{\eta^T\theta_i\theta_i^T\eta_i}{2}] \\ &= \frac{1}{\sigma}\displaystyle \sum_{i=1}^N \mathbb E[\theta_i] y_i-\eta^T\mathbb E[\theta_i\theta_i^T] \end{aligned}$$ where the latter term $\mathbb E[\theta_i\theta_i^T]$ is evaluated as $$\frac{1}{K^2}\Big(\sum_{j}\sum_{l \neq j}{\frac{\gamma_{ij}}{\chi_{ij}}\frac{\gamma_{il}}{\chi_{il}}} + \sum_j {(\frac{1}{ac^2}+\frac{\gamma_{ij}}{\chi_{ij}})} \Big).$$ Setting the partial derivative to zero and we arrive at the updating rule for $\eta$, as we collapse all instances $i$ into vector: $$\eta = \Big(\mathbb E[\theta^T\theta]\Big)^{-1} \mathbb E[\theta]^T y$$ For derivative of $\sigma$, we have: $$\begin{aligned} \frac{\partial \mathcal L}{\partial \sigma} &= \frac{-1}{2\sigma} + \Big(\frac{1}{\sigma}\Big)\displaystyle \sum_{i=1}^N \mathbb E[\eta^T \theta_i]y_i - \mathbb E[\frac{\eta^T\theta_i\theta_i^T\eta_i}{2}] \Rightarrow \\ \sigma &= \frac{1}{N}\Big \{y^Ty - y^T\mathbb E[\theta]\Big(\mathbb E[\theta^T\theta]\Big)^{-1} \mathbb E[\theta]^T y \Big\}\end{aligned}$$ To complete the procedure, we iterate between updating and optimizing parameters by following E-step and M-step until convergence. Algorithm 1 outlines the entire process. ### Stochastic Inference Batch variational EM algorithm might take multiple passes at the dataset before reaching convergence. In real world applications, this may not be desirable since social media data is more dynamic and large in scale. Recently, Hoffman et al. developed stochastic variational inference where each iteration we work with a subset of data instead, averaging over the noisy variational objective $\mathcal L$. We optimize by iterating between global variables and local variables. The global objective is based on $\beta$, and at each iteration $t$, with data subset $B_t$,we have: $$\mathcal L_t = \frac{N}{|B_t|}\displaystyle \sum_{i \in B_t}{\mathbb E_q[ln\ p(x_i, \theta_i | \beta)] + \mathbb E_q[ln\ p(\beta)] + H(q)}.$$ The main difference in the inference procedure for stochastic variational inference is in the updating part, where parameters $c$, $\eta$, and $\sigma$ are updated with the average empirically according to the mini-batch samples (where $\widehat{\theta} = \{\theta_{i};\ \forall i \in B_t\}$, $\widehat{y} = \{y_i;\ \forall i \in B_t\}$): $$\begin{aligned} (c^{(t)})^{-1} &= \frac{1}{|B_t|K}\sum_{i \in B_t,k} \mathbb{E_q}[\theta_{ik}] \\ \eta^{(t)} &= \Big(\mathbb E[\widehat{\theta}^T\widehat{\theta}]\Big)^{-1} \mathbb E[\widehat{\theta}]^T \widehat{y} \\ \sigma^{(t)} &= \frac{1}{|B_t|}\Big \{\widehat{y}^T\widehat{y} -\widehat{y}^T\mathbb E[\widehat{\theta}]\Big(\mathbb E[\widehat{\theta}^T\widehat{\theta}]\Big)^{-1} \mathbb E[\widehat{\theta}]^T \widehat{y} \Big\}\end{aligned}$$ As for $\beta$, since it is also global variable, we update by taking a natural gradient step based on the Fisher information matrix of variational distribution $q(\beta_{kd})$. 
We have the variational parameters for $\beta$ updated as: $$\begin{aligned} \nu_{k,d}^{(t)} &= (1-\rho_t)\nu_{kd}^{(t-1)} + \rho_t \Big (b+ \displaystyle \frac{N}{|B_t|}\sum_{i\in B_t}{x_{id}\phi_{idk}} \Big)\\ \lambda_{k,d}^{(t)} &= (1-\rho_t)\lambda_{kd}^{(t-1)} + \rho_t \Big(b + \displaystyle \frac{N}{|B_t|}\sum_{i\in B_t}{\mathbb{E_q}[\theta_{ik}]}\Big)\end{aligned}$$ where $\rho_t$ is a positive interpolation factor ensuring convergence. We adopt $\rho_t = (t_0 + t)^{-\kappa}$ for $t_0>0$ and $\kappa \in (0.5,1]$, which is shown to correspond to a natural gradient step in information geometry for stochastic optimization \[3\]\[5\]. Algorithm 2 summarizes the stochastic variational inference procedure. Query and Prediction -------------------- For any given query $x_i$ or feature subset $s \subseteq D$, we can infer the corresponding latent factor distribution by computing $\mathbb E[\theta_i]$ as well as $\mathbb E[\beta_{-s} | s]$, given that $\mathbb E[\beta]$ has already been learned by the model. Prediction follows by combining both latent factors to generate the response. Modeling Cross-Category Interactions ------------------------------------ Notice that we can let each $\theta_{i,k},\ \beta_{k,d}$ be an $f$-dimensional vector. In this way, we create an $f$-way interaction among latent factors. We can further impose a sparse deterministic prior $\textbf{B}$ with binary entries, where each entry encodes whether two latent factors should interact. Alternatively, we can use a diffusion tree prior, which considers a hierarchy of prior interactions. Under this construct, we can flatten the tensor data structure into a two-dimensional structure and apply the factor interactions to encode a higher level of feature dependency. Deep Poisson Factorization Machines =================================== Given the development of Poisson Factorization Machines, here we show that it is possible to stack layers of latent Gamma units on top of the PFM to introduce a deep architecture. In particular, for the latent factors $\beta$ and $\theta$, the parent variables $z_{l,k}$ at layer $l$, with corresponding weights $w_{l,k}$, feed the layer below through a link function $g_{l-1}$ evaluated at $z_{l}^{T} w_{l,k}$. As described in the recent work on Deep Exponential Families \[6\], for an exponential family of the form $$p(x) = h(x)\,\exp(\eta^T T(x) - a(\eta))\,,$$ we can control the conditional sufficient statistics, in particular the mean, through the link function $g_l$ via the gradient of the log-normalizer: $$E[T(z_{l,k})] = \nabla_{\eta}a(g_l(z_{l+1}^Tw_{l,k})).$$ Figure 4 shows the schematic of stacking layers of latent variables on the PFM latent states. **Gamma variables** The Gamma distribution is an exponential family distribution whose density is controlled by the natural parameters $\alpha,\ \beta$: $$p(z) = z^{-1}\exp(\alpha \log(z) - \beta z - \log \Gamma(\alpha) + \alpha \log(\beta))\,.$$ The link functions for $\alpha$ and $\beta$ are, respectively, $$g_{\alpha} = \alpha_{l},\ g_{\beta} = \frac{\alpha_{l}}{z_{l+1}^Tw_{l,k}}\,,$$ where $\alpha_{l}$ denotes the shape parameter at layer $l$, and $\textbf{w}$ is simply drawn from a Gamma distribution. **Gaussian variables** For the Gaussian response, we are interested in the switching variables $\theta$, which are obtained as part of the stacked Gamma variables.
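To make the stacking concrete, the following NumPy sketch samples a few Gamma layers with the link functions above, so that each unit has conditional mean $z_{l+1}^T w_{l,k}$. The top-layer prior, the weight hyperparameters, and the way the lowest layer is plugged into the PFM (here, as the factor weights $\theta$) are not fully specified in the text, so they are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N           = 200                  # data instances (illustrative)
layer_sizes = [5, 10, 20]          # top layer ... bottom layer (illustrative)
alphas      = [1.0, 1.0, 1.0]      # per-layer Gamma shapes alpha_l

z = rng.gamma(shape=1.0, scale=1.0, size=(N, layer_sizes[0]))      # assumed top-layer prior
for K_prev, K_next, alpha in zip(layer_sizes[:-1], layer_sizes[1:], alphas):
    W    = rng.gamma(shape=0.5, scale=1.0, size=(K_prev, K_next))  # w_{l,k} ~ Gamma (assumed params)
    mean = z @ W                                                   # conditional mean z_{l+1}^T w_{l,k}
    z    = rng.gamma(shape=alpha, scale=mean / alpha)              # z_{l,k} ~ Gamma(alpha_l, alpha_l/mean)

theta = z            # assumption: lowest layer plays the role of the PFM factor weights theta
print(theta.shape)
```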
*[Figure: schematic of the deep architecture. Higher-layer units $z_{l+1,k}$, with weights $w_{l+1,k}$, feed the units $z_{l,k}$, which in turn, with weights $w_{l,k}$, feed the PFM.]* **Inference** For inference, since all variables within each layer follow precisely the inference procedures derived earlier, we focus on the stacking variables. The ELBO for the generative process now includes the layer-wise $w$ and $z$. For each stacking variable $z$ and its corresponding $w$, we have: $$\begin{aligned} w_l &\sim q(w_l) \\ z_l &\sim q(z_l) \\ g_l &\sim \nabla \log q(z_l) \end{aligned}$$ For the parameter updates, the gradient of the variational bound, $\nabla_{z,w} \mathcal L$, can be used to update each stacking layer’s parameters. ### References {#references .unnumbered}
--- abstract: | We study the tradeoff between reliability, data rate, and delay for half-duplex MIMO multihop networks that utilize the automatic-retransmission-request (ARQ) protocol both in the asymptotic high signal-to-noise ratio (SNR) regime and in the [f]{}[i]{}[n]{}[i]{}[t]{}[e]{} SNR regime. We propose novel ARQ protocol designs that optimize these tradeoffs. In particular, we [f]{}irst derive the diversity-multiplexing-delay tradeoff (DMDT) in the high SNR regime, where the delay is caused only by retransmissions. This asymptotic DMDT shows that the performance of an $N$ node network is limited by the weakest three-node sub-network, and the performance of a three-node sub-network is determined by its weakest link, and, hence, the optimal ARQ protocol needs to equalize the performance on each link by allocating ARQ window sizes optimally. This equalization is captured through a novel Variable Block-Length (VBL) ARQ protocol that we propose, which achieves the optimal DMDT. We then consider the DMDT in the [f]{}inite SNR regime, where the delay is caused by both the ARQ retransmissions and queueing. We characterize the [f]{}inite SNR DMDT of the [f]{}ixed ARQ protocol, when an end-to-end delay constraint is imposed, by deriving the probability of message error using an approach that couples the information outage analysis with the queueing network analysis. The exponent of the probability of deadline violation demonstrates that the system performance is again limited by the weakest three-node sub-network. The queueing delay changes the consideration for optimal ARQ design: more retransmissions reduce decoding error by lowering the information outage probability, but may also increase message drop rate due to delay deadline violations. Hence, the optimal ARQ should balance link performance while avoiding signi[f]{}icant delay. We [f]{}ind the optimal [f]{}ixed ARQ protocol by solving an optimization problem that minimizes the message error subject to a delay constraint. author: - 'Yao Xie,[^1]' - 'Deniz G[ü]{}nd[ü]{}z,[^2]' - 'Andrea Goldsmith [^3] [^4]' bibliography: - 'yao\_proposal\_2.bib' title: 'The Diversity-Multiplexing-Delay Tradeoff in MIMO Multihop Networks with ARQ' --- Submitted to IEEE Trans. Info. Theory, April, 2011. Introduction ============ Multihop relays are widely used for coverage extension in wireless networks when the direct link between the source and destination is weak. The coverage of relay networks can be further enhanced by equipping the source, relays and destination with multiple antennas and using multiple-input-multiple-out (MIMO) techniques for beamforming. Indeed, MIMO can be used either for beamforming, which improves the reliability, or for spatial multiplexing, which increases the data rate [@Goldsmith2005]. These dual uses of MIMO gives rise to a diversity-multiplexing tradeoff in point-to-point and multihop MIMO systems, as discussed in more detail below. Recovery of packets received in error in multihop networks is usually achieved by automatic retransmission (ARQ) protocols. With an ARQ protocol, on each hop, the receiver feeds back to the transmitter a one-bit indicator signifying whether the message can be decoded or not. In case of failure the transmitter retransmits the same message (or incremental information, e.g., using a Raptor code [@Luby2002][@Shokrollahi2006]) until successful packet reception. 
The ARQ protocol can be viewed as either a one-bit feedback scheme from the receiver to the transmitter, or as a time diversity scheme employed by the transmitter. The ARQ protocol improves system reliability at a cost of increased delay. In order to design an effective ARQ protocol for multihop relay networks with MIMO nodes, [f]{}irst the fundamental tradeoffs between reliability, data rate, and delay of such systems must be determined, and then the protocol performance can be compared to this theoretical performance limit. A fundamental tradeoff in designing point-to-point MIMO systems is the tradeoff between reliability and data rate, characterized by the diversity-multiplexing tradeoff (DMT). The asymptotic DMT was introduced in [@ZhengTse03diversity] focusing on the asymptotically high SNR regime. The [f]{}inite SNR DMT was presented in [@Narasimhan2006]. The DMT has also been used to characterize the performance of classical three-node relay networks, with a direct link between the source and the destination, when the nodes have single-antenna (SISO) or multiple antennas for various relaying strategies [@LanemanTseWornell2003], [@YukselErkip2007], [@GunduzKhojastepourGoldsmithDMTMIMO2008]. The DMTs for the amplify-and-forward (AF) and decode-and-forward (DF) relaying strategies are discussed in [@LanemanTseWornell2003]. Several extensions of the amplify-and-forward strategy have been proposed recently, including the rotate-and-forward relaying [@PedarsaniLevequeYang2010] and [f]{}lip-and-forward relaying [@PedarsaniLevequeYang2010FF] strategies, which employ a sequence of forwarding matrices to create an arti[f]{}icial time-varying channel within a single slow fading transmission block in order to achieve a higher diversity gain. A dynamic decode-and-forward (DDF) protocol, in which the relay listens to the source transmission until it can decode the message and then transmits jointly with the source, is proposed in [@AzarianElGamal2005] and its DMT performance is shown to dominate the [f]{}ixed AF and DF schemes. The DDF protocol is shown to achieve the optimal DMT performance in MIMO multihop relay networks in [@GunduzKhojastepourGoldsmithDMTMIMO2008]. In this paper, we restrict our attention to multihop networks using the DF relaying strategy, since it enables us design an optimal ARQ protocol for MIMO multihop relay networks, as we will show later. Here we consider the diversity-multiplexing-delay tradeoff (DMDT), which was introduced in [@GamalDamen06themimo] as an extension of the DMT to include the delay dimension. Here the notion of delay is the time from the arrival of a message at the transmitter until the message is successfully decoded at the receiver, also known as the “sojourn time” in queueing systems. Delays are incurred for two reasons: (1) ARQ retransmissions: messages are retransmitted over each hop until correctly decoded at the corresponding receiver, and (2) queueing delay: ARQ results in a queue of messages to be retransmitted at the transmitter. Most works on DMDT assume in[f]{}inite SNR for the asymptotic analysis, and the queueing delay has been largely neglected. This is because in the high SNR regime, retransmission is a rare event [@HollidayGoldsmithPoor2008]. 
With this asymptotic in[f]{}inite SNR assumption, [@GamalDamen06themimo] presents the DMDT for a point-to-point MIMO system with ARQ, [@Tabet:IT:07] studies the DMDT for cooperative relay networks with ARQ and single-antenna nodes, and [@AzarianElGamalSchniter2008] proves the DMDT-optimality of ARQ-DDF for the multiple access relay and the cooperative vector multiple access channels with single antenna nodes. However, the asymptotically high SNR regime does not capture the operating conditions of typical wireless systems in practice, where errors during transmission attempts are not rare events [@HollidayGoldsmithPoor2008]. Hence, to fully characterize the DMDT performance in the [f]{}inite SNR regime, we must bring the queuing delay into the problem formulation. For a point-to-point MIMO system with a delay constraint and no feedback link, the tradeoff between the error caused by outage due to insuf[f]{}icient code length, and the error caused by delay exceeding a given deadline, has been studied in [@KittipiyakulEliaJavidi2009] using large deviation analysis. One of the goals of our paper is to study the effects of dynamic ARQ on the DMDT in relay networks. Hence, we consider a line network in which a node’s transmission is only received by adjacent nodes in the line. This is a reasonable approximation for environments where received power falls off sharply with distance (i.e., the path loss exponent is large). For this multi-hop channel model we show that the optimal ARQ protocol requires dynamic allocation of the ARQ transmission rounds based on the instantaneous channel state, and we obtain its exact DMDT characterization. The more general case where non-adjacent nodes receive a given node’s transmission is signi[f]{}icantly more complicated, and the optimal DMT is unknown for this case even with a single relay [@Tabet:IT:07]. The contribution of this paper is two-fold: (1) we characterize the DMDT of multihop MIMO relay networks in both the asymptotically high SNR regime and in the [f]{}inite SNR regime where, in the latter, queuing delay is incorporated into the analysis; (2) we design the optimal ARQ protocol in both regimes. Our work extends the DMDT analysis of a point-to-point MIMO system presented in [@HollidayGoldsmithPoor2008] to MIMO multihop relay networks. In the [f]{}irst part of the paper, we derive the DMDT in the asymptotic high SNR regime, where the delay is caused by retransmissions only. For a certain multiplexing gain, the diversity gain is found by studying the information outage probability. An information outage occurs when the receiver fails to decode the message within the maximum number of retransmission rounds allowed. Based on this formulation, for some multihop relay networks a closed-form expression for the DMDT can be identi[f]{}ied, whereas for general multihop networks, determining the DMDT can be cast as an optimization problem that can be solved numerically. The DMDT of a general multi-hop network can be studied by decomposing the network into three-node sub-networks. Each three-node sub-network consists of any three neighboring nodes in the network and the corresponding links between them. The asymptotic DMDT result shows that the performance of the multihop MIMO network, i.e., its DMDT, is determined by the three-node sub-network with the minimum DMDT. The DMDT of the three-node sub-network is again determined by its weakest link. 
Hence, the optimal ARQ protocol should balance the link DMDT performances on each hop by allocating ARQ window sizes among the hops. From this insight, we present an adaptive variable block-length (VBL) ARQ protocol and prove its DMDT optimality. Next, we study the DMDT in the [f]{}inite SNR regime, in which the delay is caused by both retransmissions and queueing. We introduce an end-to-end delay constraint such that a message is dropped once its delay exceeds this constraint. We characterize the [f]{}inite SNR DMDT by studying the probability of message error, which is dominated by two causes: the information outage event and the missing deadline event, when the block length is su[f]{}[f]{}iciently long [@ZhengTse03diversity]. Our approach couples the information-theoretical outage probability [@Narasimhan2006] with queueing network analysis. In contrast to the analysis under asymptotically high SNR, this does not yield closed-form DMDT expressions; however, it leads to a practically more relevant ARQ protocol design. The end-to-end delay that takes the queueing delay into consideration introduces one more factor into the DMDT tradeoff and the associated optimal ARQ protocol design. Speci[f]{}ically, allocating more transmission rounds to a link may improve its diversity gain and, hence, lower the information outage probability; however, it also increases the queueing delay and, hence, may also increase the overall error probability as more messages are dropped due to the violation of the deadline. Thus, an optimal ARQ protocol in the [f]{}inite SNR regime should balance these con[f]{}[l]{}[i]{}cting goals: our results will show that this leads to equalizing the DMDT performance of the links. We formulate the optimal ARQ protocol design as an optimization problem that minimizes the probability of message error under a given delay constraint. The end-to-end delay constraint requires us to take into account the message burstiness and queueing delays, which are known to be the main obstacles in merging the information-theoretical physical layer results with the network layer analysis [@EphremidesHajek1998]. We bridge this gap by modeling the MIMO multihop relay network as a queueing network. However, unlike in traditional queueing network theory, e.g., [@BolchGreinerdeMeer2006; @BertsekasGallager1992], the multihop network with half-duplex relay nodes is not a standard queue tandem, because node $i$ along the multihop queue tandem must wait to complete reception of the previous message by the node $i+2$ before it can transmit to node $i+1$ in the tandem. Another difference between our analysis and traditional queueing theory is that we study the amount of time a message waits in the queue (similar to [@HollidayGoldsmithPoor2008]) rather than just the number of messages awaiting transmission. This poses a challenge because the distribution associated with these random delays is hard to obtain [@Harrison1973], unlike the distribution of the number of messages for which a product form solution is available [@BolchGreinerdeMeer2006]. In [@BisnikAbouzeid2006] delay is studied by using a closed queue model and diffusion approximation. We derive the exponent of the deadline missing probability in our half-duplex multihop MIMO network by adapting the large deviation argument used in [@Ganesh1998]. 
The expression of the exponent again demonstrates that the system performance (in terms of the exponent) of multi-hop network with half-duplex relays is determined by the three-node sub-network with the minimum exponent. The remainder of this paper is organized as follows. Section \[sec:model\] introduces the system model and the ARQ protocol. Section \[sec:DMDT\_asymptotic\] presents the asymptotic DMDT analysis for various ARQ protocols while proving the DMDT optimality of the VBL ARQ. Section \[sec:DMDT\_finite\_SNR\] presents the [f]{}inite SNR DMDT with queueing delays, including some illustrative examples. [F]{}inally, Section \[sec:conclusion\] concludes the paper and discusses some future directions. System Model and ARQ Protocols {#sec:model} ============================== Channel Model ------------- ![MIMO multihop relay network with ARQ. The number of antennas on the $i$th node is $M_i$, and ${\boldsymbol}{H}\in \mathcal{C}^{M_{i+1}\times M_i}$ is the channel matrix from node $i$ to node $(i+1)$. []{data-label="Fig:scheme_all"}](multihop.eps){width="5in"} Consider an $N$-node multihop MIMO relay network. Node 1 is the source, node $N$ is the destination, while nodes 2 through $N-1$ serve as relays. Node $i$ has $M_i$ antennas for $i = 1, \cdots, N.$ The system model is illustrated in [F]{}ig. \[Fig:scheme\_all\]. We denote this MIMO relay network as $(M_1, M_2, \cdots, M_N)$. At the source, the message is encoded by a space-time encoder and mapped into a sequence of $L$ matrices, $\{{\boldsymbol}{X}_{1,l} \in \mathcal{C}^{M_1\times T}: l = 1, \cdots, L\}$, where $T$ is the block length, i.e., the number of channel uses of each block, and $L$ is the maximum number of end-to-end total ARQ rounds that can be used to transmit each message from the source to the destination. The rate of the space-time code is $R$. We de[f]{}ine one ARQ round as the transmitter sending a whole block code of the message to the receiver. We assume that the relays use the DF protocol: node $i$, $2\leq i \leq N-1$, decodes the message, and reencodes it with a space-time encoder into a sequence of $L$ matrices $\{ {\boldsymbol}{X}_{i,l} \in \mathcal{C}^{M_i \times T}: l = 1, \cdots, L \}$. The channel between node $i$ and node ($i+1$) is given by: $$\begin{aligned} {\boldsymbol}{Y}_{i,l} = \sqrt{\frac{SNR}{M_i}} {\boldsymbol}{H}_{i,l} {\boldsymbol}{X}_{i,l} + {\boldsymbol}{W}_{i,l}, \quad 1 \leq l \leq L,\end{aligned}$$ where ${\boldsymbol}{Y}_{i,l} \in \mathcal{C}^{M_{i+1} \times T}$, $i = 1, \cdots, N-1$, is the received signal at node $(i+1)$ in the $l$th ARQ round. Channels are assumed to be frequency non-selective, block Rayleigh fading and independent of each other, i.e., the entries of the channel matrices ${\boldsymbol}{H}_{i,l} \in \mathcal{C}^{M_{i+1}\times M_i}$ are independent and identically distributed (i.i.d.) complex Gaussian with zero mean and unit variance. The additive noise terms ${\boldsymbol}{W}_{i,l}$ are also i.i.d. circularly symmetric complex Gaussian with zero mean and unit variance. The forward communication links and ARQ feedback links only exist between neighboring nodes. Other assumptions we have made for the channel model are as follows: - We consider half-duplex relays, that is, the relays cannot transmit and receive at the same time. - We assume a short-term power constraint at each node for each block code, given by $ {\mathbb{E}}\{{\mathop{\bf tr}}({\boldsymbol}{X}_{i,l}^\dag {\boldsymbol}{X}_{i,l})\} \leq M_i $, $\forall i, l$. 
Here ${\mathbb{E}}\{\cdot\}$ denotes expectation, and $\dag$ denotes the Hermitian transpose. A long-term power constraint would allow us to adapt the transmit power and achieve a power control gain, as we briefly discuss later in the paper. In the following results we assume a short-term power constraint in order to focus on the diversity gain achieved by the ARQ protocol.

- We consider both the long-term static channel model, in which ${\boldsymbol}{H}_{i,l} = {\boldsymbol}{H}_i$ for all $l$, i.e., the channel state remains constant during all the ARQ rounds of a hop and is independent from hop to hop; and the short-term static channel model, in which the ${\boldsymbol}{H}_{i,l}$ are drawn i.i.d. across ARQ rounds $l$. The long-term static channel assumption is the worst case in terms of the achievable diversity with a maximum of $L$ ARQ rounds [@GamalDamen06themimo], because there is no time diversity gain. The long-term static channel model may be suitable for modeling indoor applications such as Wi-Fi, while the short-term static channel model suits applications with higher mobility, such as outdoor cellular systems.

Multihop ARQ Protocols
----------------------

Consider a family of multihop ARQ protocols in which the following standard ARQ protocol is used over each hop. The receiver in each hop tries to decode the message after or during one round, depending on whether the synchronization is per-block based or per-channel-use based. Once it is able to decode the message, a one-bit acknowledgement (ACK) is fed back to the transmitter, which triggers the transmission of the next message. If the receiver cannot decode the message after one ARQ round, a negative acknowledgement (NACK) is fed back to the transmitter, and the transmitter sends the next block of the code, which carries additional information for the same message. The retransmission over the $i$th hop continues for a maximum number of $L_i$ rounds, called the ARQ window size. Once the ARQ window size is reached without successful decoding of the message, the message is discarded, causing an information outage, and the next message is transmitted. The sum of the ARQ window sizes is upper bounded by $L>0$:
$$\sum_{i=1}^{N-1} L_i\leq L.$$
We consider several ARQ protocols that differ in how the available ARQ rounds are allocated among the hops:

- A fixed ARQ protocol, which allocates a fixed ARQ window size $L_i$ to the transmitter of node $i$, $i = 1, \cdots, N-1$, such that $\sum_{i=1}^{N-1}L_i = L$.

- An adaptive ARQ protocol, in which the allocation of the ARQ window size per hop is not fixed but adapted to the channel state: the transmitter of a node can keep retransmitting as long as the total ARQ window size $L$ has not been reached. We further consider two types of adaptive ARQ based on different synchronization levels:

    - Fixed-Block-Length (FBL) ARQ protocol: the synchronization is per-block based, and the transmission of a message over each hop spans an integer number of ARQ rounds.

    - Variable-Block-Length (VBL) ARQ protocol: the synchronization is per-channel-use based. The receiver can send an ACK as soon as it can decode the message, and the transmitter starts transmitting a new message without waiting for the beginning of the next channel block. VBL has a finer time resolution than FBL and uses the available channel blocks more efficiently, at the cost of higher synchronization complexity.

We assume that the ARQ feedback links have zero delay and are error-free.
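To make the round-accounting of these protocols concrete, the following Python sketch compares, by Monte Carlo simulation, how often each protocol exhausts its round budget over two hops. The exponential distribution used for the per-hop decoding times is a placeholder chosen purely for illustration (in the analysis below these times are induced by the random channel capacities), and the values of $L$, $L_1$, $L_2$ are likewise illustrative.

```python
import numpy as np

# Monte Carlo sketch of how the three ARQ policies spend a total budget of
# L rounds over two hops.  The per-hop "rounds needed to decode" t1, t2 are
# drawn from a placeholder exponential distribution for illustration only.
rng = np.random.default_rng(0)
L, L1, L2 = 4, 2, 2           # total budget and a fixed per-hop split
n = 100_000
t1 = rng.exponential(1.2, n)  # hypothetical decoding times (in blocks)
t2 = rng.exponential(1.2, n)

outage_fixed = (np.ceil(t1) > L1) | (np.ceil(t2) > L2)  # per-hop windows
outage_fbl   = np.ceil(t1) + np.ceil(t2) > L            # block-synchronized sharing
outage_vbl   = t1 + t2 > L                              # per-channel-use sharing

for name, o in [("fixed", outage_fixed), ("FBL", outage_fbl), ("VBL", outage_vbl)]:
    print(f"{name:5s} outage frequency: {o.mean():.4f}")
```

As expected from the finer round sharing, the VBL outage frequency is never larger than that of FBL, which in turn is never larger than that of the fixed allocation.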
Asymptotic DMDT {#sec:DMDT_asymptotic}
===============

We characterize the tradeoff among the data rate (measured by the multiplexing gain $r$), the reliability (measured by the diversity gain $d$), and the delay by the asymptotic DMDT of a system with ARQ. Following the framework of [@ZhengTse03diversity] and [@GamalDamen06themimo], we assume that the rate of transmission depends on the operating SNR $\rho$, and consider a family of space-time codes with block rate $R(\rho)$ scaling with the logarithm of the SNR as
$$R(\rho) = r\log\rho.$$

Diversity Gain
--------------

In the high SNR regime, the diversity gain is defined as the SNR exponent of the message error probability [@ZhengTse03diversity]. It is shown in [@ZhengTse03diversity] that the message error probability $P_e(\rho)$ is dominated by the information outage probability $P_{out}(\rho)$ when the block length is sufficiently large; in the following we make this assumption. The *information outage event* is the event that the accumulated mutual information at the receiver within the allowed ARQ window does not meet the data rate of the message and, therefore, the receiver cannot decode the message. Hence, the diversity gain for a family of codes is defined as
$$\begin{aligned} d(r) \triangleq -\lim_{\rho \rightarrow \infty} \frac{\log P_{e}(\rho)}{\log \rho}.\label{def_diversity}\end{aligned}$$
The DMT of an $M_1\times M_2$ MIMO system is denoted by $d^{(M_1, M_2)}(r)$ and defined as the supremum of the diversity gain $d(r)$ over all families of codes. The DMT of a point-to-point MIMO system is characterized in [@ZhengTse03diversity] by the following theorem:

\[DMT\] For a sufficiently long block length, the DMT $d^{(M_1,M_2)}(r)$ is given by the piecewise linear function connecting the points $(r, (M_1 - r)(M_2 - r))$, for $r = 0, \cdots, \min(M_1, M_2)$.

Asymptotic DMDT {#sec:asymptoticDMDT}
---------------

To characterize the asymptotic DMDT of a multihop network in the high SNR regime, we need the following quantity. Assume that the channel inputs at both the source and the relays are Gaussian with identity covariance matrices. Define $M_i^* = \min\{M_i, M_{i+1}\}$, for $i = 1, \cdots, N-1$. For the long-term static channel, let $\lambda_{i,1}, \cdots, \lambda_{i,M_i^*}$ be the nonzero eigenvalues of ${\boldsymbol}{H}_i {\boldsymbol}{H}_i^\dag$, for $i = 1, \cdots, N-1$, and suppose $\lambda_{i,j} = \rho^{-\alpha_{i,j}}$, for $j = 1, \cdots, M_i^*$, $i = 1, \cdots, N-1$. At high SNR, we can approximate the channel capacities ${C}_i({\boldsymbol}{H}_i)\triangleq \log \det \left({\boldsymbol}{I} + \frac{\rho}{M_i} {\boldsymbol}{H}_i {\boldsymbol}{H}_i^\dag\right)$ as $ C_i({\boldsymbol}{H}_i) \doteq \log \rho^{S_i({\boldsymbol}{\alpha}_i)}$ [^5], where
$$S_i({\boldsymbol}{\alpha}_i) \triangleq \sum_{j = 1}^{M_i^*}(1-\alpha_{i,j})^+,\label{S_def}$$
$(x)^+ \triangleq \max\{x, 0\}$, and ${\boldsymbol}{\alpha}_i \triangleq [\alpha_{i,1}\cdots \alpha_{i,M_i^*}]$. The quantity $S_i({\boldsymbol}{\alpha}_i)$ plays an important role in the asymptotic DMDT analysis: the closer the SNR exponents $\alpha_{i,j}$ are to unity, the closer the channel matrix is to being singular. Similarly, we can define $\{\alpha_{i, j}^l\}$ in the short-term static channel model and the corresponding matrix ${\boldsymbol}{\alpha}_i \in \mathbb{R}^{M_i^*\times L}$ with $[{\boldsymbol}{\alpha}_i]_{k,l} \triangleq {\alpha}_{i,k}^l$.
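The point-to-point DMT of Theorem \[DMT\] and the exponent $S_i({\boldsymbol}{\alpha}_i)$ in (\[S\_def\]) are simple enough to evaluate directly. The following small Python helpers are a minimal sketch of both quantities; the function names are our own and are used only for illustration.

```python
import numpy as np

def dmt_point_to_point(M1: int, M2: int, r: float) -> float:
    """Piecewise-linear DMT d^(M1,M2)(r): linear interpolation of the points
    (k, (M1-k)(M2-k)) for k = 0, ..., min(M1, M2)."""
    ks = np.arange(min(M1, M2) + 1)
    ds = (M1 - ks) * (M2 - ks)
    if r >= min(M1, M2):
        return 0.0
    return float(np.interp(r, ks, ds))

def S(alpha) -> float:
    """High-SNR capacity exponent S_i(alpha) = sum_j (1 - alpha_j)^+ from (S_def)."""
    return float(np.maximum(1.0 - np.asarray(alpha, dtype=float), 0.0).sum())

# Example: a 2x2 link loses diversity quickly as the multiplexing gain grows.
print([dmt_point_to_point(2, 2, r) for r in (0.0, 0.5, 1.0, 1.5, 2.0)])
# -> [4.0, 2.5, 1.0, 0.5, 0.0]
print(S([0.3, 0.9]))  # -> 0.8
```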
Proofs for the asymptotic DMDT analysis rely on the notion of *decoding time*, which is the time at which the accumulated information reaches $R(\rho)$. In the case of the short-term static channel, for the FBL ARQ and other block-based ARQ protocols, the decoding time for the $i$th node is given by $$\begin{aligned} t_i \triangleq \inf {\left\{} \newcommand{\rbb}{\right\}}t \in \mathbb{Z}^+: \sum_{l = 1}^{t} C_i({\boldsymbol}{H}_{i,l})\geq r\log \rho \rbb \doteq \inf{\left\{} \newcommand{\rbb}{\right\}}t \in \mathbb{Z}^+: \sum_{l = 1}^{t} S_i({\boldsymbol}{\alpha}_i^l) \geq r \rbb, \label{stop_3}\end{aligned}$$ where $\mathbb{Z}^+$ denotes the set of positive integers. For the VBL ARQ and other non-block-based ARQ protocols, the decoding time is given by $$\begin{aligned} t_i \triangleq \inf{\left\{} \newcommand{\rbb}{\right\}}t \in \mathbb{R}: \sum_{l = 1}^{\lfloor t \rfloor} S_i({\boldsymbol}{\alpha}_i^l) + (t - \lfloor t \rfloor) S_i({\boldsymbol}{\alpha}_i^{\lfloor t \rfloor+1}) \geq r \rbb, \label{stop_2}\end{aligned}$$ where $\lfloor x\rfloor$ denotes the largest integer smaller than $x$. Similarly we can de[f]{}ine the decoding time for the long-term static channel model. We can view the accumulated mutual information as a random walk with random increments $S_i({\boldsymbol}{\alpha}_i^l) > 0$ and stopping boundary $r$. In the following, we [f]{}irst state our results for the three-node network $(M_1, M_2, M_3)$, and then extend them to the general $N$-node network. ### Long-Term Static Channel The DMDT of the [f]{}ixed ARQ protocol in the case of the long-term static channel is given by the following theorem: \[thm\_static\_ARQ\] *With the long-term static channel assumption, the DMDT of the [f]{}ixed ARQ protocol for a three-node MIMO multihop network with window sizes $L_1$ and $L_2$, $L_i \in \mathbb{Z}^+$, $L_1 + L_2 \leq L$, is given by:* $$\begin{aligned} d_F^{(M_1, M_2, M_3)}(r,L_1, L_{2}|L_1+L_2\leq L) = \min_{i = 1, 2} {\left\{} \newcommand{\rbb}{\right\}}d^{(M_i, M_{i+1})}{\left(}\frac{r}{L_i}{\right)}\rbb. \label{dmt_multi_ARQ}\end{aligned}$$ Proof: See Appendix \[app:thm\_static\_ARQ\]. Consistent with our intuition, (\[dmt\_multi\_ARQ\]) shows that the performance of a three-node network is limited by the weakest link. This implies that if there were no constraint for the $L_i$’s to be integers, the optimal choice should equalize the diversity-multiplexing tradeoff of all the links, i.e., $$\begin{aligned} d^{(M_1, M_2)}{\left(}\frac{r}{L_1}{\right)}= d^{(M_2, M_3)}{\left(}\frac{r}{L_2}{\right)}. \label{link_perf_match}\end{aligned}$$ With the integer constraint we choose the integer $L_i$’s such that the minimum of $d^{(M_i, M_{i+1})}\left(\frac{r}{L_i}\right)$ for $i = 1, 2$ is maximized. The DMDT of the FBL ARQ protocol is a piece-wise linear function characterized by the following theorem: \[thm\_block\_DDF\] *With the long-term static channel assumption, the DMDT of the FBL ARQ protocol for a three-node MIMO multihop network is given by* $$\begin{aligned} d_{FBL}^{(M_1, M_2, M_3)}(r, L) = \min_{l_i \in \mathbb{Z}^+ : l_1 + l_2 = L-1} {\left\{} \newcommand{\rbb}{\right\}}d^{(M_1, M_2)}{\left(}\frac{r}{l_1}{\right)}+ d^{(M_2, M_3)}{\left(}\frac{r}{l_2}{\right)}\rbb.\end{aligned}$$ Proof: See Appendix \[app:thm\_block\_DDF\]. The DMDT of the VBL ARQ protocol cannot always be expressed in closed-form, but can be written as the solution of an optimization problem, as stated in the following theorem. 
\[thm\_DDF\] *With the long-term static channel assumption, the DMDT of the VBL ARQ protocol for a three-node MIMO multihop network is given by
$$\begin{aligned} d_{VBL}^{(M_1, M_2, M_3)}(r, L) = \inf_{({\boldsymbol}{\alpha}_1, {\boldsymbol}{\alpha}_2) \in \mathcal{O}} h(\{\alpha_{i,j}\}), \label{ddf1}\end{aligned}$$
where $h(\{\alpha_{i,j}\})\triangleq \sum_{i=1}^{2} \sum_{j = 1}^{M_i^*} (2j - 1 + |M_i - M_{i+1}|)\alpha_{i,j}$. The set $\mathcal{O}$ is defined as*
$$\begin{split} \mathcal{O} \triangleq & \left\{({\boldsymbol}{\alpha}_1, {\boldsymbol}{\alpha}_2)\in \mathbb{R}^{M_1^*}\times \mathbb{R}^{M_2^*}: \right. \\ & \left. \alpha_{i,1}\geq \cdots\geq \alpha_{i,M_i^*}\geq 0, \; i = 1, 2, \quad \frac{S_1({\boldsymbol}{\alpha}_1)S_2({\boldsymbol}{\alpha}_2)}{S_1({\boldsymbol}{\alpha}_1) + S_2({\boldsymbol}{\alpha}_2)} < \frac{r}{L} \right\}, \end{split} \label{feas_VBL2}$$
*and this is the optimal DMDT for a three-node network in the long-term static channel.*

Proof: See Appendix \[app:thm\_DDF\].

Note that the DMDT of the VBL ARQ protocol in the three-node network, under the long-term static channel assumption, is similar to the DMT of the DDF protocol without ARQ given in [@GunduzKhojastepourGoldsmithDMTMIMO2008], with a proper scaling of the multiplexing gain. The optimization problem in (\[ddf1\]) can be shown to be convex using techniques similar to Theorem 3 in [@GunduzKhojastepourGoldsmithDMTMIMO2008]. Closed-form solutions exist for some specific cases in which the optimization problem in (\[ddf1\]) takes a simple form and can be solved analytically. For example, for an $(M_1, 1, M_3)$ network, (\[ddf1\]) becomes
$$\begin{aligned} d_{VBL}^{(M_1, 1, M_3)}(r, L) = \inf_{\alpha_{1,1}, \alpha_{2,1}} &&M_1\alpha_{1,1} + M_3 \alpha_{2,1} \nonumber\\ \mbox{subject to}&& \frac{(1-\alpha_{1,1})^+ (1-\alpha_{2,1})^+}{(1-\alpha_{1,1})^+ + (1-\alpha_{2,1})^+} < \frac{r}{L}, \nonumber \\ && \alpha_{i,1} \geq 0, \quad i = 1, 2.\end{aligned}$$
The DMDT for this case (and for two other special cases) admits a closed-form expression, obtained by solving the simplified optimization problem above analytically.

### Short-Term Static Channel

The DMDTs of the fixed and the FBL ARQ protocols under the short-term static channel assumption are similar to those under the long-term static channel assumption, with additional scaling factors on the per-hop DMDTs due to the time diversity gain.

\[thm\_st\] *With the short-term static channel assumption, the DMDT of the FBL ARQ protocol for a three-node MIMO multihop network is given by*
$$\begin{aligned} d_{FBL}^{(M_1, M_2, M_3)}(r, L) = \min_{l_i \in \mathbb{Z}^+ : l_1 + l_2 = L - 1} \left\{\sum_{i = 1}^{2} l_i\, d^{(M_i, M_{i+1})}\left(\frac{r}{l_i}\right)\right\}.\nonumber\end{aligned}$$

Proof: See Appendix \[app:thm\_st\].
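Returning to the long-term static $(M_1, 1, M_3)$ special case above, the infimum in the simplified problem can be approximated by a coarse grid search over $(\alpha_{1,1}, \alpha_{2,1}) \in [0,1]^2$; outside this square the objective only grows while feasibility is unaffected, so no optimum is missed. The sketch below is a numerical check rather than the closed-form solution, and the network $(4,1,3)$, the budget $L = 4$, and the grid resolution are illustrative choices only.

```python
import numpy as np

def vbl_dmdt_M1_1_M3(M1: int, M3: int, r: float, L: int, grid: int = 801) -> float:
    """Coarse numerical approximation of the (M1, 1, M3) special case of (ddf1):
    minimize M1*a1 + M3*a2 over a1, a2 >= 0 subject to
    (1-a1)^+ (1-a2)^+ / [(1-a1)^+ + (1-a2)^+] < r/L."""
    a = np.linspace(0.0, 1.0, grid)
    a1, a2 = np.meshgrid(a, a, indexing="ij")
    s1, s2 = np.maximum(1.0 - a1, 0.0), np.maximum(1.0 - a2, 0.0)
    denom = s1 + s2
    safe = np.where(denom > 0, denom, 1.0)          # avoid division by zero
    rate = np.where(denom > 0, s1 * s2 / safe, 0.0)  # accumulated-rate exponent
    feasible = rate < r / L                          # the outage region O
    obj = M1 * a1 + M3 * a2
    return float(obj[feasible].min())

# Illustrative (4,1,3) network with L = 4 end-to-end ARQ rounds.
for r in (0.2, 0.5, 0.8):
    print(r, round(vbl_dmdt_M1_1_M3(4, 3, r, 4), 3))
```

As a sanity check, letting $r \rightarrow 0$ drives the approximation toward $\min(M_1, M_3 )$, the diversity of the weakest link, consistent with the weakest-link behavior noted earlier.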
\[thm\_st\_fractional\] With the short-term static channel assumption, the DMDT of the VBL ARQ protocol for a three-node MIMO multihop network is given by $$\begin{aligned} d_{VBL}^{(M_1, M_2, M_3)}(r, L) = \inf_{({\boldsymbol}{\alpha}_1, {\boldsymbol}{\alpha}_{2}) \in \mathcal{G}} \tilde{h}\left({\left\{} \newcommand{\rbb}{\right\}}\alpha_{i,j}^l \rbb\right), $$ where $\tilde{h}\left({\left\{} \newcommand{\rbb}{\right\}}\alpha_{i,j}^l \rbb\right)\triangleq \sum_{i=1}^{2} \sum_{j = 1}^{M_i^*} \sum_{l = 1}^{L}{\left(}2j - 1 + |M_i - M_{i+1}|{\right)}\alpha_{i,j}^l$, and the set $\mathcal{G}$ is de[f]{}ined as $$\begin{aligned} \mathcal{G} &\triangleq& {\left\{} \newcommand{\rbb}{\right\}}({\boldsymbol}{\alpha_1}, {\boldsymbol}{\alpha}_{2})\in\mathbb{R}^{M_1^* \times L}\times \mathbb{R}^{M_{2}^*\times L}: \right. \nonumber\\ &&~~~\left. \alpha_{i, 1}^l\geq \cdots \geq \alpha_{i, M_i^*}^l \geq 0, \forall i, l, \quad t_1 + t_2 > L \rbb.\label{VBL_region}\end{aligned}$$ The $t_i$’s, de[f]{}ined in (\[stop\_2\]), depend on the ${\boldsymbol}{\alpha}_i$’s. This is the optimal DMDT for a three-node MIMO multihop network in the short-term static channel. Proof: See Appendix \[app:thm\_st\_fractional\]. DMDT of an $N$-Node Network and Optimality of VBL ------------------------------------------------- Next, we extend our DMDT results to general $N$-node MIMO multihop networks. Note that in our model, since each transmitted signal is received only by the next node in the network, the transmission over the $i$th hop does not interfere with other transmissions. We will show the DMDT of this more general network is a minimization of the DMDTs of all its three-node sub-networks, due to half-duplexing and multihop diversity. The multihop diversity [@GunduzKhojastepourGoldsmithDMTMIMO2008] captures the fact that we allow simultaneous transmissions of multiple node pairs in half-duplex relay networks. For example, while node $i$ is transmitting to node $(i+1)$, node $(i+2)$ can also transmit to node $(i+3)$. This effect allows us to split a message into pieces, which are transmitted simultaneously in the network to increase the multiplexing gain. Using this rate-splitting scheme, we can prove the DMDT optimality of the VBL ARQ protocol. Due to their [f]{}ixed block length, we are only able to provide upper and lower bounds for DMDTs of [f]{}ixed ARQ and FBL ARQ in an $N$-node network. \[thm\_n\_VBL\] With the long-term or short-term static channel assumption, the DMDT of the VBL ARQ for an $N$-node MIMO multihop network is given by $$d_{VBL}^{(M_1, \cdots, M_N)}(r, L) = \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L),$$ and this is the optimal DMDT for an $N$-node network. See Appendix \[app:thm\_n\_VBL\]. Theorem \[thm\_n\_VBL\] says that the DMDT of an $N$-node system is determined by the smallest DMDT of its three-node sub-networks. The minimization in Theorem \[thm\_n\_VBL\] is over all possible three-node sub-networks instead of pairs of nodes, due to the half-duplex constraint: each low-rate piece of message has to wait for the previous piece to go through two hops before it can be transmitted. Theorem \[thm\_n\_VBL\] also says that the VBL ARQ is the optimal ARQ protocol in the general multi-hop network. 
\[thm\_n\_fixed\] With the long-term or short-term static channel assumption, the DMDT of [f]{}ixed ARQ for an $N$-node network is lower bounded and upper bounded, respectively, by $$d_{F}^{(M_1, \cdots, M_N)}(r, L_1, \cdots, L_{N-1}) \geq \min_{i = 1, \cdots, N-2} d_{F}^{(M_i, M_{i+1}, M_{i+2})}\left(\frac{L_{\max}}{L}r, L_i, L_{i+1}\;\middle\vert\;L_i + L_{i+1} \leq L_{\max}\right),$$ and $$d_{F}^{(M_1, \cdots, M_N)}(r, L_1, \cdots, L_{N-1}) \leq \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L),$$ where $L_{\max} \triangleq \max_{i=1}^{N-2}\left\{L_i + L_{i+1}\right\}$. See Appendix \[app:thm\_n\_fixed\]. \[thm\_n\_FBL\] With the long-term or short-term static channel assumption, the DMDT of the FBL ARQ for an $N$-node network is lower bounded and upper bounded, respectively, by $$d_{FBL}^{(M_1, \cdots, M_N)}(r, L) \geq \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L - N),\label{FBL_lower}$$ and $$d_{FBL}^{(M_1, \cdots, M_N)}(r, L) \leq \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L). \label{FBL_upper}$$ See Appendix \[app:thm\_n\_FBL\]. An intuitive explanation for the DMDT optimality of the VBL ARQ is as follows. Recall that $t_i$ is the number of channel blocks, including retransmissions, needed to decode the message over the $i$th hop. For a three-node network, we can illustrate the information outage region in the region of $t_1\times t_2$ values as in [F]{}ig. \[Fig:tau\]. The outage region of the VBL ARQ is smaller than those of the [f]{}ixed and the FBL ARQ. Due to its per-block based synchronization, the outage region boundary of the FBL ARQ is a piecewise approximation to that of the VBL ARQ. In the high SNR regime, we formalize the above intuition in the following corollary to Theorem \[thm\_n\_FBL\]. ![Outage regions of the [f]{}ixed ARQ, and two adaptive ARQs: the FBL and the VBL ARQs, with $L_1 + L_2 = L$.[]{data-label="Fig:tau"}](taus.eps){width="3.5in"} With the long-term or short-term static channel assumption, for an $N$-node MIMO multihop network, the DMDT of the FBL ARQ converges to that of the VBL ARQ when $L\rightarrow\infty$. \[thm\_conv\] Using (\[FBL\_lower\]) and (\[FBL\_upper\]), when $L\rightarrow\infty$, $$\begin{split} \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L) \geq d_{FBL}^{(M_1, \cdots, M_N)}(r, L) & \geq \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L[1- N/L])\\ & \xrightarrow{L\rightarrow \infty} \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L). \end{split}\label{FBL_near}$$ Power Control Gain with Long Term Power Constraint -------------------------------------------------- With the long-term power constraint and channel state information at the transmitter (CSIT), we can employ a power control strategy to further improve diversity. Let the SNR in the $l$th round be $\rho(l) = \rho^{g(l)}$, where $\rho$ is the average SNR, and $g(l)$ is the function de[f]{}ining the power control strategy. In the high SNR regime, similar to (\[S\_def\]) we can approximate channel capacities as $C_i ({\boldsymbol}{H}_i) \doteq \log \rho^{S'_i({\boldsymbol}{\alpha}_i)}$, where $S'_i({\boldsymbol}{\alpha}_i) = \sum_{j = 1}^{M_i^*}{\left(}g(l) - \alpha_{i,j}{\right)}^+$. Hence, with power control, all the asymptotic DMDT results in the previous sections hold with $S_i({\boldsymbol}{\alpha}_i)$ replaced by $S'_i({\boldsymbol}{\alpha}_i)$. 
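As a minimal illustration of this substitution, the helper below (a hypothetical function of our own, mirroring the $S$ helper sketched earlier) evaluates $S'_i({\boldsymbol}{\alpha}_i)$ for a given power-control exponent $g(l)$; setting $g(l)=1$ recovers the constant-power exponent of (\[S\_def\]).

```python
import numpy as np

def S_power_control(alpha, g_l: float) -> float:
    """Capacity exponent with power control: S'_i(alpha) = sum_j (g(l) - alpha_j)^+.
    Choosing g_l > 1 boosts the effective exponent relative to g(l) = 1."""
    return float(np.maximum(g_l - np.asarray(alpha, dtype=float), 0.0).sum())

# With g(l) = 1 this reduces to S_i(alpha); with g(l) = 1.5 the exponent grows.
print(S_power_control([0.3, 0.9], 1.0), S_power_control([0.3, 0.9], 1.5))
```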
Examples for Asymptotic DMDT
----------------------------

![The DMDT for a (4,1,3) multihop network with $L = 4$.[]{data-label="Fig:dmt_314"}](dmt_413.eps){width="3.1in"}

![The DMDT for a (2,2,2) multihop network with $L = 4$.[]{data-label="Fig:dmt_222"}](dmt_222.eps){width="3.2in"}

In this section we show some illustrative examples of the asymptotic DMDT. We first consider the long-term static channel model. For a three-node $(4,1,3)$ multihop network with $L=4$, Fig. \[Fig:dmt\_314\] shows the DMDT of the fixed ARQ with $L_1 = L_2 = L/2$, the DMDT of the fixed ARQ with the per-hop-performance-equalizing $L_1$ and $L_2$ satisfying (\[link\_perf\_match\]), as well as the DMDTs of the FBL and the VBL ARQs. Note that the DMDT of the VBL ARQ in Fig. \[Fig:dmt\_314\] is the optimal DMDT for the (4, 1, 3) network. We also consider a (2, 2, 2) network, whose DMDTs are shown in Fig. \[Fig:dmt\_222\]. Fig. \[Fig:dmt\_413\_L\] presents the three-dimensional DMDT surfaces of the VBL and the FBL ARQs for the $(4,1,3)$ multihop network. Note that as $L$ increases, the diversity gain at a given $r$ increases for both the FBL and the VBL ARQ protocols. Also note that the DMDT surface of the FBL ARQ is piecewise whereas that of the VBL ARQ is smooth, due to their different synchronization levels. Fig. \[Fig:dmt\_413\_L\_slice\] shows the cross sections of the surfaces in Fig. \[Fig:dmt\_413\_L\] at $L = 2$ and $L = 10$, which demonstrates the convergence of the DMDTs proved in Theorem \[thm\_conv\].

![The three-dimensional DMDT surface for a (4,1,3) network, with the FBL ARQ (left) and the VBL ARQ (right).[]{data-label="Fig:dmt_413_L"}](dmt_413_L_bb.eps "fig:"){width="3in"} ![The three-dimensional DMDT surface for a (4,1,3) network, with the FBL ARQ (left) and the VBL ARQ (right).[]{data-label="Fig:dmt_413_L"}](dmt_413_L.eps "fig:"){width="3in"}

![The slices of the DMDT surface in Figure \[Fig:dmt\_413\_L\] at $L = 2$ (left) and at $L = 10$ (right).[]{data-label="Fig:dmt_413_L_slice"}](dmt_413_L2.eps "fig:"){width="3in"} ![The slices of the DMDT surface in Figure \[Fig:dmt\_413\_L\] at $L = 2$ (left) and at $L = 10$ (right).[]{data-label="Fig:dmt_413_L_slice"}](dmt_413_L10.eps "fig:"){width="3in"}

![The DMDT for a (4, 1, 3) multihop network in the long-term static channel versus that in the short-term static channel.[]{data-label="Fig:dmt_413_s_l"}](short_versus_long_413.eps){width="3in"}

Next we consider the short-term static channel model. The DMDT of the (4,1,3) multihop network using the FBL ARQ is shown in Fig. \[Fig:dmt\_413\_s\_l\]. Note that the asymptotic DMDT in the short-term static channel model is not necessarily $L$ times the corresponding DMDT in the long-term static channel model; this differs from the point-to-point MIMO ARQ channel [@GamalDamen06themimo], where it is.

Finite SNR DMDT With Delay Constraint {#sec:DMDT_finite_SNR}
=====================================

In analyzing the finite SNR DMDT, we add a practical end-to-end delay constraint: each message has to reach the destination before the deadline; otherwise it is discarded. We characterize the finite SNR DMDT by studying the probability of message error.
With the delay constraint brought into the picture, the probability of message error has two components: the *information outage probability* and the *deadline missing probability*, which we analyze using the finite SNR DMT introduced in [@Narasimhan2006] and queueing network analysis, respectively. In the finite SNR regime, the multiplexing gain is defined as
$$\begin{aligned} r \triangleq \frac{R}{\log_2(1+M_r \rho)},\end{aligned}$$
where $M_r$ is the number of antennas at the receiver. In the following we consider only the long-term static channel model and the fixed ARQ protocol. We first introduce the queueing network model.

Queueing Network Model
----------------------

Messages enter the network at the source node and exit at the destination node, forming an open queueing network. Messages arrive at the source node as a Poisson process with a mean inter-arrival time of $\lambda$ blocks. As in the previous sections, the unit of time is one channel block consisting of $T$ channel uses. The end-to-end delay constraint is $k$ blocks. Each node can be viewed as a service station transmitting (possibly with several retransmissions) a message to the next node. The time node $i$ spends to successfully transmit a message to node $(i+1)$ is called the service time of the $i$th node; it depends on the channel state and is upper bounded by the ARQ window size $L_i$. The allocated ARQ window sizes satisfy $\sum_{i} L_i\leq k$. As an approximation, we assume that the random service times at node $i$ are i.i.d. across messages with an exponential distribution of mean $\mu(L_i)$ (the actual service time takes values in the interval $[0, L_i]$). Here $\mu(L_i)$, which we derive later, is the actual average service time of the ARQ process when the ARQ window size is $L_i$. With these assumptions we can treat each node as an $M/M/1$ queue. This approximation makes the problem tractable and captures the qualitative behavior of MIMO multihop networks. Messages enter the buffer and are processed according to the first-come-first-served (FCFS) rule. We assume $\mu(L_i) + \mu(L_{i+1})< \lambda$, $i = 1, \cdots, N-2$, so that the queues are stable, i.e., the waiting time at a node does not grow without bound. Burke’s theorem (see [@BolchGreinerdeMeer2006]) states that in an $M/M/1$ queue with Poisson arrivals, messages leave the server as a Poisson process. Hence, messages arrive at each relay (and at the destination node) as a Poisson process with rate $(1-p_i)/\lambda$, where $p_i$ is the probability that a message is dropped. When the SNR is reasonably high, the message drop probability is small and hence $1 - p_i \approx 1$.

Probability of Message Error
----------------------------

Denote by $W_n$ the total queueing delay experienced by the $n$th message transmitted from the source to the destination, and by $t_n^i$ the number of transmission rounds node $i$ would need to deliver the $n$th message to node $(i+1)$ if it could use an unlimited number of rounds. For the $n$th message, if it is not discarded due to an information outage, the total service time is $S_n = \sum_{i=1}^{N-1} \min\{t_n^i, L_i\}$ and the random end-to-end delay is $D_n = W_n + S_n$. Recall that, under the fixed ARQ protocol, a message is dropped once the number of retransmissions exceeds the ARQ window size of any hop, or once its end-to-end delay exceeds the deadline.
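As a quick numerical illustration of the delay model just introduced, the following sketch simulates the source queue of a three-node network under the exponential service-time approximation and estimates the probability that the end-to-end delay $D_n = W_n + S_n$ exceeds the deadline $k$. The back-to-back, single-queue treatment of the two hops and all numerical values ($\mu(L_1)$, $\mu(L_2)$, $\lambda$, $k$) are simplifying assumptions made only for this sketch.

```python
import numpy as np

# Monte Carlo sketch of the end-to-end delay D_n = W_n + S_n for a three-node
# network under the exponential service-time approximation described above.
# Simplifying assumption: the two hops of a message are served back-to-back
# from a single FCFS source queue, so one message's service time is
# Exp(mu1) + Exp(mu2).  The numbers below are illustrative only.
rng = np.random.default_rng(1)
mu1, mu2 = 1.5, 2.0   # mean per-hop service times mu(L_1), mu(L_2), in blocks
lam, k = 10.0, 5.0    # mean inter-arrival time and deadline, in blocks
n = 200_000

inter_arrivals = rng.exponential(lam, n)
services = rng.exponential(mu1, n) + rng.exponential(mu2, n)

# Lindley recursion for the FCFS waiting times W_n.
W = np.zeros(n)
for i in range(1, n):
    W[i] = max(0.0, W[i - 1] + services[i - 1] - inter_arrivals[i])
D = W + services
print("estimated P(D_n > k):", (D > k).mean())
```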
Hence, the message error probability of the $n$th message can be written as $$\begin{aligned} P_e &=& P {\left\{} \newcommand{\rbb}{\right\}}\cup_{i=1}^{N-1}\{t_n^i>L_i\}\rbb+ P{\left\{} \newcommand{\rbb}{\right\}}\cap_{i=1}^{N-1}\{t_n^i \leq L_i\} \cap\{D_n> k\} \rbb. \label{eqn22}\end{aligned}$$ The [f]{}irst term in (\[eqn22\]) is the message outage probability: $$P_{out}(\{L_i\}|\rho) \triangleq P {\left\{} \newcommand{\rbb}{\right\}}\cup_{i=1}^{N-1}\{t_n^i>L_i\}\rbb,\label{out_finiteSNR}$$ which is identical for any message $n$ since channels are i.i.d. The second term in (\[eqn22\]) is related to the deadline missing probability, and can be rewritten as $$\begin{split} P{\left\{} \newcommand{\rbb}{\right\}}\cap_{i=1}^{N-1}\{t_n^i \leq L_i\} \cap\{D_n> k\} \rbb = [1-P_{out}(\{L_i\}|\rho) ] P{\left\{} \newcommand{\rbb}{\right\}}D_n > k \middle\vert \cap_{i=1}^{N-1}\left\{ t_n^i \leq L_i \right\} \rbb \end{split}.$$ De[f]{}ine the stationary deadline missing probability: $$P_{deadline}{\left(}\{L_i\}|\rho, k{\right)}\triangleq \lim_{n\rightarrow\infty}P{\left\{} \newcommand{\rbb}{\right\}}D_n > k \middle\vert \cap_{i=1}^{N-1}\left\{ t_n^i \leq L_i \right\} \rbb.\label{deadline_finiteSNR}$$ In the following we derive (\[out\_finiteSNR\]) and (\[deadline\_finiteSNR\]). ### Information Outage Probability Since channels in different hops are independent, (\[out\_finiteSNR\]) becomes $$\begin{split} P_{out}{\left(}\{L_i\}|\rho{\right)}&= \sum_{i=1}^{N-1} P{\left\{} \newcommand{\rbb}{\right\}}t_n^i> L_i\rbb = \sum_{i=1}^{N-1}P{\left\{} \newcommand{\rbb}{\right\}}L_i C_i({\boldsymbol}{H}_i) < r\log_2(1+M_{i+1}\rho)\rbb \\ &\triangleq \sum_{i=1}^{N-1}P_{out,i}{\left(}L_i|\rho {\right)}, \end{split} \label{finite_SNR_pout}$$ which is a sum of the per-hop outage probabilities $P_{out, i}{\left(}L_i|\rho {\right)}$. Using results in [@Narasimhan2006] for point-to-point MIMO, we have $$P_{\rm{out}, i}{\left(}L_i|\rho {\right)}= \sup_{(b_1, \cdots b_{M_i^*}) \in \mathcal{B}_i} \prod_{l=1}^{M_i^*} \frac{\gamma {\left(}M_{i+1} - M_i + 2l - 1, \frac{M_i}{\rho}[(1+M_{i+1} \rho)^{b_l} - (1+ M_{i+1} \rho)^{b_{l-1}}] {\right)}}{\Gamma(M_{i+1} - M_i + 2l - 1)},\label{per-hop-out}$$ where the set $\mathcal{B}_i$ is given by $$\mathcal{B}_i = {\left\{} \newcommand{\rbb}{\right\}}(b_1, \cdots, b_{M_i^*}) \left| b_{l-1} < b_l < \frac{\frac{r}{L_i}-\sum_{k=1}^{l-1}b_k}{M_i^* - l +1} \right., l = 1, \cdots M_i^*-1; b_{M_i^*} = \frac{r}{L_i} - \sum_{k=1}^{M_i^*-1}b_k \rbb,$$ $\gamma(m, x) \triangleq \int_{0}^x t^{m-1} e^{-t} dt$ is the incomplete gamma function, and $\Gamma(m) \triangleq (m-1)!$ for a positive integer $m$. For orthogonal space-time block coding (OSTBC), we can derive a closed-form $P_{out,i}(L_i|\rho)$ using techniques similar to [@Narasimhan2006]: $$P_{\rm out, OSTBC}{\left(}\{L_i\}|\rho{\right)}= \sum_{i=1}^{N-1} P{\left\{} \newcommand{\rbb}{\right\}}r_s L_i\log_2{\left(}1+\frac{\rho}{M_{i}}\|{\boldsymbol}{H}_i\|_F^2 {\right)}\leq r\log_2(1+ M_{i+1}\rho)\rbb ,\label{32}$$ where $\|{\boldsymbol}{A}\|$ denotes the Frobenius norm of a matrix ${\boldsymbol}{A}$, and the spatial code rate $r_s$ is equal to the average number of independent constellation symbols transmitted per channel use. For example, $r_s = 1$ for the Alamouti space-time code [@Alamouti1998]. When ${\boldsymbol}{H}_i$ is Rayleigh distributed, its Frobenius norm has the Gamma(1, $M_i \cdot M_{i+1}$) distribution. 
Hence, (\[32\]) becomes: $$P_{\rm out, OSTBC}(\{L_i\}|\rho) = \sum_{i=1}^{N-1}\frac{1}{(M_i\cdot M_{i+1}-1)!}\gamma\left(\frac{M_i}{\rho}[(1+M_{i+1}\rho)^{\frac{r}{r_s L_i}}-1], M_i\cdot M_{i+1}\right). \label{OSTBC}$$ ### Deadline Missing Probability For a three-node network with half-duplex relay, there is only one queue at the source that incurs the queueing delay. For given $r$ and $\rho$, we can derive the stationary deadline missing probability using a martingale argument: \[lemma1\] For a half-duplex three-node MIMO multihop network with Poisson arrival of rate $\lambda$ and ARQ rounds $L_1$ and $L_2$, the probability that the end-to-end delay exceeds the deadline $k$ is given by $$P_{deadline}{\left(}\{L_i\}|\rho, k{\right)}= \frac{\mu(L_1)+\mu(L_2)}{\lambda}e^{-k {\left(}\frac{1}{\mu(L_1)+\mu(L_2)} - \frac{1}{\lambda} {\right)}}. \label{three-hop-dd}$$ \[lemma\_stat\_dist\] Proof: See Appendix \[app:lemma\_stat\_dist\]. For general multihop networks with any number of half-duplex relays, the analysis for (\[deadline\_finiteSNR\]) is more involved. Due to half-duplexing the neighboring links cannot operate simultaneously, and this effect is not captured in the standard queueing network analysis. Here we adapt the proof in [@Ganesh1998], which uses large deviation techniques, to derive the following theorem for the exponent of the deadline missing probability in half-duplex relay networks: \[SojournTimeHalfDuplex\] For a half-duplex $N$-node MIMO multihop network, with Poisson arrival of rate $\lambda$ and ARQ rounds $L_i$’s, the probability that the end-to-end delay exceeds the deadline $k$ is given by $$\begin{aligned} \lim_{k\rightarrow \infty} \lim_{n \rightarrow \infty}\frac{1}{k} P{\left\{} \newcommand{\rbb}{\right\}}D_n > k \middle\vert \cap_{i=1}^{N-1}\left\{ t_n^i \leq L_i \right\} \rbb = -\theta^*,\end{aligned}$$ where $\theta^* = \min_{1\leq i\leq N-2}\theta_i$, and $\theta_i = 1/[\mu(L_i)+\mu(L_{i+1})]-1/\lambda$, $i = 1, \cdots, N-2$. \[thm\_half\_duplex\] Proof: See Appendix \[app:SojournTimeHalfDuplex\]. This theorem again demonstrates that the performance of the $N$-node multihop network with a half-duplex relay (here the performance metric is in terms of the deadline-missing probability exponent) is determined by the smallest exponent of each three-node sub-network. By Theorem \[thm\_half\_duplex\], for [f]{}inite $k$ we can approximate (\[deadline\_finiteSNR\]) as $$P_{deadline}(\{L_i\}|\rho, k)\approx e^{-k\theta^*}. \label{more-hop-dd}$$ Also note that in the special case with $N = 3$ nodes, for [f]{}inite $k$ $$P_{deadline}(\{L_i\}|\rho, k)\approx e^{-k \left(\frac{1}{\mu(L_1)+\mu(L_2)}-\frac{1}{\lambda}\right)},$$ which is identical with (\[three-hop-dd\]) up to a multiplicative constant $[\mu(L_1)+\mu(L_2)]/\lambda$. This constant is typically not identi[f]{}iable by large deviation techniques such as the one used in Theorem \[thm\_half\_duplex\]. ### Mean Service Time Calculation The above analysis requires $\mu(L_i)$, which we will derive in this section. For a given $t$ and message $n$, the cumulative distribution function (CDF) of $t_n^i$ is given by Ê $$P{\left\{} \newcommand{\rbb}{\right\}}t_n^i \leq t\rbb = P{\left\{} \newcommand{\rbb}{\right\}}t C_i({\boldsymbol}{H}_i) \geq r\log_2(1+M_{i+1}\rho)\rbb = 1 - P_{out, i}(t|\rho),\label{CDF}$$ where $P_{out, i}(t|\rho)$ is given in (\[per-hop-out\]) (or a term in the summation in (\[OSTBC\]) for OSTBC). 
Differentiating (\[CDF\]) gives the desired probability density function (PDF): $$\begin{aligned} P{\left\{} \newcommand{\rbb}{\right\}}t_n^i = t\rbb = \frac{M_i r}{\rho r_s t^2(M_i\cdot M_{i+1}-1)!}f_i^{M_i\cdot M_{i+1} - 1} e^{-f_i} (1+M_{i+1}\rho)^{\frac{r/t}{r_s}}\log_2(1+M_{i+1}\rho),\end{aligned}$$ where $f_ i \triangleq \frac{M_i}{\rho}\left[(1+M_{i+1} \rho)^{\frac{r/t}{r_s}}-1\right]$. Using this we have $$\mu(L_i) = \int_{t=1}^{L_i} t P{\left\{} \newcommand{\rbb}{\right\}}t_n^i = t\rbb dt + L_i \int_{t = L_i}^\infty P{\left\{} \newcommand{\rbb}{\right\}}t_n^i = t\rbb dt. \label{39}$$ Optimal [F]{}ixed ARQ Design at [F]{}inite SNR ---------------------------------------------- Based on the above analysis, we formulate the optimal [f]{}ixed ARQ design in the [f]{}inite SNR regime as an optimization problem that allocates the total ARQ window size among hops to minimize the probability of message error subject to the queue stability and the end-to-end delay constraint $k$: $$\begin{split} \min_{\{L_i\}} & \quad P_{out}(\{L_i\}|\rho) + [1-P_{out}(\{L_i\}|\rho)]P_{deadline}(\{L_i\}|\rho, k) \\ \mbox{subject to} & \quad 1\leq \mu(L_i) \leq \lambda, \quad \sum_{i=1}^{N-1} L_i \leq k. \end{split} \label{opt_formulation}$$ The terms in (\[opt\_formulation\]) are given by (\[finite\_SNR\_pout\]), (\[OSTBC\]), (\[three-hop-dd\]), and (\[more-hop-dd\]). This optimization problem can be solved numerically. In particular, for a three-hop network with OSTBC, (\[opt\_formulation\]) becomes $$\begin{aligned} \min_{\{L_i\}} && \quad \sum_{i=1}^{2}\frac{1}{(M_i M_{i+1}-1)!}\gamma\left(\frac{M_i}{\rho}[(1+\rho)^{\frac{r}{r_s L_i}}-1], M_i M_{i+1}\right) + \frac{\mu(L_1)+\mu(L_2)}{\lambda}e^{-k {\left(}\frac{1}{\mu(L_1)+\mu(L_2)} - \frac{1}{\lambda} {\right)}} \nonumber\\ \mbox{subject to} && \quad 1\leq \mu(L_i) \leq \lambda, \quad \sum_{i=1}^{N-1} L_i \leq k. \label{opt_formulation_3hop}\end{aligned}$$ As we demonstrate in the following examples, the information outage probability is decreasing in $L_i$, and the deadline missing probability is increasing in $L_i$. Hence the optimal ARQ window size allocation $L_i$ at each node $i$ should trade off these two terms. Moreover, the optimal ARQ window size allocation should equalize the performance of each link. Numerical Example ----------------- We [f]{}irst consider a point-to-point (2, 2) MIMO system at the source and the destination. Assume $\rho = 3$ dB and $\lambda = 2$ blocks. An OSTBC with $r_s = 1$ is used, for which the information outage probability is given in (\[OSTBC\]). The deadline constraint is $k = 5$ blocks. For $r = 1$, the information outage probability (\[out\_finiteSNR\]) and the deadline missing probability (\[deadline\_finiteSNR\]) are shown in [F]{}ig. \[Fig:tradeoff\_finite\_snr\]. Note that (\[out\_finiteSNR\]) decreases, while (\[deadline\_finiteSNR\]) increases with the ARQ window size. Next we consider the (4, 1, 3) MIMO multihop network. Assume $\rho = 20$ dB and $\lambda = 10$ blocks. An OSTBC with $r_s = 1$ is used. The optimal [f]{}ixed ARQ protocol is obtained by solving (\[opt\_formulation\_3hop\]). For $r = 1$ and a deadline constraint of $k = 5$ blocks, the optimal [f]{}ixed ARQ has $L_1 = 2$ and $L_2 = 3$, and the optimal probability of message error is 0.1057. For $r = 1$ and $k = 10$ blocks, the optimal [f]{}ixed ARQ has $L_1 = 4$ and $L_2 = 6$, and the optimal probability of message error is 0.0355. For all $r$ and $k$, the probability of message error is plotted as a surface in [F]{}ig. 
\[Fig:finite\_snr\_surface\]. This surface is the DMDT for the three-node network in the [f]{}inite SNR regime, which has an interesting similarity to the high SNR asymptotic DMDT surface in [F]{}ig. \[Fig:dmt\_413\_L\_slice\], since indeed the high SNR DMDT represents the SNR exponent of the [f]{}inite SNR DMDT. ![The information outage probability and the deadline missing probability for a (2, 2) point-to-point MIMO system, with $r_s = 1$, $r = 1$, and SNR $\rho = 3$ dB, $k = 5$ blocks and $\lambda = 2$ blocks. The minimum probability of message error is $0.4147$ and is achieved by $L = 2$.[]{data-label="Fig:tradeoff_finite_snr"}](tradeoff_finite_snr.eps){width="2.5in"} ![The probability of message error for the optimal [f]{}ixed ARQ protocol in a (4, 1, 3) multihop network as a function of multiplexing gain $r$ and delay constraint $k$; $r_s = 1$, SNR $\rho=20$ dB and $\lambda = 10$ blocks.[]{data-label="Fig:finite_snr_surface"}](finite_snr_surface.eps){width="3in"} Conclusions {#sec:conclusion} =========== We have analyzed the asymptotic diversity-multiplexing-delay tradeoff (DMDT) for the $N$-node MIMO multihop relay network with ARQ, under both long-term and the short-term static channel assumptions. We show that the asymptotic DMDT can be cast into an optimization problem that can be solved numerically in general, and closed-form asymptotic DMDT expressions are obtained in some special cases. We also proposed the VBL ARQ protocol which adapts the ARQ window size among hops dynamically and proved that it achieves the optimal DMDT under both channel assumptions. We also show that the DMDT for general multihop networks with multiple half-duplex relays can be found by decomposing the network into three-node sub-networks such that each sub-network consists of three neighboring nodes and its corresponding two hops. The DMDT of the relay network is determined by the minimum of the DMDTs of the three-node sub-networks. We have also shown that the DMDT of the three-node subnetwork is determined by its weakest link. Hence, the optimal ARQ should equalize the link performances by properly allocating the per-hop ARQ window sizes dynamically. We then studied the DMDT in the [f]{}inite SNR regime for [f]{}ixed ARQ protocols. We introduced an end-to-end delay constraint such that a message is dropped once its delay exceeds the delay constraint. Since in the [f]{}inite SNR regime retransmission is not a rare event, we incorporated the queueing delay into the system model, and modeled the system as a queueing network. The [f]{}inite SNR DMDT is characterized by the probability of message error, which consists of the information outage probability and the deadline missing probability. While the information outage probability can be found through [f]{}inite SNR DMDT analysis, we have also found the exponent for the deadline missing probability. Our result demonstrates that the performance of a multihop network with half-duplex relays in the [f]{}inite SNR regime is also determined by the performance of the weakest three-node sub-network. It has been shown that, based on these analyses, the optimal [f]{}ixed ARQ window size allocation can be solved numerically as an optimization problem, which should balance the per-hop diversity performance and avoid a long per-hop delay. The dif[f]{}iculty in merging the network layer analysis with the physical layer information theoretic results stems from the bursty nature of the source and the end-to-end delays. 
By modeling the multihop relay network with ARQ as a queuing network, we have tried to answer a question posed at the end of [@HollidayGoldsmithPoor2008]: how to couple the fundamental performance limits of general multihop networks with queueing delay. Our work provides a step towards bridging the gap between network theory and information theory. Future work includes developing an optimal dynamic ARQ protocol that can adapt to the channel state and the message arrival rate. The problem can be formulated as a dynamic programming problem or analyzed using a heavy traf[f]{}ic approximation. Proof of Theorem \[thm\_static\_ARQ\] {#app:thm_static_ARQ} ===================================== With [f]{}ixed-ARQ protocol and half-duplex relays, the system is in outage if any hop is in outage. The probability of message error $P_e(\rho)$, using the decoding time de[f]{}inition in (\[stop\_3\]), can be written as: $$\begin{aligned} P_e(\rho) && \doteq P{\left\{} \newcommand{\rbb}{\right\}}{\left\{} \newcommand{\rbb}{\right\}}t_1 > L_1 \rbb \cup {\left\{} \newcommand{\rbb}{\right\}}t_2 > L_2 \rbb \rbb \label{app:eqn1}\\ &&= P{\left\{} \newcommand{\rbb}{\right\}}t_1 > L_1 \rbb + P{\left\{} \newcommand{\rbb}{\right\}}t_2 > L_2 \rbb \label{app:eqn2} \\ && \doteq P {\left\{} \newcommand{\rbb}{\right\}}L_1 S_1({\boldsymbol}{\alpha}_1)< r\rbb + P {\left\{} \newcommand{\rbb}{\right\}}L_2 S_2({\boldsymbol}{\alpha}_2)< r\rbb\\ && \doteq \sum_{i=1}^{2}\rho^{-d^{(M_i, M_{i+1})}{\left(}\frac{r}{L_i}{\right)}} \label{app:eqn3}\\ && \doteq \rho^{-\min_{i=1, 2} d^{(M_i, M_{i+1})}{\left(}\frac{r}{L_i}{\right)}},\label{d_multi_ARQ}\end{aligned}$$ where (\[app:eqn2\]) is due to the independence of each link, and (\[app:eqn3\]) follows from the method used in [@ZhengTse03diversity], and the fact that $$P{\left\{} \newcommand{\rbb}{\right\}}t_i > L_i \rbb = P{\left\{} \newcommand{\rbb}{\right\}}\sum_{l=1}^{L_i} S_i({\boldsymbol}{\alpha}_i^l) < r \rbb,$$ since $S_i({\boldsymbol}{\alpha}_i^l) \geq 0$ and $S_i({\boldsymbol}{\alpha}_i^l) = S_i({\boldsymbol}{\alpha}_i)$ for the long-term static channel. The last equality follows since when SNR is high, the dominating term is the one with the smaller SNR exponent. Using (\[d\_multi\_ARQ\]) and the de[f]{}inition of diversity in (\[def\_diversity\]) we obtain the DMDT stated in Theorem \[thm\_static\_ARQ\]. Proof of Theorem \[thm\_block\_DDF\] {#app:thm_block_DDF} ==================================== For the FBL ARQ protocol with two hops, the probability of message error is given by $$\begin{aligned} P_{e}(\rho) \doteq P{\left\{} \newcommand{\rbb}{\right\}}t_1 + t_2 > L \rbb = \sum_{k = 1}^{L-1} P{\left\{} \newcommand{\rbb}{\right\}}t_1 = k\rbb P{\left\{} \newcommand{\rbb}{\right\}}t_2 > L - k\rbb + P{\left\{} \newcommand{\rbb}{\right\}}t_1 \geq L \rbb. 
\label{stop_formula}\end{aligned}$$ In the long-term static channel model we have $$\begin{split} P{\left\{} \newcommand{\rbb}{\right\}}t_1 = k\rbb &= P {\left\{} \newcommand{\rbb}{\right\}}(k-1)C_1 ({\boldsymbol}{H}_1) < r\log \rho \leq k C_1 ({\boldsymbol}{H}_1)\rbb \\ & = P {\left\{} \newcommand{\rbb}{\right\}}C_1({\boldsymbol}{H}_1) < \frac{r}{k-1}\log \rho\rbb - P {\left\{} \newcommand{\rbb}{\right\}}C_1({\boldsymbol}{H}_1) \leq \frac{r}{k}\log \rho\rbb \\ &\doteq \rho^{-d^{(M_1, M_2)}{\left(}\frac{r}{k-1}{\right)}} - \rho^{-d^{(M_1, M_2)}{\left(}\frac{r}{k}{\right)}} \doteq \rho^{-d^{(M_1, M_2)}{\left(}\frac{r}{k-1}{\right)}}, \end{split} \label{Pk}$$ which follows from the fact that $d^{(M_1, M_2)}(r)$ is monotone decreasing, i.e., $d^{(M_1, M_2)}{\left(}\frac{r}{k-1}{\right)}\leq d^{(M_1, M_2)}{\left(}\frac{r}{k}{\right)}$. If we plug (\[Pk\]) into (\[stop\_formula\]) we get $$\begin{aligned} P_{e}(\rho) &&\doteq P{\left\{} \newcommand{\rbb}{\right\}}C_1({\boldsymbol}{H}_1) \geq r\log\rho \rbb P {\left\{} \newcommand{\rbb}{\right\}}C_2({\boldsymbol}{H}_2) \leq \frac{r\log\rho}{L-1}\rbb \nonumber \\ && + \sum_{k = 2}^{L-1} P {\left\{} \newcommand{\rbb}{\right\}}(L-k)C_2({\boldsymbol}{H}_2) < r\log \rho \rbb P(t_1 = k) + P{\left\{} \newcommand{\rbb}{\right\}}C_1({\boldsymbol}{H}_1) <  \frac{r\log\rho}{L-1} \rbb \label{p_out_block_1}\\ &&\doteq \rho^{- \min_{k=2,\cdots,L-1}{\left\{} \newcommand{\rbb}{\right\}}d^{(M_1, M_2)}{\left(}\frac{r}{L-1}{\right)}, d^{(M_1, M_2)}{\left(}\frac{r}{k-1}{\right)}+d^{(M_2, M_3)}{\left(}\frac{r}{L-k}{\right)}, d^{(M_2, M_3)}{\left(}\frac{r}{L-1}{\right)}\rbb}\nonumber\\ && = \rho^{- \min_{ l_1+ l_2 = L-1, l_1 = \{0, 1, \cdots, L-1\}}, {\left\{} \newcommand{\rbb}{\right\}}d^{(M_1, M_2)}{\left(}\frac{r}{l_1}{\right)}+d^{(M_2, M_3)}{\left(}\frac{r}{l_2}{\right)}\rbb},\label{outage_fix}\end{aligned}$$ where we have used the fact that $d^{(M_i, M_{i+1})}(\infty) = 0$. From the de[f]{}inition of diversity in (\[def\_diversity\]) the DMDT in Theorem \[thm\_block\_DDF\] follows. Proof of Theorem \[thm\_DDF\] {#app:thm_DDF} ============================= The decoding time of VBL ARQ is real, which differs from FBL ARQ. Since the long-term static channel has constant state, we can write the decoding time as $t_i = \frac{r\log\rho}{C_i({\boldsymbol}{H}_i)}.$ Hence: $$\begin{split} P_e(\rho) & \doteq P{\left\{} \newcommand{\rbb}{\right\}}t_1 + t_2 > L \rbb \\ &\doteq P{\left\{} \newcommand{\rbb}{\right\}}(L - t_1)C_2({\boldsymbol}{H}_2) < r\log \rho < L C_1({\boldsymbol}{H}_1) \rbb \\ & \doteq P{\left\{} \newcommand{\rbb}{\right\}}L < \frac{r}{S_1({\boldsymbol}{\alpha}_1)} + \frac{r}{S_2({\boldsymbol}{\alpha}_2)} \rbb, \end{split}\label{stopping_time_lt}$$ and $d_{VBL}^{(M_1, M_2, M_3)}(r, L) = \inf_{{\left\{} \newcommand{\rbb}{\right\}}\alpha_{i,j}\rbb \in \mathcal{O}} h({\left\{} \newcommand{\rbb}{\right\}}\alpha_{i,j}\rbb)$, where $\mathcal{O}$ is de[f]{}ined in (\[feas\_VBL2\]). To prove that the DMDT of VBL ARQ is the optimal DMDT in an $N$-node network, we [f]{}irst provide an upper bound on the DMDT, and show that the DMDT of the VBL ARQ protocol achieves this upper bound. Our proof is for the short-term static channel in a three-node network, as stated in Theorem \[thm\_st\_fractional\]. A similar (and simpler) proof can be written for the long-term static channel in a three-node network as stated in Theorem \[thm\_DDF\]. Assume that the source transmits for $k T$ channel uses ($kT < L$) and the relay transmits in the remaining $L - k T $ channel uses. 
Here $k$ depends on the channel states and the multiplexing gain $r$. From the cut-set bound on the multihop network channel capacity, the instantaneous capacity of the MIMO ARQ channel is given by $$\begin{split} &\min {\left\{} \newcommand{\rbb}{\right\}}\max_{P_{{\boldsymbol}{X}_{1, l }, l = 1, \cdots, \lfloor k \rfloor +1}} {\left[}\sum_{l =1}^{ \lfloor k \rfloor} I({\boldsymbol}{X}_{1,l}; {\boldsymbol}{Y}_{1,l}| {\boldsymbol}{H}_{1,l}) +(k - \lfloor k \rfloor) I({\boldsymbol}{X}_{1,k+1}; {\boldsymbol}{Y}_{1,k+1}| {\boldsymbol}{H}_{1,k+1}) \right.{\right]}, \\ &~~~\left. \max_{P_{{\boldsymbol}{X}_{2, l}, l= 1, \cdots, L-\lfloor k \rfloor-1}} {\left[}\sum_{l = 1}^{L-\lfloor k \rfloor-1} I({\boldsymbol}{X}_{2,l}; {\boldsymbol}{Y}_{2,l}| {\boldsymbol}{H}_{2,l}) + (1-k+\lfloor k \rfloor)I({\boldsymbol}{X}_{2,L-k}; {\boldsymbol}{Y}_{2,L-k}| {\boldsymbol}{H}_{2,L-k}) {\right]}\rbb. \end{split}\nonumber$$ Since the capacity is maximized with Gaussian inputs, and linear scaling of the power constraint does not affect the high SNR analysis, the capacity $C$ is upper bounded by $$\begin{aligned} C \leq \min {\left\{} \newcommand{\rbb}{\right\}}\sum_{l =1}^{ \lfloor k \rfloor} C_1({\boldsymbol}{H}_{1,l}) + (k-\lfloor k \rfloor) C_1({\boldsymbol}{H}_{1,k+1}), \sum_{l = 1}^{L-\lfloor k \rfloor-1} C_2({\boldsymbol}{H}_{2,l}) + (1-k+\lfloor k \rfloor)C_2({\boldsymbol}{H}_{1,L-k}) \rbb.\end{aligned}$$ For any ARQ we can [f]{}ind a $k^*< L$ such that $ \sum_{l =1}^{ k^*} C_1({\boldsymbol}{H}_{1,l}) + (k^* - \lfloor k^* \rfloor) C_1({\boldsymbol}{H}_{1,k^*+1}) = r\log\rho $. This means $k^* \doteq t_1$. With this $k^*$, the probability of message error is lower bounded by $$\begin{split} P_{e}(\rho) &\geq P{\left\{} \newcommand{\rbb}{\right\}}r\log \rho > \sum_{l = 1}^{L-\lfloor k^*\rfloor -1} C_2({\boldsymbol}{H}_{2,l}) + (1- k^* + \lfloor k^*\rfloor)C_2({\boldsymbol}{H}_{1,L-\lfloor k^* \rfloor}) \rbb \\ &\doteq P{\left\{} \newcommand{\rbb}{\right\}}{\left\{} \newcommand{\rbb}{\right\}}r > \sum_{l = 1}^{L-\lfloor t_1\rfloor -1} S_2({\boldsymbol}{\alpha}_{2}^l) + (1-t_1 + \lfloor t_1\rfloor)S_2({\boldsymbol}{\alpha}_{2}^{L-\lfloor t_1\rfloor}) \rbb\cap \tilde{\mathcal{G}} \rbb \\ &= P{\left\{} \newcommand{\rbb}{\right\}}{\left\{} \newcommand{\rbb}{\right\}}t_2 > L - t_1 \rbb \cap \tilde{\mathcal{G}} \rbb =P{\left\{} \newcommand{\rbb}{\right\}}{\left\{} \newcommand{\rbb}{\right\}}t_1 + t_2 > L \rbb \cap \tilde{\mathcal{G}}\rbb, \end{split}$$ where $\tilde{\mathcal{G}} = {\left\{} \newcommand{\rbb}{\right\}}({\boldsymbol}{\alpha_1},\cdots, {\boldsymbol}{\alpha}_{N-1})\in\mathbb{R}^{M_1^* \times L}\times\cdots \times\mathbb{R}^{M_{N-1}^*\times L}: \alpha_{i, 1}^l\geq \cdots \geq \alpha_{i, M_i^*}^l \geq 0, \forall i, l \rbb$. Hence, the diversity gain of any ARQ $d^{(M_1, M_2, M_3)}(r, L)$ of a three-node network is upper bounded by $$\begin{aligned} d^{(M_1, M_2, M_3)}(r, L) \dot{\leq} \inf_{\alpha_{i,j}^l \in\mathcal{G}_2} \tilde{h}(\alpha_{i,j}^l), \label{app_3}\end{aligned}$$ with $\mathcal{G}_2 \triangleq {\left\{} \newcommand{\rbb}{\right\}}t_1 + t_2 > L \rbb \cap \tilde{\mathcal{G}}$, which is the same as the set $\mathcal{G}$ in (\[VBL\_region\]), the DMDT expression for VBL ARQ in the short-term static channel. This shows that the DMDT upper bound is achieved by the VBL ARQ in the short-term static channel. This completes our proof. 
Proof of Theorem \[thm\_st\] {#app:thm_st} ============================ In the short-term static channel, for the FBL ARQ protocol with two hops (\[Pk\]) becomes $$\begin{split} P{\left\{} \newcommand{\rbb}{\right\}}t_1 = k\rbb &= P {\left\{} \newcommand{\rbb}{\right\}}\sum_{l = 1}^{k-1}C_1 ({\boldsymbol}{H}_{1,l}) < r\log \rho \rbb - P {\left\{} \newcommand{\rbb}{\right\}}\sum_{l = 1}^{k}C_1 ({\boldsymbol}{H}_{1,l}) < r\log \rho \rbb \\ &\doteq {\left(}P {\left\{} \newcommand{\rbb}{\right\}}S_1 ({\boldsymbol}{\alpha}_1^1) < \frac{r\log \rho}{k-1} \rbb {\right)}^{k-1} - {\left(}P {\left\{} \newcommand{\rbb}{\right\}}S_1 ({\boldsymbol}{\alpha}_1^1) < \frac{r\log \rho}{k} \rbb {\right)}^{k} \doteq \rho^{-(k-1) d^{(M_1, M_2)}{\left(}\frac{r}{k-1}{\right)}}. \end{split}\nonumber$$ Hence, the probability of message error can be written as $$\begin{aligned} P_{e}(\rho) &\doteq& P{\left\{} \newcommand{\rbb}{\right\}}C_1({\boldsymbol}{H}_{1,1}) \geq r\log\rho \rbb P {\left\{} \newcommand{\rbb}{\right\}}\sum_{l = 2}^{L} C_2({\boldsymbol}{H}_{2,l}) \leq r\log\rho \rbb \nonumber \\ &+& \sum_{k = 2}^{L-k} P {\left\{} \newcommand{\rbb}{\right\}}\sum_{l = k+1}^{L}C_2({\boldsymbol}{H}_{2,l}) < r\log \rho \rbb P(t_1 = k) + P{\left\{} \newcommand{\rbb}{\right\}}\sum_{l =1}^{L} C_1({\boldsymbol}{H}_{1, l}) \geq r\log\rho \rbb \nonumber \\ &\doteq& \rho^{- \min_{k=2,\cdots,L-1}{\left\{} \newcommand{\rbb}{\right\}}(L-1) d^{(M_1, M_2)}{\left(}\frac{r}{L-1}{\right)}, (k-1)d^{(M_1, M_2)}{\left(}\frac{r}{k-1}{\right)}+ (L-k)d^{(M_2, M_3)}{\left(}\frac{r}{L-k}{\right)}, (L-1)d^{(M_2, M_3)}{\left(}\frac{r}{L-1}{\right)}\rbb}. \nonumber $$ By the de[f]{}inition of diversity, the DMDT in Theorem \[thm\_st\] follows. Proof of Theorem \[thm\_st\_fractional\] {#app:thm_st_fractional} ======================================== For a three-node network, we can break down the information outage event as a disjoint union of events that outage happens at the $i$th hop: $$P_{e}(\rho) \doteq \rho^{-\inf_{{\left\{} \newcommand{\rbb}{\right\}}\alpha_{i,j}^l\rbb \in \cup_{k=1}^2 \mathcal{G}_k} \tilde{h}\left({\left\{} \newcommand{\rbb}{\right\}}\alpha_{i,j}^l\rbb\right)},$$ where $\mathcal{G}_k \triangleq {\left\{} \newcommand{\rbb}{\right\}}\sum_{i=1}^{k}t_i >L\rbb$. Due to nonnegativity of $t_i$, $\mathcal{G}_1\subset\mathcal{G}_2$. Hence, the minimization should be over $\mathcal{G}_2$. Adding the ordering requirement on elements of ${\left\{} \newcommand{\rbb}{\right\}}\alpha_{i,j}^l\rbb$, we have Theorem \[thm\_st\_fractional\]. Proof of Theorem \[thm\_n\_VBL\] {#app:thm_n_VBL} ================================ Upper bound {#app:upper_bound} ----------- We will [f]{}irst prove an upper bound for any ARQ protocol in an $N$-hop network by considering a genie-aided scheme. For each $i = 1, \cdots, N - 2$, consider the two consecutive hops from node $i$ to node $(i +1)$ and then from node $(i+1)$ to node $(i + 2)$. Assume a genie aided scheme where the messages are provided to node $i$, and the output of node $(i+2)$ is forwarded to the terminal node $N$. The maximum number of ARQ rounds that can be spent on this two-hop is $L$. The DMDT of this genie aided setup for any $i$, is an upper bound on the DMDT of the $(M_1, \cdots, M_{N})$ system. The optimal DMDT of the $(M_i, M_{i+1}, M_{i+2})$ system with $L$ ARQ rounds is characterized in Theorem \[thm\_DDF\]. 
Hence, we have $$\begin{aligned} d^{(M_1, \cdots, M_{N})}(r, L) &\leq \min_{i = 1, \cdots, N-2} d^{(M_i, M_{i+1}, M_{i+2})}_{VBL} (r, L),\label{eqn1}\end{aligned}$$ where $d^{(M_1, \cdots, M_{N})}(r, L)$ is the DMDT of any ARQ protocol for an $N$-hop network. The DMDT of VBL ARQ {#achievable_VBL} ------------------- To be able to exploit the multi-hop diversity in the network, we use the following rate and ARQ round allocation scheme. First we split the original message of rate $r\log\rho$ into $N/2$ lower rate messages each having a rate of $(r\log\rho)/(N/2)$ when $N$ is even (we split into $(N-1)/2$ lower rate messages when $N$ is odd). We pump these pieces of the original message into the network sequentially, and in equilibrium, they are transmitted simultaneously by adjacent pairs of nodes. Moreover, we require the number of blocks allowed for any two-hop transmission, from node $i$ to node $(i+1)$ and then to node $(i+2)$, for all $i = 1, \cdots, N-2$, to be $\bar{L} = L/(N/2)$ when $N$ is even (or $\bar{L} = L/[(N-1)/2]$ when $N$ is odd). This is equivalent to requiring the total number of blocks that each node $i$, $i = 2, \cdots, N$, spends for listening and transmitting each piece of a message to be $\bar{L}$. Note that with this constraint, the end-to-end total number of ARQ rounds used for transmitting each piece of the original message is upper bounded by $\bar{L}\times N/2 = L$ when $N$ is even (or equals $\bar{L}\times (N-1)/2 = L$ when $N$ is odd). Hence, this scheme satisfies the constraint on the end-to-end total number of ARQ rounds. It is easy to see that the number of simultaneous transmission pairs we can have in an $N$-node network is $N/2$ when $N$ is even, and $(N-1)/2$ or $(N+1)/2$ when $N$ is odd. At the destination, all pieces of a message are combined to decode the original message. From the above analysis, the last piece of these low rate messages is received after at most $L$ blocks, and the rate of the combined data is $r\log\rho/(N/2)\times(N/2) = r\log \rho$ when $N$ is even (similarly for odd $N$), which equals the original data rate. Hence this low rate message simultaneous transmission scheme meets both the data rate and end-to-end ARQ window size constraints. Now we study the outage probability $P_{out}(r)$ of this scheme. Define an outage event for any three-node sub-network consisting of nodes $i$, $(i+1)$, and $(i+2)$, for $N$ even, as: $$\begin{split} P_{out}^i(r, L)& \triangleq P\left\{\frac{r/(N/2)\log\rho}{C_i({\boldsymbol}{H}_i)} + \frac{r/(N/2)\log\rho}{C_{i+1}({\boldsymbol}{H}_{i+1})} > \frac{L}{(N/2)}\right\} \\ & = P\left\{\frac{r\log\rho}{C_i({\boldsymbol}{H}_i)} + \frac{r\log\rho}{C_{i+1}({\boldsymbol}{H}_{i+1})} > L \right\}, \end{split}\label{eqn_even}$$ and for $N$ odd, similarly, as $$\begin{split} P_{out}^i(r, L)& \triangleq P\left\{\frac{r/[(N-1)/2]\log\rho}{C_i({\boldsymbol}{H}_i)} + \frac{r/[(N-1)/2]\log\rho}{C_{i+1}({\boldsymbol}{H}_{i+1})} > \frac{L}{[(N-1)/2]}\right\} \\ & = P\left\{\frac{r\log\rho}{C_i({\boldsymbol}{H}_i)} + \frac{r\log\rho}{C_{i+1}({\boldsymbol}{H}_{i+1})} > L \right\}. \end{split}\label{eqn_odd}$$ Note that (\[eqn\_even\]) and (\[eqn\_odd\]) say that by using this scheme, regardless of whether $N$ is even or odd, the outage probability is as if we transmit the original message with data rate $r\log\rho$ over two hops with a total ARQ round constraint of $L$.
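As a concrete illustration of this bookkeeping (the numbers are chosen purely for illustration), take $N = 4$ nodes and an end-to-end window of $L = 6$ ARQ rounds. The message is split into $N/2 = 2$ pieces of rate $(r/2)\log\rho$ each; every two-hop segment, from node $i$ to node $(i+2)$, may spend at most $\bar L = L/(N/2) = 3$ rounds on a piece, so each piece uses at most $2\times 3 = 6 = L$ rounds end to end, while the destination still recovers the full rate $2\times (r/2)\log\rho = r\log\rho$. In this case (\[eqn\_even\]) reads $$P_{out}^i(r, 6) = P\left\{\frac{(r/2)\log\rho}{C_i({\boldsymbol}{H}_i)} + \frac{(r/2)\log\rho}{C_{i+1}({\boldsymbol}{H}_{i+1})} > 3\right\} = P\left\{\frac{r\log\rho}{C_i({\boldsymbol}{H}_i)} + \frac{r\log\rho}{C_{i+1}({\boldsymbol}{H}_{i+1})} > 6\right\},$$ exactly as if the full-rate message were sent over the two hops with the full window $L = 6$.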
From our earlier analysis for the VBL ARQ of a two-hop network, we have that as $\rho\rightarrow \infty$, $$P_{out}^i(r, L) \doteq \rho^{-d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L)}.$$ The system is in outage if there is an outage over any of the consecutive two-hop links from the source to the destination. Using a union bound, we have $$P_{out}(r, L) \leq \sum_{i=1}^{N-2} P_{out}^i (r, L).$$ As SNR goes to infinity, the sum on the right-hand side will be dominated by the slowest decaying term, which is the term with minimum $d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L)$. Hence, the DMDT of this scheme is lower bounded by $$d^{(M_1, \cdots, M_{N})}(r, L) \geq \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L).$$ Together with the upper bound in (\[eqn1\]), this shows that the presented scheme with the VBL ARQ achieves the optimal DMDT of an $N$-hop network, and its DMDT is given by Theorem \[thm\_n\_VBL\]. Proof of Theorem \[thm\_n\_fixed\] {#app:thm_n_fixed} ================================== The proof for the upper bound is identical to the one in Appendix \[app:upper\_bound\]. For the achievable DMDT of the fixed ARQ, we consider the following scheme with a deterministic number of ARQ rounds: a node has to wait for at least $L_i$ rounds over hop $i$ for each piece of the message, and we allow simultaneous transmissions to employ multihop diversity. Using this scheme, in steady state, the destination will receive one piece of the message every $L_{\max}$ rounds (rather than $L$, if we do not employ multihop diversity). Now we divide the message into pieces with lower rates $\frac{L_{\max}}{L}r\log\rho$. Using this scheme, overall we will still achieve a rate of $r\log\rho$ in the steady state by transmitting these lower rate pieces. The outage probability of this scheme provides an upper bound on that of the fixed ARQ protocol: $$P_{out}(r, L_1, \cdots, L_{N-1}) \leq \sum_{i=1}^{N-1} P\left\{\left\lceil\frac{\frac{L_{\max}}{L}r\log\rho}{C_i({\boldsymbol}{H}_i)}\right\rceil > L_i \right\},$$ where $\lceil x \rceil$ is the smallest integer not less than $x$. As SNR goes to infinity, the sum on the right-hand side will be dominated by the slowest decaying term, which is the term with minimum $d^{(M_i, M_{i+1})}\left(\frac{L_{\max}}{L}\cdot\frac{r}{L_i} \right) $, and, hence, $$\begin{split} d_{F}^{(M_1, \cdots, M_N)}(r, L_1, \cdots, L_{N-1}) &\geq \min_{i=1, \cdots, N - 1} d^{(M_i, M_{i+1})}\left(\frac{L_{\max}}{L}\cdot\frac{r}{L_i} \right) \\ &=\min_{i = 1, \cdots, N-2} d_{F}^{(M_i, M_{i+1}, M_{i+2})}\left(\frac{L_{\max}}{L}r, L_i, L_{i+1} \;\middle\vert\; L_i + L_{i+1} \leq L_{\max}\right), \end{split}$$ which completes our proof. Proof of Theorem \[thm\_n\_FBL\] {#app:thm_n_FBL} ================================ The proof for the upper bound is identical to Appendix \[app:upper\_bound\]. For the achievable DMDT of the FBL ARQ, again consider the same rate-splitting scheme in Appendix \[achievable\_VBL\]. The difference here is that the number of ARQ rounds used is rounded up to an integer.
For $N$ even, the outage probability can be written as $$\begin{split} P_{out}^{i}(r, \bar{L}) &= P\left\{{\left\lceil}\frac{r/(N/2)\log(\rho)}{C_i({\boldsymbol}{H}_i)} {\right\rceil}+ {\left\lceil}\frac{r/(N/2)\log(\rho)}{C_{i+1}({\boldsymbol}{H}_{i+1})}{\right\rceil}> \bar{L} \right\} \\ & < P\left\{\frac{r/(N/2)\log(\rho)}{C_i({\boldsymbol}{H}_i)} + 1 + \frac{r/(N/2)\log(\rho)}{C_{i+1}({\boldsymbol}{H}_{i+1})} + 1 > \bar{L} \right\} \\ & = P\left\{\frac{r\log(\rho)}{C_i({\boldsymbol}{H}_i)} + \frac{r\log(\rho)}{C_{i+1}({\boldsymbol}{H}_{i+1})} + N> L\right\} \\ &\doteq \rho^{-d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L -N)}. \end{split}$$ Note that $L > N$ since we need at least $N$ hops. The system is in outage if any three-node sub-network is in outage. Using the union bound, again we have $$P_{out}(r, L) \leq \sum_{i=1}^{N-2} P_{out}^{i}(r, \bar{L}) \leq \sum_{i=1}^{N-2} \rho^{-d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L -N)},$$ and $$d_{FBL}^{(M_1, \cdots, M_N)}(r, L) \geq \min_{i = 1, \cdots, N-2} d_{VBL}^{(M_i, M_{i+1}, M_{i+2})}(r, L - N).$$ Proof of Theorem \[lemma\_stat\_dist\] {#app:lemma_stat_dist} ====================================== Theorem \[lemma\_stat\_dist\] can be proved using Theorem 7.4.1 of [@Ross1995]. The theorem views the random queueing delay of the $n$th message as a reflected random walk. The deadline missing probability can be interpreted as a boundary hitting probability of the random walk, which can be obtained via a standard martingale argument that we will not repeat here. Note that the service time in a half-duplex two-hop network has mean $\mu(L_1)+\mu(L_2)$ blocks (approximating the service time as exponentially distributed) when conditioned on the event $\cap_{i=1}^{N-1}\{t_n^i \leq L_i\}$. The mean message inter-arrival time is $\lambda$ blocks. Using the mean service time, the mean inter-arrival time, and the delay deadline constraint $k$ in Theorem 7.4.1 of [@Ross1995], we obtain the statement in Theorem \[lemma\_stat\_dist\]. Proof of Theorem \[SojournTimeHalfDuplex\] {#app:SojournTimeHalfDuplex} ========================================== The following proof is adapted from the proof in [@Ganesh1998], where a conventional queue tandem is considered. The conventional queue tandem is equivalent to a full-duplex multihop network, where the transmission (service) of node $i$ for a message has to wait for the transmission of the previous message from node $i$ to node $(i+1)$. However, in our problem we have a half-duplex scenario; in particular, the transmission (service) of node $i$ for a message has to wait for the transmission of the previous message from node $(i+1)$ to node $(i+2)$. The half-duplex scenario leads to a different and more complex queueing dynamic that we will study more precisely in the following. For node $i$, $i = 1, \cdots, N-1$, $N\geq 3$, let the random variable $S_n^i$ denote the service time of the $n$th message at node $i$, and $A_n^i$ be the inter-arrival time of the $n$th message at node $i$. Due to the half-duplex constraint, there are $N-2$ queues: one at the source and one at each node $i$, $i = 2, \cdots, N-2$. After the completion of transmission of the previous message, the message will be transmitted from node $i$ to node $(i+1)$ and then to node $(i+2)$, for $i = 1, \cdots, N-2$.
Because of this queueing dynamic, the waiting time of the $n$th message at node $i$, $W_n^i$, satisfies the following form of Lindley’s recursion (see [@Ganesh1998]): $$W_n^i = (W_{n-1}^i + S_{n-1}^i + S_{n-1}^{i+1} - A_n^i)^+, \quad i = 1, \cdots, N-2, \label{rec_2}$$ where $(x)^+ = \max(x, 0)$. The total time a message spends on transmission from node $i$ to node $(i+2)$ is given by the waiting time plus its own transmission time $$\begin{aligned} D_n^i &= W_n^i + S_n^i + S_n^{i+1}, \quad i = 1, \cdots, N-2. \label{D_n_i}\end{aligned}$$ Note that there are overlaps among the transmission times $D_n^i$ defined above, so their sum provides an upper bound on the end-to-end delay of the $n$th message: $$D_n \leq \sum_{i=1}^{N-2} D_n^i. \label{upper_bound}$$ Next we write the $D_n^i$’s in (\[upper\_bound\]) explicitly using a non-recursive expression. The arrival process at node $i$ is the departure process from node $(i-1)$, which satisfies the recursion [@Ganesh1998]: $$\begin{aligned} A_n^i = A_n^{i-1} + D_n^{i-1} - D_{n-1}^i, \quad i = 1, \cdots, N-2, \label{rec_1}\end{aligned}$$ where $A_n^i$ is a Poisson process with mean interarrival time $\lambda$. A well-known result from queueing theory [@Ganesh1998] states the following: if the arrival and service processes satisfy the stability condition, that is, the mean inter-arrival time $\lambda$ is greater than the mean service time $\mu(L_i) + \mu(L_{i+1})$ at each of the queues $i = 1, \cdots, N-2$, then Lindley’s recursion (\[rec\_2\]) has the solution: $$W_n^i = \max_{j_i\leq n}(\sigma^{i}_{j_i, n-1} + \sigma^{i+1}_{j_i, n-1} - \tau^i_{j_i+1,n}), \quad i = 1, \cdots, N-2, \label{W_n_i}$$ where the partial sums are defined as $\sigma_{l,p}^i = \sum_{k=l}^p S_k^i$ and $\tau_{l,p}^i = \sum_{k=l}^p A_k^i$. Hence, from (\[D\_n\_i\]) and (\[W\_n\_i\]), we have $$\begin{aligned} D_n^i = \max_{j_i\leq n}(\sigma^{i}_{j_i, n} + \sigma^{i+1}_{j_i, n} - \tau^i_{j_i+1,n}), \quad i = 1, \cdots, N-2. \label{D}\end{aligned}$$ On the other hand, from (\[rec\_1\]) we have $$\tau_{l,p}^i = \left\{ \begin{array}{lc} \tau_{l,p}^{i-1} + D_p^{i-1} - D_{l-1}^{i-1}, & l\leq p+1,\\ 0, & \mbox{otherwise.} \end{array}\right.\label{73}$$ Plugging (\[73\]) into (\[D\]), we have $$\begin{aligned} D_n^i = \max_{j_i\leq n}(\sigma^i_{j_i, n} + \sigma^{i+1}_{j_i, n} - \tau_{j_i+1,n}^{i-1} - D_n^{i-1} + D_{j_i}^{i-1}),\quad i = 2, \cdots, N-2.\end{aligned}$$ Moving $D_n^{i-1}$ to the left-hand side, we obtain the recursion relation $$\begin{aligned} D_n^i + D_n^{i-1} = \max_{j_i\leq n}(\sigma_{j_i, n}^{i}+\sigma_{j_i, n}^{i+1} -\tau_{j_i+1,n}^{i-1} + D_{j_i}^{i-1}), \quad i=2,\cdots, N-2. \label{seq}\end{aligned}$$ Now from (\[D\]) we have $ D_{j_i}^{i-1} = \max_{j_{i-1}\leq j_i}(\sigma^{i-1}_{j_{(i-1)},j_i} +\sigma^i_{j_{(i-1)},j_i} - \tau_{j_{(i-1)}+1, j_i}^{i-1})$. Plugging this into (\[seq\]) we have $$\begin{aligned} D_n^i + D_n^{i-1} = \max_{j_{(i-1)}\leq j_i \leq n} (\sigma_{j_i, n}^{i} +\sigma_{j_i, n}^{i+1} + \sigma^{i-1}_{j_{(i-1)},j_i} +\sigma^i_{j_{(i-1)},j_i} - \tau_{j_{(i-1)}+1, n}^{i-1}), \quad i = 2, \cdots, N-2.\end{aligned}$$ Repeating this operation inductively by expanding $\tau_{j_{(i-1)}+1, n}^{i-1}$, we obtain $$\begin{aligned} \sum_{i=1}^{N-2} D_n^i = \max_{j_1\leq \cdots \leq j_{N-1}=n}\left[ \sum_{i=1}^{N-2}(\sigma^{i}_{j_i,j_{(i+1)}}+\sigma^{i+1}_{j_i,j_{(i+1)}} ) -\tau^1_{j_1+1, n} \right].
\label{recursion_1}\end{aligned}$$ Note that in the non-recursive expression (\[recursion\_1\]), we split the interval $[1, n]$ by increasingly ordered integers $j_1, \cdots, j_{(N-1)} = n$, and the summations of random variables over these different intervals are mutually independent. This decomposition enables us to adopt a large deviation argument similar to that in [@Ganesh1998] to estimate the exponent $\theta^*$ in the form of $P(\sum_{i=1}^{N-2}D_n^i \geq k) \approx \exp(-\theta^* k)$ for large $k$. Following a similar argument as in [@Ganesh1998], by finding a condition under which the log-moment generating function is bounded for each of the independent sums of random variables when $n\rightarrow \infty$, we can show that $$\lim_{k\rightarrow \infty}\frac{1}{k} \log P\left(\sum_{i=1}^{N-2} D_n^i \geq k\right) = -\min_{i=1, \cdots, N-2} \theta_i,$$ where $\theta_i = \sup\{\theta > 0: \Lambda_T(-\theta) + \Lambda_{(S^i + S^{i+1})}(\theta)<0 \}$, $i = 1, \cdots, N-2$, and the log-moment-generating functions are $\Lambda_T(\theta) = \log(1-\theta\lambda)^{-1}$ for the inter-arrival time and $\Lambda_{(S^i+ S^{i+1})}(\theta) = \log(1-\theta[\mu(L_i) + \mu(L_{i+1})])^{-1}$ for the service time (if we approximate the total service time as exponentially distributed). We can further solve that $\theta_i = (\mu(L_i) + \mu(L_{i+1}))^{-1} - \lambda^{-1}$, $i = 1, \cdots, N-2$. Because of (\[upper\_bound\]), $P(D_n \geq k) \leq P(\sum_{i=1}^{N-2} D_n^i \geq k)$, and hence the exponent for the deadline missing probability is bounded by $-\theta^* \leq -\min_{i=1, \cdots, N-2}\theta_i$. Now we prove the lower bound. Note that the end-to-end delay $D_n$ is greater than the delay in any three-node sub-network: $D_n \geq D_n^i$, $i = 1, \cdots, N-2$. Using a similar argument, we can show that the exponent for the probability that the delay in a three-node sub-network exceeds $k$ is given by $-\theta_i$. Hence, we have $$-\theta^* = \lim_{k\rightarrow \infty}\frac{1}{k}\log P(D_n \geq k) \geq \lim_{k\rightarrow \infty}\frac{1}{k} \log P(D_n^i \geq k) = -\theta_i.\label{81}$$ Inequality (\[81\]) still holds if we take the maximum over all $i$ on the right-hand side: $$-\theta^* \geq \max_{i=1, \cdots, N-2} -\theta_i = - \min_{i=1, \cdots, N-2} \theta_i.$$ This completes our proof. Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank J. Michael Harrison for his helpful suggestions and comments, which were of great help in our queueing analysis. [^1]: Yao Xie (Email: yaoxie@stanford.edu) is with the Department of Electrical Engineering at Stanford University. [^2]: Deniz Gündüz (Email: dgunduz@princeton.edu) was with the Department of Electrical Engineering, Stanford University, and with the Department of Electrical Engineering, Princeton University. He is now with Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain. [^3]: Andrea Goldsmith (Email: andrea@wsl.stanford.edu) is with the Department of Electrical Engineering at Stanford University. [^4]: This work is supported by the Interconnected Focus Center, ONR grant N000140910072P00006, DARPA’s ITMANET program, and the Stanford General Yao-Wu Wang Graduate Fellowship. [^5]: Here the exponential equality $\doteq$ is defined as $f(\rho) \doteq \rho^c$, if $\lim_{\rho\rightarrow \infty}\frac{\log f(\rho)}{\log \rho} = c$. The exponential inequalities $\dot{\leq}$ and $\dot{\geq}$ are defined similarly.
--- abstract: 'For a given self-adjoint first order elliptic differential operator on a closed smooth manifold, we prove a list of results on when the delocalized eta invariant associated to a regular covering space can be approximated by the delocalized eta invariants associated to finite-sheeted covering spaces. One of our main results is the following. Suppose $M$ is a closed smooth spin manifold and $\widetilde M$ is a $\Gamma$-regular covering space of $M$. Let $\langle \alpha \rangle$ be the conjugacy class of a non-identity element $\alpha\in \Gamma$. Suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups of $\Gamma$ that distinguishes $\langle \alpha \rangle$. Let $\pi_{\Gamma_i}$ be the quotient map from $\Gamma$ to $\Gamma/\Gamma_i$ and $\langle \pi_{\Gamma_i}(\alpha) \rangle$ the conjugacy class of $\pi_{\Gamma_i}(\alpha)$ in $\Gamma/\Gamma_i$. If the scalar curvature on $M$ is everywhere bounded below by a sufficiently large positive number, then the delocalized eta invariant for the Dirac operator of $\widetilde M$ at the conjugacy class $\langle \alpha \rangle$ is equal to the limit of the delocalized eta invariants for the Dirac operators of $M_{\Gamma_i}$ at the conjugacy class $\langle \pi_{\Gamma_i}(\alpha) \rangle$, where $M_{\Gamma_i}= \widetilde M/\Gamma_i$ is the finite-sheeted covering space of $M$ determined by $\Gamma_i$.' address: - Shanghai Center for Mathematical Sciences - 'Department of Mathematics, Texas A&M University' - 'Department of Mathematics, Texas A&M University' author: - Jinmin Wang - Zhizhang Xie - Guoliang Yu nocite: '[@MR3454548; @Song:2019aa]' title: 'Approximations of delocalized eta invariants by their finite analogues' --- [^1] [^2] [^3] Introduction ============ The delocalized eta invariant for self-adjoint elliptic operators was first introduced by Lott [@Lott] as a natural extension of the classical eta invariant of Atiyah-Patodi-Singer [@A-P-S75a; @A-P-S75b; @A-P-S76]. It is a fundamental invariant in the study of higher index theory on manifolds with boundary, positive scalar curvature metrics on spin manifolds, and rigidity problems in topology. More precisely, the delocalized eta invariant can be used to detect different connected components of the space of positive scalar curvature metrics on a given closed spin manifold [@MR1339924; @LeichtnamPos]. Furthermore, it can be used to give an estimate of the number of connected components of the moduli space of positive scalar curvature metrics on a given closed spin manifold [@MR3590536]. Here the moduli space is obtained by taking the quotient of the space of positive scalar curvature metrics under the action of self-diffeomorphisms of the underlying manifold. As for applications to topology, the delocalized eta invariant can be applied to estimate the size of the structure group of a given closed topological manifold [@Weinberger:2016dq]. The delocalized eta invariant is also closely related to the Baum-Connes conjecture. The second and third authors showed that if the Baum-Connes conjecture holds for a given group $\Gamma$, then[^4] the delocalized eta invariant associated to any regular $\Gamma$-covering space is an algebraic number [@Xie]. In particular, if a delocalized eta invariant is transcendental, then it would lead to a counterexample to the Baum-Connes conjecture. We refer the reader to [@Xie:2019aa] for a more detailed survey of the delocalized eta invariant and its higher analogues.
The delocalized eta invariant, despite being defined in terms of an explicit integral formula, is difficult to compute in general, due to its non-local nature. The main purpose of this article is to study when the delocalized eta invariant associated to the universal covering of a space can be approximated by the delocalized eta invariants associated to finite-sheeted coverings, where the latter are easier to compute. Let us first recall the definition of delocalized eta invariants. Let $M$ be a closed manifold and $D$ a self-adjoint elliptic differential operator on $M$. Suppose $\Gamma$ is a discrete group and $\widetilde M$ is a $\Gamma$-regular covering space of $M$. Denote by $\widetilde D$ the lift of $D$ from $M$ to $\widetilde M$. For any non-identity element $\alpha \in \Gamma$, the delocalized eta invariant $\eta_{\left\langle \alpha \right\rangle }(\widetilde D)$ of $\widetilde D$ at the conjugacy class $\langle \alpha \rangle$ is defined to be $$\label{eq:delocalizedeta} \eta_{\left\langle \alpha \right\rangle }(\widetilde D)\coloneqq \frac{2}{\sqrt\pi}\int_{0}^\infty \sum_{\gamma \in\langle \alpha \rangle} \int_\mathcal F {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}(K_t(x,\gamma x))dxdt,$$ where $K_t(x,y)$ is the Schwartz kernel of the operator $\widetilde D e^{-t^2\widetilde D^2}$ and $\mathcal F$ is a fundamental domain of $\widetilde M$ under the action of $\Gamma$. Now suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups of $\Gamma$. Let $M_{\Gamma_i} = \widetilde M/\Gamma_i$ be the associated finite-sheeted covering space of $M$ and $D_{\Gamma_i}$ the lift of $D$ from $M$ to $M_{\Gamma_i}$. The delocalized eta invariant $\eta_{\langle \pi_{\Gamma_i}(\alpha) \rangle}(D_{\Gamma_i})$ of $D_{\Gamma_i}$ is defined similarly as in $\eqref{eq:delocalizedeta}$, where $\pi_{\Gamma_i}$ is the canonical quotient map from $\Gamma$ to $\Gamma/\Gamma_i$. Suppose $\{\Gamma_i\}$ distinguishes the conjugacy class $\langle \alpha \rangle$ of a non-identity element $\alpha\in \Gamma$ (cf. Definition $\ref{def:distinguish}$). We prove a list of results that answers positively either one or both of the following questions. 1. Does $\displaystyle \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})$ exist? 2. If $\displaystyle \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha )\rangle }(D_{\Gamma_i})$ exists, is the limit equal to $\eta_{\left\langle \alpha \right\rangle }(\widetilde D)$? Here is one of the main results of our paper. \[thm:A\] With the above notation, if the maximal Baum-Connes assembly map for $\Gamma$ is rationally an isomorphism, then the limit $$\lim_{i\to \infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i})$$ stabilizes, that is, $ \exists k>0$ such that $\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}) = \eta_{\langle\pi_{\Gamma_k}(\alpha)\rangle}(D_{\Gamma_k}) $ for all $ i\geqslant k.$ By a theorem of Higson and Kasparov [@MR1821144 Theorem 1.1], the maximal Baum-Connes assembly map is an isomorphism for all a-T-menable groups. We have the following immediate corollary. With the above notation, if $\Gamma$ is a-T-menable, then the limit $$\lim_{i\to \infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i})$$ stabilizes. Note that Theorem $\ref{thm:A}$ and its corollary above only address the first question, that is, only the convergence of $\displaystyle \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})$.
On the other hand, if in addition there exists a smooth dense subalgebra[^5] $\mathcal A$ of the reduced group $C^\ast$-algebra $C_r^\ast(\Gamma)$ of $\Gamma$ such that $\mathbb C\Gamma\subset \mathcal A$ and the trace map[^6] ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }\colon \mathbb C\Gamma \to \mathbb C$ extends continuously to a trace map ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle}\colon \mathcal A \to \mathbb C$, then we have $$\lim_{i\to \infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}) = \eta_{\left\langle \alpha \right\rangle }(\widetilde D).$$ See the discussion at the end of Section $\ref{sec:max}$ for more details. There is an analogue of Theorem $\ref{thm:A}$ using the reduced Baum-Connes conjecture instead of the maximal Baum-Connes conjecture. However, due to the lack of functoriality of reduced group $C^\ast$-algebras, we need to make some additional assumptions, besides the reduced Baum-Connes conjecture. With the above notation, suppose a multiple of $M$ stably bounds (cf. Definition $\ref{def:bound}$). If both the rational reduced Baum-Connes conjecture and the generalized Stolz conjecture (Conjecture $\ref{conj:stolz2}$) hold for $\Gamma$, then the limit $$\lim_{i\to \infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i})$$ stabilizes. Here is another main result of our paper. \[thm:B\] With the above notation, if the spectral gap of $\widetilde D$ at zero is sufficiently large[^7], then we have $$\lim_{i\to \infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}) = \eta_{\left\langle \alpha \right\rangle }(\widetilde D).$$ There are other variants of Theorem $\ref{thm:B}$ above. We refer the reader to Theorem $\ref{thm:conjcontrol}$ and Proposition $\ref{prop:sep}$ for the details. The paper is organized as follows. In Section \[sec:pre\], we review some basic facts about conjugacy separable groups and certain geometric $C^\ast$-algebras. In Section \[sec:deleta\], we review some basics of delocalized eta invariants. In Section \[sec:max\], we prove one of our main results, Theorem $\ref{thm:A}$, and discuss some of its consequences. In Sections $\ref{sec:sc}$ and $\ref{sec:sep}$, we prove Theorem $\ref{thm:B}$ and its variants. Preliminaries {#sec:pre} ============= In this section, we review some basic facts about conjugacy separable groups and certain geometric $C^\ast$-algebras. Conjugacy separable groups {#Sec: Conjugacy separable} -------------------------- We will prove our main approximation results for a particular class of groups, called conjugacy separable groups. In this subsection, we review some basic properties of conjugacy separable groups. In the following, all groups are assumed to be finitely generated, unless otherwise specified. \[conjdis\] Let $\Gamma$ be a finitely generated discrete group. We say that $\gamma \in \Gamma$ is conjugacy distinguished if for any $\beta\in \Gamma$ that is not conjugate to $\gamma$, there exists a finite-index normal subgroup $\Gamma'$ of $\Gamma$ such that the image of $\beta$ in $\Gamma/\Gamma'$ is not conjugate to the image of $\gamma$. If every element in $\Gamma$ is conjugacy distinguished, then we say that $\Gamma$ is conjugacy separable. In other words, we have the following definition of conjugacy separability.
\[def:conjsep\] A finitely generated group $\Gamma$ is conjugacy separable if for any $\gamma_1, \gamma_2\in \Gamma$ that are not conjugate, there exists a finite-index normal subgroup $\Gamma'$ of $\Gamma$ such that the images of $\gamma_1$ and $\gamma_2$ in $\Gamma/\Gamma'$ are not conjugate. For any normal subgroup $\Gamma'$ of $\Gamma$, we denote by $\pi_{\Gamma'}$ the quotient map from $\Gamma$ to $\Gamma/\Gamma'$. \[def:distinguish\] Suppose that $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups of $\Gamma$. For any non-trivial conjugacy class $\langle \alpha\rangle$ of $\Gamma$, we say that $\{\Gamma_i\}$ distinguishes $\langle \alpha\rangle$, if for any finite set $F$ in $\Gamma$ there exists $k\in\mathbb N_+$ such that $$\forall \beta\in F,\ \beta\notin\langle \alpha\rangle\implies \pi_{ \Gamma_i}(\beta)\notin\langle \pi_{\Gamma_i}(\alpha)\rangle$$ for all $i\geqslant k$. If $\alpha\in \Gamma$ is conjugacy distinguished, then such a sequence always exists. More generally, let $\mathfrak N$ be the net of all normal subgroups of $\Gamma$ with finite indices. If $\alpha\in \Gamma$ is conjugacy distinguished in the sense of Definition $\ref{conjdis}$, then $\mathfrak N$ distinguishes $\langle \alpha\rangle$, that is, for any finite set $F\subset \Gamma$, there exists a finite index normal subgroup $\Gamma_F$ of $\Gamma$ such that $$\forall \beta\in F,\ \beta\notin\langle \alpha\rangle\implies \pi_{\Gamma'}(\beta)\notin\langle \pi_{\Gamma'}(\alpha)\rangle$$ for all $\Gamma'\in \mathfrak N$ with $\Gamma'\subseteq \Gamma_F$. Let $\mathbb C\Gamma$ be the group algebra of $\Gamma$ and $\ell^1(\Gamma)$ be the $\ell^1$-completion of $\mathbb C\Gamma$. For any normal subgroup $\Gamma'$ of $\Gamma$, the quotient map $\pi_{\Gamma'}\colon \Gamma \to \Gamma/\Gamma'$ naturally induces an algebra homomorphism $\pi_{\Gamma'}\colon \mathbb C\Gamma\to \mathbb C(\Gamma/\Gamma')$, which extends to a Banach algebra homomorphism $\pi_{\Gamma'} \colon \ell^1(\Gamma) \to \ell^1(\Gamma/\Gamma')$. For any conjugacy class $\langle \gamma \rangle$ of $\Gamma$, let ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \gamma \rangle }\colon \mathbb C\Gamma\to \mathbb C$ be the trace map defined by the formula: $$\sum_{\beta\in \Gamma}a_\beta \beta\mapsto \sum_{\beta\in\left\langle \gamma\right\rangle }a_\beta.$$ The following lemma is obvious. \[lemma:limitoftrace\] If $\langle \alpha\rangle$ is a non-trivial conjugacy class of $\Gamma$ and $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups that distinguishes $\langle \alpha\rangle$, then $$\lim_{i\to\infty}{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(\pi_{\Gamma_i}(f))={\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle}(f)$$ for all $f\in \ell^1(\Gamma)$. Moreover, if $f\in \mathbb C\Gamma$, then the limit on the left-hand side stabilizes, that is, $$\exists k>0 \textup{ such that } {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(\pi_{\Gamma_i}(f))= {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle }(f), \textup{ for all } i\geqslant k.$$ As we will mainly work with integral operators whose associated Schwartz kernels are smooth, let us fix some notation further and restate the above lemma in the context of integral operators. Let $M$ be a closed manifold and $\widetilde M$ be the universal covering space of $M$. Denote the fundamental group $\pi_1(M)$ of $M$ by $\Gamma$.
Suppose $T$ is a $\Gamma$-equivariant bounded smooth function on $\widetilde M\times\widetilde M$, that is, $$T(\gamma x,\gamma y)=T(x,y)$$ for all $x, y\in \widetilde M$ and $\gamma \in \Gamma$. We say that $T$ has finite propagation if there exists a constant $d>0$ such that $${\mathrm{dist}}(x,y)>d\implies T(x,y)=0,$$ where ${\mathrm{dist}}(x, y)$ is the distance between $x$ and $y$ in $\widetilde M$. In this case, we define the propagation of $T$ to be the infimum of such $d$. \[def:L1\] A $\Gamma$-equivariant bounded function $T$ on $\widetilde M\times\widetilde M$ is said to be $\ell^1$-summable if $$\|T\|_{\ell^1} \coloneqq \sup_{x, y\in\mathcal F}\sum_{\gamma \in \Gamma}|T(x,\gamma y)|<\infty,$$ where $\mathcal F$ is a fundamental domain of $\widetilde M $ under the action of $\Gamma$. We shall call $\|T\|_{\ell^1}$ the $\ell^1$-norm of $T$ from now on. Clearly, every $T$ with finite propagation is $\ell^1$-summable. If a $\Gamma$-equivariant bounded smooth function $T\in C^\infty(\widetilde M \times \widetilde M)$ is $\ell^1$-summable, then it defines a bounded operator on $L^2(\widetilde M)$ by the formula: $$\label{eq:int} f\mapsto \int_{\widetilde M}T(x,y)f(y)dy$$ for all $ f\in L^2(\widetilde M)$. For notational simplicity, we shall still denote this operator by $T$. Now suppose that $\Gamma'$ is a finite-index normal subgroup of $\Gamma$. Let $M_{\Gamma'} = \widetilde M/\Gamma'$ be the quotient space of $\widetilde M$ by the action of $\Gamma'$. In particular, $M_{\Gamma'}$ is a finite-sheeted covering space of $M$ with the deck transformation group being $\Gamma/\Gamma'$. Let $\pi_{\Gamma'}$ be the quotient map from $\widetilde M$ to $M_{\Gamma'}$. Any $\Gamma$-equivariant bounded smooth function $T\in C^\infty(\widetilde M \times \widetilde M)$ that is $\ell^1$-summable naturally descends to a smooth function $\pi_{\Gamma'}(T)$ on $M_{\Gamma'}\times M_{\Gamma'}$ by the formula: $$\pi_{\Gamma'}(T)(\pi_{\Gamma'}(x),\pi_{\Gamma'}(y)):=\sum_{\gamma\in {\Gamma'}}T(x,\gamma y)$$ for all $(\pi_{\Gamma'}(x),\pi_{\Gamma'}(y))\in M_{\Gamma'}\times M_{\Gamma'}$. Clearly, $\pi_{\Gamma'}(T)$ is a $\Gamma/{\Gamma'}$-equivariant smooth function on $M_{\Gamma'}\times M_{\Gamma'}$ and, similar to the formula in $\eqref{eq:int}$, defines a bounded operator on $L^2(M_{\Gamma'})$. For any non-trivial conjugacy class $\langle \alpha\rangle$ of $\Gamma$, we define the following trace map: $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle}(T)=\sum_{\gamma\in\langle \alpha\rangle}\int_{\mathcal F}T(x,\gamma x)dx,$$ for all $\Gamma$-equivariant $\ell^1$-summable smooth functions $T\in C^\infty(\widetilde M \times \widetilde M)$, where $\mathcal F$ is a fundamental domain of $\widetilde M $ under the action of $\Gamma$. More generally, for each finite-index normal subgroup ${\Gamma'}$ of $\Gamma$, a similar trace map is defined for $\Gamma/{\Gamma'}$-equivariant smooth functions on $M_{\Gamma'}\times M_{\Gamma'}$. With the above notation, Lemma $\ref{lemma:limitoftrace}$ can be restated as follows. \[lemma:limitkerneltrace\] Suppose $\langle \alpha\rangle$ is a non-trivial conjugacy class of $\Gamma$ and $\{{\Gamma}_i\}$ is a sequence of finite-index normal subgroups that distinguishes $\langle \alpha\rangle$. Let $T$ be a $\Gamma$-equivariant $\ell^1$-summable bounded smooth function on $\widetilde M\times\widetilde M$.
Then we have $$\lim_{i\to\infty}{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{{\Gamma}_i}(\alpha)\rangle }(\pi_{{\Gamma}_i}(T))={\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle}(T).$$ Moreover, if $T$ has finite propagation, then the limit on the left-hand side stabilizes, that is, $ \exists k>0$ such that ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{{\Gamma}_i}(\alpha)\rangle }(\pi_{{\Gamma}_i}(T))= {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle}(T)$, for all $ i\geqslant k.$ Geometric $C^*$-algebras ------------------------ In this subsection, we review the definitions of some geometric $C^*$-algebras, cf. [@Xiepos; @Yulocalization] for more details. Let $X$ be a proper metric space, i.e., every closed ball in $X$ is compact. An $X$-module is a separable Hilbert space equipped with a $*$-representation of $C_0(X)$. An $X$-module is called non-degenerate if the $*$-representation of $C_0(X)$ is non-degenerate. An $X$-module is called standard if no nonzero function in $C_0(X)$ acts as a compact operator. In addition, we assume that a discrete group $\Gamma$ acts on $X$ properly and cocompactly by isometries. Assume $H_X$ is an $X$-module equipped with a covariant unitary representation of $\Gamma$. If we denote by $\varphi$ and $\pi$ the representations of $C_0(X)$ and $\Gamma$ respectively, this means $$\pi(\gamma)(\varphi(f)v)=\varphi(\gamma^*f)(\pi(\gamma)v),$$ where $f\in C_0(X),\gamma\in \Gamma,v\in H_X$ and $\gamma^*f(x)=f(\gamma^{-1}x)$. In this case, we call $(H_X,\Gamma,\varphi)$ a covariant system. A covariant system $(H_X,\Gamma,\varphi)$ is called admissible if 1. $H_X$ is a non-degenerate and standard $X$-module; 2. for each $x\in X$, the stabilizer group $\Gamma_x$ acts regularly in the sense that the action is isomorphic to the action of $\Gamma_x$ on $l^2(\Gamma_x)\otimes H$ for some infinite dimensional Hilbert space $H$. Here $\Gamma_x$ acts on $l^2(\Gamma_x)$ by translations and acts on $H$ trivially. We remark that for each locally compact metric space $X$ with a proper, cocompact and isometric action of $\Gamma$, an admissible covariant system $(H_X,\Gamma,\varphi)$ always exists. In particular, if $\Gamma$ acts on $X$ freely, then condition (2) above holds automatically. Let $(H_X,\Gamma,\varphi)$ be a covariant system and $T$ a $\Gamma$-equivariant bounded linear operator acting on $H_X$. - The propagation of $T$ is defined to be $$\sup\{d(x,y):(x,y)\in supp(T)\},$$ where $supp(T)$ is the complement (in $X\times X$) of points $(x,y)\in X\times X$ for which there exist $f,g\in C_0(X)$ such that $gTf=0$ and $f(x)\ne 0,g(y)\ne 0$; - $T$ is said to be locally compact if $fT$ and $Tf$ are compact for all $f\in C_0(X)$. Let $X$ be a locally compact metric space with a proper and cocompact isometric action of $\Gamma$. Let $(H_X,\Gamma,\varphi)$ be an admissible covariant system. We denote by $\mathbb C[X]^\Gamma$ the $*$-algebra of all $\Gamma$-equivariant locally compact bounded operators acting on $H_X$ with finite propagation. We define the equivariant Roe algebra $C^*(X)^\Gamma$ to be the completion of $\mathbb C[X]^\Gamma$ under the operator norm. Indeed, $C^*(X)^\Gamma$ is isomorphic to $C_r^*(\Gamma)\otimes \mathcal K$, the $C^*$-algebraic tensor product of the reduced group $C^*$-algebra of $\Gamma$ and the algebra of compact operators.
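Before moving on to localization algebras, it may help to note where the kernel operators from Definition $\ref{def:L1}$ fit into this picture; the following remark is only meant to fix ideas and is not used in this exact form later. Take $X = \widetilde M$ with $\Gamma$ acting freely by deck transformations and $H_X = L^2(\widetilde M)$. If $T\in C^\infty(\widetilde M\times\widetilde M)$ is $\Gamma$-equivariant and has finite propagation, then for every compactly supported function $f$ the operators $fT$ and $Tf$ have smooth, compactly supported Schwartz kernels $$f(x)T(x,y) \quad\textup{and}\quad T(x,y)f(y),$$ hence are Hilbert-Schmidt and in particular compact; approximating an arbitrary $f\in C_0(\widetilde M)$ uniformly by compactly supported functions then shows that $fT$ and $Tf$ are compact for all $f\in C_0(\widetilde M)$. Since the propagation of $T$ as an operator is bounded by the propagation of its kernel, such a $T$ defines an element of $\mathbb C[\widetilde M]^\Gamma$, and hence of $C^*(\widetilde M)^\Gamma$.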
We define the localization algebra $C^*_L(X)^\Gamma$ to be the $C^*$-algebra generated by all uniformly bounded and uniformly norm-continuous functions $f:[0,\infty)\to C^\ast(X)^\Gamma$ such that the propagation of $f(t)$ goes to zero as $t$ goes to infinity. Define $C^*_{L,0}(X)^\Gamma$ to be the kernel of the evaluation map $${\mathrm{ev}}:C^*_L(X)^\Gamma\to C^*(X)^\Gamma,\ {\mathrm{ev}}(f)=f(0).$$ Now let us also review the construction of higher rho invariants for invertible differential operators. For simplicity, let us focus on the odd dimensional case. Suppose $M$ is a closed manifold of odd dimension. Let $M_\Gamma$ be the regular covering space of $M$ whose deck transformation group is $\Gamma$. Suppose $D$ is a self-adjoint elliptic differential operator on $M$ and $\widetilde D$ is the lift of $D$ to $M_\Gamma$. If $\widetilde D$ is invertible, then its higher rho invariant is defined as follows. With the same notation as above, the higher rho invariant $\rho(\widetilde D)$ of an invertible operator $\widetilde D$ is defined to be $$\rho(\widetilde D):=[e^{2\pi i\frac{\chi(t\widetilde D)+1}{2}}]\in K_1(C^*_{L,0}(M_\Gamma)^\Gamma),$$ where $\chi$ (called a normalizing function) is a continuous odd function such that $\lim_{x\to \pm \infty} \chi(x) = \pm 1$. The above discussion has an obvious maximal analogue. For an operator $T\in\mathbb{C}[X]^\Gamma$, its *maximal norm* is $$\|T\|_{\textnormal{max}}\coloneqq\sup_{\varphi}\left\{\|\varphi(T) \| : \varphi\colon \mathbb{C}[X]^\Gamma\rightarrow\mathcal{B}(H)\textrm{ is a $*$-representation}\right\}.$$ The maximal equivariant Roe algebra $C^*_{\max}(X)^\Gamma$ is defined to be the completion of $\mathbb{C}[X]^\Gamma$ with respect to $\|\cdot\|_{\textnormal{max}}$. Similarly, we define 1. the maximal localization algebra $C^*_{L, \max}(X)^\Gamma$ to be the $C^*$-algebra generated by all uniformly bounded and uniformly norm-continuous functions $f:[0,\infty)\to C^\ast_{\max}(X)^\Gamma$ such that the propagation of $f(t)$ goes to zero as $t$ goes to infinity. 2. and $C^*_{L,0, \max}(X)^\Gamma$ to be the kernel of the evaluation map $${\mathrm{ev}}:C^*_{L, \max}(X)^\Gamma\to C^*_{\max}(X)^\Gamma,\ {\mathrm{ev}}(f)=f(0).$$ Now suppose $M$ is a closed spin manifold. Assume that $M$ is endowed with a Riemannian metric $g$ of positive scalar curvature. Let $M_\Gamma$ be the regular covering space of $M$ whose deck transformation group is $\Gamma$. Suppose $D$ is the associated Dirac operator on $M$ and $\widetilde D$ is the lift of $D$ to $M_\Gamma$. In this case, we can define the maximal higher rho invariant of $\widetilde D$ as follows. The maximal higher rho invariant $\rho_{\max}(\widetilde D)$ of $\widetilde D$ is defined to be $$\rho_{\max}(\widetilde D):=[e^{2\pi i\frac{\chi(t\widetilde D)+1}{2}}]\in K_1(C^*_{L,0, \max}(M_\Gamma)^\Gamma).$$ Here $\chi$ is again a normalizing function, but the functional calculus for defining $\chi(t\widetilde D)$ is performed under the maximal norm instead. See for example [@Guo:2019aa Section 3] for a discussion of such a functional calculus.
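Before turning to delocalized eta invariants, let us record a concrete choice of normalizing function as a small illustration. A standard example is $$\chi(x) = \frac{2}{\pi}\arctan(x),$$ which is continuous, odd, and satisfies $\chi(x)\to \pm 1$ as $x\to\pm\infty$. It is standard that the classes $\rho(\widetilde D)$ and $\rho_{\max}(\widetilde D)$ above do not depend on the particular choice of normalizing function: any two normalizing functions differ by a function vanishing at $\pm\infty$, and the resulting paths of unitaries are homotopic.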
Let $\widetilde M$ be a $\Gamma$-regular covering space of $M$. Suppose $D$ is the associated Dirac operator on $M$ and $\widetilde D$ is the lift of $D$ to $\widetilde M$. \[def:deloceta\] For any conjugacy class $\left\langle\alpha\right\rangle$ of $\Gamma$, Lott’s *delocalized eta invariant* $\eta_{\left\langle\alpha\right\rangle }(\widetilde D)$ of $\widetilde D$ is defined to be $$\label{delocalizedeta} \eta_{\left\langle\alpha\right\rangle }(\widetilde D)\coloneqq \frac{2}{\sqrt\pi}\int_{0}^\infty {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\left\langle\alpha\right\rangle }(\widetilde De^{-t^2\widetilde D^2})dt$$ whenever the integral converges. Here $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\left\langle\alpha\right\rangle }(\widetilde De^{-t^2\widetilde D^2})= \sum_{\gamma\in\langle\alpha\rangle}\int_\mathcal F {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}(k_t(x,\gamma x))dx,$$ where $k_t(x, y)$ is the corresponding Schwarts kernel of the operator $\widetilde De^{-t^2\widetilde D^2}$ and $\mathcal F$ is a fundamental domain of $\widetilde M$ under the action of $\Gamma$. It is known that the integral formula $\eqref{delocalizedeta}$ for $\eta_{\left\langle\alpha\right\rangle }(\widetilde D)$ converges if $\widetilde D$ is invertible and any one of the following conditions is satisfied: 1. the scalar curvature of $M$ is sufficiently large (see [@CWXY Definition 3.2] for the precise definition of “sufficiently large"); 2. or there exists a smooth dense subalgebra[^8] of $C^*_r(\Gamma)$ onto which the trace map ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha\rangle }$ extends continuously (cf. [@Lott Section 4]); 3. or $\langle\alpha\rangle$ has subexponential growth (cf. [@CWXY Corollary 3.4]). In general, it is still an open question when the integral in $\eqref{delocalizedeta}$ converges for invertible operators. Now suppose that $\Gamma'$ is a finite-index normal subgroup of $\Gamma$. As before, let $M_{\Gamma'} = \widetilde M/\Gamma'$ be the associated finite-sheeted covering space of $M$. Similarly, let $D_{\Gamma'}$ be the lift of $D$ to $M_{\Gamma'}$, and define the delocalized eta invariant $\eta_{\langle \pi_{\Gamma'}(\alpha)\rangle }(D_{\Gamma'})$ of $D_{\Gamma'}$ to be $$\label{eq:fineta} \eta_{\langle \pi_{\Gamma'}(\alpha)\rangle }(D_{\Gamma'})\coloneqq \frac{2}{\sqrt\pi}\int_{0}^\infty {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma'}(\alpha)\rangle }(D_{\Gamma'}e^{-t^2 D_{\Gamma'}^2})dt,$$ where $\alpha\in \Gamma$ and $\langle \pi_{\Gamma'}(\alpha)\rangle $ is conjugacy class of $\pi_{\Gamma'}(\alpha)$ in $\Gamma/\Gamma'$. As $M_{\Gamma'}$ is compact, it is not difficult to verify that the integral in $\eqref{eq:fineta}$ always converges absolutely. The above discussion naturally leads to the following questions. \[question\] Given a non-identity element $\alpha \in \Gamma$, suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups that distinguishes the conjugacy class $\langle\alpha\rangle$. 1. \[question1\] When does $\displaystyle \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})$ exist? 2. 
\[question2\] If $\eta_{\left\langle\alpha\right\rangle }(\widetilde D)$ is well-defined and $\displaystyle \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})$ exists, when do we have $$\label{eq:appr} \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})=\eta_{\left\langle\alpha\right\rangle }(\widetilde D)?$$ Maximal higher rho invariants and their functoriality {#sec:max} ===================================================== In this section, we use the functoriality of higher rho invariants to give some sufficient conditions under which the answer to part (I) of Question $\ref{question}$ is positive. Before we get into the technical details, here is a special case which showcases the main results of this section. \[prop:atmenable\] With the same notation as in Question $\ref{question}$, if $\Gamma$ is a-T-menable and $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups that distinguishes the conjugacy class $\langle\alpha\rangle$ for a non-identity element $\alpha\in \Gamma$, then the limit $$\displaystyle \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})$$ stabilizes, that is, $ \exists k>0$ such that $\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}) = \eta_{\langle\pi_{\Gamma_k}(\alpha)\rangle}(D_{\Gamma_k}) $, for all $ i\geqslant k.$ In particular, $\displaystyle \lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i}) $ exists. This is a consequence of Proposition $\ref{prop:max}$ below and a theorem of Higson and Kasparov [@MR1821144 Theorem 1.1]. Given a finitely presented discrete group $\Gamma$, let $\underline{E}\Gamma$ be the universal $\Gamma$-space for proper $\Gamma$-actions. The Baum-Connes conjecture [@PBAC88] can be stated as follows. The following map $${\mathrm{ev}}_*\colon K_i(C^*_L(\underline{E}\Gamma)^\Gamma)\to K_i(C^*(\underline{E}\Gamma)^\Gamma)$$ is an isomorphism. Although this was not how the Baum-Connes conjecture was originally stated, the above formulation is equivalent to the original Baum-Connes conjecture, after one makes the following natural identifications: $$K_i(C^*_L(\underline{E}\Gamma)^\Gamma)\cong K_i^\Gamma(\underline{E}\Gamma) \textup{ and } K_i(C^*(\underline{E}\Gamma)^\Gamma)\cong K_i(C_r^*(\Gamma)).$$ Under this notation, we usually write the map $${\mathrm{ev}}_\ast\colon K_i(C^*_L(\underline{E}\Gamma)^\Gamma)\to K_i(C^*(\underline{E}\Gamma)^\Gamma)$$ as follows: $$\mu\colon K_i^\Gamma(\underline{E}\Gamma)\to K_i(C_r^*(\Gamma))$$ and call it the Baum-Connes assembly map. Similarly, there is a maximal version of the Baum-Connes assembly map: $$\mu_{\max} \colon K_i^\Gamma(\underline{E}\Gamma)\to K_i(C_{\max}^*(\Gamma)).$$ The maximal Baum-Connes assembly map $\mu_{\max}$ is not an isomorphism in general. For example, $\mu_{\max}$ fails to be surjective for infinite groups with property (T). Before we discuss the functoriality of higher rho invariants, let us recall the functoriality of higher indices. More precisely, let $D$ be a Dirac-type operator on a closed $n$-dimensional manifold $X$. Consider the following commutative diagram $$\xymatrixrowsep{1pc} \xymatrix{ & B\Gamma_1 \ar[dd]^{B\varphi} \\ X \ar[dr]_{f_2} \ar[ur]^{f_1} & \\ & B\Gamma_2 }$$ where $f_1$, $f_2$ are continuous maps and $B\varphi$ is a continuous map from $B\Gamma_1$ to $B\Gamma_2$ induced by a group homomorphism $\varphi\colon \Gamma_1\to \Gamma_2$. Let $X_{\Gamma_1}$ (resp. $X_{\Gamma_2}$) be the $\Gamma_1$ (resp.
$\Gamma_2$) regular covering space of $X$ induced by the map $f_1$ (resp. $f_2$), and $D_{X_{\Gamma_1}}$ (resp. $D_{X_{\Gamma_2}}$) be the lift of $D$ to $X_{\Gamma_1}$ (resp. $X_{\Gamma_2}$). We have the following functoriality of the higher indices: $$\varphi_\ast({\textup{\,Ind}}_{\max}(D_{X_{\Gamma_1}})) = {\textup{\,Ind}}_{\max}(D_{X_{\Gamma_2}}) \textup{ in } K_n(C_{\max}^\ast(\Gamma_2)),$$ where $C_{\max}^\ast(\Gamma_i)$ is the maximal group $C^\ast$-algebra of $\Gamma_i$, the notation ${\textup{\,Ind}}_{\max}$ stands for higher index in the maximal group $C^\ast$-algebra, and $\varphi_\ast\colon K_n(C_{\max}^\ast(\Gamma_1))\to K_n(C_{\max}^\ast(\Gamma_2))$ is the morphism naturally induced by $\varphi$. Now let us consider the functoriality of higher rho invariants. Following the same notation as above, assume in addition that $X$ is a closed spin manifold endowed with a Riemannian metric of positive scalar curvature. In this case, the maximal higher rho invariants $\rho_{\max}(D_{X_{\Gamma_1}})$ of $D_{X_{\Gamma_1}}$ and $\rho_{\max}(D_{X_{\Gamma_2}})$ of $D_{X_{\Gamma_2}}$ are defined. Let $E\Gamma_1$ (resp. $E\Gamma_2$) be the universal $\Gamma_1$-space (resp. $\Gamma_2$-space) for free $\Gamma_1$-actions (resp. $\Gamma_2$-actions). Denote by $\Phi$ the equivariant map $ X_{\Gamma_1}\to X_{\Gamma_2}$ induced by $\varphi\colon \Gamma_1 \to \Gamma_2$, which in turn induces a morphism $$\Phi_\ast \colon K_n(C_{L,0, \max}^\ast(X_{\Gamma_1})^{\Gamma_1}) \to K_n(C_{L,0, \max}^\ast(X_{\Gamma_2})^{\Gamma_2}).$$ By [@guoxieyu], the maximal higher rho invariants are functorial: $$\Phi_\ast(\rho_{\max}(D_{X_{\Gamma_1}})) = \rho_{\max}(D_{X_{\Gamma_2}})$$ in $K_n(C_{L,0, \max}^\ast(X_{\Gamma_2})^{\Gamma_2})$. Now suppose $M$ is an odd-dimensional closed spin manifold endowed with a positive scalar curvature metric and $\Gamma$ is a finitely generated discrete group. Let $\widetilde M$ be a $\Gamma$-regular covering space of $M$ and $\widetilde D$ be the Dirac operator lifted from $M$. For each finite-index normal subgroup $\Gamma'$ of $\Gamma$, let $M_{\Gamma'} = \widetilde M/\Gamma'$ be the associated finite-sheeted covering space of $M$. Denote by $D_{\Gamma'}$ the Dirac operator on $M_{\Gamma'}$ lifted from $M$. \[prop:max\] With the above notation, given a non-identity element $\alpha \in \Gamma$, suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups that distinguishes the conjugacy class $\langle\alpha\rangle$. If the maximal Baum-Connes assembly map for $\Gamma$ is rationally an isomorphism, then $$\lim_{i\to\infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i})$$ stabilizes, that is, $ \exists k>0$ such that $\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}) = \eta_{\langle\pi_{\Gamma_k}(\alpha)\rangle}(D_{\Gamma_k}) $, for all $ i\geqslant k.$ We have the short exact sequence of $C^\ast$-algebras: $$0\to C^*_{L,0, \max}(E\Gamma)^\Gamma \to C^*_{L, \max}(E\Gamma)^\Gamma \to C^*_{\max}(E\Gamma)^\Gamma \to 0$$ which induces the following long exact sequence in $K$-theory: $$\label{cd:longexact} \scalebox{0.82}{ $\begin{CD} K_0(C^*_{L,0, \max}(E\Gamma)^\Gamma)\otimes\mathbb Q @>>>K_0(C^*_{L,\max }(E\Gamma)^\Gamma)\otimes\mathbb Q@>{\mu_0}>>K_0(C_{\max}^*(\Gamma))\otimes\mathbb Q\\ @AAA@.
@VV\partial V \\ K_1(C^*_{\max}(\Gamma))\otimes\mathbb Q @<{\mu_1}<<K_1(C^*_{L,\max }(E\Gamma)^\Gamma)\otimes\mathbb Q@<<<K_1(C^*_{L,0,\max }(E\Gamma)^\Gamma)\otimes\mathbb Q \end{CD} $}$$ Note that $K_i(C^*_{L, \max}(E\Gamma)^\Gamma)$ is naturally isomorphic to $K_i^\Gamma(E\Gamma)$. Similarly, we have $K_i(C^*_{L, \max}(\underline{E}\Gamma)^\Gamma)\cong K_i^\Gamma(\underline{E}\Gamma).$ The morphism $K_i^\Gamma(E\Gamma)\to K_i^\Gamma(\underline{E}\Gamma)$ induced by the inclusion from $E\Gamma$ to $\underline{E}\Gamma$ is rationally injective (cf: [@BaumConnesHigson Section 7]). It follows that if the rational maximal Baum-Connes conjecture holds for $\Gamma$, that is, the maximal Baum-Connes assembly map $\mu_{\max}\colon K_i(C_{L, \max}^\ast(\underline{E}\Gamma)^\Gamma)\otimes \mathbb Q \to K_i(C_{\max}^\ast(\Gamma))\otimes \mathbb Q$ is an isomorphism, then the maps $\mu_i$ in the above commutative diagram are injective and the map $\partial $ is surjective. In particular, for the higher rho invariant $\rho(\widetilde D) $ of $\widetilde D$, there exists $$[p]\in K_0(C^*_{\max}(\underline{E}\Gamma)^\Gamma)\otimes \mathbb Q \cong K_0(C^*_{\max}(\Gamma))\otimes \mathbb Q$$ such that $\partial [p]=\rho(\widetilde D)$ rationally, that is, $\partial [p]=\lambda \cdot \rho(\widetilde D)$ for some $\lambda \in \mathbb Q$. In particular, in this case, we can assume $p$ is an idempotent with finite propagation, since $[p]$ is an element of $K_0(C^*_{\max}(\underline{E}\Gamma)^\Gamma)$. Let $\Psi_i$ be the canonical quotient map from $ \widetilde M$ to $M_{\Gamma_i} = \widetilde M/{\Gamma_i}$ and $$(\Psi_i)_\ast\colon K_1(C_{L,0, \max}^\ast(\widetilde M)^\Gamma) \to K_1(C_{L,0, \max}^\ast(M_{\Gamma_i})^{\Gamma/\Gamma_i})$$ the corresponding morphism induced by $\Psi_i$. By [@guoxieyu], we have $$(\Psi_i)_\ast (\rho_{\max}(\widetilde D)) = \rho(D_{\Gamma_i}) \textup{ in } K_1(C_{L,0, \max}^\ast(M_{\Gamma_i})^{\Gamma/\Gamma_i}).$$ By passing to the universal spaces, we have $$(\Psi_i)_\ast (\rho_{\max}(\widetilde D)) = \rho(D_{\Gamma_i}) \textup{ in } K_1(C^*_{L,0, \max}(E(\Gamma/\Gamma_i))^{\Gamma/\Gamma_i}).$$ Consider the following commutative diagram of long exact sequences[^9]: $$\label{cd:longexact2} \begin{gathered} \scalebox{0.8}{ \xymatrixcolsep{1pc} \xymatrix{ K_0(C^*_{L,\max }(E\Gamma)^\Gamma)\otimes\mathbb Q \ar[d] \ar[r] & K_0(C_{\max}^*(\Gamma))\otimes\mathbb Q \ar[r]^-{\partial} \ar[d]^{(\pi_{\Gamma_i})_\ast} & K_1(C^*_{L,0, \max}(E\Gamma)^\Gamma)\otimes\mathbb Q \ar[d]^{(\Psi_i)_\ast} \\ K_0(C^*_{L }(E(\Gamma/\Gamma_i))^{\Gamma/\Gamma_i})\otimes\mathbb Q \ar[r] & K_0(C^*_r(\Gamma/\Gamma_i))\otimes\mathbb Q \ar[r]^-{\partial} & K_1(C^*_{L,0}(E(\Gamma/\Gamma_i))^{\Gamma/\Gamma_i})\otimes\mathbb Q } } \end{gathered}$$ where $(\pi_{\Gamma_i})_\ast\colon K_0(C_{\max}^*(\Gamma))\to K_0(C_r^*(\Gamma/\Gamma_i))$ is the natural morphism induced by the canonical quotient map $\pi_{\Gamma_i}\colon \Gamma\to \Gamma/\Gamma_i$. Let us denote $(\pi_{\Gamma_i})_\ast(p)$ by $p_i$. 
It follows from the commutative diagram above that $$\label{eq:aps} \partial(p_i) = \rho(D_{\Gamma_i}).$$ By [@Xie Lemma 3.9 & Theorem 4.3], for each $\Gamma_i$, there exists a determinant map $$\tau_i\colon K_1(C^*_{L,0}(E(\Gamma/\Gamma_i))^{\Gamma/{\Gamma_i}}) \to \mathbb C$$ such that $$\frac{1}{2}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i}) = -\tau_{i}(\rho(D_{\Gamma_i}))= {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(p_i).$$ Since the idempotent $p$ has finite propagation, it follows from Lemma \[lemma:limitoftrace\] that $$\lim_{i \to \infty }\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})=2\lim_{i \to \infty}{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(p_i)=2{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha\rangle}(p),$$ and the limit stabilizes. 1. Note that Proposition $\ref{prop:max}$ only answers part (I) of Question $\ref{question}$. Part (II) of Question $\ref{question}$ is still open, even under the assumption that the maximal Baum-Connes conjecture holds for $\Gamma$. 2. Although Proposition $\ref{prop:max}$ assumes that the maximal Baum-Connes conjecture holds for $\Gamma$, it is clear from the proof that it suffices to assume $\rho_{\max}(\widetilde D)$ is rationally in the image of the composition of the following maps: $$K_0^\Gamma(\underline{E}\Gamma)\to K_0(C_{\max}^\ast(\Gamma)) \xrightarrow{\ \partial \ } K_1(C^*_{L,0,\max }(E\Gamma)^\Gamma).$$ By a theorem of Higson and Kasparov [@MR1821144 Theorem 1.1], the maximal Baum-Connes conjecture holds for all a-T-menable groups. Together with Proposition $\ref{prop:max}$ above, this proves Proposition $\ref{prop:atmenable}$ at the beginning of the section. As mentioned above, the maximal Baum-Connes assembly map $\mu_{\max}$ fails to be an isomorphism in general. For example, $\mu_{\max}$ fails to be surjective for infinite groups with property (T). In contrast, at the time of writing there is no known counterexample to the Baum-Connes conjecture. In particular, the Baum-Connes conjecture is known to hold for all hyperbolic groups [@MR2874956; @MR1914618], many of which have property (T). For this reason, we shall now investigate Question $\ref{question}$, in particular, the convergence of $$\lim_{i\to\infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i})$$ when the group $\Gamma$ satisfies the Baum-Connes conjecture. One of the first difficulties we face is that reduced group $C^\ast$-algebras are not functorial with respect to group homomorphisms in general. As a result, the functoriality of higher rho invariants is, a priori, lost in the reduced $C^\ast$-algebra setting. Note that a key step (cf. Equation $\eqref{eq:aps}$) in the proof of Proposition $\ref{prop:max}$ is the existence of a “universal” idempotent $p$ with finite propagation such that $$\partial(p_i) = \rho(D_{\Gamma_i}),$$ where $p_i = (\pi_{\Gamma_i})_\ast(p)$. We shall follow a similar strategy in the reduced case. It turns out that the existence of such a “universal” idempotent in the reduced case is closely related to a conjecture of Stolz on positive scalar curvature metrics, which we shall review in the following. Given a topological space $Y$, let $R^{\textup{spin}}_n(Y)$ be the following bordism group of triples $(L, f, h)$, where $L$ is an $n$-dimensional compact spin manifold (possibly with boundary), $f: L\to Y$ is a continuous map, and $h$ is a positive scalar curvature metric on the boundary $\partial L$.
Two triples $(L_1, f_1, h_1)$ and $(L_2, f_2, h_2)$ are bordant if 1. there is a bordism $(V, F, H)$ between $(\partial L_1, f_1, h_1)$ and $(\partial L_2, f_2, h_2)$ such that $H$ is a positive scalar curvature metric on $V$ with product structure near $\partial L_i$ and $H|_{\partial L_i} = h_i$, and the restriction of the map $F\colon V \to Y$ to $\partial L_i$ is $f_i$; 2. and the closed spin manifold $L_1\cup_{\partial L_1} V \cup_{\partial L_2} L_2$ (obtained by gluing $L_1, V$ and $L_2$ along their common boundaries) is the boundary of a spin manifold $W$ with a map $E: W\to Y$ such that $E|_{ L_i} = f_i$ and $E|_{V} = F$. The above definition has the following obvious analogue for the case of proper actions. Let $X$ be a proper metric space equipped with a proper and cocompact isometric action of a discrete group $\Gamma$. We denote by $R^{{\textup{spin}}}_n(X)^\Gamma$ the set of bordism classes of triples $(L, f, h)$, where $L$ is an $n$-dimensional complete spin manifold equipped with a proper and cocompact isometric action of $\Gamma$, the map $f: L\to X$ is a $\Gamma$-equivariant continuous map and $h$ is a $\Gamma$-invariant positive scalar curvature metric on $\partial L$. Here the bordism equivalence relation is defined similarly to the non-equivariant case above. If the action of $\Gamma$ on $X$ is free and proper, then it follows by definition that $$R_n^{\textup{spin}}(X)^\Gamma \cong R_n^{\textup{spin}}(X/\Gamma).$$ Suppose $(L, f, h)$ is an element in $R_n^{\textup{spin}}(E\Gamma)^\Gamma \cong R_n^{\textup{spin}}(B\Gamma)$, where $B\Gamma = E\Gamma/\Gamma$ is the classifying space for free $\Gamma$-actions. Let $ L_\Gamma$ be the $\Gamma$-covering space of $L$ induced by the map $f\colon L\to B\Gamma$ and $D_{L_\Gamma}$ be the associated Dirac operator. Due to the positive scalar curvature metric $h$ on $\partial L$, the $\Gamma$-equivariant operator $D_{L_\Gamma}$ has a well-defined higher index class ${\textup{\,Ind}}(D_{L_\Gamma})$ in $K_n(C^\ast_r(\Gamma))$, cf. [@Roe Proposition 3.11] [@MR3439130]. By the relative higher index theorem [@UB95; @MR3122162], we have the following well-defined index map $${\textup{\,Ind}}\colon R^{{\textup{spin}}}_n(B\Gamma) \to KO_n(C^\ast_r(\Gamma; \mathbb R)), \quad (L, f, h) \mapsto {\textup{\,Ind}}(D_{L_\Gamma}),$$ where $C_r^\ast(\Gamma; \mathbb R)$ is the reduced group $C^\ast$-algebra of $\Gamma$ with real coefficients. Now let $\mathfrak B$ be the Bott manifold, a simply connected spin manifold of dimension $8$ with $\widehat A(\mathfrak B) =1$. This manifold is not unique, but any choice will work for the following discussion. To make the discussion below more transparent, let us choose a $\mathfrak B$ that is equipped with a scalar-flat metric, that is, a metric whose scalar curvature vanishes identically. The fact that such a choice exists follows, for example, from the work of Joyce [@MR1383960 Section 6]. Let $(L, f, h)$ be an element of $ R^{{\textup{spin}}}_n(B\Gamma)$, that is, $L$ is an $n$-dimensional spin manifold whose boundary $\partial L$ carries a positive scalar curvature metric $h$, together with a map $f\colon L \to B\Gamma$. Taking direct product with $k$ copies of $\mathfrak B$ produces an element $(L', f', h')$ in $ R^{{\textup{spin}}}_{n+8k}(B\Gamma)$, where $L' = L\times \mathfrak B\times \cdots \times \mathfrak B$, $f' = f\circ p$ with the map $p$ being the projection from $L'$ to $ L$, and $h'$ is the product metric of $h$ with the Riemannian metric on $\mathfrak B$.
By our choice of $\mathfrak B$ above, the Riemannian metric $h'$ also has positive scalar curvature since $h$ does. Define $R^{{\textup{spin}}}_n(B\Gamma)[\mathfrak B^{-1}] $ to be the direct limit of the following directed system: $$R^{{\textup{spin}}}_n(B\Gamma)\xrightarrow{\times \mathfrak B} R^{{\textup{spin}}}_{n+8}(B\Gamma)\xrightarrow{\times \mathfrak B} R^{{\textup{spin}}}_{n+16}(B\Gamma) \to \cdots.$$ Since the higher index class ${\textup{\,Ind}}(D_{L_\Gamma})$ associated to $(L, f, h)$ is invariant under taking direct product with $\mathfrak B$, it follows that the above index map induces the following well-defined index map: $$\theta\colon R^{{\textup{spin}}}_n(B\Gamma)[\mathfrak B^{-1}] \to KO_n(C^\ast_r(\Gamma; \mathbb R)), \quad (L, f, h) \mapsto {\textup{\,Ind}}(D_{L_\Gamma}).$$ \[conj:stolz\] The index map $$\theta\colon R^{{\textup{spin}}}_n(B\Gamma)[\mathfrak B^{-1}] \to KO_n(C^\ast_r(\Gamma; \mathbb R))$$ is an isomorphism. Similarly, if one works with the universal space $\underline{E}\Gamma$ for proper $\Gamma$-actions instead, then the same argument from above also produces a similar index map $$\Theta\colon R^{{\textup{spin}}}_n(\underline{E}\Gamma)^\Gamma[\mathfrak B^{-1}] \to KO_n(C^\ast_r(\Gamma; \mathbb R))$$ where $R^{{\textup{spin}}}_n(\underline{E}\Gamma)^\Gamma[\mathfrak B^{-1}] $ is the direct limit of the following directed system: $$R^{{\textup{spin}}}_n(\underline{E}\Gamma)^\Gamma\xrightarrow{\times \mathfrak B} R^{{\textup{spin}}}_{n+8}(\underline{E}\Gamma)^\Gamma\xrightarrow{\times \mathfrak B} R^{{\textup{spin}}}_{n+16}(\underline{E}\Gamma)^\Gamma \to \cdots.$$ One has the following analogue of the Stolz conjecture above, which will be called the generalized Stolz conjecture from now on. \[conj:stolz2\] The index map $$\Theta\colon R^{{\textup{spin}}}_n(\underline{E}\Gamma)^\Gamma[\mathfrak B^{-1}] \to KO_n(C^\ast_r(\Gamma; \mathbb R))$$ is an isomorphism. Here is an immediate geometric consequence of the generalized Stolz conjecture. Let $(L, f, h)$ be an element in $ R^{{\textup{spin}}}_n(\underline{E}\Gamma)^\Gamma$, that is, $L$ is a $n$-dimensional spin $\Gamma$-manifold whose boundary $\partial L$ carries a $\Gamma$-invariant positive scalar curvature metric $h$, together with a $\Gamma$-equivariant map $f\colon L \to \underline{E}\Gamma$. Suppose the higher index ${\textup{\,Ind}}(D_{L})$ associated to $(L, f, h)$ vanishes, then the generalized Stolz conjecture implies that $(L, f, h)$ is stably $\Gamma$-equivariantly cobordant to the empty set. More precisely, if $(L', f', h')$ is the direct product of $(L, f, h)$ with sufficiently many copies of $\mathfrak B$, then $(L', f', h')$ is $\Gamma$-equivariantly cobordant to the empty set. In particular, this implies that, if the higher index of $(L, f, h)$ vanishes, then $\partial L'$ bounds a spin $\Gamma$-manifold $V$ such that $V$ admits a $\Gamma$-invariant positive scalar curvature metric $g_0$, which has product structure near the boundary $\partial V = \partial L'$, and the restriction of $g_0$ to the boundary is equal to $h'$. If no confusion is likely to arise, we shall simply say $(\partial L, h)$ *stably* bounds a spin $\Gamma$-manifold $V$ with a $\Gamma$-invariant positive scalar curvature metric. Again, suppose $M$ is an odd-dimensional closed spin manifold endowed with a positive scalar curvature metric and $\Gamma$ is a finitely generated discrete group. Let $\widetilde M$ be a $\Gamma$-regular covering space of $M$ and $\widetilde D$ the Dirac operator lifted from $M$. 
For each finite-index normal subgroup $\Gamma'$ of $\Gamma$, let $M_{\Gamma'} = \widetilde M/\Gamma'$ be the associated finite-sheeted covering space of $M$. Denote by $D_{\Gamma'}$ the Dirac operator on $M_{\Gamma'}$ lifted from $M$. Let $\varphi\colon M \to B\Gamma$ be the classifying map for the covering $\widetilde M \to M$, that is, the pullback of $E\Gamma$ by $\varphi$ is $\widetilde M$. In the following proposition, we shall additionally assume that a multiple of $(M, \varphi, h)$ *stably bounds*. \[def:bound\] We say that a multiple of $(M, \varphi, h)$ stably bounds if there exists a compact spin manifold $W$ and a map $\Phi\colon W\to B\Gamma$ such that $\partial W = \bigsqcup_{i=1}^\ell M'$ and $\Phi|_{\partial W} = \bigsqcup_{i=1}^\ell \varphi'$, where $(M', \varphi', h')$ is the direct product of $(M, \varphi, h)$ with $\ell$ copies of $\mathfrak B$ and $\bigsqcup_{i=1}^\ell M'$ is the disjoint union of $\ell$ copies of $M'$. \[prop:red\] Suppose a multiple of $(M, \varphi, h)$ stably bounds. Given a non-identity element $\alpha \in \Gamma$, suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups of $\Gamma$ that distinguishes the conjugacy class $\langle \alpha \rangle$. If both the rational Baum-Connes conjecture and the generalized Stolz conjecture (Conjecture $\ref{conj:stolz2}$) hold for $\Gamma$, then $$\lim_{i\to\infty}\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i})$$ stabilizes, that is, there exists $k>0$ such that $\eta_{\langle\pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}) = \eta_{\langle\pi_{\Gamma_k}(\alpha)\rangle}(D_{\Gamma_k}) $ for all $ i\geqslant k.$ For notational simplicity, let us assume $(M, \varphi, h)$ itself bounds, that is, there exists a compact spin manifold $W$ and a map $\Phi\colon W\to B\Gamma$ such that $\partial W = M$ and $\Phi|_{\partial W} = \varphi$. The general case can be proved in exactly the same way. Endow $W$ with a Riemannian metric $g$ which has product structure near $\partial W = M$ and whose restriction to $\partial W$ is the positive scalar curvature metric $h$. Let $\widetilde W$ be the covering space of $W$ induced by the map $\Phi\colon W\to B\Gamma$ and $\tilde g$ be the lift of $g$ from $W$ to $\widetilde W$. Due to the positive scalar curvature of $\tilde g$ near the boundary of $\widetilde W$, the corresponding Dirac operator $D_{\widetilde W}$ on $\widetilde W$ with respect to the metric $\tilde g$ has a well-defined higher index ${\textup{\,Ind}}(D_{\widetilde W}, \tilde g)$ in $KO_{n+1}(C_r^\ast(\Gamma;\mathbb R)). $ Now for each normal subgroup $\Gamma_i$ of $\Gamma$, let $M_{\Gamma_i} = \widetilde M/\Gamma_i$, $W_{\Gamma_i} = \widetilde W/\Gamma_i$ and $g_i$ be the lift of $g$ to $W_{\Gamma_i}$. Similarly, the corresponding Dirac operator $D_{W_{\Gamma_i}}$ on $W_{\Gamma_i}$ with respect to the metric $g_i$ has a well-defined higher index ${\textup{\,Ind}}(D_{W_{\Gamma_i}}, g_i) $ in $KO_{n+1}(C_r^\ast(\Gamma/\Gamma_i;\mathbb R)). $ Moreover, we have $$\partial ({\textup{\,Ind}}(D_{W_{\Gamma_i}}, g_i) ) = \rho(D_{M_{\Gamma_i}}) \textup{ in } KO_n(C_{L,0}^\ast(E(\Gamma/\Gamma_i); \mathbb R)^{\Gamma/\Gamma_i}),$$ cf. [@MR3286895 Theorem 1.14][@Xiepos Theorem A].
By [@Xie Lemma 3.9 & Theorem 4.3], for each $\Gamma_i$, there exists a determinant map $$\tau_i\colon K_1(C^*_{L,0, \max}(E(\Gamma/\Gamma_i))^{\Gamma/{\Gamma_i}}) \to \mathbb C$$ such that $$\frac{1}{2}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{M_{\Gamma_i}}) = - \tau_{i}(\rho(D_{M_{\Gamma_i}}))= {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }({\textup{\,Ind}}(D_{W_{\Gamma_i}}, g_i)).$$ Therefore, to prove the proposition, it suffices to show that there exists $p\in KO_{n+1}(C_{\max}^\ast(\Gamma;\mathbb R))$ such that $p$ has finite propagation and $$(\pi_{\Gamma_i})_\ast(p) = {\textup{\,Ind}}(D_{W_{\Gamma_i}}, g_i)$$ for all $i$, where $(\pi_{\Gamma_i})_\ast\colon C_{\max}^\ast(\Gamma; \mathbb R) \to C_r^\ast(\Gamma/\Gamma_i; \mathbb R) $ is the morphism induced by the quotient homomorphism $\pi_{\Gamma_i}\colon \Gamma \to \Gamma/\Gamma_i$. The existence of such a “universal” $K$-theory element with finite propagation can be seen as follows. Since the rational Baum-Connes conjecture holds for $\Gamma$, there exists a spin $\Gamma$-manifold $Z$ such that the higher index ${\textup{\,Ind}}(D_Z)$ of its Dirac operator $D_Z$ is equal to $-{\textup{\,Ind}}(D_{\widetilde W}, \tilde g)$. Let $Z_1$ be the $\Gamma$-equivariant connected sum[^10] of $\widetilde W$ with $Z$. Then $Z_1$ is a spin $\Gamma$-manifold whose boundary is equal to $\partial \widetilde W = \widetilde M$. Moreover, the higher index ${\textup{\,Ind}}(D_{Z_1})$ of the Dirac operator $D_{Z_1}$ is zero. By assumption, the generalized Stolz conjecture (Conjecture $\ref{conj:stolz2}$) holds for $\Gamma$. It follows from the discussion right after Conjecture $\ref{conj:stolz2}$ that $(\widetilde M, \tilde h) = (\partial Z_1, \tilde h)$ stably bounds a spin $\Gamma$-manifold $\widetilde V$ equipped with a $\Gamma$-invariant positive scalar curvature metric. Let $Y$ be the spin $\Gamma$-manifold obtained by gluing $\widetilde V$ and $\widetilde W$ along their common boundary $\widetilde M$. Since the scalar curvature on $\widetilde V$ is uniformly bounded below by a positive number, it follows that $${\textup{\,Ind}}_{\max}(D_Y) = {\textup{\,Ind}}_{\max}(D_{\widetilde W}, \tilde g) \textup{ in } KO_{n+1}(C_{\max}^\ast(\Gamma;\mathbb R)).$$ Let $p = {\textup{\,Ind}}_{\max}(D_Y)$. Observe that ${\textup{\,Ind}}_{\max}(D_Y)$ can be represented by an idempotent with finite propagation. Furthermore, we have $$(\pi_{\Gamma_i})_\ast({\textup{\,Ind}}_{\max}(D_{\widetilde W}, \tilde g)) = {\textup{\,Ind}}(D_{W_{\Gamma_i}}, g_i)$$ for all $i$. This finishes the proof. Let us briefly comment on the assumption that a multiple of $(M, \varphi, h)$ stably bounds. For the sake of argument, let us assume the Baum-Connes assembly map $$\mu_{\mathbb R}\colon KO_\bullet^\Gamma(\underline{E}\Gamma)\to KO_\bullet(C_{r}^*(\Gamma;\mathbb R))$$ is rationally injective[^11] and the map $$\theta\colon R^{{\textup{spin}}}_\bullet (B\Gamma)[\mathfrak B^{-1}] \to KO_\bullet(C^\ast_r(\Gamma; \mathbb R))$$ in Stolz’s conjecture is rationally surjective. There is a long exact sequence for $KO$-theory of reduced $C^\ast$-algebras analogous to commutative diagram $\eqref{cd:longexact}$. Now an argument similar to the one in the proof of Proposition $\ref{prop:max}$ shows that $\rho(\widetilde D) = \partial [p] $ (up to rational multiples) for some element $[p]\in KO_{n+1}(C^\ast_r(\Gamma; \mathbb R))$, where $n = \dim M$.
By the rational surjectivity of $\theta$, it follows that there exists an element $(L, f, h) \in R^{{\textup{spin}}}_{n+1} (B\Gamma)[\mathfrak B^{-1}] $ such that $\theta(L, f, h) = [p]$ (up to rational multiples). Recall that (cf. [@MR3286895 Theorem 1.14][@Xiepos Theorem A]) $$\partial (\theta(L, f, h)) = \rho(D_{\widetilde {\partial L}}) \textup{ in } KO_n(C_{L,0}^\ast(E\Gamma; \mathbb R)^\Gamma),$$ where $\rho(D_{\widetilde {\partial L}})$ is the higher rho invariant of $D_{\widetilde{\partial L}}$ with respect to the positive scalar curvature metric $h$. In particular, this implies that $\rho(\widetilde D) = \rho(D_{\widetilde {\partial L}})$. Hence, as far as $\rho(\widetilde D)$ is concerned, we could work with $(\partial L, f|_{\partial L}, h)$, which clearly bounds, instead of $(M, \varphi, h)$. On the other hand, it is an open question whether the higher rho invariants for $M$ and $\partial L$ remain equal to each other, for corresponding finite-sheeted covering spaces of $M$ and $\partial L$. In Theorem $\ref{prop:max}$ and Proposition $\ref{prop:red}$, we have mainly focused on part (I) of Question $\ref{question}$. In the following, we shall try to answer part (II) of Question $\ref{question}$ in some special cases. Note that a key ingredient of the proofs for Theorem $\ref{prop:max}$ and Proposition $\ref{prop:red}$ is the existence of a $K$-theory element $p_{\max}$ with finite propagation[^12] in $K_{n+1}(C_{\max}^\ast(\Gamma))$ such that $\partial (p_{\max}) = \rho_{\max}(\widetilde D)$, where $$\partial \colon K_{n+1}(C_{\max}^\ast(\Gamma))\to K_{n}(C_{L,0, \max}^\ast(E\Gamma)^\Gamma)$$ is the usual boundary map in the corresponding $K$-theory long exact sequence. We shall assume the existence of such a $K$-theory element throughout the rest of the section. In addition, suppose there exists a smooth dense subalgebra $\mathcal A$ of $C_r^\ast(\Gamma)$ such that $\mathcal A\supset \mathbb C\Gamma$ and the trace map ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }\colon \mathbb C\Gamma \to \mathbb C$ extends to a trace map $\mathcal A \to \mathbb C$. In this case, ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }\colon \mathcal A\to \mathbb C$ induces a trace map $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }\colon K_0(C_r^\ast(\Gamma)) \cong K_0(\mathcal A) \to \mathbb C$$ and a determinant map (cf. [@Xie]) $$\tau_{\alpha}\colon K_1(C^*_{L,0}(E\Gamma)^\Gamma)\to \mathbb C$$ such that the following diagram commutes: $$\xymatrix{ K_0(C_r^\ast(\Gamma)) \ar[r]^-{\partial} \ar[d]_{-{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle }} & K_1(C^*_{L,0}(E\Gamma)^\Gamma) \ar[d]^{\tau_{\alpha}}\\ \mathbb C \ar[r]^{=} & \mathbb C}$$ Such a smooth dense subalgebra indeed exists if $\langle \alpha \rangle$ has polynomial growth (cf. [@CM90][@Xie]) or $\Gamma$ is word hyperbolic (cf. [@Puschnigg][@CWXY]). Note that the canonical morphism $$K_1(C^*_{L,0, \max}(E\Gamma)^\Gamma) \to K_1(C^*_{L,0}(E\Gamma)^\Gamma)$$ maps $\rho_{\max}(\widetilde D)$ to $\rho(\widetilde D)$. Let $p_r$ be the image of $p_{\max}$ under the canonical morphism $K_0(C_{\max}^\ast(\Gamma)) \to K_0(C_r^\ast(\Gamma))$.
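For concreteness, let us recall how the delocalized trace ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }$ is evaluated; this is only a reminder of standard facts (cf. the formula for ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }$ recalled in the footnotes) and not an additional assumption. On an element of the group algebra one has $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }\Big(\sum_{\beta\in \Gamma} a_\beta \beta\Big) = \sum_{\beta\in \langle \alpha \rangle} a_\beta,$$ and on a $K$-theory class represented by an idempotent matrix $q = (q_{jk})\in M_N(\mathcal A)$ it is evaluated along the diagonal, $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }([q]) = \sum_{j=1}^{N} {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle }(q_{jj}).$$ This is the quantity that appears on the right hand side of the identities that follow.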
The same argument from the proof of Theorem $\ref{prop:max}$ shows that $\partial(p_r) = \rho(\widetilde D)$ and $$\frac{1}{2}\eta_{\langle \alpha\rangle }(\widetilde D) = - \tau_{\alpha}(\rho(\widetilde D))= {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha\rangle }(p_r).$$ Similarly, for each finite-index normal subgroup $\Gamma_i\subset \Gamma$, let $$(\pi_{\Gamma_i})_\ast\colon K_0(C_{\max}^*(\Gamma))\to K_0(C_r^*(\Gamma/\Gamma_i))$$ be the natural morphism induced by the quotient map $\pi_{\Gamma_i}\colon \Gamma\to \Gamma/\Gamma_i$. Let us denote $ p_i \coloneqq (\pi_{\Gamma_i})_\ast(p_{\max})$. We have $\partial(p_i) = \rho(D_{\Gamma_i})$ and $$\frac{1}{2}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i}) = -\tau_{i}(\rho(D_{\Gamma_i}))= {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(p_i),$$ where $$\tau_i\colon K_1(C^*_{L,0}(E(\Gamma/\Gamma_i))^{\Gamma/{\Gamma_i}}) \to \mathbb C$$ is a determinant map induced by the trace map ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \pi_{\Gamma_i}(\alpha)\rangle}$, cf. [@Xie Lemma 3.9 & Theorem 4.3]. Since $p_{\max}$ has finite propagation, it follows that the limit $ \displaystyle \lim_{i \to \infty } {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(p_i) $ stabilizes and is equal to ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha\rangle }(p_r)$. Thus the limit $$\lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})$$ stabilizes and is equal to $\eta_{\left\langle \alpha\right\rangle }(\widetilde D)$. Scalar curvature and $\ell^1$-summability {#sec:sc} ========================================= In this section, we show that the answers to both part (I) and part (II) of Question $\ref{question}$ are positive when the scalar curvature of the given spin manifold $M$ is bounded below by a sufficiently large positive number. Throughout this section, assume $M$ is an odd-dimensional closed spin manifold endowed with a positive scalar curvature metric and $\Gamma$ is a finitely generated discrete group. Let $\widetilde M$ be a regular $\Gamma$-covering space of $M$ and $\widetilde D$ be the Dirac operator lifted from $M$. For each finite-index normal subgroup $\Gamma'$ of $\Gamma$, let $M_{\Gamma'} = \widetilde M/\Gamma'$ be the associated finite-sheeted covering space of $M$. Denote by $D_{\Gamma'}$ the Dirac operator on $M_{\Gamma'}$ lifted from $M$. Let $S$ be a symmetric finite generating set of $\Gamma$ and $\ell$ be the associated word length function on $\Gamma$. There exist $C>0$ and $B>0$ such that $$\label{eq:growthofGamma} \#\{\gamma\in \Gamma :\ \ell(\gamma)\leqslant n\}\leqslant Ce^{B\cdot n}$$ for all $n\geq 0$. Let $K_\Gamma$ be the infimum of all such numbers $B$. Furthermore, there exist $\theta_0,\theta_1,c_0,c_1>0$ such that $$\label{eq:equivwithlength} \theta_0\cdot \ell(\beta)-c_0\leqslant {\mathrm{dist}}(x,\beta x)\leqslant \theta_1\cdot \ell(\beta)+c_1$$ for all $x\in \mathcal F$ and $\beta\in \Gamma$, where $\mathcal F$ is a fundamental domain of $\widetilde M$ under the action of $\Gamma$.
In particular, we may define $\theta_0$ as follows: $$\label{eq:distortion} \theta_0=\liminf_{\ell(\beta)\to\infty}\left(\inf_{x\in\mathcal F}\frac{{\mathrm{dist}}(x,\beta x)}{\ell(\beta)}\right).$$ \[def:cst\] With the above notation, let us define $$\sigma_\Gamma \coloneqq \frac{2 K_{\Gamma}}{\theta_0}.$$ The following theorem answers both part (I) and part (II) of Question $\ref{question}$ positively, under the condition that the spectral gap of $\widetilde D$ at zero is sufficiently large. \[main\] With the same notation as above, suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups that distinguishes the conjugacy class $\langle \alpha\rangle$ of a non-identity element $\alpha\in \Gamma$. If the spectral gap of $\widetilde D$ at zero is greater than $\sigma_\Gamma$, then $$\lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})=\eta_{\left\langle \alpha\right\rangle }(\widetilde D).$$ It suffices to find a function of $t$ that is a dominating function for all the following functions: $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\left\langle \alpha \right\rangle }(\widetilde De^{-t^2\widetilde D^2})= \sum_{\gamma\in\langle \alpha \rangle}\int_\mathcal F {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}( K_t(x,\gamma x) )dx$$ and $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}}) =\sum_{\omega \in \langle\pi_{\Gamma_i}(\alpha )\rangle}\int_{\mathcal F}{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}((K_i)_t(x,\omega x))dx.$$ and show that ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}})$ converges to ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\left\langle \alpha \right\rangle }(\widetilde De^{-t^2\widetilde D^2})$, as $i\to\infty$, for each $t$. Indeed, the theorem then follows by the dominated convergence theorem. Recall that $K_t(x,y)$ (resp. $(K_i)_t(x,y)$) is the Schwartz kernel of $\widetilde De^{-t^2\widetilde D^2}$ (resp. $D_{\Gamma_i} e^{-t^2 D_{\Gamma_i}^2}$). We have the following estimates (cf. [@CWXY Section 3]). 1. By [@CWXY Lemma 3.8], for any $\mu>1$ and $r>0$, there exists a constant $c_{\mu, r}>0$ such that $$\label{eq:bound} \|K_t(x,y)\|\leqslant c_{\mu, r}\cdot F_t\left(\frac{{\mathrm{dist}}(x, y)}{\mu}\right),$$ for $\forall x,y\in \widetilde M$ with ${\mathrm{dist}}(x,y)>r$. Here $\|K_t(x, y)\|$ is the operator norm of the matrix $K_t(x, y)$, and the function $F_t$ is defined by $$F_t(s)\coloneqq \sup_{n\leqslant \frac{3}{2}\dim M+3}\int_{|\xi|>s}\left|\frac{d^n}{d\xi^n}\widehat f_t(\xi)\right|d\xi,$$ where $\widehat f_t$ is the Fourier transform of $f_t(x) = xe^{-t^2x^2}$. It follows that for $\mu>1$ and $r>0$, there exist $c_{\mu, r}>0, n_1>0$ and $ m_1>0$ such that $$\label{eq:smallt} \|K_t(x,y)\|\leqslant c_{\mu, r}\frac{(1+{\mathrm{dist}}(x,y))^{n_1}}{t^{m_1}}\exp\left( \frac{-{\mathrm{dist}}(x,y)^2}{4\mu t^2}\right),$$ for all $t>0$ and for all $ x,y\in \widetilde M$ with ${\mathrm{dist}}(x,y)>r$. 2. By [@CWXY Lemma 3.5], there exists $c_2 > 0$ such that $$\label{eq:supnorm} \sup_{x,y\in M}\|K_t(x,y)\|\leqslant c_2\cdot \sup_{k+j \leqslant \frac{3}{2}\dim M+3}\|\widetilde{D}^{k}(\widetilde{D}e^{-t^2\widetilde{D}^2}) \widetilde{D}^{j}\|_{op},$$ for all $x, y\in \widetilde M$, where $\|\cdot\|_{op}$ stands for the operator norm. 
It follows that there exist positive numbers $c_2$, $m_2$ and $\delta$ such that $$\label{eq:larget} \|K_t(x,y)\|\leqslant c_2 \frac{1}{t^{m_2}}\exp(-(\sigma_\Gamma+\delta)^2 \cdot t^2),$$ for all $t>0$ and all $x, y\in \widetilde M$. In fact, since the manifolds $\widetilde M$ and $M_{\Gamma_i}$ have uniformly bounded geometry, the constants $c_{\mu, r}$, $n_1$, $m_1$, $c_2$, $m_2$ and $\delta$ from above can be chosen so that for all $i\geq 1$, we have $$\label{eq:smalltN} \|(K_i)t(x,y)\|\leqslant c_{\mu, r}\frac{(1+{\mathrm{dist}}(x,y))^{n_1}}{t^{m_1}}\exp\left( \frac{-{\mathrm{dist}}(x,y)^2}{4\mu t^2}\right),$$ for all $t>0$ and for all $ x,y\in M_{\Gamma_i}$ with ${\mathrm{dist}}(x,y)>r$; and $$\label{eq:largetN} \|(K_i)_t(x,y)\|\leqslant c_2 \frac{1}{t^{m_2}}\exp(-(\sigma_\Gamma+\delta)^2 \cdot t^2),$$ for all $t>0$ and all $x, y\in M_{\Gamma_i}$. For the rest of the proof, let us fix $r>0$. Note that we have $$\begin{aligned} {\mathrm{dist}}(x, \gamma y) & \leq {\mathrm{dist}}( x, \gamma x) + {\mathrm{dist}}(\gamma x, \gamma y)\\ & = {\mathrm{dist}}( x, \gamma x) + {\mathrm{dist}}(x, y)\end{aligned}$$ for all $x, y\in \widetilde M $ and $\gamma \in \Gamma$. Similarly, we have $${\mathrm{dist}}( x, \gamma x) - {\mathrm{dist}}(x, y) \leq {\mathrm{dist}}(x, \gamma y).$$ By line $\eqref{eq:equivwithlength}$, we have $$\theta_0\cdot \ell(\gamma)-c_0 - {\mathrm{dist}}(x, y)\leqslant {\mathrm{dist}}(x,\gamma y)\leqslant \theta_1\cdot \ell(\gamma)+c_1 + {\mathrm{dist}}(x, y)$$ for all $x, y\in \widetilde M $ and $\gamma \in \Gamma$. In particular, there exist $c'_0>0$ and $c'_1>0$ such that $$\label{eq:length} \theta_0\cdot \ell(\gamma)-c'_0\leqslant {\mathrm{dist}}(x,\gamma y)\leqslant \theta_1\cdot \ell(\gamma)+c'_1$$ for all $x, y\in \mathcal F$ and $\gamma\in \Gamma$, where $\mathcal F$ is a precompact fundamental domain of $\widetilde M$ under the action of $\Gamma$. Let us define $$F \coloneqq \{ \beta\in \Gamma \mid {\mathrm{dist}}(x, \beta y) \leq r \textup{ for some } x, y \in \mathcal F\}.$$ Clearly, $F$ is a finite subset of $\Gamma$. For any given $t>0$, it follows from line $\eqref{eq:growthofGamma}$ , $\eqref{eq:smallt}$ and $\eqref{eq:length}$ that the Schwartz kernel $K_t$ is $\ell^1$-summable (cf. Definition $\ref{def:L1}$). Now approximate $f_t(x) = xe^{-t^2x^2}$ by smooth functions $\{\varphi_j\}$ whose Fourier transforms are compactly supported. By applying the estimates in line $\eqref{eq:bound}$ and $\eqref{eq:supnorm}$ to the Schwartz kernel $K_{\varphi_j(\widetilde D)}$ (resp. $K_{\varphi_j(D_{\Gamma_i})}$) of the operator $\varphi_j(\widetilde D)$ (resp. $\varphi_j(D_{\Gamma_i})$), it is not difficult to see that $K_{\varphi_j(\widetilde D)}$ (resp. $K_{\varphi_j(D_{\Gamma_i})}$) converges to $K_t$ (resp. $(K_i)_t$) in $\ell^1$-norm (defined in Definition $\ref{def:L1}$). Note that $\varphi_j(\widetilde D)$ has finite propagation. Since $\widetilde D$ locally coincides with $D_{\Gamma_i}$, it follows from finite propagation estimates of wave operators that (cf. [@guoxieyu]): $$K_{\varphi_j(D_{\Gamma_i})}(\pi_{\Gamma_i}(x), \pi_{\Gamma_i}(y)) = \sum_{\beta\in \Gamma_i}K_{\varphi_j(\widetilde D)}(x, \beta y)$$ for all $x, y\in \widetilde M$. As a consequence of the above discussion, we have $$\label{eq:fold} (K_i)_{t}(\pi_{\Gamma_i}(x),\pi_{\Gamma_i}(y))=\sum_{\beta\in \Gamma_i}K_t(x,\beta y).$$ for all $x, y\in \widetilde M$ and for all $t>0$. 
Furthermore, by Lemma $\ref{lemma:limitkerneltrace}$, we have the following convergence: $${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}}) \to {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\left\langle \alpha \right\rangle }(\widetilde De^{-t^2\widetilde D^2}), \textup{ as } j \to \infty,$$ for each $t>0$, since $\{\Gamma_i\}$ distinguishes the conjugacy class $\langle \alpha \rangle$. By line and , there exists a positive number $c_3$ such that $$\begin{aligned} & \|K_t(x,y)\|^2 \\ &\leq c_3 \frac{(1+{\mathrm{dist}}(x,y))^{n_1}}{t^{m_1+m_2}}\exp\left( \frac{-{\mathrm{dist}}(x,y)^2}{4\mu t^2}\right) e^{-(\sigma_\Gamma+\varepsilon)^2 \cdot t^2} e^{-\varepsilon^2 t^2}\\ & \leq c_3 \frac{(1+{\mathrm{dist}}(x,y))^{n_1}}{t^{m_1+m_2}}\exp\left( \frac{-{\mathrm{dist}}(x,y)\cdot (\sigma_\Gamma+\varepsilon)}{\mu}\right) e^{-\varepsilon^2 t^2},\end{aligned}$$ for all $x, y\in \widetilde M$ with ${\mathrm{dist}}(x, y)>r$, where $\varepsilon = \delta/2$. By choosing $\mu >1$ sufficiently close to $1$, we see that there exist $c_4 >0 $ and $\lambda >1$ such that $$\|K_t(x,y)\|\leq c_4 \frac{e^{-\varepsilon^2 t^2}}{t^{(m_1+m_2)/2}} \exp(- \frac{\lambda}{2} \cdot \sigma_\Gamma\cdot {\mathrm{dist}}(x, y))$$ for all $x, y\in \widetilde M$ with ${\mathrm{dist}}(x, y)>r$. It follows that there exist $c_5>0$ and $m>0$ such that $$\begin{aligned} & \sum_{\gamma\in \Gamma}\|K_t(x,\gamma y)\|\\ &\leqslant \sum_{\gamma\in F} c_2 \frac{e^{-(\sigma_\Gamma+\delta)^2 \cdot t^2}}{t^{m_2}} + c_4 \frac{e^{-\varepsilon^2 t^2}}{t^{(m_1+m_2)/2}}\sum_{\gamma\notin F}e^{-\lambda \cdot K_{\Gamma}\cdot (\ell(\gamma)- c'_0 \tau_0^{-1})} \\ & \leqslant c_2 \cdot |F|\cdot \frac{e^{-\varepsilon^2 t^2}}{t^{m_2}} + c_4 \frac{e^{-\varepsilon^2 t^2}}{t^{(m_1+m_2)/2}}\sum_{n=0}^\infty e^{K_{\Gamma}\cdot n}e^{-\lambda \cdot K_{\Gamma}\cdot (n-c'_0\tau_0^{-1})}\\ &<\frac{c_5}{t^{m }}e^{-\varepsilon^2 t^2}\end{aligned}$$ for all $t>0$. In particular, we have $$\sum_{\gamma\in \langle \alpha \rangle } \big|{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}K_t(x,\gamma y) \big| < \frac{c_5}{t^{m }}e^{-\varepsilon^2 t^2}$$ for all $x, y\in \widetilde M$ and all $t>0$. By the same argument, we also have $$\sum_{\omega\in \langle \pi_{\Gamma_i}(\alpha) \rangle } \big|{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}(K_i)_t(x',\omega y') \big| < \frac{c_5}{t^{m }}e^{-\varepsilon^2 t^2}$$ for all $x', y'\in M_{\Gamma_i}$ and all $t>0$. Therefore, the functions $$|{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha \rangle }(\widetilde D e^{-t^2\widetilde D^2})| \textup{ and } |{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha )\rangle }(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}})|$$ are all bounded by the function $$c_5\cdot t^{-m}e^{-\varepsilon^2 t^2}.$$ The latter is clearly absolutely integrable on $[1, \infty)$. We have found above an appropriate dominating function on the interval $[1, \infty)$. Now let us find the dominating function on $(0, 1]$. Since $\Gamma$ acts on $\widetilde M$ freely and cocompactly, it follows that there exists $\varepsilon_0 >0$ such that $${\mathrm{dist}}(x, \gamma x) > \varepsilon_0$$ for all $x\in \widetilde M$ and all $\gamma \neq e \in \Gamma$. By applying line $\eqref{eq:smallt}$, a similar calculation as above shows that there exist $\varepsilon_1>0$ and $c_6 >0$ such that $$\sum_{\gamma\in \Gamma}\|K_t(x,\gamma x)\| < \frac{c_6}{t^{m_1}} e^{-\varepsilon_1\cdot t^{-2}}$$ for all $x\in \mathcal F$ and all $t\leq 1$. 
The same estimate also holds for $(K_i)_t$. Therefore, on the interval $(0, 1]$, the functions $$|{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha \rangle }(\widetilde D e^{-t^2\widetilde D^2})| \textup{ and } |{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\pi_{\Gamma_i}(\alpha )\rangle }(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}})|$$ are all bounded by the function $$c_6 \cdot t^{-m_1} e^{-\varepsilon_1\cdot t^{-2}}.$$ The latter is absolutely integrable on $(0, 1]$. This finishes the proof. By the proof of Theorem $\ref{main}$ above, in order to bound the function $$|{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha \rangle }(\widetilde D e^{-t^2\widetilde D^2})|,$$ it suffices to assume the spectral gap of $\widetilde D$ at zero to be greater than $$\sigma_{\langle \alpha\rangle} \coloneqq \frac{2\cdot K_{\langle \alpha \rangle}}{\theta_0},$$ where $\theta_0$ is the constant from line $\eqref{eq:distortion}$ and $K_{\langle \alpha \rangle}$ is nonnegative constant such that there exists some constant $C>0$ satisfying $$\#\{\gamma\in \langle \alpha\rangle :\ \ell(\gamma)\leqslant n\}\leqslant Ce^{K_{\langle \alpha \rangle }\cdot n}$$ for all $n$. In fact, if we have a uniform control of the spectral gap of $D_{\Gamma_i}$ at zero and the growth rate of the conjugacy class $\{\langle \pi_{\Gamma_i}(\alpha) \rangle\}$ for all $i\geq 1$, a notion to be made precise in the following, then the same proof above also implies that $$\lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})=\eta_{\left\langle \alpha\right\rangle }(\widetilde D)$$ in this case. Recall that $S$ a symmetric finite generating set of $\Gamma$. For each normal subgroup $\Gamma_i$ of $\Gamma$, the map $\pi_{\Gamma_i}\colon \Gamma\to \Gamma/\Gamma_i$ is the canonical quotient map. The set $\pi_{\Gamma_i}(S)$ is a symmetric generating set for $\Gamma/\Gamma_i$, hence induces a word length function $\ell_{\Gamma_i}$ on $\Gamma/{\Gamma_i}$. More explicitly, we have $$\ell_{\Gamma_i}(\omega):=\inf\{\ell(\beta) : \beta \in\pi_{\Gamma_i}^{-1}(\omega)\}.$$ for all $\omega\in \Gamma/\Gamma_i$. For a given conjugacy class $\langle \alpha \rangle$ of $\Gamma$, we say that $\langle \alpha \rangle$ has uniform exponential growth with respect to a family of normal subgroups $\{ \Gamma_i\}$, if there exist $C>0$ and $A\geq 0$ such that $$\#\big\{\omega \in \langle \pi_{\Gamma_i}(\alpha )\rangle:\ell_{\Gamma_i}(\omega)\leqslant n\big\}\leqslant C e^{A\cdot n}.$$ for all $i\geq 1$ and all $n\geq 0$. In this case, we define $K_u$ to be the infimum of all such numbers $A$. With the above notation, we define $$\sigma_u \coloneqq \frac{2\cdot K_u}{\theta_0},$$ where $\theta_0$ is the constant from line $\eqref{eq:distortion}$. The same argument from the proof of Theorem $\ref{main}$ can be used to prove the following. \[thm:conjcontrol\] With the same notation as in Theorem $\ref{main}$, suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups that distinguishes the conjugacy class $\langle \alpha\rangle$ of a non-identity element $\alpha\in \Gamma$. Assume $\langle\alpha\rangle$ has uniform exponential growth with respect to $\{\Gamma_i\}$. 
If there exists $\varepsilon >0$ such that the spectral gap of $D_{\Gamma_i}$ at zero is greater than $\sigma_u +\varepsilon$ for sufficiently large $i\gg 1$, then $$\lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})=\eta_{\left\langle \alpha \right\rangle }(\widetilde D).$$ Here is a geometric condition on $M$ that guarantees the spectral gap of $D_{\Gamma_i}$ at zero to be greater than $\sigma_u +\varepsilon$ for all $i\geq 1$. If the scalar curvature of $M$ is strictly bounded below[^13] by $4 \cdot \sigma_u^2$, then it follows from the Lichnerowicz formula that there exists $\varepsilon >0$ such that the spectral gap of $D_{\Gamma_i}$ at zero is greater than $\sigma_u +\varepsilon$ for all $i\geq 1$. Separation rates of conjugacy classes {#sec:sep} ===================================== In this section, we introduce a notion of separation rate for how fast a sequence of normal subgroups $\{\Gamma_i\}$ of $\Gamma$ distinguishes a conjugacy class $\langle \alpha \rangle $ of $\Gamma$ and use it to answer Question $\ref{question}$ in some cases. \[def:fastsep\] For each normal subgroup $\Gamma'$ of $\Gamma$, let $\pi_{\Gamma'} \colon \Gamma \to \Gamma/\Gamma'$ be the quotient map from $\Gamma$ to $\Gamma/\Gamma'$. Given a conjugacy class $\langle \alpha \rangle$ of $\Gamma$, we define the injective radius of $\pi_{\Gamma'}$ with respect to $\langle \alpha\rangle $ to be $$\label{eq:s(N)} r(\Gamma')\coloneqq \max\{n \mid \textup{ if } \gamma \notin\langle \alpha \rangle \textup{ and } \ell(\gamma) \leq n, \textup{ then } \pi_{\Gamma'}(\gamma )\notin\langle\pi_{\Gamma'}(\alpha)\rangle\}.$$ Suppose that $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups of $\Gamma$ that distinguishes $\langle \alpha \rangle$. We say that $\{\Gamma_i\}$ distinguishes $\langle \alpha \rangle$ sufficiently fast if there exist $C>0$ and $R>0$ such that $$\label{eq:seqspeed} |\langle\pi_{\Gamma_i}(\alpha)\rangle|\leqslant Ce^{R\cdot r({\Gamma_i})}.$$ In this case, we define the separation rate $R_{\langle \alpha \rangle, \{\Gamma_i\}}$ of $\langle \alpha\rangle$ with respect to $\{\Gamma_i\}$ to be the infimum of all such numbers $R$. We have the following proposition. \[prop:sep\] Let $\langle \alpha\rangle$ be the conjugacy class of a non-identity element $\alpha\in \Gamma$. Suppose $\{\Gamma_i\}$ is a sequence of finite-index normal subgroups that distinguishes $\langle \alpha \rangle$ sufficiently fast with separation rate $R = R_{\langle \alpha \rangle, \{\Gamma_i\}}$. If $\eta_{\langle \alpha \rangle}(\widetilde D)$ is finite[^14] and there exists $\varepsilon >0$ such that the spectral gap of $D_{\Gamma_i}$ at zero is greater than $\sigma_R + \varepsilon$ for all sufficiently large $i\gg 1$, where $$\sigma_R \coloneqq \frac{2 (K_\Gamma \cdot R)^{1/2}}{\theta_0},$$ then we have $$\lim_{i\to\infty}\eta_{\langle \pi_{\Gamma_i}(\alpha)\rangle }(D_{\Gamma_i})=\eta_{\left\langle \alpha \right\rangle }(\widetilde D).$$ By assumption, the integral $$\eta_{\left\langle \alpha \right\rangle }(\widetilde D)\coloneqq \frac{2}{\sqrt\pi}\int_{0}^\infty {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\left\langle \alpha \right\rangle }(\widetilde De^{-t^2\widetilde D^2})dt$$ converges. To prove the proposition, it suffices to show that there exists a sequence of positive real numbers $\{s_i\}$ such that $s_i\to\infty$, as $i\to\infty$, and 1. 
$$\label{eq:smalllim} \lim_{i\to\infty} \int_0^{s_i}\left({\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}})-{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle}(\widetilde De^{-t^2\widetilde D^2}) \right)dt=0$$ 2. and $$\label{eq:largelim} \lim_{i\to\infty}\int_{s_i}^\infty{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}})dt=0.$$ Let $K_t(x,y)$ (resp. $(K_i)_t(x,y)$) be the Schwartz kernel of $\widetilde De^{-t^2\widetilde D^2}$ (resp. $D_{\Gamma_i} e^{-t^2D_{\Gamma_i}^2}$). Recall that we have (cf. line $\eqref{eq:fold}$) $$(K_i)_t(\pi_{\Gamma_i}(x),\pi_{\Gamma_i}(y))=\sum_{\beta\in \Gamma_i}K_t(x,\beta y)$$ for all $x, y\in \widetilde M$. It follows that $$\begin{aligned} &\big|{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \pi_{\Gamma_i}(\alpha )\rangle}(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}})-{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha\rangle}(\widetilde De^{-t^2\widetilde D^2})\big|\\ \leqslant &\sum_{\substack{\gamma\in \pi_{\Gamma_i}^{-1}\langle\pi_{\Gamma_i}(\alpha )\rangle \\ \textup{but } \gamma \notin \langle \alpha \rangle}}\int_{x\in\mathcal F}\|K_t(x, \gamma x)\|dx. \end{aligned}$$ By the definition of $r(\Gamma_i)$ in line $\eqref{eq:s(N)}$, we see that $$\{ \gamma \in \Gamma \mid \gamma \in \pi_{\Gamma_i}^{-1}\langle\pi_{\Gamma_i}(\alpha)\rangle \textup{ but }\gamma\notin \langle \alpha \rangle \}\subseteq\{\gamma\in \Gamma \mid \ell(\gamma)\geqslant r(\Gamma_i)\}.$$ From lines $\eqref{eq:smallt}$, $\eqref{eq:equivwithlength}$ and $\eqref{eq:growthofGamma}$, we have $$\begin{aligned} \sum_{\ell(\gamma)\geqslant r(\Gamma_i)}\int_{x\in\mathcal F}\|K_t(x,\gamma x)\|dx \leqslant c_{\mu, r}\sum_{m=r(\Gamma_i)}^\infty e^{-\frac{(\theta_0\cdot m-c_0)^2}{4\mu t^2}}e^{K_\Gamma \cdot m}. \end{aligned}$$ Note that $$\begin{aligned} &\int_0^{s_i}\sum_{m=r(\Gamma_i)}^\infty e^{-\frac{(\theta_0\cdot m-c_0)^2}{4\mu t^2}}e^{K_\Gamma \cdot m}dt \leqslant s_i\sum_{m=r(\Gamma_i)}^\infty e^{-\frac{(\theta_0\cdot m-c_0)^2}{4\mu s_i^2}+K_\Gamma \cdot m}. \end{aligned}$$ The right hand side goes to zero, as $s_i\to\infty$, as long as there exists $\lambda_1 >1$ such that $$\label{eq:upper} \frac{(\theta_0\cdot r(\Gamma_i) - c_0)^2}{4\mu s_i^2} > \lambda_1\cdot K_\Gamma \cdot r(\Gamma_i)$$ for all sufficiently large $i\gg 1$. Since $\{\Gamma_i\}$ distinguishes $\langle \alpha \rangle$, we have that $r(\Gamma_i)\to \infty $, as $i\to\infty$. So the condition in line $\eqref{eq:upper}$ is equivalent to $$s_i^2 < \frac{\theta_0^2 \cdot r(\Gamma_i)}{4\mu \cdot \lambda_1 \cdot K_\Gamma}$$ for sufficiently large $i\gg 1$. On the other hand, by the analogue of the inequality in line $\eqref{eq:largetN}$ (with $\sigma_\Gamma$ replaced by $\sigma_R$, using the spectral gap assumption), there exist $c>0$ and $\varepsilon>0$ such that $$\left|\int_{s_i}^\infty{\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \pi_{\Gamma_i}(\alpha)\rangle}(D_{\Gamma_i}e^{-t^2D^2_{\Gamma_i}})dt\right|\leqslant c\cdot e^{-(\sigma_R+\varepsilon)^2 \cdot s_i^2} \cdot |\langle\pi_{\Gamma_i}(\alpha)\rangle|$$ for all sufficiently large $i\gg 1$. Note that the right hand side goes to zero, as $s_i \to \infty$, as long as there exists $\lambda_2 >1$ such that $$\label{eq:lower} (\sigma_R+\varepsilon)^2\cdot s_i^2 > \lambda_2\cdot R \cdot r(\Gamma_i)$$ for all sufficiently large $i\gg 1$.
Combining the two inequalities in line $\eqref{eq:upper}$ and $\eqref{eq:lower}$ together, we can choose a sequence of real numbers $\{s_i\}$ that satisfies the limits in both line $\eqref{eq:smalllim}$ and $\eqref{eq:largelim}$, as long as there exists $\lambda_3 >1$ such that $$\label{eq:specgap} \frac{\theta_0^2 \cdot r(\Gamma_i)}{4\mu \cdot \lambda_1 \cdot K_\Gamma} > \lambda_3 \frac{R\cdot r(\Gamma_i) }{(\sigma_R+\varepsilon)^2}$$ for all sufficiently large $i\gg 1$. By choosing $\mu$ sufficiently close to $1$, the inequality in line $\eqref{eq:specgap}$ follows from the definition of $\sigma_R$. This finishes the proof. We finish this section with the following calculation of the separation rates of conjugacy classes of ${\textup{SL}}_{2 }(\mathbb Z)$. The group ${\textup{SL}}_2(\mathbb Z)$ can be presented by $$\langle x,y\ |\ x^4=1,\ x^2=y^3\rangle,$$ where $x=\begin{psmallmatrix} 0&-1\\1&0 \end{psmallmatrix}$ and $y=\begin{psmallmatrix} 0&-1\\1&1 \end{psmallmatrix}$. Recall that ${\textup{SL}}_2(\mathbb Z)$ is a conjugacy separable group [@Stebe]. We now show that for any torsion element $\alpha\in {\textup{SL}}_2( \mathbb Z)$, there exists a sequence of finite-index normal subgroups $\{\Gamma_i\}$ of ${\textup{SL}}_2(\mathbb Z)$ that distinguishes $\langle \alpha\rangle$ such that the corresponding separation rate $R_{\langle \alpha \rangle, \{\Gamma_i\}} = 0$. Every element in ${\textup{SL}}_2(\mathbb Z)$ can be written as $$\pm xy^{n_1}xy^{n_2}\cdots\text{ or } \pm y^{n_1}xy^{n_2}x\cdots,$$ where $n_i\in\{1,2\}$. In particular, if $\alpha $ is a torsion element of $SL(2, \mathbb Z)$, then by induction $\alpha$ is conjugate to some power of $x$ or $y$. Let $\psi\colon {\textup{SL}}_2(\mathbb Z)\to\mathbb Z/12\mathbb Z$ be the group homomorphism defined by $\psi(x)=3$ and $\psi(y)=2$. In particular, we have $$\psi(e) = 0, \psi(x) = 3, \psi(x^2) = \psi(y^3) = 6, \psi(x^3)= 9,$$ $$\psi(y) = 2, \psi(y^2) = 4, \psi(y^4)= 8, \textup{ and } \psi(y^5) = 10.$$ It follows that any two torsion elements $\gamma_1$ and $\gamma_2$ of ${\textup{SL}}_2( \mathbb Z)$ are conjugate in ${\textup{SL}}_2( \mathbb Z)$ if and only if $\psi(\gamma_1) = \psi(\gamma_2)$. Now given any finite-index normal subgroup $N$ of ${\textup{SL}}_2(\mathbb Z)$, the group $ N_1 = N\cap \ker(\psi)$ is a finite-index normal subgroup of ${\textup{SL}}_2(\mathbb Z)$. By the discussion above, we see that any two torsion elements $\gamma_1$ and $\gamma_2$ of ${\textup{SL}}_2(\mathbb Z)$ are conjugate in ${\textup{SL}}_2(\mathbb Z)$ if and only if they are conjugate in ${\textup{SL}}_2(\mathbb Z)/N_1$. In other words, a set $\{N_1\}$ consisting of a single finite-index normal subgroup distinguishes the conjugacy class $\langle \alpha \rangle$ of any torsion element $\alpha\in {\textup{SL}}_2(\mathbb Z)$. Moreover, the injective radius $r(N_1)$ of $\pi_{N_1}\colon {\textup{SL}}_2(\mathbb Z) \to {\textup{SL}}_2(\mathbb Z)/N_1$ with respect to $\langle \alpha\rangle $ is infinity. It follows that the separation rate $R_{\langle \alpha \rangle, \{N_1\}} = 0$ in this case. Since ${\textup{SL}}_2(\mathbb Z)$ is hyperbolic, Puschnigg’s smooth dense subalgebra $\mathcal A$ of $C_r^\ast({\textup{SL}}_2(\mathbb Z))$ admits a continuous extension of the trace map ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle}$ for any conjugacy class $\langle \alpha \rangle$ of ${\textup{SL}}_2(\mathbb Z)$ (cf. [@Puschnigg]). 
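Returning to the homomorphism $\psi$ defined above, let us record explicitly (a routine verification, included here only for completeness) that the assignment $\psi(x)=3$, $\psi(y)=2$ is compatible with the relations of the presentation: $$\psi(x^4) = 4\cdot 3 = 12 \equiv 0 \pmod{12}, \qquad \psi(x^2) = 2\cdot 3 = 6 = 3\cdot 2 = \psi(y^3),$$ so $\psi$ indeed extends to a well-defined group homomorphism ${\textup{SL}}_2(\mathbb Z)\to \mathbb Z/12\mathbb Z$.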
In this case, for any element $\alpha\neq e \in \Gamma$, the delocalized eta invariant $\eta_{\langle \alpha \rangle}(\widetilde D)$ is finite[^15] (cf. [@Lott Section 4][@CWXY Section 6]). Hence we can apply Proposition $\ref{prop:sep}$ to answer positively both part (I) and (II) of Question $\ref{question}$ for the group ${\textup{SL}}_2(\mathbb Z)$. On other hand, since ${\textup{SL}}_2(\mathbb Z)$ is also a-T-menable, we can equally apply Proposition $\ref{prop:atmenable}$ to answer positively both part (I) and (II) of Question $\ref{question}$ for the group ${\textup{SL}}_2(\mathbb Z)$ (cf. the discussion at the end of Section $\ref{sec:max}$). [10]{} M. F. Atiyah, V. K. Patodi, and I. M. Singer. Spectral asymmetry and [R]{}iemannian geometry. [I]{}. , 77:43–69, 1975. M. F. Atiyah, V. K. Patodi, and I. M. Singer. Spectral asymmetry and [R]{}iemannian geometry. [II]{}. , 78(3):405–432, 1975. M. F. Atiyah, V. K. Patodi, and I. M. Singer. Spectral asymmetry and [R]{}iemannian geometry. [III]{}. , 79(1):71–99, 1976. Paul Baum and Alain Connes. -theory for discrete groups. In [*Operator algebras and applications, [V]{}ol. 1*]{}, volume 135 of [*London Math. Soc. Lecture Note Ser.*]{}, pages 1–20. Cambridge Univ. Press, Cambridge, 1988. Paul Baum, Alain Connes, and Nigel Higson. Classifying space for proper actions and [$K$]{}-theory of group [$C^\ast$]{}-algebras. In [*[$C^\ast$]{}-algebras: 1943–1993 ([S]{}an [A]{}ntonio, [TX]{}, 1993)*]{}, volume 167 of [*Contemp. Math.*]{}, pages 240–291. Amer. Math. Soc., Providence, RI, 1994. Boris Botvinnik and Peter B. Gilkey. The eta invariant and metrics of positive scalar curvature. , 302(3):507–517, 1995. Ulrich Bunke. A [$K$]{}-theoretic relative index theorem and [C]{}allias-type [D]{}irac operators. , 303(2):241–279, 1995. Xiaoman Chen, Jinmin Wang, Zhizhang Xie, and Guoliang Yu. Delocalized eta invariants, cyclic cohomology and higher rho invariants. , 2019. Alain Connes and Henri Moscovici. Cyclic cohomology, the [N]{}ovikov conjecture and hyperbolic groups. , 29(3):345–388, 1990. Hao Guo, Zhizhang Xie, and Guoliang Yu. A [L]{}ichnerowicz vanishing theorem for the maximal [R]{}oe algebra. arXiv:1905.12299, 2019. Hao Guo, Zhizhang Xie, and Guoliang Yu. Functoriality for higher invariants of elliptic operators. , 2020. Nigel Higson and Gennadi Kasparov. -theory and [$KK$]{}-theory for groups which act properly and isometrically on [H]{}ilbert space. , 144(1):23–74, 2001. D. D. Joyce. Compact [$8$]{}-manifolds with holonomy [${\rm Spin}(7)$]{}. , 123(3):507–552, 1996. Vincent Lafforgue. La conjecture de [B]{}aum-[C]{}onnes à coefficients pour les groupes hyperboliques. , 6(1):1–197, 2012. Eric Leichtnam and Paolo Piazza. On higher eta-invariants and metrics of positive scalar curvature. , 24(4):341–359, 2001. John Lott. Delocalized [$L^2$]{}-invariants. , 169(1):1–31, 1999. Igor Mineyev and Guoliang Yu. The [B]{}aum-[C]{}onnes conjecture for hyperbolic groups. , 149(1):97–122, 2002. Paolo Piazza and Thomas Schick. Rho-classes, index theory and [S]{}tolz’ positive scalar curvature sequence. , 7(4):965–1004, 2014. Michael Puschnigg. New holomorphically closed subalgebras of [$C^*$]{}-algebras of hyperbolic groups. , 20(1):243–259, 2010. John Roe. , volume 90 of [*CBMS Regional Conference Series in Mathematics*]{}. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1996. John Roe. Positive curvature, partial vanishing theorems and coarse indices. 
, 59(1):223–233, 2016. Yanli Song and Xiang Tang. Higher orbit integrals, cyclic cocyles, and [K]{}-theory of reduced group [C\*]{}-algebra. arXiv:1910.00175, 2019. Peter F. Stebe. Conjugacy separability of groups of integer matrices. , 32:1–7, 1972. Stephan Stolz. Concordance classes of positive scalar curvature metrics. preprint available at <http://www.nd.edu/~stolz>. Stephan Stolz. Positive scalar curvature metrics—existence and classification questions. In [*Proceedings of the [I]{}nternational [C]{}ongress of [M]{}athematicians, [V]{}ol. 1, 2 ([Z]{}[ü]{}rich, 1994)*]{}, pages 625–636, Basel, 1995. Birkh[ä]{}user. Bai-Ling Wang and Hang Wang. Localized index and [$L^2$]{}-[L]{}efschetz fixed-point formula for orbifolds. , 102(2):285–349, 2016. Shmuel Weinberger, Zhizhang Xie, and Guoliang Yu. Additivity of higher rho invariants and nonrigidity of topological manifolds. , 2020. Zhizhang Xie and Guoliang Yu. Positive scalar curvature, higher rho invariants and localization algebras. , 262:823–866, 2014. Zhizhang Xie and Guoliang Yu. A relative higher index theorem, diffeomorphisms and positive scalar curvature. , 250:35–73, 2014. Zhizhang Xie and Guoliang Yu. Higher rho invariants and the moduli space of positive scalar curvature metrics. , 307:1046–1069, 2017. Zhizhang Xie and Guoliang Yu. Delocalized eta invariants, algebraicity, and [$K$]{}-theory of group [$C^*$]{}-algebras. , 2019. Zhizhang Xie and Guoliang Yu. Higher invariants in noncommutative geometry. 05 2019. Guoliang Yu. Localization algebras and the coarse [B]{}aum-[C]{}onnes conjecture. , 11(4):307–318, 1997. Guoliang Yu. A characterization of the image of the [B]{}aum-[C]{}onnes map. In [*Quanta of maths*]{}, volume 11 of [*Clay Math. Proc.*]{}, pages 649–657. Amer. Math. Soc., Providence, RI, 2010. [^1]: The first author is partially supported by NSFC 11420101001. [^2]: The second author is partially supported by NSF 1800737. [^3]: The third author is partially supported by NSF 1700021, NSF 1564398 and Simons Fellows Program. [^4]: There is also an extra technical assumption that the conjugacy class $\langle \alpha \rangle$ used in the definition of the delocalized eta invariant is required to have polynomial growth. [^5]: A smooth dense subalgebra of $C^\ast_r(\Gamma)$ is a dense subalgebra of $C^\ast_r(\Gamma)$ that is closed under holomorphic functional calculus. [^6]: The trace map $ {\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle \alpha \rangle}$ is given by the formula: $ \sum_{\beta\in \Gamma}a_\beta \beta\mapsto \sum_{\beta\in\left\langle \alpha \right\rangle }a_\beta.$ [^7]: To be more precise, the spectral gap of $\widetilde D$ at zero is greater than $\sigma_\Gamma$, where $\sigma_\Gamma$ is the constant given in Definition $\ref{def:cst}$. [^8]: For example, when $\Gamma$ is a Gromov’s hyperbolic group, Puschnigg’s smooth dense subalgebra [@Puschnigg] is such an subalgebra which admits a continuous extension of the trace map ${\hspace{.3mm}\mathrm{tr}\hspace{.3mm}}_{\langle\alpha\rangle}$ for all conjugacy classes $\langle h \rangle$. [^9]: Since $\Gamma/\Gamma_i$ is finite, we have $C^*_{L,0, \max}(E(\Gamma/\Gamma_i))^{\Gamma/{\Gamma_i}} \cong C^*_{L,0}(E(\Gamma/\Gamma_i))^{\Gamma/{\Gamma_i}}$. [^10]: The connected sum is performed away from the boundary of $\widetilde W$. 
[^11]: The rational injectivity of $\mu_{\mathbb R}$ follows from the rational injectivity of the complex version $\mu\colon K_\bullet^\Gamma(\underline{E}\Gamma)\to K_\bullet(C_{r}^*(\Gamma)).$ [^12]: In the case of Proposition $\ref{prop:red}$, we map $KO$-theory to $K$-theory. [^13]: The scalar curvature function $\kappa(x)$ of $M$ satisfies that $\kappa(x)> 4 \cdot \sigma_u^2$ for all $x\in M.$ [^14]: To be precise, $\eta_{\langle \alpha \rangle}(\widetilde D)$ is finite if the integral in line $\eqref{delocalizedeta}$ converges. In particular, the integral in line $\eqref{delocalizedeta}$ does *not* necessarily absolutely converge. [^15]: For any hyperbolic group and the conjugacy class of any nonidentity element, the integral in line $\eqref{delocalizedeta}$ absolutely converges.
--- abstract: 'The CDEX-1 experiment conducted a search of low-mass ($<$ 10 GeV/c$^{2}$) Weakly Interacting Massive Particles (WIMPs) dark matter at the China Jinping Underground Laboratory using a p-type point-contact germanium detector with a fiducial mass of 915 g at a physics analysis threshold of 475 eVee. We report the hardware set-up, detector characterization, data acquisition and analysis procedures of this experiment. No excess of unidentified events are observed after subtraction of known background. Using 335.6 kg-days of data, exclusion constraints on the WIMP-nucleon spin-independent and spin-dependent couplings are derived.' author: - 'W. Zhao' - 'Q. Yue' - 'K.J. Kang' - 'J.P. Cheng' - 'Y.J. Li' - 'H.T. Wong' - 'S.T. Lin' - 'J.P. Chang' - 'J.H. Chen' - 'Q.H. Chen' - 'Y.H. Chen' - 'Z. Deng' - 'Q. Du' - 'H. Gong' - 'X.Q. Hao' - 'H.J. He' - 'Q.J. He' - 'H.X. Huang' - 'T.R. Huang' - 'H. Jiang' - 'H.B. Li' - 'J. Li' - 'J. Li' - 'J.M. Li' - 'X. Li' - 'X.Y. Li' - 'Y.L. Li' - 'F.K. Lin' - 'S.K. Liu' - 'L.C. Lü' - 'H. Ma' - 'J.L. Ma' - 'S.J. Mao' - 'J.Q. Qin' - 'J. Ren' - 'J. Ren' - 'X.C. Ruan' - 'V. Sharma' - 'M.B. Shen' - 'L. Singh' - 'M.K. Singh' - 'A.K. Soma' - 'J. Su' - 'C.J. Tang' - 'J.M. Wang' - 'L. Wang' - 'Q. Wang' - 'S.Y. Wu' - 'Y.C. Wu' - 'Z.Z. Xianyu' - 'R.Q. Xiao' - 'H.Y. Xing' - 'F.Z. Xu' - 'Y. Xu' - 'X.J. Xu' - 'T. Xue' - 'L.T. Yang' - 'S.W. Yang' - 'N. Yi' - 'C.X. Yu' - 'H. Yu' - 'X.Z. Yu' - 'M. Zeng' - 'X.H. Zeng' - 'Z. Zeng' - 'L. Zhang' - 'Y.H. Zhang' - 'M.G. Zhao' - 'Z.Y. Zhou' - 'J.J. Zhu' - 'W.B. Zhu' - 'X.Z. Zhu' - 'Z.H. Zhu' title: 'A Search of Low-Mass WIMPs with p-type Point Contact Germanium Detector in the CDEX-1 Experiment ' --- I. Introduction {#1.introduction} =============== The long-term goal of the CDEX (China Dark matter EXperiment) program [@cdextarget] is to conduct an experiment at the China Jinping Underground Laboratory (CJPL) [@cjpl] with a ton-scale point-contact germanium detector array for low-mass WIMP searches [@rpp; @dm2015; @dm_dbd] and studies of double-beta decay in $^{76}$Ge [@rpp; @dm_dbd; @dbd2015; @dbd2012]. The pilot experiment CDEX-0 was with small planar germanium detectors in array form with a target mass of 20 g [@cdex0], achieving a threshold of 177 eVee (electron equivalent energy eVee is used to characterize detector response throughout in this article, unless otherwise stated). The CDEX-1 experiment adopted kg-scale p-type point contact germanium ($\textsl{p}$PCGe) detectors. Data taking of the first phase was performed only with a passive shielding system, and dark matter results were published with 14.6 kg-days of data taken from August to September, 2012 and a threshold of 400 eVee [@cdex1]. Starting November 2013, Phase-II measurements are based on the design of earlier work [@texono2003-2007; @cdex0], with an active NaI(Tl) anti-Compton (NaI-AC) detector installed. First results with 53.9 kg-days of data were reported [@cdex12014], providing an order of magnitude improvement on the spin-independent $\chi$-N coupling (WIMPs denoted by $\chi$). In particular, the allowed region implied by the CoGeNT [@cogent] experiment is probed and excluded with an identical detector target. We describe the details of the CDEX-1 experiment and report the results with 335.6 kg-days of data taking at CJPL in the following Sections. II. Experimental Setup {#2.experimental setup} ====================== A. 
China Jinping Underground Laboratory {#2.1 CJPL introduction} --------------------------------------- The China Jinping Underground Laboratory (CJPL) is located in Sichuan province, with a vertical rock overburden of more than 2400 m, providing 6720 meters of water equivalent overburden as a passive shield against cosmic rays and their induced backgrounds. The flux of cosmic rays is reduced to 61.7 y$^{-1}$$\cdot$m$^{-2}$ [@cjplmuon]. In addition, the radioactivities of $^{232}$Th, $^{238}$U and $^{40}$K from the rock surrounding CJPL are very low, based on in situ measurements [@cjplgamma]. The low cosmic-ray flux and the low radioactivities of $^{238}$U and $^{232}$Th give rise to a low level of neutron flux. B. Detector Hardware {#2.2 DAQ} -------------------- The CDEX-1 experiment adopted a single kg-scale $\textsl{p}$PCGe module to search for WIMPs. The p-type germanium crystal is a cylinder of about 62 mm in both height and diameter, corresponding to a mass of 994 g. It has two electrodes: the outer electrode is n$^{+}$ type, providing high voltage (HV) and signal, while the tiny point-like center electrode is p$^{+}$ type, with a diameter of order 1 mm, resulting in a capacitance of order 1 pF and hence a potentially low energy threshold. In the Phase-I experiment, the outer electrode signal was read out by a resistive feedback preamplifier [@cdex1]. In the Phase-II measurement, this signal output was removed due to the noise it induced, such that the outer electrode served only as the HV electrode. The center electrode signal was read out by a nearby ultra-low noise JFET and then fed into a pulsed-reset feedback preamplifier. The preamplifier generates three identical energy-related signals (OUT$_{-}$E), one timing-related signal (OUT$_{-}$T) and one inhibit signal (IHB) marking the inactive time of the preamplifier. The preamplifier also accepts a test input, typically from an electronic pulser, to simulate physical signals. The NaI(Tl) scintillator crystal of the AC detector has a well shape that encloses the cryostat of the $\textsl{p}$PCGe, as shown in Figure \[fig1\]; the thicknesses of its side and top are 48 mm and 130 mm, respectively. The scintillation light from the NaI(Tl) crystal was read out by a photomultiplier tube (PMT) with two outputs, from the anode and the dynode: one was loaded to a shaping amplifier at high gain, determining the time over the NaI-AC energy threshold, and the other was loaded to a timing amplifier at low gain, which was used to measure the energy as well as to discriminate background sources based on the pulse characteristics of different types of radiation. The schematic of the CDEX-1 data acquisition (DAQ) system, which is based on commercial NIM/VME modules and crates from CANBERRA and CAEN, is shown in Figure \[fig2\]. The $\textsl{p}$PCGe was operated at +3500 V provided by a high voltage module (CANBERRA 3106D). The three identical OUT$_{-}$E signals were loaded to shaping amplifiers (CANBERRA 2026) with shaping times of 6 $\mu$s (S$_{\textrm{p}6}$) and 12 $\mu$s (S$_{\textrm{p}12}$), and to a timing amplifier (CANBERRA 2111) (T$_{\textrm{p}}$), respectively. The gain of each of these amplifiers was adjusted to achieve the maximal signal-to-noise ratio and maximal information for low energy events. The energy range was limited to 12 keVee. The S$_{\textrm{p}6,12}$ signals provided the energy measurement and the system trigger of the DAQ. The T$_{\textrm{p}}$ signal recorded the raw pulse shape of each event, providing the rise time information.
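As an illustration of how rise time information can be extracted from the digitized T$_{\textrm{p}}$ traces, the following minimal Python sketch computes a 10$\%$ to 90$\%$ rise time from a single recorded pulse. This is not the analysis code of the experiment: the pedestal window, the threshold fractions and the synthetic pulse are illustrative assumptions only, and the 10 ns sampling interval corresponds to the 100 MHz FADC sampling described below.

```python
import numpy as np

def rise_time(samples, dt_ns=10.0, lo=0.10, hi=0.90, n_pedestal=50):
    """Estimate the lo-hi fractional rise time (in ns) of one digitized pulse.

    samples    : 1-D array of FADC samples of a single event
    dt_ns      : sampling interval; 10 ns corresponds to 100 MHz sampling
    n_pedestal : number of pre-trigger samples used to estimate the pedestal
    """
    samples = np.asarray(samples, dtype=float)
    pulse = samples - samples[:n_pedestal].mean()   # pedestal subtraction
    imax = int(np.argmax(pulse))                    # position of the pulse maximum
    peak = pulse[imax]
    # first samples (before the maximum) crossing the two threshold fractions
    lo_idx = np.nonzero(pulse[: imax + 1] >= lo * peak)[0]
    hi_idx = np.nonzero(pulse[: imax + 1] >= hi * peak)[0]
    if lo_idx.size == 0 or hi_idx.size == 0:
        return float("nan")
    return (hi_idx[0] - lo_idx[0]) * dt_ns

# Toy usage with a synthetic pulse (for illustration only).
t = np.arange(500)
toy = 100.0 * (1.0 - np.exp(-np.clip(t - 100, 0, None) / 30.0))
toy += np.random.default_rng(0).normal(0.0, 1.0, t.size)
print(rise_time(toy))
```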
The OUT$_{-}$T signal was distributed into a timing amplifier with low gain to measure high energy backgrounds, intending to analysis background source and opening a window to study $^{76}$Ge neutrinoless double-beta decay. These outputs were digitized and recorded by a flash analog-to-digital convertor (FADC, CAEN V1724) at 100 MHz sampling rate with a resolution of 14 bit. The data acquisition software is based on LabVIEW program. The discriminator output of the inhibit signal provided another trigger of the DAQ and was recorded to determine the exact time of the beginning of discharge process of the preamplifier. To monitor the noise level and dead time of the system, random trigger signals (RT) at 0.05 Hz generated by a precision pulser were injected into the system, providing system trigger. The NaI-AC detector is optimized on its energy threshold, energy linearity in broad energy range, energy resolution and stability. The NaI-AC signals were recorded only when the $\textsl{p}$PCGe detector was fired and triggered the DAQ, and this kind of coincidence events was denoted as AC$^{+}$. The anticoincidence events which only fired in the $\textsl{p}$PCGe detector but without signals at the NaI-AC detector are denoted as AC$^{-}$. Figure \[fig3\] shows an example of AC$^{+}$ event recorded by the DAQ. In general, the DAQ took data at low trigger rate ($\sim$3-5 Hz) to decrease penalty of dead time. ![\[fig1\] Schematic diagram of CDEX-1 experimental setup](Figure1.png){height="5.5cm" width="8.0cm"} ![image](Figure2.pdf){height="9.0cm" width="17.0cm"} ![\[fig3\] Example of one AC$^{+}$ event recorded by FADC, corresponding to energy $\sim$10.37 keVee deposited in $\textsl{p}$PCGe.](Figure3.pdf){width="1.0\linewidth"} C. Shielding System {#2.3 shield} ------------------- The passive shielding structure of CDEX-1 in CJPL is displayed in Figure \[fig1\]. The outermost is 20 cm of lead to shield ambient gamma ray. The inner is 20 cm thick layer of $\sim$ 30$\%$ borated polyethylene, acting as thermal neutron absorber. At phase I experiment, the innermost is a minimum of 20 cm of Oxygen Free High Conductivity (OFHC) copper surrounding the 994 g $\textsl{p}$PCGe detector cryostat in all directions, to further reduce gamma ray surviving from outer shield. Exterior to the OFHC shield is a plastic bag which is used to seal the working space to prevent radon incursion. The radon exclusion volume is continuously flushed with nitrogen gas from a pressurized Dewar. At phase II experiment, interior to OFHC shield is a NaI-AC detector with a well-shaped cavity enclosing the $\textsl{p}$PCGe detector cryostat to provide passive and active shielding. Detailed discussion about its performances is provided in Section III. The entire structure was located in a 1 m thick of polyethylene room, which can moderate and absorb ambient neutron. ![\[fig4\] Typical pulse of S$_{\textrm{p}6}$ at 1.84 keVee. Some parameters are defined, (A$_{\textrm{max}}$ , T$_{\textrm{max}}$) represent the maximal amplitude and its corresponding time of the pulse; (A$_{\textrm{min}}$ , T$_{\textrm{min}}$) represent the minimal amplitude and its corresponding time of the pulse; Q means integration of the pulse; Ped means the pedestal of the pulse.](Figure4.pdf){width="1.0\linewidth"} III. Detector Characterization {#3.ppcge characterization} ============================== The performances of the detection system were studied in details. 
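Before turning to the characterization itself, a minimal sketch of how the pulse parameters defined in Figure \[fig4\] (A$_{\textrm{max}}$, T$_{\textrm{max}}$, A$_{\textrm{min}}$, T$_{\textrm{min}}$, Q and Ped) could be extracted offline from one digitized trace. The pedestal window and the toy waveform are assumptions chosen only for illustration and do not represent the actual CDEX-1 analysis code.

```python
import numpy as np

def pulse_parameters(trace, dt, pedestal_samples=200):
    """Extract the quantities defined in Fig. 4 from one digitized pulse.

    trace            : raw FADC samples of one waveform (ADC counts)
    dt               : sampling interval (s)
    pedestal_samples : pre-trigger samples used for the pedestal (assumed)
    """
    ped = trace[:pedestal_samples].mean()      # Ped: baseline level
    shifted = trace - ped
    i_max = int(np.argmax(shifted))
    i_min = int(np.argmin(shifted))
    return {
        "Ped":   ped,
        "A_max": shifted[i_max],               # maximal amplitude
        "T_max": i_max * dt,                   # time of the maximum
        "A_min": shifted[i_min],               # minimal amplitude
        "T_min": i_min * dt,                   # time of the minimum
        "Q":     shifted.sum() * dt,           # integral of the pulse
    }

# Usage on a toy waveform (assumed shape, 100 MHz sampling as in the FADC)
dt = 10e-9
t = np.arange(0, 80e-6, dt)
toy = 50.0 + 200.0 * np.exp(-0.5 * ((t - 30e-6) / 6e-6) ** 2)
print(pulse_parameters(toy, dt))
```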
Characterization of the $\textsl{p}$PCGe, the NaI-AC and the DAQ are discussed in the following Sections. A. Energy Definition and Calibration {#3.1 energy calibration} ------------------------------------ A typical pulse of the $\textsl{p}$PCGe was displayed in Figure \[fig4\], with the parameters defined consistently for all channels. Two energy-related parameters are defined: (i) maximal amplitude of one pulse (A$_{\textrm{max}}$); (ii) integration of one pulse (Q). Optimized partial integration of S$_{\textrm{p}6}$ was chosen to define as energy ($\textsl{T}$) for its excellent energy linearity at low energy range. Since the active volume of $\textsl{p}$PCGe crystal is surrounded by $\sim$ 1.0 mm dead layer and 1.5 mm of OFHC copper cryostat, external low energy X-rays at at $<$ 50 keVee range cannot penetrate into the $\textsl{p}$PCGe crystal. Energy calibration was therefore done with its internal characteristic X-rays originated from the electron capture (EC) of the cosmogenic radioisotopes [@texono2013; @cdex1; @cdex12014]. Figure \[fig5\](a) shows the energy calibration by the two dominant K-shell X-rays: $^{68}$Ge (10.368 keVee), $^{65}$Zn (8.98 keVee) and RT events (0 keVee). The inset figure displays energy difference between the calibrated energy and the real energy of these three peaks, together with other peaks observed in the measured CDEX-1 background spectrum, demonstrating good linearity of less than 0.8$\%$ deviation. The relationship between energy and its resolution is also depicted in Figure \[fig5\](b), showing good linearity between $\surd$*T* and the energy resolution FWHM (Full Width at Half Maximum). The energy resolution at low energy region is derived from this line. ![\[fig5\] (a) Calibration line relating the optimal Q measurements from S$_{\textrm{p}6}$ with the known energies from $^{68}$Ge and $^{65}$Zn K-shell X rays and RT. The error bars are smaller than the data point size. The energy difference between the energy derived from the calibration and the real energy for these three peaks are depicted in the inset, together with K-shell X rays of $^{68}$Ga, $^{55}$Fe, $^{54}$Mn and $^{49}$V. (b) Relation between energy of K-shell X rays and energy resolution.](Figure5a.pdf "fig:"){width="8.0cm" height="7.0cm"} ![\[fig5\] (a) Calibration line relating the optimal Q measurements from S$_{\textrm{p}6}$ with the known energies from $^{68}$Ge and $^{65}$Zn K-shell X rays and RT. The error bars are smaller than the data point size. The energy difference between the energy derived from the calibration and the real energy for these three peaks are depicted in the inset, together with K-shell X rays of $^{68}$Ga, $^{55}$Fe, $^{54}$Mn and $^{49}$V. (b) Relation between energy of K-shell X rays and energy resolution.](Figure5b.pdf "fig:"){width="8.0cm" height="7.0cm"} ![\[fig6\] Energy calibration for NaI(Tl) SA channel. The error bars are smaller than the data point size. The inset figure depicts the measured background energy spectrum of NaI(Tl), and the energy threshold was set at the edge of noise.](Figure6.pdf){height="7.0cm" width="8.0cm"} NaI-AC detector was developed with emphasis on low energy threshold to achieve high efficiency of AC$^{+}$ background suppression. A$_{\textrm{max}}$ was used to define its energy, and calibrated by a $^{152}$Eu (121.78 keV, 244.70 keV, 344.28 keV) source together with RT events. The energy threshold of NaI-AC detector was achieved as low as 6 keVee for background measurement, as illustrated in Figure \[fig6\]. B. 
Quenching Factor {#3.2 QF} ------------------- Quenching factor (QF) is defined as the ratio of the measured energy to the total nuclear recoil energy deposited in the detector medium. It is crucial to know the relation between QF and nuclear recoil energy in the studies of WIMP search. Figure 7 showed a compilation of all experiment measurements and calculations of QF for recoiled germanium nuclei [@cdex12014]. Several experiments have measured the QF down to a few keVnr (nuclear recoil energy). Typically, two methods can be used to calculate the QF for different nuclear recoil energy. In TRIM software simulation, several aspects of stopping power, range and straggling distributions of a recoiled nucleon with certain energy are considered, while Hartree-Fock atoms and lattice effects are also included [@trim]. In analytic Lindhard calculation, an ideal and static atom is adopted, and Lindhard model is parameterized to a constant *k* which is related to stopping power [@lindard]. The TRIM results agree well with the QF experimental results at a larger energy range and therefore are adopted in our analysis. As illustrated in Figure \[fig7\], QF function derived from TRIM with a 10$\%$ systematic uncertainty is applied in our analysis. ![\[fig7\] QF results for germanium from both experiments and calculations. The QF curve which is derived from TRIM [@trim] as a function of nuclear recoil energy, together with a 10$\%$ systematic error band. The various experimental measurements are overlaid, so are the alternative QFs from parameterization of CoGeNT [@cogent] and the Lindhard theory [@lindard] at *k*=0.2 and *k*=0.157 adopted by CDMSlite [@cdmslite]. It shows that the TRIM results with uncertainties covers most data points as well as the alternative formulations.](Figure7.png){height="8.0cm" width="8.0cm"} C. Dead Layer {#3.3 dead layer} ------------- The n$^{+}$ outer surface electrode of $\textsl{p}$PCGe is fabricated by lithium diffusion, resulting in normally about 1 mm depth of dead layer. This dead layer is composed of totally dead layer where the electric field is zero, and transition layer where the electric field is weak. Interior to transition layer is active volume. Electron-hole pairs generated from events taking place in transition layer have slower drift velocity than those in active volume, leading to pulse with typically slow rise time as well as degraded amplitude due to partial charge collection [@texonobs]. We denoted events at the active volume with complete charge collection as bulk events and events at the dead layer as surface events, as illustrated in Figure \[fig8\]. The totally dead layer acts as passive shield against external low energy $\gamma$/$\beta$, and transition layer acts as active shield against ambient gamma rays through bulk/surface events discrimination based on rise time characteristics. This is self-shield effect of $\textsl{p}$PCGe. On the contrary, the dead layer produced fiducial mass loss. Since the attenuation of gamma rays by the dead layer was dependent on energy, the ratio of these gamma rays at photoelectron peaks would be changed. $^{133}$Ba source with various energy gamma rays was used to measure the thickness of the dead layer for the $\textsl{p}$PCGe, and it was derived to be (1.02$\pm$0.14) mm via comparison of measured and simulated intensity ratios of those gamma peaks [@deadlayer_majorana]. This give rise to fiducial mass to be 915 g with 1$\%$ uncertainty. 
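As an illustration of the Lindhard alternative discussed above, the following sketch evaluates the standard Lindhard parameterization of the quenching factor for germanium. It is not the TRIM-derived curve actually adopted in the analysis, and the recoil energies chosen are arbitrary.

```python
import numpy as np

def lindhard_qf(e_nr_kev, k=0.157, Z=32):
    """Standard Lindhard quenching factor for a recoiling nucleus of charge Z.

    e_nr_kev : nuclear recoil energy in keVnr
    k        : Lindhard parameter (0.157 and 0.2 are the values quoted in the text)
    Returns the fraction of the recoil energy visible as ionization (keVee/keVnr).
    """
    eps = 11.5 * np.asarray(e_nr_kev, dtype=float) * Z ** (-7.0 / 3.0)
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)

# Convert a few nuclear-recoil energies to electron-equivalent energies
e_nr = np.array([1.0, 2.0, 5.0, 10.0])   # keVnr, arbitrary illustration
for e, qf in zip(e_nr, lindhard_qf(e_nr)):
    print(f"{e:5.1f} keVnr -> QF = {qf:.3f} -> {e * qf:.2f} keVee")
```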
![\[fig8\] Schematic diagram of $\textsl{p}$PCGe crystal configuration.](Figure8.pdf){height="6.0cm" width="8.0cm"} D. Trigger Efficiency {#3.4 trigger} --------------------- In principle, physical events over the DAQ threshold would produce triggers and be recorded. The efficiency that events produced triggers for the DAQ is defined as trigger efficiency, which was 50$\%$ for events at the discriminator threshold. AC$^{+}$ events from source sample were used to derive the trigger efficiency [@cdex0; @texono2009; @cdex1]. Figure \[fig9\] displayed the trigger efficiency together with 1$\sigma$ band derived from $^{137}$Cs AC$^{+}$ sample. It is shown that the trigger threshold was 246$\pm$2 eVee, and the trigger efficiency was 100$\%$ above our analysis threshold 475 eVee. ![\[fig9\] Trigger efficiency derived from $^{137}$Cs AC$^{+}$ samples, adopting error function to fit the experiment data points, and 1 $\sigma$ band of the error was superimposed.](Figure9.pdf){height="7.0cm" width="8.0cm"} E. Stability {#3.5 stability} ------------ Both the trigger rate and the noise of RT of the $\textsl{p}$PCGe detector were monitored, shown in Figure \[fig10\]. An improvement of the Laboratory power supply took place at the time period of I. A power filter was used to stabilize the power supply, and the electronic noise of the detector system decreased around 10$\%$. Calibration was performed from late July to late August, 2014, corresponding to the time period of II. During the time period of III and IV, the construction work at the PE room prevented data taking. Both the trigger rate and the noise of RT were kept stable to 16$\%$ and 2$\%$, respectively, during the data taking periods. ![\[fig10\] Top Panel: daily average trigger rate of the $\textsl{p}$PCGe detector system; Bottom Panel: daily average RT electronic noise of the $\textsl{p}$PCGe detector system.](Figure10.pdf){width="1.0\linewidth"} IV. Data Analysis {#4. data analysis} ================= The data analysis is based on timing and amplitude parameters extracted from pulses recorded by the DAQ described in Section II-B. A. Parameters Definition {#4.1 parameters} ------------------------ The amplitude parameters are defined in Section III-A. The timing parameters can be classified into three categories: (i) the timing differences between one event and its closest prior and post IHB events, denoted as T$_{-}$ and T$_{+}$, the detailed information described in [@cdex1]; (ii) the timing interval of one event recorded by the $\textsl{p}$PCGe detector and the NaI-AC detector, $\Delta$t; (iii) the rise time of one event $\tau$, defined as the time interval between 5$\%$ and 95$\%$ of the T$_{\textrm{p}}$ pulse height. To calculate the $\tau$, the pulse-processing algorithm in [@texono2013; @cdex12014; @texonobs] was applied. This rise-time provides the location information where one event happened, in active volume or in dead layer, to discriminate the Bulk/Surface events. ![\[figtt\] The scatter plots of T$_{-}$ and T$_{+}$ for random trigger events and background events. The TT cut has also been overlaid on the scatter plot. 
The inset figure shows the T$_{+}$ spectra of background before (black) and after the TT cut (green), together with the T$_{+}$ spectra for RT events (red).](FigureTT.png){height="7.0cm" width="8.5cm"} ![\[fig11ped\] The distributions of Ped of S$_{\textrm{p}6}$ for both background and RT events, and the Ped cut criteria.](Figure11.png){height="6.0cm" width="8.0cm"} ![image](Figure12.png){height="5.5cm" width="16.5cm"} B. Data Selection {#4.2 data selection procedure} ----------------- We developed a data selection procedure to identify the WIMP-induced nuclear recoil events, after the dataset calibration and data quality checking [@cdex1]. The procedure contains three categories of selection criteria: 1. Basic Cuts (BC): These basic criteria were designed to differentiate physical events from electronic noise and spurious signals, such as microphonics. Several methods were applied to eliminate noise events according to their characteristics. The first method was based on the timing information of events, derived from the distributions of the parameters T$_{-}$ and T$_{+}$; the class of “mid-period noise” with a clear timing signature was identified and removed by the TT cut, as shown in Figure \[figtt\]. The second method was based on the pedestals of S$_{\textrm{p}6,12}$ and T$_{\textrm{p}}$ (Ped), which are independent of the pulse shape, so the criteria were defined using RT events. This method was used to reject noise events whose pedestals behaved anomalously, mostly originating from IHB signals, as illustrated in Figure \[fig11ped\]. Both the TT and Ped cuts are independent of event energy. The third method relied on pulse shape discrimination (PSD), based on the correlations among A$_{\textrm{min}}$, A$_{\textrm{max}}$, Q and T$_{\textrm{max}}$, since physical events have different distributions in these parameters from noise events. The criteria were determined using physical events, defined as AC$^{+}$ events of the $^{137}$Cs calibration data, as depicted in Figure \[fig12bc\]. 2. AC$^{+}$ versus AC$^{-}$ events selection (AC cut): Given the $\chi$N interaction cross section, WIMPs can hardly induce signals in both the $\textsl{p}$PCGe and the NaI-AC detectors, whereas $\gamma$ rays can produce signals in both. The distribution of $\Delta$t is presented in Figure \[fig13ac\]. The AC$^{+}$ events, in coincidence between the $\textsl{p}$PCGe and the NaI-AC, are distributed within a specific band, while AC$^{-}$ and RT events have a fixed $\Delta$t except for events with accidental coincidence. The accidental coincidence events are uniformly distributed in the time range. The trigger timing is defined by a constant amplitude discriminator of the S$_{\textrm{p6}}$ signal, such that $\Delta$t between the two detectors varies with energy. 3. Bulk versus Surface events selection (BS cut): This selection criterion is the final cut, identifying AC$^{-}$ physical events which took place in the active bulk volume, based on $\tau$ defined in Section IV-A. The scatter plot of $\tau$ versus energy is shown in Figure \[fig14bs\](a), which exhibits two characteristic bands representing bulk (B) and surface (S) events, respectively. Typical B and S events, as well as their fitted profiles at the analysis energy threshold ($\sim$500 eVee), are depicted in Figure \[fig14bs\](b). ![\[fig13ac\] $\Delta$t versus energy distribution and AC cut criteria. The rejected band corresponds to the AC$^{+}$ events.
](Figure13.png){height="6.5cm" width="8.0cm"} ![\[fig14bs\] (a) Scatter plot of $\log_{10}(\tau)$ versus energy for AC$^{-}$ events; the BS cut criterion is defined by the $\tau_{0}$ line; (b) (c) Typical S and B events with energy at $\sim$500 eVee, together with their fitting profiles.](Figure14a.pdf "fig:"){width="8.0cm" height="6.5cm"} ![\[fig14bs\] (a) Scatter plot of $\log_{10}(\tau)$ versus energy for AC$^{-}$ events; the BS cut criterion is defined by the $\tau_{0}$ line; (b) (c) Typical S and B events with energy at $\sim$500 eVee, together with their fitting profiles.](Figure14b.pdf "fig:"){width="8.0cm" height="5.0cm"} C. Efficiency Evaluation {#4.3 efficiency} ------------------------ At a total DAQ rate of $\sim$3 Hz, the DAQ live time was 99.9$\%$, measured by the survival probability of the RT events generated by a pulse generator of high precision and stability. Different methods have been adopted to calibrate the efficiencies of the different data selection criteria. The signal efficiencies for the TT, Ped and AC cuts, which are energy independent, can be evaluated accurately with RT events, and were 94.0$\%$, 96.8$\%$ and nearly 100$\%$, respectively. The efficiency for the energy-dependent PSD cuts was derived from physics events due to radioactive sources. The same cuts were applied to these samples and the survival fractions provided measurements of $\varepsilon_\textrm{PSD}$, as displayed in Figure 15a. The final efficiency calibration is for the BS cut, which requires the evaluation of the B-signal retaining ($\effbs$) and S-background rejection ($\lmbdbs$) efficiencies. These two efficiency factors translate the measured spectra (B, S) to the actual spectra (B$_{0}$, S$_{0}$); their relationship is given by the following coupled equations: $$\begin{aligned} \label{eq::elcoupled} {\rm B} & = & \effbs \cdot {\rm B}_0 ~ + ~ ( 1 - \lmbdbs ) \cdot {\rm S}_0 \\ {\rm S} & = & ( 1 - \effbs) \cdot {\rm B}_0 ~ + ~ \lmbdbs \cdot {\rm S}_0 ~. \nonumber %\label{eq::elcoupled}\end{aligned}$$ Since $>$99$\%$ of the background from external radioactivity measured by our $\textsl{p}$PCGe detector has energy below 1.5 MeV [@cdex1], $\gamma$ sources of corresponding energies \[ $^{241}$Am (59.5 keV), $^{57}$Co (122 keV), $^{137}$Cs (662 keV) and $^{60}$Co (1173 keV, 1332 keV) \] were used to calibrate ($\effbs$, $\lmbdbs$); the detailed procedures are described in our previous work [@texono2013; @cdex12014; @texonobs]. The energy-dependent $\effbs$ is shown in Figure 15a and $\lmbdbs$ in Figure 15b. The ($\effbs$, $\lmbdbs$)-corrected spectra B$_0$ can be derived via Eq. (1): $$\begin{aligned} {\rm B}_0 & = & \frac{\lmbdbs}{\effbs + \lmbdbs - 1} \cdot {\rm B} ~ - ~\frac{1 - \lmbdbs}{\effbs + \lmbdbs - 1} \cdot {\rm S} ~~\\ {\rm S}_0 & = & \frac{\effbs }{\effbs + \lmbdbs - 1} \cdot {\rm S} ~ - ~\frac{1 - \effbs}{\effbs + \lmbdbs - 1} \cdot {\rm B} ~. \nonumber \label{eq::elsol}\end{aligned}$$ It was demonstrated that neglected (that is, taking $\lmbdbs$=1) or under-estimated S-contaminations of the B-samples can result in incorrectly-assigned signal events. ![\[fig15eff\] (a) The measured $\varepsilon_{\textrm{PSD}}$ and $\effbs$ as functions of energy. (b) The measured $\lmbdbs$ as a function of energy. ](Figure15.pdf){height="11.0cm" width="8.0cm"} D. Systematic Uncertainties {#4.4 sys. error} --------------------------- The systematic uncertainties of AC$^{-}$$\otimes$B$_0$ derived from the raw data are summarized in Table \[sys.
err\], using two typical energy ranges as illustration. The systematic contributions arise from: 1. Data Taking: 1. The DAQ was in stable operation at more than 98$\%$ of the time. The trigger rate is low and the DAQ live time is close to 100$\%$. Contributions to systematic uncertainties are negligible. 2. Trigger Efficiency $-$ Since the analysis threshold (475 eVee) is much higher than the trigger threshold (246 eVee at 50$\%$), the trigger efficiency of the physics events relevant to this analysis was 100$\%$, resulting in negligible contribution to the systematic uncertainties. 3. Fiducial Mass $-$ The error of the measured thickness of the dead layer gave rise to a 1$\%$ uncertainty at fiducial mass. This corresponds to an additional 0.1$\%$ contribution to the total systematic uncertainty at 475 eVee. 2. Signal Selection: The systematic uncertainties originated from the stability of BC and AC cuts, and were studied with the change of cut parameters around the nominal values. The BC cut contributed an additional 0.5$\%$ to contribution to the total systematic uncertainty at 475 eVee, while the contribution arising from AC cut is negligible. 3. Bulk Events Selection: The evaluation of systematic effects follow the procedures described in our earlier work [@texonobs; @cdex12014]. In particular, 1. The leading systematic uncertainties is from the B-event selection and (${\rm \varepsilon_{BS}}$, ${\rm \lambda_{BS}}$) calibration due to possible differences in locations and energy spectra between the calibration sources and background events. The calibration sources probe the surface effects due to both low energy (surface richer) and high energy (bulk richer) photons. The $\tau$ distributions for B-events are identical for both sources and physics background, while those for S-events showed intrinsic difference due to the difference in surface penetration which manifest as the difference of slopes in the (${\rm \varepsilon_{BS}}$,  ${\rm \lambda_{BS}}$) plane [@cdex12014].The systematic uncertainties are derived from the spread of the (${\rm \varepsilon_{BS}}$,  ${\rm \lambda_{BS}}$) intersections of calibration bands, relative to the combined best-fit solution. This leads to a 25.0$\%$ contribution to the total error in the efficiencies-corrected Bulk rates B$_{0}$, accounting for the most significant in the total systematic uncertainty. 2. The systematic uncertainties related with different locations are studied with the sources placed at several positions of the top and the side (the cylindrical surface) of the $\textsl{p}$PCGe. Among them, the $^{241}$Am $\gamma$ from the side are strongly attenuated due to additional thickness from the cylindrical copper support structure and curved surface of the germanium crystal and therefore do not produce useful signals. The higher energy $\gamma$ from $^{57}$Co, $^{137}$Cs and $^{60}$Co at top and side, as well as those from physics samples (BC$\otimes$AC$^{+}$ and BC$\otimes$AC$^{-}$), show similar distributions in $\tau$, independent of locations. The shift in (${\rm \varepsilon_{BS}}$, ${\rm \lambda_{BS}}$) based on calibration source data at different locations is less than 4$\%$, corresponding to a 3.7$\%$ contribution to the total error in the efficiencies-corrected Bulk rates B$_{0}$. 4. Choice Quenching Function: Two studies were performed to investigate the sensitivities to exclusion limits from the choice of QF. 
(i) As displayed in Figure \[fig7\], the red line evaluated by the TRIM software, together with the yellow band (10$\%$ systematic uncertainty), was adopted. The analysis was performed by scanning the QF within 10$\%$ of its nominal value. The differences among these results are small, e.g. the variation of $\sigma^{SI}_{\chi N}$ is about 15$\%$ at $m_{\chi}$=8 GeV/c$^{2}$, and the least stringent bounds among them at a given WIMP mass were adopted as our final physics limits. (ii) The same procedure as in (i) was applied with the QF evaluated by Lindhard ($\textit{k}$=0.157) and by CoGeNT [@cogent]. The differences are again small, e.g. about 14$\%$ deviation in $\sigma^{SI}_{\chi N}$ at m$_{\chi}$=8 GeV/c$^{2}$. These results have been displayed in [@cdex12014]; our formulation with TRIM provides the most conservative limits among the alternatives. In our previous work with 53.9 kg-days of exposure, the statistical uncertainties were dominant, contributing 86$\%$ of the total uncertainty [@cdex12014]. As the exposure expanded to 335.6 kg-days, the statistical uncertainties became secondary and the systematic uncertainties dominated, contributing 81$\%$ of the total uncertainty. It is therefore crucial to develop new methods of evaluating (${\rm \varepsilon_{BS}}$, ${\rm \lambda_{BS}}$) in order to further reduce this dominant contribution to the systematic uncertainty.

  -------------------------------------------------------------------------------------- ------------------------------------------- -------------------------------------------
  Energy Bin                                                                              0.475-0.575 keVee                           1.975-2.075 keVee
  AC$^-$$\otimes$B$_0$ and Errors (kg$^{-1}$keV$^{-1}$day$^{-1}$)                         $4.00 \pm 0.64 [$stat$] \pm 0.87 [$sys$]$   $3.61 \pm 0.36 [$stat$] \pm 0.28 [$sys$]$
                                                                                          $=4.00 \pm 1.08$                            $=3.61 \pm 0.46$
  I) Statistical Uncertainties:
  (i) Uncertainties on Calibration (${\rm \varepsilon_{BS}}$,${\rm \lambda_{BS}}$)        0.32                                        0.08
  (ii) Derivation of (${\rm \varepsilon_{BS}}$,${\rm \lambda_{BS}}$)-corrected Bulk Rates 0.55                                        0.35
  Combined                                                                                0.64                                        0.36
  II) Systematic Uncertainties:
  A. Data Taking:
  (i) DAQ                                                                                 0.00                                        0.00
  (ii) Trigger Efficiency                                                                 0.00                                        0.00
  (iii) Fiducial Mass                                                                     0.05                                        0.05
  B. Signal Selection:
  (i) BC cuts                                                                             0.08                                        0.05
  (ii) AC cut                                                                             0.00                                        0.00
  C. Bulk Event Selection:
  (i) Rise-time Cut-Value $\tau_{0}$                                                      0.27                                        0.12
  (ii) Normalization Range (3-5 keVee)                                                    0.07                                        0.01
  (iii) (${\rm B_{0}}$,${\rm S_{0}}$) = (B,S) at Normalization                            0.10                                        0.10
  (iv) Choice of Discard Region                                                           0.30                                        0.06
  (v) Source Location                                                                     0.28                                        0.19
  (vi) Source Energy Range and Spectra                                                    0.72                                        0.12
  Combined                                                                                0.87                                        0.28
  -------------------------------------------------------------------------------------- ------------------------------------------- -------------------------------------------

V. Limits on WIMPs {#5. exclusion physical results}
==================

The measured energy spectra and their evolution with the data selection steps are depicted in Figure \[fig16sp\] (a). Six cosmogenic nuclides can be identified clearly through their K-shell X-ray peaks, and the contributions of the corresponding L-shell X-rays at low energies can be calculated accurately, since the ratios of the intensities of the K-shell and L-shell X-rays are well defined, as shown in Figure \[fig17lxres\] (a). The half-lives of the dominant nuclides can be measured through their K-shell X-rays. Figure \[fig16sp\] (b) displays the decay of $^{68}$Ge, $^{65}$Zn and $^{55}$Fe.
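A minimal sketch of how a cosmogenic half-life can be extracted from the time evolution of a K-shell X-ray peak by fitting an exponential decay. The rate values below are placeholders chosen only to be roughly consistent with the $^{68}$Ge half-life; they are not CDEX-1 data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t_days, rate0, half_life_days):
    """Exponential decay of a cosmogenic K-shell X-ray line rate."""
    return rate0 * np.exp(-np.log(2.0) * t_days / half_life_days)

# Placeholder inputs: peak rate of a 10.37 keVee 68Ge-like line in successive
# time bins (counts/day) with statistical errors -- not measured CDEX-1 data.
t_days = np.array([30.0, 90.0, 150.0, 210.0, 270.0, 330.0, 390.0, 450.0])
rate   = np.array([5.2, 4.4, 3.8, 3.2, 2.8, 2.4, 2.0, 1.7])
err    = np.array([0.3, 0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2])

popt, pcov = curve_fit(decay, t_days, rate, p0=(6.0, 270.0),
                       sigma=err, absolute_sigma=True)
half_life, dhalf = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted half-life = {half_life:.0f} +/- {dhalf:.0f} days "
      f"(nominal value for 68Ge: 270.8 days)")
```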
High energy $\gamma$ rays originating from ambient radioactivity contribute a flat electron-recoil background. The nature of the interaction between WIMPs and baryonic matter is a priori unknown. The data were analysed with two benchmark $\chi$-N cross-sections: spin-independent (SI, scalar) and spin-dependent (SD, axial-vector) couplings: $$\begin{aligned} \label{eq::SISD} {\rm \frac{d\sigma_{\chi N}}{dE_{R}}} & = & {\rm (\frac{d\sigma_{\chi N}}{dE_{R}})_{SI} ~ + ~(\frac{d\sigma_{\chi N}}{dE_{R}})_{SD}} ~.\end{aligned}$$ In general, the SI cross section can be written as: $$\begin{aligned} \label{eq::SI} {\rm (\frac{d\sigma_{\chi N}}{dE_{R}})_{SI}} & = & {\rm \frac{2 m_{N}}{\pi \emph{v}^{2}}[Z\emph{f}_{p} ~ + ~ (A ~ - ~Z)\emph{f}_{n}]^{2}F^{2}(E_{R}) } ~.\end{aligned}$$ where $\emph{f}_{\textrm{p}}$ and $\emph{f}_{\textrm{n}}$ describe the WIMP couplings to the proton and the neutron. In most cases $\emph{f}_{\textrm{p}}$ $\approx$ $\emph{f}_{\textrm{n}}$, so Eq. (4) can be simplified to: $$\begin{aligned} \label{eq::SIsi} {\rm (\frac{d\sigma_{\chi N}}{dE_{R}})_{SI}} & = & {\rm \frac{2 m_{N}}{\pi \emph{v}^{2}} A^{2} (\emph{f}_{p})^{2} F^{2}(E_{R}) } ~.\end{aligned}$$ leading to an $\textrm{A}^{2}$ dependence of the SI cross section. The SD differential cross section can be expressed as: $$\begin{aligned} \label{eq::SD} {\rm (\frac{d\sigma_{\chi N}}{dE_{R}})_{SD}} & = & {\rm \frac{16 m_{N}}{\pi \emph{v}^{2}} \Lambda^{2} G_{F}^{2} J(J + 1) \frac{S(E_{R})}{S(0)} }~.\end{aligned}$$ where J is the total angular momentum of the nucleus. Eq. (6) shows that the SD cross section is proportional to $\Lambda^{2}$J(J+1), a function of the total angular momentum of the nucleus [@sisd]. A best-fit analysis was applied to the residual spectrum of Figure 17b after subtraction of the L-shell X-rays, with two parameters representing the flat gamma background and a possible $\chi$N spin-independent cross section $\sigma^{SI}_{\chi N}$, scanning m$_{\chi}$ between 4 and 30 GeV/c$^{2}$. The standard WIMP halo assumption [@wimphalo] and conventional astrophysical models [@wimpmodel] are applied to describe WIMP-induced interactions, with a local WIMP density of 0.3 GeV/cm$^{3}$, a Maxwellian velocity distribution with $v_{0}$=220 km/s and a galactic escape velocity of $v_{esc}$=544 km/s. Exclusion plots on (m$_{\chi}$, $\sigma^{SI}_{\chi N}$) at 90$\%$ confidence level are shown in Figure \[fig18exp\](a), together with bounds and allowed regions from several representative experiments [@cogent; @dama; @cdmslite; @lux; @supercdms; @cresst2015; @cdms2si]. The sensitivity to $\sigma^{SI}_{\chi N}$ has improved by a factor of a few over our work of last year [@cdex12014], owing to the several times larger exposure. Most of the light WIMP regions between 6 and 20 GeV/c$^2$ implied by earlier experiments are probed and rejected. ![\[fig16sp\] (a) Measured energy spectra of the 1 kg $\textsl{p}$PCGe detector and their evolution with data selection progress. Six cosmogenic nuclides have been identified. (b) Time evolution of the three dominant K-shell X-rays: $^{68}$Ge, $^{65}$Zn and $^{55}$Fe. The measured half-lives of 279.7$\pm$17.8 days, 235.3$\pm$16.0 days and 955.5$\pm$411.2 days, respectively, are consistent with the nominal values of 270.8 days, 244.3 days and 997.1 days. ](Figure16.pdf){width="8.0cm" height="10.0cm"} ![\[fig17lxres\] (a) Energy spectrum with all selection cuts and efficiency correction factors applied.
Various L-shell X-rays are identified based on measured K-shell X-rays intensities, and superimposed on a flat background from ambient high-energy gamma-rays. (b) The residual spectrum with contributions subtracted.The red-line represent the best-fit with two parameters: flat gamma-background and spin-independent $\chi$-N cross-section, at m$_{\chi}$=8 GeV/c$^{2}$. An excluded (m$_{\chi}$; $\sigma^{SI}_{\chi N}$) scenario of CDMS(Si) [@cdms2si] is superimposed. ](Figure17.pdf){width="7.0cm" height="13.0cm"} The limits on spin-dependent $\chi$-neutron (denoted by $\chi$n) cross sections were also extracted. Exclusion plots on (m$_{\chi}$,$\sigma^{SD}_{\chi n}$) plane at 90$\%$ confidence level for light WIMPs was also derived, as depicted in Figure \[fig18exp\](b), and bounds from other benchmark experiments [@dama_sd_allowed; @cdms_sd_le; @xenon100_sd] are also superimposed. The limits were derived from the model-independent approaches prescribed in Refs [@sd1; @sd2]. Different $^{73}$Ge nuclear physics matrix elements [@geme] adopted as input generated consistent results. The DAMA allowed region at low-m$_{\chi}$ was probed and excluded. Furthermore, it was shown that these results were competitive around m$_{\chi}$=6 GeV/c$^{2}$. For completeness, the exclusion limits for the spin-dependent cross-section derived from our earlier CDEX-0 data [@cdex0] are also displayed in Figure \[fig18exp\](b). [**(a)**]{}\ ![ The 90$\%$ confidence level upper limit of (a) spin-independent $\chi$-N coupling and (b) spin-dependent $\chi$-neutron cross-sections. The CDEX-1 results from this work are depicted in solid black. Bounds from other benchmark experiments [@cdex1; @cdex12014; @cogent; @dama; @cdmslite; @lux; @supercdms; @cresst2015; @cdms2si] are superimposed. []{data-label="fig18exp"}](Figure18a.pdf "fig:"){width="8.0cm" height="8.0cm"}\ [**(b)**]{}\ ![ The 90$\%$ confidence level upper limit of (a) spin-independent $\chi$-N coupling and (b) spin-dependent $\chi$-neutron cross-sections. The CDEX-1 results from this work are depicted in solid black. Bounds from other benchmark experiments [@cdex1; @cdex12014; @cogent; @dama; @cdmslite; @lux; @supercdms; @cresst2015; @cdms2si] are superimposed. []{data-label="fig18exp"}](Figure18b.pdf "fig:"){width="8.0cm" height="8.0cm"} VI. Summary and Prospects {#6. conclusion} ========================= The hardware, operation and analysis details of the CDEX-1 experiment are described in this article. New limits on both SI and SD cross-sections are derived with a data size of 335.6 kg-days, spanning over 17 months. The studies of annual modulation effects with this data set are being pursued. Another 1 kg $\textsl{p}$PCGe with lower threshold is taking data at CJPL with data analysis and background understanding underway. A $\textsl{p}$PCGe “CDEX-10” detector array with target mass of the range of 10 kg and installed in liquid nitrogen as cryogenic medium is being commissioned. A future option of replacement with liquid argon to serve in addition an anti-Compton detector is being explored. In the meantime, a $\textsl{p}$PCGe detector completely fabricated by the CDEX Collaboration with a Ge crystal provided by the Industry is being constructed. This would allow complete control on the choice of materials which are crucial towards the future goal of ton-scale Ge detectors for dark matter and double beta decay experiments. Acknowledgements {#7. 
thanks} ================ This work was supported by the National Natural Science Foundation of China (Contracts No.10935005, No.10945002, No.11275107, No.11175099, No.11475099) and National Basic Research program of China (973 Program) (Contract No. 2010CB833006) and NSC 99-2112-M-001-017-MY3 and Academia Sinica Principal Investigator 2011-2015 Grant from Taiwan. [99]{} K.J. Kang et al., Front. Phys. [**8**]{}, 412 (2013). K. J. Kang, J. P. Cheng, Y. H. Chen, Y. J. Li, M. B. Shen, S. Y. Wu, and Q. Yue, J. Phys. Conf. Ser. [**203**]{}, 012028 (2010). K.A.Olive et al., Review of Particle Physics, Chin. Phys. C, [**38**]{} 090001 (2014). Marc Schumann., EPJ Web of Conferences [**96**]{} 01027 (2015). P. Cushman, C. Galbiati, et al., arXiv:1310.8327. A.S. Barabash., Physics Procedia [**74**]{} (2015) 416šC422. A.S. Barabash., AIP Conf.Proc. [**1686**]{} (2015) 020003. S.M. Bilenky, C. Giunti., Mod. Phys. Lett. A [**27**]{}, 1230015 (2012). S.R. Elliott, Mod. Phys. Lett. A, [**27**]{}, 1230009 (2012). S. K. Liu et al., Phys. Rev. D [**90**]{}, 032003 (2014). W. Zhao et al., Phys. Rev. D [**88**]{}, 052004 (2013); K. J. Kang et al., Chin. Phys. C [**37**]{}, 126002 (2013). H.B. Li et al, PRL [**90**]{}, 131802 (2003); H.T. Wong et al., PRD [**75**]{}, 012001 (2007). Q. Yue et al., Phys. Rev. D [**90**]{}, 091701(R) (2014). C. E. Aalseth et al., Phys. Rev. D [**88**]{}, 012002 (2013); arXiv:1401.3295. Y. C. Wu et al., Chin. Phys. C [**37**]{}, 086001 (2013). Zhi Zeng et al., J Radioanal Nucl Chem [**301**]{}:443-450(2014). H. B. Li et al., Phys. Rev. Lett. [**110**]{}, 261301 (2013). J. F. Ziegler, http://www.srim.org. J. Lindhard et al., K. Dan. Vidensk. Selsk. Mat. Fys. Medd. [**33**]{}, 10 (1963). H.B. Li et al. Artropart. Phys. [**56**]{}, 1 (2014). E. Aguayo et al., Nucl. Inst. Meth. A [**701**]{}, (2013). S. T. Lin et al., Phys. Rev. D [**79**]{}, 061101 (2009). R. Agnese et al., arXiv:1509.02448v1. David G. Cerdeno and Anne M. Green., PARTICLE DARK MATTER:Observations, Models and Searches. [**17**]{}:347-352 (2010). F. Donato, N. Fornengo, and S. Scopel, Astropart. Phys. [**9**]{},247 (1998). M. Drees and G. Gerbier, Phys. Rev. D [**88**]{}, 012002 (2013), and references therein. R. Bernabei et al., Eur. Phys. J. C [**56**]{}, 333 (2008); R. Bernabei et al., Eur. Phys. J. C [**67**]{}, 39 (2010). G. Angloher et al., Eur. Phys. J. C [**72**]{}, 1971 (2012). G. Angloher, et al., arXiv:1509.01515v1. R. Agnese et al., Phys. Rev. Lett. [**111**]{}, 251301 (2013). D. S. Akerib et al., Phys. Rev. Lett. [**112**]{}, 091303 (2014). R. Agnese et al., Phys. Rev. Lett. [**112**]{}, 241302 (2014). C. Savage et al., arXiv:0808.3607v2. Z. Ahmed et al., Phys. Rev. Lett. [**106**]{}, 131302 (2011), arXiv:1011.2482v3. Z. Ahmed et al., Phys. Rev. Lett. [**102**]{}, 011301 (2009), arXiv:0802.3530v2. E. Aprile et al., arXiv:1301.6620v2. A. Bottino et al., Phys. Lett. B [**402**]{}, 113 (1997). D. R. Tovey et al., Phys. Lett. B [**488**]{}, 17 (2000). M. T. Ressell et al., Phys. Rev. D [**48**]{}, 5519 (1993); V. I. Dimitrov, J. Engel, and S. Pittel, Phys. Rev. D [**51**]{}, R291 (1995).
--- abstract: 'A class of shell models for turbulent energy transfer at varying the inter-shell separation, $\lambda$, is investigated. Intermittent corrections in the continuous limit of infinitely close shells ($\lambda \rightarrow 1$) have been measured. Although the model becomes, in this limit, non-intermittent, we found universal aspects of the velocity statistics which can be interpreted in the framework of log-poisson distributions, as proposed by She and Waymire (1995, [*Phys. Rev. Lett.*]{} [**74**]{}, 262). We suggest that non-universal aspects of intermittency can be adsorbed in the parameters describing statistics and properties of the most singular structure. On the other hand, universal aspects can be found by looking at corrections to the monofractal scaling of the most singular structure. Connections with similar results reported in other shell models investigations and in real turbulent flows are discussed.' address: | $^1$ AIPA, Via Po 14, 00100 Roma, Italy\ $^2$ Dept. of Physics, University of Tor Vergata, Via della Ricerca Scientifica 1, I-00133 Roma, Italy\ $^{3}$ INFM-Dept. of Physics, University of Cagliari, Via Ospedale 72, I-09124, Cagliari, Italy author: - 'R. Benzi$^{1}$, L. Biferale$^{2}$, and E.Trovatore$^{3}$' title: 'Universal statistics of non-linear energy transfer in turbulent models' --- One of the basic questions in understanding the physics of fully developed three dimensional turbulence is the dynamical mechanism characterizing the energy transfer from large to small scales. According to the Kolmogorov theory (K41) of fully developed turbulence, the energy should be transferred downwards from large scale to small scales following a self-similar and homogeneous process entirely dependent on the energy transfer rate, $\epsilon$, and the scale, $l$. By assuming local homogeneity and isotropy, the K41 theory allows us to predict the scaling properties of the structure functions $S_p(l) \equiv \langle(\delta v(l))^p\rangle$, where $\delta v (l) \sim |v(x+l)-v(x)|$. It turns out that $S_p(l) \sim \langle \epsilon \rangle ^{p/3} l^{p/3}$. The K41 theory has been questioned by several authors because of strong scale dependent fluctuations of the energy dissipation (intermittency). Because of intermittency, the scaling properties of the velocity structure functions acquire anomalous scaling, i.e., $S_p(l)\sim l^{\zeta(p)}$, where the scaling exponents $\zeta (p)$ are non linear functions of $p$ [@MS; @BCTBS]. The universal character of energy transfer statistics has been also questioned. Different $\zeta(p)$ exponents have been measured in anisotropic flows (shear flows [@roberto_shear] and boundary layers [@sreeni_bl]) and strong dependencies on active quantities as the temperature in Rayleigh-Benard [@roberto_rayleigh] or the magnetic field in MHD [@MHD] have been quoted.\ The question of finding unifying and universal aspects common to all turbulent flows naturally arises.\ Recently, cascade descriptions based on random multiplicative processes and infinitely divisible distributions for random multipliers have been applied in order to clarify the energy transfer physics. In particular it has been recently shown ([@sl]-[@sw]) that a log-Poisson distribution is able to provide an extremely good fit to experimental data. 
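As a concrete illustration of the structure functions defined above, the following sketch estimates $S_p(l)$ from a one-dimensional record of equally spaced velocity samples and extracts scaling exponents from a log-log fit. The synthetic Brownian signal is used only as a self-consistency check (its exponents are $p/2$); it is not meant to model turbulence data.

```python
import numpy as np

def structure_functions(v, seps, orders):
    """Estimate S_p(l) = <|v(x+l) - v(x)|^p> from a 1-D record of samples.

    v      : equally spaced velocity samples
    seps   : separations l in units of the sampling interval
    orders : moment orders p
    """
    S = np.zeros((len(seps), len(orders)))
    for i, l in enumerate(seps):
        dv = np.abs(v[l:] - v[:-l])
        for j, p in enumerate(orders):
            S[i, j] = np.mean(dv ** p)
    return S

# Synthetic monofractal test signal: a Brownian path, for which S_p(l) ~ l^{p/2}
rng = np.random.default_rng(0)
v = np.cumsum(rng.standard_normal(2 ** 18))
seps = [2 ** k for k in range(1, 9)]
orders = [1, 2, 3, 4, 5, 6]
S = structure_functions(v, seps, orders)
# Scaling exponents from a log-log fit; the Brownian check should give ~p/2
zetas = [np.polyfit(np.log(seps), np.log(S[:, j]), 1)[0] for j in range(len(orders))]
print(np.round(zetas, 2))
```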
In [@sw] authors proposed that the random multipliers $W_{L_2,L_1}$ connecting velocity fluctuations at different scales $L_2$ and $L_1$ $$\delta v(L_2) = W_{L_2,L_1} \delta v(L_1) \label{multiplier}$$ should follow a log-poisson statistics.\ Log-poissonity should naturally arise in turbulent flows as the limit of a Bernoulli fragmentation process between infinitely close scales. In [@sw] authors argued that the energy transfer between two adjacent scales $l_1$ and $l_2$ with $\log(l_1)-\log(l_2) << 1$ can be described in terms of only two bricks. The first one corresponds to the most singular structure characterized by a local scaling exponents $h_0$: $ \delta v(l_2) = (l_2/l_1)^{h_0} \delta v(l_1)$. The second brick is a [*defect*]{}-like energy transfer which modulates the most singular events by a factor $\beta < 1$; in this second case we would have a typical scaling: $ \delta v(l_2) = \beta (l_2/l_1)^{h_0} \delta v(l_1).$ Let us also assume that, in the limit $\log(l_1/l_2) \rightarrow 0$, [*defects*]{} along the cascade happen with a probability that goes to zero proportionally to the logarithm of the scale separation $ \sim d_0 \log(l_1/l_2)$, where $d_0$ is the parameter which controls how probable a defect is. It is now simple to show, following [@sw], that the finite-scale-separation transfer must have a log-poisson statistics: $<(W_{L_2,L_1})^p> = (L_2/L_1)^{h_0p -d_0(1-\beta^p)}$, which corresponds to the She-Leveque proposal for intermittent exponents [@sl]: $$\zeta(p) = h_0 p -d_0 (1-\beta^p). \label{sheleveque}$$ The multifractal interpretation of (\[sheleveque\]) is that $h_0$ is the most singular scaling exponent and $d_0$ corresponds to the codimension of the fractal set where the most singular scaling is observed. One may argue that the structure and the statistics of the most singular event could be strongly non-universal.\ As a consequence, a possible scenario can be proposed where $h_0 $ and $d_0$ could be system-dependent, while $\beta$ maybe constant in a wider universality class [@sw].\ Some evidences of this universal character of $\beta$ have already been reported in [@gess1] where it has been shown that the differences of intermittent exponents measured in Rayleigh-Benard, MHD, shear flows and in boundary layers can all be re-adsorbed by properly changing $h_0$ and $d_0$ at constant $\beta$. In [@gess1] it has been shown that also viscous effects can be included in a suitable dependency on the scale of $h_0$ and $d_0$, showing for the first time that non-trivial intermittent corrections, due to the $\beta$ dependent part of $\zeta(p)$ curve, can be detected at viscous scales. In the following we will show how log-poisson description of intermittency and its underlying interpretation can be applied for describing some important aspects of energy transfer in a class of dynamical models of turbulence [@leo_physicstoday; @bbkt; @BK]. In particular, we will be able to explain the continuous transition towards the K41 non-intermittent statistics in terms of the statistical properties of the most singular structure of the model. This trend towards K41 statistics is observed in a class of shell models at varying (diminishing) the characteristic shell separation, i.e. in the so-called continuous limit of infinitely close shells.\ Universal aspects of the shell statistics are recovered by measuring the $\beta$-dependent part of the probability distribution. 
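For reference, a short numerical sketch of the She-Leveque prediction [@sl], usually written explicitly as $\zeta(p)=p/9+2\,[1-(2/3)^{p/3}]$ (most singular exponent $1/9$, codimension $2$, $\beta=2/3$), compared with the K41 values $p/3$; the mapping onto the parameters of eq. (\[sheleveque\]) is left to the text.

```python
import numpy as np

def zeta_she_leveque(p):
    """She-Leveque scaling exponents, zeta(p) = p/9 + 2*(1 - (2/3)**(p/3))."""
    p = np.asarray(p, dtype=float)
    return p / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (p / 3.0))

def zeta_k41(p):
    """Kolmogorov 1941 (non-intermittent) exponents, zeta(p) = p/3."""
    return np.asarray(p, dtype=float) / 3.0

p = np.arange(1, 9)
print("p    K41      She-Leveque")
for pi, zk, zsl in zip(p, zeta_k41(p), zeta_she_leveque(p)):
    print(f"{pi}   {zk:.3f}    {zsl:.3f}")

# zeta(3) = 1 exactly, and zeta(6)/zeta(3) = 16/9 ~ 1.78, close to the value
# measured below for the lambda = 2 shell model (1.76 +/- 0.01).
print("zeta(6)/zeta(3) =", zeta_she_leveque(6) / zeta_she_leveque(3))
```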
As proposed by She and Waymire, we find that the $\beta$-dependent part of the intermittent statistics is remarkably constant for all values of the shell separation $\lambda$ explored. Shell models have proven to be very useful for understanding many properties connected to the non-linear turbulent energy transfer ([@BK]-[@BKLMW]). The most popular shell model, the Gledzer-Ohkitani-Yamada (GOY) model ([@BK]-[@BKLMW]), has been shown to predict scaling properties for $\zeta(p)$ similar to what is found experimentally (for a suitable choice of the free parameters). The GOY model can be seen as a severe truncation of the Navier-Stokes equations: it retains only one complex mode $u_n$ as a representative of all Fourier modes in the shell of wave numbers $k$ between $k_n=k_0\lambda^n$ and $k_{n+1}$, $\lambda$ being an arbitrary scale parameter ($\lambda>1$), usually taken equal to $2$. In two recent works [@BK; @bbkt] the GOY model has been generalized in terms of shell variables, $u_n^+$ and $ u_n^-$, transporting positive and negative helicity, respectively. These models have at least one non-positive-definite inviscid invariant which is very similar to the 3d Navier-Stokes helicity. In the following we will focus on the intermittent properties of one such model at varying the separation between shells, $\lambda$. The time evolution for positive-helicity shells is [@bbkt]: $$\begin{aligned} \frac{d}{dt} u^+_n&=& i k_n ( u^{-}_{n+2} u^{+}_{n+1}+b u^{-}_{n+1} u^{+}_{n-1} +c u^{-}_{n-1} u^{-}_{n-2})^*\nonumber \\ &-& \nu k_n^2 u^+_n +\delta_{n,n_0} f^+, \label{eq:shells}\end{aligned}$$ and the same, but with all helical signs reversed, holds for $u_n^-$. The coefficients $b,c$ are determined by imposing inviscid conservation of energy, $E= \sum_n(\vert u^+_n \vert^2 + \vert u^-_n \vert^2)$, and helicity, $H=\sum_n k_n (\vert u^+_n \vert^2 - \vert u^-_n \vert^2)$. In particular we have: $b=-(1+\lambda^2)/(\lambda^3+\lambda^2)$ and $c=(1-\lambda)/(\lambda^{3}+\lambda^{4})$.\ Structure functions for these models are naturally defined as $ S_p(n) = <~(\sqrt{|u_n^+|^2 + |u_n^-|^2})^p> \sim k_n^{-\zeta(p)}.$ Many investigations have been carried out on the intermittent properties of this model [@bbkt] and of the original GOY model [@JPV; @BLLP; @BKLMW] by varying the coefficients so as to have different conserved quantities, in order to investigate the importance of inviscid invariants in determining the energy transfer properties. Some evidence supporting non-trivial effects introduced by non-positive-definite invariants has been reported.\ In this letter we discuss the dependence on the other important free parameter entering the shell modelization: the separation between neighboring shells, $\lambda$. By decreasing $\lambda$ one describes turbulent energy transfer in terms of interactions which become more and more local in Fourier space. In the limit of $\lambda \rightarrow 1$ one recovers a $1$-dimensional partial differential equation [@parisi; @bbkt]. Locality of interactions has always been a long-debated issue of the K41 picture. In fig. 1 we show the $6$th order structure function of model (\[eq:shells\]) for different inter-shell separations: $\lambda=2$ and $\lambda=1.05$. In both cases the total number of shells is chosen such that the physical length of the inertial range stays constant. Clearly, there is a net trend toward a less intermittent state by decreasing the shell-ratio.
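As an aside, a minimal numerical sketch of the inviscid, unforced right-hand side of (\[eq:shells\]) and a check that the quadratic invariants $E$ and $H$ are conserved by the nonlinear terms with the coefficients $b$ and $c$ given above. The number of shells, the value of $\lambda$ and the random initial condition are arbitrary choices made only for this test.

```python
import numpy as np

def shell_rhs(up, um, k, lam):
    """Inviscid, unforced right-hand side of Eq. (3) for the helical shell model.

    up, um : complex shell variables u_n^+ and u_n^- (length N)
    k      : wavenumbers k_n = k_0 * lam**n
    lam    : inter-shell ratio lambda
    """
    b = -(1.0 + lam ** 2) / (lam ** 3 + lam ** 2)
    c = (1.0 - lam) / (lam ** 3 + lam ** 4)
    # pad with two zeros on each side so that n-2 ... n+2 are always defined
    P = np.concatenate(([0, 0], up, [0, 0]))
    M = np.concatenate(([0, 0], um, [0, 0]))
    n = np.arange(2, 2 + len(up))
    dup = 1j * k * np.conj(M[n + 2] * P[n + 1] + b * M[n + 1] * P[n - 1]
                           + c * M[n - 1] * M[n - 2])
    dum = 1j * k * np.conj(P[n + 2] * M[n + 1] + b * P[n + 1] * M[n - 1]
                           + c * P[n - 1] * P[n - 2])
    return dup, dum

# Check that the nonlinear terms conserve energy and helicity
N, lam, k0 = 20, 1.5, 1.0                      # arbitrary test choices
k = k0 * lam ** np.arange(N)
rng = np.random.default_rng(1)
up = rng.standard_normal(N) + 1j * rng.standard_normal(N)
um = rng.standard_normal(N) + 1j * rng.standard_normal(N)
dup, dum = shell_rhs(up, um, k, lam)
dE = 2.0 * np.sum(np.real(np.conj(up) * dup + np.conj(um) * dum))
dH = 2.0 * np.sum(k * np.real(np.conj(up) * dup - np.conj(um) * dum))
print("dE/dt =", dE, " dH/dt =", dH)   # both zero up to floating-point round-off
```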
Nevertheless, the exceptionally good scaling properties allow us to estimate, using the Extended Self Similarity method [@BCTBS], small deviations from K41 scaling also for the less intermittent value $\lambda=1.05$. In view of the discussion about log-poisson statistics this analysis could not be sufficient. Non-universal aspects can be masked by trivial properties of the most singular structure. We have therefore tried to highlight possible universal aspects of the chaotic energy transfer by focalizing our attention on the non-linear part of intermittent corrections and in particular on the parameter $\beta$ which can be extracted by assuming log-poisson intermittency. The best way of extracting $\beta$ from numerical data is to look if structure functions verify the log-poisson hierarchy [@sl]: $$\frac{S_{p+1}(n)}{S_p(n)} = A_n [\frac{S_p(n)}{S_{p-1}(n)}] ^{\beta^\prime}, \label{eq:hierarchy}$$ where $\beta^\prime=\beta^{1/3}$ and $A_n$ has a scaling dependency on $k_n$ but it is $p$-independent. If structure functions follow a log-poisson statistics one can extract $\beta^\prime$ from a linear fit of a log-log plot of (\[eq:hierarchy\]), at varying $p$ and fixed $n$.\ In fig. 2 we show relation (\[eq:hierarchy\]) for the two inter-shell separations $\lambda=2,1.05$ and for $p=1,...,7$. As it is possible to see, the straight line behaviour in the log-log plot is nicely verified (supporting the log-poisson assumption) and the slope of the two lines is the same, indicating that although the full intermittent corrections are different, the part directly affected by the $\beta$ parameter is the same. In table 1 we collect all our estimates of $\beta^\prime$ as a function of $\lambda$. From these data one can safely conclude that $\beta$ stays constant by approaching the continuous limit $\lambda \rightarrow 1$. The remarkable change in the statistics reflected by changes in the $\zeta(p)$ exponents of fig. 1 may be only due to changes of the most singular energy-burst statistics, i.e., to changes of the parameters $h_0 $ and $d_0$ in the log-poisson jargon. Let us notice that already other authors have focused the attention on burst-like solutions in GOY models as possible explanation of its intermittent properties [@parisi; @dombre]. Moreover, it is very simple to interpret the measured tendency toward K41 statistics as the results of a smoothing in the energy transfer due to the increasing local character of dynamical interactions, in the limit $\lambda \rightarrow 1$. In other words, we interpret the corrections to K41 for $\lambda >> 1$ as the consequence of burst-structures travelling along the inertial range. Very singular bursts (with $h_0 $ less than the Kolmogorov value $1/3$) appear only when there is a relevant mismatch between eddy-turnover times of neighboring shells, i.e. only when shell-ratios are much larger than one. Otherwise, energy tends to flow smoothly toward small scales following K41 statistics.\ These bursts are non-universal in the sense that their statistics and their degree of singularity is a function of the inter-shell ratio, $\lambda$. On the other hand, the mechanism underlying burst transfer seems only determined by the non-linear structure of shell-model dynamics and it seems to fix a universal value of $\beta$ in the log-poisson description. To conclude, we have analysed and discussed the intermittent properties of a class of shell models for turbulent energy transfer at varying the inter-shell ratio parameter. 
We have shown that there exists a suitable limit, $\lambda \rightarrow 1$, for which intermittent fluctuations disappear and the scaling properties of the system become close to the K41 theory. By employing a log-poisson description of intermittent corrections, we have disentangled the limit $\lambda \rightarrow 1$ in two main components: from one hand the most singular structure seems to tend continuously toward a K41 scaling ($h_0=1/3$); from the other hand the probability distribution of intermittent fluctuations is still log-Poisson and described by the same [*defect*]{} parameter $\beta$. One possible way of describing this result, in terms of the scaling exponents $\zeta(p)$, consists in saying that the quantity $$\rho_{pq} =\frac{\zeta(p)-p/3 \,\zeta(3)}{\zeta(q)-q/3 \,\zeta(3)}$$ is constant for $\lambda \rightarrow 1$. Similar universal properties have already been reported in shell models at varying the dissipation mechanism, i.e. using hyperviscosities [@sl_hypervis].\ Our results are qualitatively similar to what observed in real turbulent flows going from the inertial subrange to the viscous subrange. Indeed in this case anomalous scaling disappears (i.e. $\zeta(p)/\zeta(3) \rightarrow p/3$) while $\rho_{pq}$ stays constant, as recently observed in [@gess1]. [99]{} C. Meneveau and K. R. Sreenivasan, J. Fluid Mech. [**224**]{}, 429 (1991). R. Benzi, S. Ciliberto, R. Tripiccione, C. Baudet, C. Massaioli and S. Succi, Phys. Rev. E [**48**]{}, R29 (1993); R. Benzi, S. Ciliberto, C. Baudet and G. R. Chavarria, Physica D [**80**]{}, 385 (1994). R. Benzi, S. Ciliberto, C. Baudet, G. Ruiz Chavarria, R. Tripiccione, Europhys. Lett. [**24**]{}, 275 (1993). G. Stolovitzky, K. R. Sreenivasan, Phys. Rev. E [**48**]{}, 32 (1993). R. Benzi, R. Tripiccione, F. Massaioli, S. Succi and S. Ciliberto, Europhys. Lett. [**25**]{}, 331 (1994). R. Grauer, Phys. Lett. A [**195**]{}, 335 (1994). Z. S. She and E. Leveque, Phys. Rev. Lett. [**72**]{}, 336 (1994). B. Dubrulle, Phys. Rev. Lett. [**73**]{}, 959 (1994). E.A. Novikov, Phys. Rev. E [**50**]{}, R3303 (1994). Z.S. She and E.C. Waymire, Phys. Rev. Lett. [**74**]{}, 262 (1995). R. Benzi, L. Biferale, S. Ciliberto, M. V. Struglia and R. Tripiccione, Phys. Rev. E [**53**]{}, 3025 (1996). L. Kadanoff, Physics Today [**48**]{}, 11 (1995). L. Biferale and R. Kerr, Phys. Rev. E [**52**]{}, 6113 (1995). R. Benzi, L. Biferale, R. Kerr and E. Trovatore, Phys. Rev. E [**53**]{}, 3541 (1996). E.B. Gledzer, Sov. Phys. Dokl. [**18**]{}, 216 (1973). M. Yamada and K. Ohkitani, Prog. Theor. Phys. [**81**]{}, 329 (1989); J. Phys. Soc. Jpn. [**56**]{}, 4210 (1987); Phys. Rev. Lett [**60**]{}, 983 (1988). M.H. Jensen, G. Paladin and A. Vulpiani, Phys. Rev. A [**43**]{}, 798 (1991). L. Biferale, A. Lambert, R. Lima and G. Paladin, Physica D [**80**]{}, 105 (1995). D. Pisarenko, L. Biferale, D. Courvoisier, U. Frisch and M. Vergassola, Phys. Fluids [**A65**]{}, 2533 (1993). L. Kadanoff, D. Lohse, J. Wang and R. Benzi, Phys. Fluids [**7**]{}, 617 (1995). G. Parisi, unpublished (1990). T. Dombre and J. L. Gilson, [*Intermittency, chaos and singular fluctuations in the mixed Obukhov-Novikov shell model of turbulence*]{}, preprint (1996). Z. S. She and E. Leveque, Phys. Rev. Lett. [**75**]{}, 2690 (1995). FIGURE CAPTIONS - FIGURE 1: Log-log plot of the 6th order structure function vs $k$, for the two cases $\lambda=2$ (circles) and $\lambda=1.05$ (diamonds). 
A linear fit in the inertial range, using Extended Self Similarity, gives: $\zeta(6)/\zeta(3)=1.983\pm0.005$ for $\lambda=1.05$ and $\zeta(6)/\zeta(3)=1.76\pm0.01$ for $\lambda=2$. - FIGURE 2: Log-poisson hierarchy (eq. \[eq:hierarchy\]) in a log-log plot for $\lambda=2$ (circles) and $\lambda=1.05$ (diamonds). For each $\lambda$, we took $p=1,...,7$ for three different scales $n$ in the inertial range and we shifted the sets along the y-axis in order to perform a single linear fit (solid lines). TABLE CAPTIONS - TABLE 1: $\beta^\prime$ at varying $\lambda$. Each value is the average of slopes evaluated as in fig. 2 for all scales $n$ in the inertial range. The theoretical prediction [@sl] is $\beta^\prime=(2/3)^{1/3}=0.87$.

  $\lambda$   $\beta^\prime$
  ----------- -----------------
  $1.05$      $0.86 \pm 0.01$
  $1.2$       $0.88 \pm 0.01$
  $1.5$       $0.88 \pm 0.01$
  $2$         $0.86 \pm 0.01$

\[tab\]
--- author: - | J. M. Skotheim & L. Mahadevan [^1]   \ Department of Applied Mathematics and Theoretical Physics\ University of Cambridge\ Wilberforce Road, Cambridge CB3 0WA, UK\ \ [*to appear in the Proceedings of the Royal Society*]{}\ \ title: Dynamics of poroelastic filaments --- We investigate the stability and geometrically non-linear dynamics of slender rods made of a linear isotropic poroelastic material. Dimensional reduction leads to the evolution equation for the shape of the [*poroelastica*]{} where, in addition to the usual terms for the bending of an elastic rod, we find a term that arises from fluid-solid interaction. Using the [*poroelastica*]{} equation as a starting point, we consider the load controlled and displacement controlled planar buckling of a slender rod, as well as the closely related instabilities of a rod subject to twisting moments and compression when embedded in an elastic medium. This work has applications to the active and passive mechanics of thin filaments and sheets made from gels, plant organs such as stems, roots and leaves, sponges, cartilage layers and bones. Introduction ============ Poroelasticity is the continuum theory used to describe the behaviour of a biphasic material in which fluid flow is coupled to the elastic deformation of a solid skeleton (see Selvadurai (1996), Wang (2000) and references therein). The first applications of this theory were to geological problems such as consolidation of saturated soil under a uniform load (Biot 1941). Since then the theory has grown to cover many and varied applications, some of which are displayed in Table 1. If a medium having interstitial fluid of viscosity $\nu$ is forced to oscillate with a characteristic time $\tau$, the Stokes’ length of the motion, $\hbox{L}_{\hbox{s}} = \sqrt{\nu\tau}$, will characterise the range of influence of the solid into the fluid. If $\hbox{L}_{\hbox{s}} \ll l_p$ (the pore length scale) the fluid within the pores moves out of phase relative to the solid. On the other hand, if $\hbox{L}_{\hbox{s}} \gg l_p$, the fluid will only move relative to the solid when the volume fraction of solid matrix changes locally. This limit ($\hbox{L}_{\hbox{s}}\gg l_p$) was first considered by Biot (1941) for an isotropic poroelastic material. Later work using averaging techniques led to equations of the same form as well an understanding of how the microstructure of the material influences the constitutive equations of the material (Auriault & Sanchez-Palencia 1977, Burridge & Keller 1981, Mei & Auriault 1989, Lydzba & Shao 2000). Whereas geological applications are concerned primarily with bulk behaviour, many engineering, physical and biological applications have extreme geometries which allow for the application of asymptotic methods to reduce the dimension of the problem. Some examples include the active and passive mechanics of thin filaments and sheets made of gels, plant organs such as stems, roots and leaves, sponges, cartilage layers, bones etc. In this paper we use the constitutive behaviour of a linear isotropic poroelastic solid to investigate the stability and dynamics of slender rods made of this material. In §\[goveq\] we give a physically motivated derivation of the constitutive equations for a poroelastic material. In §\[buckling\] we use the bulk poroelastic constitutive equations to determine the equation for the time dependent bending of a slender poroelastic rod subject to an externally applied compressive force $P$. 
Dimensional reduction leads to the equation for the [*poroelastica*]{}, where in addition to the usual terms due to the bending of an elastic rod we find a term due to the fluid resistance. It arises from the fluid-solid interaction and has a form similar to that in a Maxwell fluid (Bird, Armstrong & Hassager 1987). In §\[Pcontrolled\] we solve the problem of load controlled buckling. Although the poroelastic nature of the material does not change the buckling threshold or the final stable shape, it governs the dynamics of the system as it evolves from the unstable to the stable state. Both the short and the long time limit are investigated using asymptotic methods. We then use numerical methods to corroborate our asymptotic approaches and follow the non-linear evolution of the poroelastica. In §\[dcb\] we treat the problem of displacement controlled buckling and compare the results of the [*poroelastica*]{} with those of the classical [*elastica*]{} under similar loading conditions. In §\[twist\] we consider the linear instability of a slender poroelastic filament embedded in an infinite elastic medium, when it is subjected to an axial twisting moment and an axial thrust. Finally, in §\[disc\], we summarise our results and discuss possible applications to such problems as the mechanics of cartilaginous joints and rapid movements in plants. \[chart\] ![image](pechart.eps){width="10cm"} Governing equations for poroelastic media {#goveq} ========================================= We begin with the equations for a homogeneous, isotropic poroelastic material in the limit where the Stokes’ length $\hbox{L}_{\hbox{s}}=\sqrt{\nu\tau}$ is much larger than the system size $l_m$ and further that $l_m$ is much larger than the pore size $l_p$. We will also neglect inertial effects in the solid and liquid phases. In this limit the viscous resistance to fluid flow in the pores is balanced by the pressure gradient so that the momentum balance in the fluid yields $$\label{bala} \rho\nu\nabla_{l_p}^2{\bf v} - \nabla p -\nabla_{l_p}p_p = 0,$$ where ${\bf v}$ is the fluid velocity with characteristic scale $V$, $\rho$ is the fluid density, $\nabla$ and $\nabla_{l_p}$ denote gradients on the system scale and the pore scale respectively, $p$ is the macroscopic pressure driving the flow, and $p_p$ is the microscopic pressure in the pore. When the pore scale and system size are well separated ($l_p/l_m \ll 1$), (\[bala\]) gives the following scaling relations $$p \sim \frac{l_m\rho\nu V}{l_p^2} \gg \frac{\rho\nu V}{l_p} \sim p_p.$$ Thus the dominant contribution to the fluid stress in the medium arises from the pressure. The simplest stress-strain law for the composite medium then arises by considering the linear superposition of the dominant components of the fluid and solid stress tensor. Assuming that the elastic behaviour of the solid skeleton is well characterised by Hookean elasticity (i.e. 
the strains are small), we can write the following constitutive equation for the poroelastic medium (see Appendix A for a derivation using the method of multiple scales): $$\label{CE} \boldsymbol{\sigma} = 2\mu{\bf e} + \lambda \nabla \cdot {\bf u}\,{\bf \hbox{I}} - \alpha p{\bf \hbox{I}}.$$ Here $\boldsymbol{\sigma}$ is the stress tensor, ${\bf u}$ is the displacement field, ${\bf e} = (\nabla{\bf u} + \nabla{\bf u}^{T})/2$ is the linearised strain, $\mu$ and $\lambda$ are the effective Lamé coefficients of the material (dependent on the material properties [*and*]{} the microstructure), $\alpha$ is related to the fluid volume fraction, but includes a contribution from the pressure in the surrounding fluid (see Appendix A), and $\bf I$ is the identity tensor in 3 dimensions. These material parameters can be derived using microstructural information (see Appendix A). In the limit when inertia can be neglected, the equations of equilibrium are $$\nabla \cdot \boldsymbol{\sigma} =0. \label{F}$$ Mass conservation and continuity requires that the rate of dilatation of the solid is balanced by the differential motion between the solid and fluid in a poroelastic solid. This yields (see Appendix A for a derivation using the method of multiple scales) $$\label{Cont} \nabla \cdot {\bf k}\cdot\nabla p = \beta \partial_t p + \alpha\partial_t\nabla\cdot{\bf u} ,$$ where the solid skeleton is composed of a material with bulk modulus $\beta^{-1}$ ($\ne \lambda +2\mu/3$ since the lamé coefficients $\lambda$ and $\mu$ are for the composite material and take into account the microstructure, while $\beta^{-1}$ is independent of the microstructure) and ${\bf k}$ is the fluid permeability tensor of the solid matrix. In words, (\[Cont\]) states that the flux of fluid into a material element is balanced by the change in solid volume due to the bulk compressibility of the matrix. For a rigid incompressible skeleton, $\beta = 0$. We will assume that the solid skeleton is isotropic so that ${\bf k} =k {\bf \hbox{I}} $; however, many structured and biological materials are anisotropic and one may need to revisit this assumption. Equations (\[CE\]), (\[F\]) and (\[Cont\]) when subject to appropriate boundary conditions describe the evolution of displacements $\bf u$ and fluid pressure $p$ in a poroelastic medium. The typical values of the parameters for a soft gel are $\alpha\sim1$, $\mu\sim\lambda\sim10^6Pa$, and $k\sim10^{-12}m^2/sec\,Pa$. ![Schematic diagram of (a) a bent rod, where $\theta(x)$ is the angle between the deformed and undeformed tangent vector, $x,y$ and $z$ are body-fixed coordinates in the reference frame of the rod; (b) the circular cross section.[]{data-label="s"}](schematic2.eps){width="12cm"} Equations of motion for a slender filament {#buckling} ========================================== We consider a naturally straight slender circular rod of length $L$ and radius $R \ll L$ with a tangent to the centre line that makes an angle $\theta$ with the horizontal (see Figure \[s\]). At its ends an axial force $P$ is applied suddenly at time $t=0$. We will further assume that the lateral surfaces of the filament are free of tractions. This assumption could break down when the solid matrix is very dilute, so that interfacial forces become comparable to the internal forces in the filament, but we will not consider this case here. 
The slenderness of the filament implies that the axial stresses vary rapidly across the cross-section and much more slowly along it, so that we can use an averaging procedure to deduce low-dimensional equations that describe the motion of the filament. This long-wavelength approximation can be formalised using an asymptotic expansion in the aspect ratio of the filament $R/L\ll 1$. Here, we will proceed directly by noting that since the rod is slender, bending it is easier than stretching or shearing it (Love 1944). At the level of scaling, geometry implies that the out-of-line (bending) displacement of the centre line scales as $R$ while the axial displacement scales as $R^2/L$. At the surface of the filament no stress is applied. Since the filament is thin this implies that $\sigma_{yy} \approx \sigma_{zz} \approx 0$. For a displacement field ${\bf u}=(u_x,u_y,u_z)$ equation (\[CE\]) yields $$\begin{aligned} \sigma_{yy}=-\alpha p+(2\mu+\lambda)\partial_y u_y + \lambda(\partial_x u_x +\partial_z u_z)=0, \\ \sigma_{zz}=-\alpha p+(2\mu+\lambda)\partial_z u_z + \lambda(\partial_x u_x +\partial_y u_y)=0,\end{aligned}$$ which can be solved for $\partial_yu_y$ and $\partial_zu_z$ to give $$\begin{aligned} \label{eliminate} \partial_yu_y=\partial_zu_z=\frac{\alpha\, p-\lambda\partial_xu_x}{2(\mu+\lambda)}.\end{aligned}$$ Equations (\[CE\]) and (\[eliminate\]) give the axial stress $$\label{sf} \sigma_{xx}=-\frac{\alpha \mu}{\lambda+\mu}p + \frac{3\lambda\mu+2\mu^2}{\lambda+\mu}\partial_xu_x.$$ More specifically, when an infinitesimal axial element of the rod of length $dx$ is bent so that locally the centreline curvature is $\partial_x \theta$, fibres that are parallel to the neutral axis (coincident with the centre line for a homogeneous circular cross-section) and at a perpendicular distance $y$ from the neutral plane (defined by the neutral axis and the axis of bending) will be either extended or contracted by an amount $y\partial_x \theta dx$, so that the elastic strain $\partial_xu_x = -y\partial_x \theta$. This leads to an elastic stress that varies linearly across the cross-section; in addition there is a fluid pressure that is determined by (\[Cont\]). We insert (\[eliminate\]) into (\[Cont\]) and find the evolution equation for the fluid pressure $$\label{cont2} k(\partial_{xx}p+\partial_{yy}p+\partial_{zz}p) = (\beta + \frac{\alpha^2}{\lambda+\mu})\partial_t p -\frac{\alpha\mu}{\lambda+\mu}y\partial_{xt} \theta.$$ To make the equations dimensionless we use the following definitions for the dimensionless primed variables: $$\begin{aligned} x = L\,x', ~~~~~~~ y = R\,y', ~~~~~~~ z = R\,z', ~~~~~~~ \theta = \frac{R}{L} \theta', \nonumber \\ \sigma_{xx} = \frac{(2\mu^2 + 3\lambda\mu)R^2}{(\mu + \lambda)L^2}\,\sigma '_{xx}, ~~~ P = \frac{\pi(2\mu^2 + 3\lambda\mu)R^4}{4(\mu + \lambda)L^2}\,P', \nonumber \\ p = \frac{\alpha \mu R^2}{[(\mu+\lambda)\beta + \alpha^2]L^2}\,p', ~~~~~ t = \frac{[(\mu+\lambda)\beta + \alpha^2]R^2}{(\mu+\lambda)k}\,t'.\end{aligned}$$ and the immediately drop the primes, referring exclusively to dimensionless variables from now on. We note that the axial stress, the compressive force and the pressure are scaled to reflect the dominance of bending deformations over all other modes, and the time is scaled to reflect the dominance of radial diffusion over axial diffusion. 
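For orientation, it may help to attach a number to this time unit. The short Python sketch below evaluates the dimensional time scale $[(\mu+\lambda)\beta + \alpha^2]R^2/[(\mu+\lambda)k]$ for the soft-gel parameter values quoted in §\[goveq\]; the radius $R = 1$ mm and the choice $\beta = 0$ are illustrative assumptions, not values taken from the text.

```python
# Order-of-magnitude estimate of the dimensional time unit
#   t_scale = [(mu + lam)*beta + alpha^2] * R^2 / [(mu + lam)*k]
# used to non-dimensionalise time above. mu, lam, alpha, k are the soft-gel
# estimates quoted earlier; R and beta are assumed for illustration.
alpha = 1.0          # effective fluid volume fraction (~1 for a gel)
mu = lam = 1.0e6     # effective Lame coefficients [Pa]
k = 1.0e-12          # permeability [m^2 / (Pa s)]
beta = 0.0           # incompressible constituents (assumption)
R = 1.0e-3           # assumed filament radius [m]

t_scale = ((mu + lam) * beta + alpha**2) * R**2 / ((mu + lam) * k)
print(f"poroelastic time unit ~ {t_scale:.2g} s")   # ~0.5 s for these numbers
```

With these numbers one unit of dimensionless time corresponds to roughly half a second, so the poroelastic transients discussed below occur on readily observable laboratory time scales.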
Then the stress in the filament, given by equation (\[sf\]), can be written in dimensionless form as $$\begin{aligned} \label{S6} \sigma_{xx} = -y\,\partial_{x}\theta -\frac{\delta}{4} p.\end{aligned}$$ Here, the first term reflects the purely elastic contribution well known from the theory of beams (Love 1944), while the second term is proportional to the fluid pressure in the pores. The dimensionless parameter $ \delta = \frac{4\alpha^2\mu}{(2\mu + 3\lambda)[(\mu+\lambda)\beta + \alpha^2]} \sim O(1)$ for most materials denotes the ratio of the fluid and solid stress. ![Schematic of the local torque balance in a bent rod under an externally applied compression $P$. We balance torques about the point $O$; $x,y$ and $z$ are the coordinates in the body-fixed frame with $O$ as the origin. Let $M(x)$ denote the total moment due to internal stresses generated in part by bending the elastic skeleton and in part by the fluid pressure field. Balancing torques gives: $M(x+dl)-M(x)+{\bf dl}\times{\bf P} =0$. In the limit $dl\to0$, $\partial_xM + P\sin\theta=0$. The total moment $M=E_pI\partial_x\theta+\alpha\int p\,y\,dA$, where $E_p$ is the effective elastic modulus of the poroelastic skeleton and $I$ is the moment of inertia of the cross section.[]{data-label="elastica"}](elastica.eps){width="7cm"} In the long wavelength approximation, it is preferable to use the stress resultant $F= \int \sigma_{xx} dA$ and the torque resultant $M=-\int y\sigma_{xx} dA= \int y^2 \partial_x \theta dA+ \frac{\delta}{4}\int y p dA=M_e+M_f,$ as the variables of interest. Here $M_e$ is the elastic torque and $M_f$ is the fluid torque that arises due to the transient effects of a pressure gradient across the filament. Then, local force and torque balances, which can be derived from (\[CE\]), (\[F\]) and (\[sf\]), or equivalently directly (Figure \[elastica\]), yield the dimensionless equations $$\begin{aligned} \label{Equil} \partial_x F &=&0 \nonumber \\ \partial_x M + \frac{\pi}{4}P \sin \theta &=&0.\end{aligned}$$ The first of these equations can be integrated immediately to yield $F =P$ with $P$ a constant determined by the boundary conditions. The second equation combines the effects of the elastic and fluid stresses that arise due to the fluid pressure, and requires the solution of the continuity equation (\[cont2\]). For a rod with a circular cross-section, there is rotational symmetry in the problem. Choosing the axis of bending to coincide with the $z$ axis, we rewrite (\[cont2\]) in polar coordinates $(r,\phi)$ using dimensionless variables as $$\label{PP} \partial_tp - \frac{1}{r}\partial_r(r \partial_r p) - \frac{1}{r^2}\partial_{\phi\phi}p = r\,\sin\phi \,\partial_{xt}\theta.$$ We see that the pressure in the fluid arises from the extensional and compressional stresses in the filament due to bending. The boundary conditions for the pressure can be deduced using the following considerations: (a) the centre line of the rod does not suffer any deformation, and is symmetrically disposed, and (b) the pressure at the surface is determined by the permeability of the surface layers and the flux through it. Then $$\begin{aligned} p = 0 ~~~ \hbox{at} ~~~ r=0, \nonumber \\ \label{bcp} \hbox{Bi}\,p + \partial_r p = 0 ~~~ \hbox{at} ~~~ r=1,\end{aligned}$$ where Bi = $\frac{\eta R}{k}$, and $\eta$ characterizes the flux through the surface for a given pressure drop (the ambient external pressure is assumed to be zero). 
The second boundary condition in (\[bcp\]) on the pressure states that the flux of fluid through the surface is proportional to the pressure drop across the surface. Bi = $\infty$ corresponds to a freely draining rod, where there is no pressure jump across the surface, and Bi = 0 corresponds to a jacketed rod, which allows no flux through the surface. For a sponge, Bi$ >1$, while for a plant (root, stem or leaf) Bi $< 1$ since it is designed to retain water. Expanding $p$ in terms of the homogeneous solutions of (\[PP\]), we write $$p=\sum_{m=0}^\infty\sum_{n=1}^\infty[A_{mn}\sin\,m\phi+B_{mn}\cos\,m\phi]J_m(r\sqrt{\lambda_{mn}})e^{-\lambda_{mn}t},$$ where $A_{mn}$ and $B_{mn}$ are constants, $J_m$ is the Bessel function of order $m$, and $\lambda_{mn}$ is determined by the boundary conditions. Inspection of the inhomogeneous term on the RHS of equation (\[PP\]) yields $m=1$ so that $A_{1n}=A_n$, $B_{mn}=0$, and $\lambda_{1n}=\lambda_n$. Since the boundary condition is a linear combination of $p$ and $\partial_r p$ we are guaranteed to have a complete basis. We therefore look for a solution to the inhomogeneous equation (\[PP\]) of the form $$\label{ep} p=\sum_{n=1}^\infty A_n(t)\sin\,\phi J_1(r\sqrt{\lambda_n}),$$ where $\lambda_n$ is determined by substituting (\[ep\]) into (\[bcp\]) which yields $$\label{lam} \partial_rJ_1(r\sqrt{\lambda_n}) + \hbox{Bi}\,J_1(r\sqrt{\lambda_n}) = 0 ~~~~ \hbox{at} ~ r=1.$$ Inserting (\[ep\]) into (\[PP\]) yields $$\label{13} \sum_{n=1}^\infty (\partial_tA_n+\lambda_nA_n)J_1(r\sqrt{\lambda_n}) = r\partial_{xt}\theta.$$ Multiplying (\[13\]) by $r\,J_1(r\sqrt{\lambda_{n'}})$ and integrating across the cross-section gives $$\label{A_n} \partial_tA_n + \lambda_nA_n = \chi_n \partial_{xt} \theta,$$ where $$\chi_n = \frac{\int_0^1r^2J_1(r\sqrt{\lambda_n}) dr}{\int_0^1r [J_1(r\sqrt{\lambda_n})]^2 dr}.$$ Solving equation (\[A\_n\]) yields $$A_n=\chi_n\int_0^te^{-\lambda_n(t-t')}\partial_{xt'}\theta dt',$$ so that (\[ep\]) may be rewritten as: $$p = \sum_{n=1}^\infty \chi_n \sin\phi \,J_1(r\sqrt{\lambda_n})\int_0^te^{-\lambda_n(t-t')}\partial_{xt'}\theta dt'.$$ Then (\[S6\]) allows us to write the total axial stress $\sigma_{xx}$ at a cross-section as $$\sigma_{xx} = -r\sin\phi\, \partial_x\theta -\frac{\delta}{4}\sum_{n=1}^\infty \chi_n \sin\,\phi J_1(r\sqrt{\lambda_n})\int_0^te^{-\lambda_n(t-t')}\partial_{xt'}\theta dt'.$$ The dimensionless torque resultant is given by $$\begin{aligned} \label{fmom} M = - \int r\sin\phi \,\sigma_{xx}\,dA =\frac{\pi}{4}\partial_x\theta+\frac{\pi\delta}{4} \sum_{n=1}^\infty \gamma_n \int_0^te^{-\lambda_n(t-t')}\partial_{xt'}\theta dt',\end{aligned}$$ where $\gamma_n = \chi_n \int_0^1r^2J_1(r\sqrt{\lambda_n}) dr$. Substituting the result into the equation for torque balance (\[Equil\]) yields the dimensionless equation for the poroelastica (see Figure \[elastica\]) $$\label{poro} \partial_{xx}\theta + P\, \sin\theta + \delta \sum_n^\infty \gamma_n \int_0^te^{-\lambda_n(t-t')}\partial_{xxt'}\theta dt'=0,$$ The first two terms correspond to the usual terms in the classical elastica (Love 1944) for the bending of a rod with a circular cross section, while the final term is due to the instantaneous fluid pressure not being equilibrated across the cross section. The influence of the fluid is to create a material with “memory”, so that the current state of the filament is determined by its entire history. 
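The decay constants $\lambda_n$ and weights $\gamma_n$ entering this memory kernel are straightforward to evaluate numerically. The following minimal sketch (Python with SciPy; Bi $= 0.1$ is the illustrative surface permeability used later in the text) is one possible way to do so and is not the computation used for the figures in the paper.

```python
import numpy as np
from scipy.special import j1, jvp
from scipy.optimize import brentq
from scipy.integrate import quad

Bi = 0.1  # surface permeability parameter (illustrative value used in the text)

def dispersion(mu):
    # boundary condition (lam) at r = 1, written in terms of mu = sqrt(lambda_n):
    # d/dr J_1(r mu) + Bi J_1(r mu) = mu J_1'(mu) + Bi J_1(mu) = 0
    return mu * jvp(1, mu, 1) + Bi * j1(mu)

def first_roots(n_roots=6):
    # bracket sign changes on a fine grid, then refine each root with brentq
    mus, grid = [], np.linspace(1e-6, 60.0, 20000)
    vals = dispersion(grid)
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:
            mus.append(brentq(dispersion, a, b))
        if len(mus) == n_roots:
            break
    return np.array(mus)

mus = first_roots()
lams = mus**2                                   # decay constants lambda_n
chi = np.array([quad(lambda r: r**2 * j1(r*m), 0, 1)[0] /
                quad(lambda r: r * j1(r*m)**2, 0, 1)[0] for m in mus])
gam = chi * np.array([quad(lambda r: r**2 * j1(r*m), 0, 1)[0] for m in mus])

print("lambda_n:", np.round(lams, 2))
print("partial sum of gamma_n:", gam.sum())     # approaches 1/4 as more modes are kept
```

The rapid growth of the $\lambda_n$ means that only the first mode contributes appreciably to the memory kernel at long times, a point exploited in the asymptotics below.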
The kernel in the memory function for the fluid resistance is $e^{-\lambda_n(t-t')}$ so that the fluid resistance is analogous to that of a Maxwell fluid (Bird, Armstrong and Hassager 1987) with relaxation times $1/\lambda_n$ which measure the rate of decay of the $n^{th}$ transverse mode in response to the rate of change of the curvature of the filament $\partial_{xt} \theta$. A mechanical analogue of the resistance of a poroelastic filament is presented in Figure \[mech\] and shows the connection to simple viscoelastic models. The dynamics of a poroelastic rod are then determined by the solution of the equation for local torque balance (\[poro\]), subject to the boundary condition on the fluid pressure at the surface (\[lam\]) which determines the decay constants $\lambda_n$, and additional boundary conditions on the ends of the rod, which we now consider in some specific cases. ![Mechanical analogue of the bending resistance of a poroelastic rod. For rapid displacements, the dashpot will not move and the fluid resistance due to the instantaneous pressure yields a response similar to a stiff (fluid) spring in parallel with an elastic spring. Eventually the dashpot will move to relieve the stress in the spring and the fluid resistance gradually decays, leading to a purely elastic steady state.[]{data-label="mech"}](mechanicanalog.eps){width="7cm"} Planar load controlled buckling {#Pcontrolled} =============================== When an initially straight rod that is simply supported at either end is subject to a constant compressive force $P$ applied suddenly at $t=0$, the boundary conditions at the ends are $$\begin{aligned} \partial_x\theta(0,t)&=&0, \nonumber \\ \partial_x\theta(1,t)&=&0, \label{BC}\end{aligned}$$ and the initial condition is $$\label{ic} \theta(x,0) = 0.$$ The complete time evolution of the rod is then given by the solution of the integrodifferential equation (\[poro\]) subject to the boundary conditions (\[BC\]), the initial conditions (\[ic\]) and the condition (\[lam\]) which determines the rate constants $\lambda_n$. Short time behaviour, $t\ll1$ {#short} ----------------------------- Expanding the solution about the initially straight state $\theta=0$, we write $$\theta = \epsilon \theta_1(t) \label{stexp}$$ where $\epsilon \ll1$. Substituting the expression (\[stexp\]) into equation (\[poro\]) and linearizing yields $$\label{lin} \partial_{xx}\theta_1 + P\,\theta_1 + \delta \sum_{n=1}^\infty \gamma_n\int_0^te^{-\lambda_n(t-t')}\partial_{xxt'}\theta_1 dt'=0,$$ subject to the boundary conditions $$\partial_x\theta_1(0,t)=\partial_x\theta_1(1,t)=0. \label{ibc}$$ To solve (\[lin\]-\[ibc\]) we use separation of variables writing $\theta_1(x,t) = g(x)f(t)$ and substituting the result into (\[lin\]) to obtain two equations for $g(x)$ and $f(t)$. The function $g(x)$ is determined by the solution of the eigenvalue problem : $$\label{21} (1+\xi)\partial_{xx}g + P\,g=0, ~~~ \partial_xg(0)=0, ~~~ \partial_xg(1)=0.$$ Here the separation constant $\xi=(P-\pi^2)/\pi^2$ is the relative difference between the applied load $P$ and the dimensionless buckling load, $P_c = \pi^2$ for a purely elastic rod that is simply supported at its ends. 
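For concreteness, note that with these Neumann boundary conditions the eigenfunctions of (\[21\]) are $g=\cos n\pi x$; substituting the fundamental mode $n=1$ gives $$-(1+\xi)\pi^2 + P = 0 \quad\Rightarrow\quad \xi = \frac{P-\pi^2}{\pi^2},$$ which is the value of the separation constant quoted above.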
The function $f(t)$ satisfies $$\label{22} \frac{\xi f(t)}{\delta}=\sum_{n=1}^\infty \gamma_n\int_0^te^{-\lambda_n(t-t')}\partial_{t'}f (t')dt'.$$ Using Laplace transforms (${\cal L} (f(t))=\int_0^\infty e^{-st}f(t)\,dt$) we solve (\[22\]) and find that $$\label{res} f(t)=-\sum_{s\in S}e^{st}f(0)\sum_{n=1}^\infty\frac{\gamma_n}{\lambda_n+s},$$ where $f(0)$ is determined by the initial condition and the set $S$ is composed of elements which satisfy $$\label{sss} \frac{\xi}{\delta}-\sum_{n=1}^\infty\frac{\gamma_ns}{\lambda_n+s}=0.$$ The growth rate at the onset of the dynamic buckling instability is therefore given by the largest $s$ that satisfies (\[sss\]). In Figure \[s(p)\], we plot the growth rate $s$ as a function of the rescaled external load $\frac{\xi}{\delta} =\frac{P-P_c}{\delta P_c}$, with $P_c = \pi^2$, obtained by solving (\[21\]) and (\[sss\]). When $s<0$ we do not have an instability, corresponding to the case when $P<P_c$. In the poroelastic regime, when $P_c<P<P_c(1+\frac{\delta}{4})$, fluid flows across the filament in response to the stress gradient in the transverse direction, and the phenomenon is qualitatively different from the buckling of a purely elastic rod. Since the time it takes fluid to flow across the filament is longer than it takes a bending wave to propagate the length of the filament, poroelastic buckling is sometimes called creep buckling (Biot 1964). ![Growth rate $s$ of the deflection as a function of the dimensionless external load $P$; $P_c=\pi^2$ is the critical compression above which a simply supported purely elastic rod buckles. Here the surface permeability parameter Bi$=0.1$. When $\frac{P-P_c}{\delta P_c}=\frac{1}{4}$ the growth rate becomes infinite and one must consider inertial effects, which are neglected here. For comparison, we show the numerical results obtained by solving (\[poro\]) with initial conditions $\theta(0)=0.001\,\cos\,\pi x$, $\theta(dt)=0.001\,e^{s\,dt} \cos\,\pi x$, where $s$ is the theoretically calculated exponent, with $dt=0.001$, $dx=0.01$. []{data-label="s(p)"}](sp.eps){width="8cm"} We now turn to the dependence of the buckling transition on the surface permeability parameter Bi. Substituting a pressure field of the form $p(x,r,\phi,t)=h(r)\partial_x\Theta(x)\sin\phi\,e^{st}$ and $\theta = e^{s\,t}\Theta(x)$ into (\[PP\]) yields $$\label{PD2} s\,h -\frac{1}{r}\partial_r(r\partial_rh) + \frac{h}{r^2} = s\,r,$$ subject to boundary conditions (\[bcp\]) which are now given by $$\begin{aligned} h= 0 ~~~ \hbox{at} ~~~ r=0, \nonumber \\ \label{bcp2} \hbox{Bi}\,h + \partial_r h = 0 ~~~ \hbox{at} ~~~ r=1.\end{aligned}$$ Thus (\[PD2\]) yields $\lim_{s\to0}h = 0$ and $\lim_{s\to\infty}h=r$, corresponding to the case of infinitely slow and infinitely fast growth rates respectively. Consequently, for infinitely slow buckling the fluid supports no load. In the case $s\to\infty$ near $r=1$ a boundary layer emerges where the internal solution ($h=r$) is matched to the boundary condition (\[bcp2\]) at $r=1$. Balancing the first two terms of (\[PD2\]), the length scale of the boundary layer is $l_{bl}\sim1/\sqrt s$ or in dimensional terms $l_{bl}\sim \sqrt{\frac{(\mu+\lambda)k}{s[(\mu+\lambda)\beta+\alpha^2]}}$. To complement these asymptotic results, we solve (\[PD2\]-\[bcp2\]) numerically and plot the radial variation in the pressure. In Figure \[pchbi\]a we show $h(r)$ for $s=1$ and various Bi and in Figure \[pchbi\]b we show $h(r)$ for Bi = $\infty$ and various $s$. 
As expected, we see that as the surface permeability increases ([*i.e.*]{} Bi increases) for a given growth rate of the instability (corresponding to a given load) the pressure variations across the filament decrease. On the other hand, as the growth rate increases, a boundary layer appears in the vicinity of the free surface of the rod to accommodate the slow permeation of fluid in response to the stress gradients. ![The radial variation of the fluid pressure at the onset of buckling, $h(r)$, (a) for growth rate $s$ = 1 and various values of the surface permeability parameter Bi (larger Bi corresponds to a more permeable surface) and (b) for Bi = $\infty$ and various $s$.[]{data-label="pchbi"}](pchs.eps){width="8cm"} Having considered the onset of poroelastic buckling we now turn to the transition from poroelastic to inertial dynamics which occurs for very large compressive loads when the fluid cannot move rapidly enough to keep up with the elastic deformations. For large $s$ the condition that determines the growth rate (\[sss\]) reads $$\label{sim} 0 \approx \frac{\xi}{\delta}-\sum_n\gamma_n(1-\lambda_n/s).$$ Using standard integrals of Bessel functions, we have computed $\sum_n \gamma_n=1/4$. With the definition of the separation constant $\xi=P/P_c-1$, we solve equation (\[sim\]) for the growth rate: $$s \approx \frac{\sum_n\gamma_n\lambda_n}{\frac{1}{4} - \frac{P-P_c}{\delta P_c}},$$ showing that it indeed diverges when $\frac{P-P_c}{\delta P_c} \to \frac{1}{4}$, consistent with Figure \[s(p)\]. Long time dynamics $t\gg1$ {#long} -------------------------- In the long time limit $t\gg1$, i.e. when the fluid has enough time to diffuse across and along the filament, the shape of the filament approaches that of the ideal [*elastica*]{}. To capture the dynamics of this process, we linearise (\[poro\]) about the steady state solution by letting $$\theta = \theta_0(x) + \epsilon \theta_1(x,t) \label{ltexp}$$ with $\epsilon \ll 1$. Substituting the expansion (\[ltexp\]) into (\[poro\]), at leading order we get $$\partial_{xx}\theta_0 + P\,\sin\theta_0=0.$$ At $O(\epsilon)$, we get $$\label{lt} \partial_{xx}\theta_1 + P\,\cos(\theta_0)\,\theta_1 + \delta\sum_n\gamma_n\int^te^{-\lambda_n(t-t')}\partial_{xxt'}\theta_1dt' =0.$$ To simplify the equations further we consider the convolution integral in (\[lt\]) for typical values of Bi = 0.1, corresponding to the case for soft gels and biological materials. Then (\[lam\]) yields $\{\lambda_n\} = \{3.67, 28.6, 73.1, 137, 221, 325, ...\}$ and $\sum_{n=2}^\infty \gamma_n/\gamma_1 = 0.0152$. 
Given the large separation between the decay constants, we see that the dominant contribution in the integral arises from $\lambda_1$ leading to an approximation of (\[lt\]) that reads $$\partial_{xx}\theta_1 + P\,\cos(\theta_0)\,\theta_1 + \delta\gamma_1\int^te^{-\lambda_1(t-t')}\partial_{xxt'}\theta_1dt'=0.$$ Using separation of variables, $\theta_1(x,t) = g(x)f(t)$, we find that $g(x)$ is given by the solution of the eigenvalue problem $$(1-\xi)\partial_{xx}g + P\,\cos(\theta_0)\,g = 0, ~~~ \partial_xg(0)=0, ~~~ \partial_xg(1)=0,$$ while the temporal part $f(t)$ satisfies $$\label{lt2} -f\xi = \delta\gamma_1\int^te^{-\lambda_1(t-t')}\partial_{t'}f\,dt'.$$ Since we are interested in the asymptotic behaviour for $t\gg1$ we multiply both sides of equation (\[lt2\]) by $e^{\lambda_1t}$ and differentiate with respect to time to find $$\label{psi} \partial_t f = \frac{-\lambda_1\xi}{\delta\gamma_1+\xi}f ~ \equiv -\psi f.$$ We observe that the poroelastic solution approaches the elastic steady shape exponentially fast at late times. In Figure \[exp\], we plot the exponent $\psi=\lambda_1\xi /(\delta\gamma_1+\xi)$ versus $(P-P_c)/\delta P_c$ and see that the larger the value of $P$ the faster the solution approaches the final shape. ![As the solution approaches the equilibrium shape the difference between the current and equilibrium shape decays exponentially with a rate $\psi$, which is plotted against the applied compression. $\delta=1$, Bi = 0.1, $dx=0.01$, $dt=0.01$. The numerical computation is begun with $\theta = 0.9\,\theta_0$. []{data-label="exp"}](psifinalnew.eps){width="7cm"} Intermediate time dynamics {#numerics} -------------------------- For intermediate times we have to solve for the shape of the poroelastica numerically. Our arguments in the previous section allow us to neglect the contributions from the higher modes so that a good approximation to (\[poro\]) is given by $$\partial_{xx}\theta + P \sin\,\theta + \delta\gamma_1\int_0^te^{-\lambda_1(t-t')}\partial_{xxt'}\theta dt' = 0. \label{pesim}$$ For ease of solution, we convert the integrodifferential equation to a partial differential equation by multiplying (\[pesim\]) by $e^{\lambda_1t}$ and differentiating with respect to time, so that $$\label{solve} (1+\delta\gamma_1)\partial_{xxt}\theta + \lambda_1\partial_{xx}\theta + P\,\cos\,\theta\,\partial_t\theta + \lambda_1\,P\,\sin\,\theta = 0.$$ We solve equation (\[solve\]) subject to the boundary conditions (\[BC\]) using a Crank-Nicolson finite difference scheme in space and we extrapolate the non-linearity using the previous two time steps. This gives us a scheme with second order accuracy in time. For a space step $dx=0.01$ and a time step $dt=0.001$ the difference between the numerical and analytical initial growth rate is 0.2% (see Figure \[s(p)\]). In Figure \[full\] we show the variation of the angle $\theta(0,t)$ determined using the numerical simulation for the case when the dimensionless buckling load is slightly larger than the threshold for the poroelastic buckling, with $(P-P_c)/P_c \sim 0.17$. For comparison, we also show the asymptotic solutions for short and long times determined in the previous sections, and find that they agree well with the numerical solution. To determine the shape of the filament we use the kinematic relations $$\partial_xX=\cos\theta, ~~~\partial_xY=\sin\theta,$$ where $X(x,t)$ and $Y(x,t)$ are the position of the centreline. 
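As a concrete illustration of this procedure, the sketch below integrates the one-mode equation (\[solve\]) in Python using a simpler scheme than the one described above (backward Euler in time with a lagged nonlinearity, rather than Crank-Nicolson with extrapolation) and then reconstructs the centreline from the kinematic relations. The constants $\lambda_1=3.67$ and $\gamma_1\approx0.246$ are the Bi $=0.1$ values implied by the numbers quoted earlier; all other choices (grid, time step, final time) are illustrative.

```python
import numpy as np

# Minimal solver for the one-mode poroelastica equation (solve):
#   (1 + delta*gamma_1) d_xxt theta + lambda_1 d_xx theta
#     + P cos(theta) d_t theta + lambda_1 P sin(theta) = 0,
# with d_x theta = 0 at x = 0, 1.  Backward Euler in time with a lagged
# nonlinearity (a simplification of the Crank-Nicolson scheme in the text).
N, dt, T = 101, 1.0e-3, 10.0
P, delta = 11.0, 1.0
lam1, gam1 = 3.67, 0.246                 # first-mode constants for Bi = 0.1

x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# second-derivative matrix with homogeneous Neumann (ghost-point) boundaries
D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
D2[0, 1] = 2.0 / dx**2
D2[-1, -2] = 2.0 / dx**2

theta = 0.01 * np.cos(np.pi * x)         # initial near-straight shape
c = (1.0 + delta * gam1) / dt
for _ in range(int(T / dt)):
    A = (c + lam1) * D2 + (P / dt) * np.diag(np.cos(theta))
    b = c * (D2 @ theta) + (P / dt) * np.cos(theta) * theta - lam1 * P * np.sin(theta)
    theta = np.linalg.solve(A, b)

# for reference: the one-mode version of (sss) gives an initial growth rate
# s = lambda_1 (xi/delta) / (gamma_1 - xi/delta) ~ 3.2 for P = 11, delta = 1
print("theta(0, T) =", theta[0])

# centreline from d_x X = cos(theta), d_x Y = sin(theta) (cumulative trapezoid)
X = np.concatenate(([0.0], np.cumsum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * dx)))
Y = np.concatenate(([0.0], np.cumsum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * dx)))
```

The angle grows from the small initial perturbation at roughly the rate predicted by the linear theory and saturates at the buckled elastic equilibrium, in line with the overdamped evolution described next.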
Figure \[bs\] shows the shape of the filament as it evolves from the initially unstable straight shape to the final elastic equilibrium via a transient overdamped route. In sharp contrast, a purely elastic rod subject to the same initial and boundary conditions would vibrate about the final state forever (in the absence of any damping). ![$\theta(0,t)$ for $P=11$, $\delta=1$, Bi=0.1, $dx=0.01$, $dt=0.001$ and $\theta(x,0)=\theta(x,dt)=0.01\,\cos\,\pi x$. The short time asymptotic is for a growth rate $s(P)$ found from equation (\[sss\]). The long time asymptotic is of the form $Ae^{-\psi \, t} + \theta_0$, where $\psi$ is the rate of decay to the equilibrium angle $\theta_0$, and $A$ is a fitting parameter.[]{data-label="full"}](theta0.eps){width="8cm"} ![Shape of the buckling filament $X(t)$, $Y(t)$ in the laboratory frame as a function of time for $P=11$, $\delta=1$, Bi=0.1, $dx=0.01$, $dt=0.001$ and $\theta(t=0)=\theta(t=dt)=0.01\,\cos\,\pi x$. []{data-label="bs"}](bucklingshape.eps){width="10cm"} Displacement controlled planar buckling {#dcb} ======================================= In many problems involving instabilities, there is a qualitative difference between load controlled and displacement controlled experiments. To understand the difference we consider the problem of displacement controlled buckling of a poroelastic filament and compare the results with those of the previous section. Since the centre line of the filament is assumed to be inextensible, the change in the end-to-end distance is given by $$\label{constraint} \Delta(t) = 1 - \int_0^1\cos\,\theta(x,t)\,dx.$$ We choose the functional form $$\label{Delta1} \Delta(t)=\frac{\Delta_{max}}{2}[1+\tanh\,at],$$ to allow us to ramp up the displacement to a maximum amplitude $\Delta_{max}$ at a characteristic rate $\frac{1}{2}\Delta_{max} a$. The shape of the poroelastica is now determined by the solution of (\[solve\]), (\[constraint\]) and (\[Delta1\]); the unknown load $P(t)$ is now determined at every time step by using an iteration method to enforce (\[constraint\]). For an initial guess to start this procedure, we note that after the onset of buckling when $P>P_c$, for small amplitudes $\theta = \epsilon\,\cos\pi x$ ($\epsilon\ll1$) is a solution of (\[solve\]) and (\[BC\]). Substituting into (\[constraint\]) gives $$\label{Delta} \Delta = 1 - \int_0^1\cos(\epsilon\,\cos\pi x)\,dx \approx \frac{\epsilon^2}{4}.$$ ![$\frac{P-P_c}{\delta P_c}$, where $P$ is the load and $P_c$ is the critical load required for buckling, corresponding to a change in end-to-end displacement $\Delta(t)\approx 0.1[1+\tanh\,at]$, for various $a$. For some later times the more quickly applied displacement corresponding to larger $a$ requires a lower compressive force. The graphs correspond to the following parameter values: $dx=0.01$, $dt=0.002$, Bi=0.1, $\delta=1$, $\theta(-3)= \theta(-3+dt)=2\sqrt{\Delta(-3)}\,\cos\,\pi x$.[]{data-label="dispa"}](displacement.eps){width="8cm"} Therefore, we choose $\theta(x,t_0) \sim 2\sqrt{\Delta(t_0)}\,\cos\,\pi x$. In Figure \[dispa\]b, we show the evolution of the load $P(t)$ for various values of $a$. $P$ is roughly constant for very short and very long times, but changes as $\Delta$ varies quickly for intermediate times. 
We can understand the initial plateau by considering the case when $e^{at}\ll1$, so that (\[Delta1\]) yields $$\Delta = \frac{\Delta_{max}}{2}[1- \frac{1-e^{2at}}{1+e^{2at}}] \approx \Delta_{max}e^{2at}.$$ In light of the geometrical constraint (\[Delta\]) valid for small displacements, this yields $$\theta \approx 2\sqrt{\Delta_{max}}e^{at}\,\cos\,\pi x.$$ Comparing this with the short time behaviour of a poroelastic filament considered in §\[short\] we see that exponential growth of small angles corresponds to a constant compressive force, as seen in Figure \[dispa\]. A similar argument holds for late times, when the system relaxes to its purely elastic equilibrium. For intermediate times, the load can be larger than that for the case of a purely elastic filament. The difference in the loads is due to the fluid resistance in the poroelastica. A way of visualising this is shown in Figure \[Pmax\]. For slowly applied displacement fields $P_{max}$ is almost the same for the elastic and poroelastic cases; however, for rapidly applied displacements, corresponding to large values of $a$, the compressive force in the poroelastic case is larger due to fluid resistance arising from the pressure gradients across the bending filament. ![Maximum compressive load, $P_{max}$, for displacement controlled buckling, where the displacement field is given by $\Delta(t) \approx 0.1[1+\tanh\,at]$. Bi = 0.1, $\delta=1$, $dt=0.01$, $dx=0.01$.[]{data-label="Pmax"}](Pmaxvsa.eps){width="8cm"} Filament embedded in an external medium and subject to axial torque and axial thrust {#twist} ==================================================================================== We finally turn to the case of a rod embedded in an external medium subject to an axial moment, $K$, and a compressive force $P$ (see Figure \[schem2\]). The presence of the twist causes the instability to become non-planar, and the filament adopts a helical conformation; the presence of an external medium typically causes the instability to manifest itself with a higher wave number than otherwise. Letting the displacements of the centre line in the $y$ and $z$ directions be $Y(x,t)$, $Z(x,t)$ respectively, we scale the kinematic variables accordingly to define the dimensionless displacements $$Y=R\,Y',~~~~Z=R\,Z'.$$ The dimensionless axial moment is defined as $K = \frac{\pi(2\mu^2 + 3\lambda\mu)R^4}{4(\mu + \lambda)L}K'$. If the transverse displacements of the filament are small, the resistance of the external medium can be well approximated using the response of a linear Hookean solid. In light of the analogy between linear elasticity and Stokes flow, we can use the results of classical slender body theory in hydrodynamics (Batchelor 1970, Cox 1970) and write the vector of dimensionless external forces on the filament as ${\bf F'}_{external}=(0,-\pi E\,Y/4,-\pi E\,Z/4)$, where the dimensionless parameter $E$ is given by $$E = \frac{16\mu_mL^2(\mu + \lambda)}{\hbox{ln}(\frac{L}{R})\,R^2(2\mu^2 + 3\mu\lambda)},$$ where $\mu_m$ is the Lamé coefficient of the surrounding medium. This approximation is valid when $R/L \ll 1$, a condition consistent with the geometry of a thin filament. We will further assume that the filament is free to rotate in the medium, i.e. there is no torque resisting this mode of motion, which varies in any case as $R^2$ and is thus negligible in most situations. ![Schematic diagram of a rod buckling under an applied twist and compression in an external medium. 
$Y(x,t)$ and $Z(x,t)$ are the displacements of the centre line in the $y$ and $z$ directions respectively. The dotted line denotes the axis of symmetry.[]{data-label="schem2"}](schematic3.eps){width="10cm"} To derive the evolution equation for the shape of the filament we use the constitutive equation (\[CE\]) to write down equations for the balance of forces in the $y-$ and $z-$directions as for the planar filament. After dropping primes this leads to $$\begin{aligned} \label{N1} \partial_{xxxx}Y + K\partial_{xxx}Z + \delta\sum_{n=1}^\infty \gamma_n \int_0^te^{-\lambda_n(t-t')}\partial_{xxxxt'}Y dt' + P\partial_{xx}Y + E\,Y=0, \\ \label{N2} \partial_{xxxx}Z - K\partial_{xxx}Y + \delta\sum_{n=1}^\infty \gamma_n \int_0^te^{-\lambda_n(t-t')}\partial_{xxxxt'}Z dt' + P\partial_{xx}Z + E\,Z =0, \end{aligned}$$ where, since equation (\[PP\]) is linear, we have superposed the two solutions for bending in the $y-$ and $z-$directions. Taking $\zeta = Y + i\,Z$ equations (\[N1\]) and (\[N2\]) may be written as a single equation for the complex variable $\zeta$ $$\label{N3} 0=\partial_{xxxx}\zeta - i\,K\partial_{xxx}\zeta + P\partial_{xx}\zeta + E\,\zeta + \delta \sum_{n=1}^\infty \gamma_n \int_0^te^{-\lambda_n(t-t')}\partial_{xxxxt'}\zeta dt'.$$ For a simply supported filament, the four boundary conditions are $$\begin{aligned} \label{boun} \zeta(0) = \zeta(1) = \partial_{xx}\zeta(0)-i\,K\partial_x\zeta(0)=\partial_{xx}\zeta(1)-i\,K\partial_x\zeta(1) = 0.\end{aligned}$$ We can treat equations (\[N3\])-(\[boun\]) in exactly the same fashion as the planar problem and use separation of variables $\zeta(x,t)=g(x)f(t)$ to get $$\begin{aligned} \label{38} (1+\xi)\partial_{xxxx}g - i\,K\partial_{xxx}g + P\partial_{xx}g + E\,g =0,\nonumber \\ g(0)=g(1)=\partial_{xx}g(0)-i\,K\partial_xg(0)=\partial_{xx}g(1)-i\,K\partial_xg(1)=0,\end{aligned}$$ an eigenvalue problem for the separation constant $\xi$ and $g(x)$. The growth rates $s$ of the temporal part of the solution $f(t)$ satisfy $$\frac{\xi}{\delta}-\sum_{n=1}^\infty\frac{\gamma_ns}{\lambda_n+s}=0,$$ which is the same as equation (\[sss\]) for the temporal part of the solution for planar buckling. Thus, once the separation constant $\xi$ is found, equation (\[sss\]) yields the growth rate $s(P,K,E,\delta,\hbox{Bi})$ as a function of the loading parameters and the material constants (see Figure \[s(p)\]). As an example of how the influence of an external medium can lead to higher modes becoming unstable at lower compressions than the fundamental mode we consider equation (\[38\]) in the case $K=0$ $$\label{sep} (1+\xi)\partial_{xxxx}\zeta + P\partial_{xx}\zeta + E\,\zeta =0,$$ which is an eigenvalue problem for $\xi$ and $\zeta$ for a given $P$. At the ends ($x=0$ and $x=1$) the displacements and bending moments vanish so that the boundary conditions associated with (\[sep\]) are: $\zeta(0)=\partial_{xx}\zeta(0)=\zeta(1)=\partial_{xx}\zeta(1)=0$. The only nonzero solutions to (\[sep\]) occur when $\zeta = \sin\,q_nx$, where $q_n = n\pi,$ $n=1,2,...$ The critical compression $P_c(n)$ where the $n^{th}$ mode becomes unstable is found to be (Landau & Lifshitz 1970) $$P_c(n) = \pi^2n^2 + \frac{E}{\pi^2n^2}.$$ We see that the critical buckling load for a given mode number $n$ increases as the stiffness of the environment $E$ increases. Furthermore, for large $E$, the chosen mode shape does not correspond to the fundamental mode $n=1$, since $\partial P_c/\partial n =0$ yields $n=\frac{1}{\pi}E^{1/4}$ for an infinite rod. 
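This mode selection is easy to check directly; the minimal Python sketch below (the values of the dimensionless stiffness $E$ are illustrative choices) compares the discrete minimiser of $P_c(n)$ with the continuum estimate $E^{1/4}/\pi$.

```python
import numpy as np

# Mode selection for a filament in an elastic medium (K = 0 case):
# P_c(n) = pi^2 n^2 + E / (pi^2 n^2); the preferred wavenumber grows like E^(1/4).
for E in [1.0, 1.0e2, 1.0e4]:
    n = np.arange(1, 21)
    Pc = np.pi**2 * n**2 + E / (np.pi**2 * n**2)
    n_star = n[np.argmin(Pc)]
    print(f"E = {E:g}: most unstable mode n = {n_star}, "
          f"continuum estimate E^(1/4)/pi = {E**0.25 / np.pi:.2f}")
```

For a soft environment the fundamental mode is selected, while a stiff environment ($E = 10^4$ here) selects $n = 3$, close to the continuum value of about $3.2$.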
Physically this occurs because short wavelength modes do not deform the stiff elastic environment as much, while the penalty associated with a higher curvature is not too much of a price to pay. For the case when $K \ne 0$, we cannot solve the eigenvalue problem analytically, and present the results using a phase diagram shown in Figure \[SPK\]. We find three distinct regimes: for $s<0$ the system is stable; for $0<s<\infty$ we have the poroelastic regime where the system buckles on the same time scale as the fluid pressure diffuses; finally, we have the elastic regime where the system buckles so fast that the fluid does not move and all the deformation occurs in the solid skeleton. ![For the case of an applied compression $P$ and axial twisting moment $K$, we show the three short time regimes (stable, poroelastic, and inertial) as in Figure \[s(p)\]. $E=\delta=1$.[]{data-label="SPK"}](twist.eps){width="7cm"} Discussion {#disc} ========== The usefulness of poroelastic theory is limited to a range of time scales. The poroelastic time scale associated with decay of pressure fields is $\tau_{p} \sim \frac{\alpha^2 R^2}{\mu\, k}$, recalling that $\alpha$ is the fluid volume fraction, $R$ is the smallest macroscopic length scale of the system ($R\gg l_p$), $\mu$ is the effective Lamé coefficient of the composite material, and $k \sim \frac{l_p^2}{\rho\nu}$ is the matrix permeability. If the time scale of the forcing, $\tau \ll \tau_{p}$ the fluid will not move relative to the solid and Hookean elasticity and the effects of inertia are sufficient to describe the system adequately. If $\tau \gg \tau_{p}$, the fluid pressure will equilibrate with the surroundings and once again classical elasticity suffices to describe the system, albeit with different Lamé coefficients. However, if $\tau \sim \tau_{p}$ the dynamics will be governed by poroelasticity. Biological systems are composed mainly of fluid, so poroelasticity will be applicable at some time scale (see Table 2 for estimates). Furthermore, they are characterized by extreme geometries (e.g. beams, plates and shells), which led us to consider in detail the dynamics of slender poroelastic objects, and particularly the buckling of a planar filament. Biological materials are usually anisotropic and we expect the permeability and elasticity tensors to reflect this feature. Taking $k_l$ to be the permeability in the axial direction, we can neglect axial diffusion if $\frac{k_l R^2}{k L^2}\ll 1$. The opposite limit, where $\frac{k_l R^2}{k L^2}\gg 1$, has been studied by Cederbaum et al. (2000). The dynamical behavior of these objects is separated into two different regimes, one governed by fast inertial effects, and the other by the slow dynamics of fluid flow. These regimes are of course well known in bulk materials (Biot 1957), but here they appear in a slightly different guise due to the effect of the slender geometry of the system. The onset of planar poroelastic load controlled buckling was first considered by Biot (1964). In this paper we broaden and deepen the understanding of this phenomenon. An important outcome of our studies is the [*poroelastica*]{} equation, which is a simple integro-differential equation with one time constant that describes the dynamics of a poroelastic filament under a compressive load. The bending resistance of the filament is analogous to a (fictional) Maxwell material, where the time constant is the rate at which the pressure field decays (determined by the material parameters and the geometry). 
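As a rough consistency check of where this poroelastic window lies for the biological examples collected in Table 2 below, the estimate $\tau_p \sim \alpha^2R^2/(\mu k)$ can be evaluated directly from the tabulated values. The short Python sketch below does this for three representative rows; the cartilage permeability $3\times10^{-16}$ is one value chosen from within the quoted range.

```python
# Order-of-magnitude poroelastic time scales tau_p ~ alpha^2 R^2 / (mu k)
# for representative parameter values quoted in Table 2.
cases = {
    "actin cytoskeleton": (100.0, 0.8, 1e-12, 1e-6),   # (mu [Pa], alpha, k [m^2/(Pa s)], R [m])
    "cartilage":          (1.0e6, 0.8, 3e-16, 1e-3),
    "plant stem/root":    (1.0e8, 0.8, 1e-11, 1e-2),
}
for name, (mu, alpha, k, R) in cases.items():
    tau_p = alpha**2 * R**2 / (mu * k)
    print(f"{name:20s} tau_p ~ {tau_p:.1e} s")
```

The resulting time scales (of order $10^{-2}$ s for actin and plant tissue, and of order $10^3$ s for cartilage) reproduce the orders of magnitude listed in the table.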
We then used the poroelastica equation to study not only the onset of buckling, but also the entire dynamics up until saturation for both load controlled and displacement controlled buckling. A series of three-point-bending experiments (Scherer 1992; Scherer 1996) has shown that the mechanical response of a silica gel rod immersed in acetone or ethanol can be described using poroelastic theory. The theory developed by Scherer (1992) applies only to situations where the displacement is applied much faster than the poroelastic time scale and is a special case of the more general theory presented in this paper. The lack of experiments on slender poroelastic filaments being deformed on the poroelastic time scale prevents us from testing our predictions quantitatively. Since our results are relevant to swollen polymer networks, gel actuators and sensors, the mechanics of cartilaginous joints, and the physics of rapid movements in plants, an important next step is the quantitative experimental study of slender poroelastic structures.

\[tab2\]

  Application            $\mu$ ($Pa$)   $\alpha$   $k$ ($m^2/Pa\,sec$)   $R$ ($m$)   $\tau_p$ ($sec$)
  ---------------------- -------------- ---------- --------------------- ----------- ------------------
  Actin cytoskeleton     $100$          0.8        $10^{-12}$            $10^{-6}$   $10^{-2}$
  Bones                  $10^{10}$      0.05       $10^{-14}-10^{-16}$   $10^{-2}$   $0.1 - 10^{-3}$
  Cartilage              $10^6$         0.8        $(1-6)10^{-16}$       $10^{-3}$   $10^3$
  Plant stem/root        $10^8$         0.8        $10^{-11}$            $10^{-2}$   $10^{-2}$
  Venus’ fly trap leaf   $10^6$         0.8        $10^{-12}$            $10^{-3}$   0.1

  : Applications of poroelasticity in biology.

Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge support via the Norwegian Research Council (JS), the US Office of Naval Research Young Investigator Program (LM), the US National Institutes of Health (LM) and the Schlumberger Chair Fund (LM). The authors thank Mederic Argentina for insightful discussions. J.-L. Auriault & E. Sanchez-Palencia, “Étude du comportement macroscopique d’un milieu poreux saturé déformable," J. Mec. [**16**]{}, 575-603 (1977). S.I. Barry & M. Holmes, “Asymptotic behaviors of thin poroelastic layers," IMA Journal of Applied Mathematics [**66**]{}, 175-194 (2001). G.K. Batchelor, “Slender-body theory for particles of arbitrary cross-section in Stokes flow," J. Fluid Mech. [**44**]{}, 419-40 (1970). M.A. Biot, “General theory of three-dimensional consolidation,” Journal of Applied Physics [**12**]{}, 155-165 (1941). M.A. Biot, “Theory of propagation of elastic waves in a fluid-saturated porous solid. I. Low-frequency range." Journal of the Acoustical Society of America [**28**]{}, 168-178 (1956). M.A. Biot, “Theory of propagation of elastic waves in a fluid-saturated porous solid. II. Higher frequency range." Journal of the Acoustical Society of America [**28**]{}, 179-191 (1956). M.A. Biot & D.G. Willis, “The elastic coefficients of the theory of consolidation," J. Appl. Mech. [**24**]{}, 594-601 (1957). M.A. Biot, “Theory of buckling of a porous slab and its thermoelastic analogy," Journal of Applied Mechanics [**31**]{}, 194-198 (1964). R. Bird, R. Armstrong and O. Hassager, [*Dynamics of polymeric fluids, v. I*]{} (Wiley, 1987). R. Burridge & J.B. Keller, “Poroelasticity equations derived from microstructure," J. Acoust. Soc. Am. [**70**]{}, 1140-1146 (1981). G. Cederbaum, L.P. Li & K. Schulgasser, [*Poroelastic structures*]{} (Elsevier, Oxford, 2000). R.G. Cox, “The motion of long slender bodies in a viscous fluid. Part I. General theory," J. Fluid Mech. [**44**]{}, 791-810 (1970). L.D. Landau & E.M. 
Lifshitz, [*Theory of elasticity*]{} (second ed. Pergamon, 1970). A.E.H. Love, [*A Treatise on the mathematical theory of elasticity*]{} (fourth ed. Dover 1944). D. Lydzba & J.F. Shao, “Study of poroelasticity material coefficients as response of microstructure," Mech. Cohes.-Frict. Mater. [**5**]{}, 149-171 (2000). C.C. Mei & J.-L. Auriault, “Mechanics of heterogeneous porous media with several spatial scales," Proc. R. Soc. Lond. A [**426**]{}, 391-423 (1989). G.W. Scherer, “Bending of gel beams: method for characterizing elastic properties and permeability," J. Non-Cryst. Solids [**142**]{}, 18-35 (1992). G.W. Scherer, “Influence of viscoelasticity and permeability on the stress response of silica gel," Langmuir [**12**]{}, 1109-1116 (1996). A.P.S. Selvadurai (Ed.), [*Mechanics of Poroelastic Media*]{}, Solid Mechanics and its Application series, vol.35, Wolters Kluwer Academic Publishers. H.F. Wang, [*Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology*]{} (Princeton University Press, 2000). \[appendix\] The derivations of the equations of poroelasticity have been many and varied. Partly this has been because several qualitatively different parameter regimes containing distinct leading order force balances exist. We focus here on the equations which govern the second row of Table 1, namely where the Stokes’ length is much larger than the pore size, [*i.e.*]{} L$_s \gg l_p$. The methods used to derive equations for this region of parameter space can be classified into three categories: physical arguments and superposition (Biot 1941, Biot & Willis 1957), mixture theory (Barry & Holmes 2001), and micro structural derivations (Auriault & Sanchez-Palencia 1977, Burridge & Keller 1981, Mei & Auriault 1989). First, we show in detail a version of the micro structural derivations, which uses ideas from both Burridge & Keller (1981) and Mei & Auriault (1989). The equations that govern the behavior in the incompressible interstitial fluid at low Re are $$\begin{aligned} \label{fs} \boldsymbol{\sigma}_f = -p{\bf I} + 2\epsilon\mu{\bf e (v)}, \\ \nabla\cdot\boldsymbol{\sigma}_f = 0, \\ \nabla\cdot {\bf v} = 0,\end{aligned}$$ where $\sigma_f$ is the stress tensor in the fluid, $\epsilon = l_p/l_m$ (see figure \[pe\]), ${\bf e}(..) = \frac{1}{2}[\nabla(..) + \nabla(..)^T]$ is the strain operator, and ${\bf v}$ is the fluid velocity. ![Typical porous medium illustrating the separation of length scales, where $l_p$ is the pore scale and $l_m$ is the system scale.[]{data-label="pe"}](pemedia.eps){width="7cm"} In the solid the analogous equations are $$\begin{aligned} \label{ss} \boldsymbol{\sigma}_s = {\bf A}:{\bf e (u)}, \\ \label{fsb} \nabla \cdot \boldsymbol{\sigma}_s = 0,\end{aligned}$$ where ${\bf u}$ is the displacement field, ${\bf A}$ is the tensor of elastic moduli, and $\sigma_s$ is the stress tensor in the solid. At the solid-fluid interface continuity of displacements and tractions yields $$\begin{aligned} \label{bc2} {\bf v - \partial_tu} = 0, \\ \label{bc1} \boldsymbol{\sigma}_s\cdot{\bf n} - \boldsymbol{\sigma}_f\cdot{\bf n} = 0.\end{aligned}$$ Here ${\bf n}$ is the unit normal vector to the surface separating the two phases. Looking for a perturbation solution in powers of the small parameter $\epsilon$, we use an asymptotic expansion of the variables $$\begin{aligned} \boldsymbol{\sigma_f} = \boldsymbol{\sigma}_f^0 + \epsilon\boldsymbol{\sigma}_f^1 + ... \nonumber \\ \boldsymbol{\sigma}_s = \boldsymbol{\sigma}_s^0 + \epsilon\boldsymbol{\sigma}_s^1 + ... 
\nonumber \\ p = p^0 + \epsilon p^1 + ... \nonumber \\ {\bf u} = {\bf u}^0 + \epsilon{\bf u}^1 + ... \nonumber \\ {\bf v} = {\bf v}^0 + \epsilon{\bf v}^1 + ...\end{aligned}$$ with a multiple-scale expansion for the gradient $$\nabla = \nabla_{x'} + \epsilon\nabla,$$ where ${\bf x}$ denotes the macroscopic scale, ${\bf x'} = \epsilon {\bf x}$ denotes the pore scale, $\nabla$ denotes the gradient relative to the macroscopic scale and $\nabla_{x'}$ denotes the gradient relative to the pore scale. Since we assume that the flow is driven on the macroscopic scale, the leading order deformation is a function only of ${\bf x}$. Then equations (\[fs\]) and (\[ss\]) yield the following expressions for the fluid and solid stress tensors: $$\begin{aligned} \boldsymbol{\sigma}_s^0 = {\bf A}:[{\bf e (u}^0) + {\bf e}_{x'} ({\bf u}^1)], \\ \boldsymbol{\sigma}_s^1 = {\bf A}:[{\bf e (u}^1) + {\bf e}_{x'} ({\bf u}^2)], \\ \boldsymbol{\sigma}_f^0 = -p^0{\bf I}, \\ \boldsymbol{\sigma}_f^1 = -p^1{\bf I} + \mu{\bf e}_{x'}({\bf v}^0), \end{aligned}$$ where ${\bf e}$ and ${\bf e_{x'}}$ denote the strain relative to the system scale and pore scale coordinates respectively. The stress balance in the fluid (the second of equations (\[fs\])) yields: $$\begin{aligned} \nabla_{x'}p^0=0, \\ \label{bal} \mu\nabla^2{\bf v}^0 - \nabla_{x'}p^1 - \nabla p^0 = 0.\end{aligned}$$ Thus the leading order pressure gradient $p^0({\bf x})$ is only a function of the system scale coordinate. The stress balance in the solid yields $$\begin{aligned} \nabla_{x'}\cdot\boldsymbol{\sigma}_s^0 = 0, \\ \nabla_{x'}\cdot\boldsymbol{\sigma}_s^1 + \nabla\cdot\boldsymbol{\sigma}_s^0=0.\end{aligned}$$ We define $\boldsymbol{\sigma}$ to be the total stress tensor: $$\begin{aligned} \boldsymbol{\sigma} = \boldsymbol{\sigma}_s ~~ \hbox{in} ~~ V_s, \nonumber \\ \boldsymbol{\sigma} = \boldsymbol{\sigma}_f ~~ \hbox{in} ~~ V_f,\end{aligned}$$ where $V_s$ and $V_f$ are the solid and fluid parts of a volume element. Stress balance in the fluid and the solid implies $$\label{st} \nabla_{x'}\cdot\boldsymbol{\sigma}^1 + \nabla\cdot\boldsymbol{\sigma}^0 = 0.$$ Averaging (\[st\]) over the pore scale gives $$\frac{1}{V}\int\nabla\cdot\boldsymbol{\sigma}^0dV + \frac{1}{V}\int\nabla_{x'}\cdot\boldsymbol{\sigma}^1dV = \frac{1}{V}\int\nabla\cdot\boldsymbol{\sigma}^0dV +\frac{1}{V}\int{\bf n}\cdot\boldsymbol{\sigma}^1dS = 0,$$ where $V = V_f + V_s$. In the limit $V\to\infty$, $\frac{1}{V}\int{\bf n}\cdot\boldsymbol{\sigma}^1dS \to 0$ since the surface-to-volume ratio tends to zero. Consequently, $$\begin{aligned} \frac{1}{V}\int\nabla\cdot\boldsymbol{\sigma}^0dV = \nabla\cdot<\boldsymbol{\sigma}^0> = 0, \\ <\boldsymbol{\sigma}^0> = <{\bf A}:[{\bf e (u}^0) + {\bf e}_{x'} ({\bf u}^1)]> - \phi_fp^0{\bf I},\end{aligned}$$ where $\phi_f$ is the fluid volume fraction and $< >$ denotes averages over the pore scale. In order to write the averaged equations in terms of ${\bf u}^0$ and $p^0$, we must eliminate ${\bf u}^1$. 
This is achieved by using the stress balance in the solid so that $$\label{ms1} \nabla_{x'}\cdot \sigma_s^0 = \nabla_{x'}\cdot \{ {\bf A}:[{\bf e} ({\bf u}^0) + {\bf e}_{x'} ({\bf u}^1)] \} = 0.$$ The boundary condition (\[bc1\]) at the fluid-solid surface yields $$\label{ms} {\bf A}:[{\bf e (u}^0) + {\bf e}_{x'} ({\bf u}^1)]\cdot {\bf n} = -p^0 {\bf n}.$$ Since this is a linear system of equations, ${\bf u}^1$ is a linear combination of $p^0$ and ${\bf e (u}^0)$: $$\label{u1} {\bf u^1} = {\bf B}:{\bf e (u}^0) - {\bf C}p^0,$$ where the third rank tensor ${\bf B}$ and vector ${\bf C}$ vary on the pore and system scales, and can only be found explicitly by solving the microstructural problem (\[ms1\])-(\[ms\]). The averaged stress tensor becomes $$\label{sig} <\boldsymbol{\sigma}^0> = <{\bf A} + {\bf A}:{\bf e_{x'}(B)} >:{\bf e(u}^0) - <{\bf A}: {\bf e_{x'}(C})>p^0- \phi_f p^0 {\bf I},$$ where in index notation ${\bf e_{x'}(B)}=\frac{1}{2}(\partial_{x'_m}B_{nkl} + \partial_{x'_n}B_{mkl})$. If we assume that the material is isotropic on the macroscopic scale we can further reduce (\[sig\]): $$\label{pe1} <\sigma^0> = 2\mu{\bf e (u}^0) + \lambda\nabla\cdot{\bf u}^0{\bf I} + (-\phi_f + \gamma)p^0{\bf I},$$ where $\gamma{\bf I} = <{\bf A:e(C)}>$ is an isotropic pressure in the solid due to the fluid pressure exerted at the interface. Substituting (\[pe1\]) into the stress balance equation $$\label{pe3} \nabla\cdot<\boldsymbol{\sigma}^0> = 0,$$ gives us three equations for the four unknowns (${\bf u}^0$ and $p^0$). We now turn to continuity to give us the final equation. Since the fluid stress balance (\[bal\]) is linearly forced by the external pressure gradient we can define a tensor ${\bf k}$ relating the external pressure gradient to the pore scale flow: $$\label{darcy} {\bf v}^0 - \partial_t{\bf u}^0 = -{\bf k}\cdot\nabla p^0.$$ Averaging over the fluid volume yields $$<{\bf v}^0> - \phi_f\partial_t{\bf u}^0 = -<{\bf k}>\cdot\nabla p^0.$$ Since the fluid is incompressible averaging the continuity equation $$\nabla\cdot{\bf v}^0 + \nabla_{x'}\cdot{\bf v}^1=0,$$ gives $$\begin{aligned} 0 = \nabla\cdot<{\bf v}^0> + \frac{1}{V}\int\nabla_{x'}\cdot {\bf v}^1dV = \nabla\cdot<{\bf v}^0> + \frac{1}{V}\int{\bf n}\cdot {\bf v}^1dS \nonumber \\ \label{v0} = \nabla\cdot<{\bf v}^0> + \frac{1}{V}\int{\bf n}\cdot \partial_t{\bf u}^1dS = \nabla\cdot<{\bf v}^0> - \frac{1}{V}\int\nabla\cdot \partial_t{\bf u}^1dV,\end{aligned}$$ where we have used (\[bc2\]). Taking the divergence of (\[darcy\]) and using (\[v0\]) and (\[u1\]) to eliminate ${\bf v}^0$ and ${\bf u}^1$ respectively yields $$\begin{aligned} -\nabla\cdot<{\bf k}>\cdot\nabla p^0 = \nabla\cdot(<{\bf v}^0> - \phi_f\partial_t{\bf u}^0) \nonumber \\ \label{rhs} = <\nabla_{x'}\cdot {\bf B}>:{\bf e}(\partial_t{\bf u}^0) - <\nabla_{x'}\cdot{\bf C}>\partial_tp^0 - \partial_t{\bf u}^0 \cdot\nabla\phi_f - \phi_f\nabla\cdot\partial_t{\bf u}^0.\end{aligned}$$ If the change in solid volume fraction is much smaller than the volume fraction itself, $\partial_t {\bf u} \cdot \nabla \phi_f \approx 0$. Furthermore, if the solid skeleton is incompressible then $\nabla\cdot<{\bf v}^0> = 0$ so that the first two terms on the right hand side of equation (\[rhs\]) are negligible. 
For a compressible isotropic skeleton (\[rhs\]) yields $$\label{pe2} \beta \partial_t p^0 - \nabla \cdot <{\bf k}>\cdot\nabla p^0 = -\alpha\partial_t\nabla\cdot{\bf u}^0,$$ where $\beta = <\nabla_{x'}\cdot{\bf C}>$ is the bulk compliance of the solid skeleton and $\alpha= \phi_f - <\nabla_{x'}\cdot {\bf B}>_{ii}/3$ is the effective fluid volume fraction. In general if the solid is treated as compressible, the fluid must also be treated as such since their bulk moduli are comparable. Thus, $\beta$ is really a measure of the compressibility when the system is jacketed, so that for a mixture of an incompressible solid and fluid, $\beta=0$. Multiple scale analysis (Auriault & Sanchez-Palencia 1977) shows that $<\nabla_{x'}\cdot {\bf B}> = <{\bf A:e(C)}>=\gamma$ so that (\[pe1\]) takes the final form $$\label{lpe} <\boldsymbol{\sigma}^0> = 2\mu{\bf e(u}^0) + \lambda \nabla \cdot {\bf u}^0\,{\bf I} - \alpha p^0{\bf I}.$$ Equations (\[pe3\]), (\[pe2\]) and (\[lpe\]) are the equations of poroelasticity, identical in form to the equations written down by Biot (1941). Removing the brackets and superscripts we recover equations (\[CE\]) and (\[Cont\]) from §\[goveq\]. Studying poroelasticity from the microstructural point of view allows us to see that Biot’s (1941) equations correspond to a locally compressible solid skeleton and the equations of mixture theory (Barry & Holmes 2001) correspond to an incompressible solid skeleton. \[plate\] The equations of motion for a sheet of thickness $H$ and length $L$ are found using the same techniques as for a filament. The displacement field ${\bf u}=(u(y),v(y),0)$ is two dimensional, where the $y$ direction is normal to the neutral surface and the free surfaces are located at $y=\pm H/2$. We use the following dimensionless parameters $$\begin{aligned} t = (\beta + \frac{\alpha^2}{2\mu + \lambda})\frac{H^2}{k} \,t', ~~~~~~ p= \frac{2\mu\alpha}{[\beta(2\mu+\lambda)+\alpha^2]}\frac{H^2}{L^2}\,p', \nonumber \\ y=H\, y', ~~~ \sigma_{xx} = \frac{4\mu(\mu+\lambda)}{2\mu+\lambda}\frac{H^2}{L^2}\, \sigma_{xx}', ~~~ P = \frac{\mu(\mu+\lambda)H^3}{3(2\mu+\lambda)L^2}\,P'.\end{aligned}$$ The dimensionless parameter $\delta$ characterising the ratio of the fluid stress to the solid stress is $$\delta = \frac{12\mu\alpha^2}{(\mu+\lambda)[\beta(2\mu+\lambda)+\alpha^2]}.$$ The pressure field is found by solving the 1-dimensional diffusion equation $$\partial_tp - \partial_{yy}p = -y\,\partial_{xt}\theta,$$ with the boundary conditions $$\begin{aligned} \partial_y p + \hbox{Bi}\,p = 0 ~~ \hbox{at} ~~ y= 1/2, \nonumber \\ -\partial_y p + \hbox{Bi}\,p = 0 ~~ \hbox{at} ~~ y=-1/2. \end{aligned}$$ Then $$p=-\sum_n\chi_n\,\sin \sqrt{\lambda_n}y\,\int_0^te^{-\lambda_n(t-t')}\partial_{xt'}\theta\,dt',$$ where the $\lambda_n$ satisfy $$\label{B6} \sqrt{\lambda_n} \cos\frac{\sqrt{\lambda_n}}{2} + Bi\,\sin\frac{\sqrt{\lambda_n}}{2}=0,$$ and $\chi_n$ and $\gamma_n$ are given by $$\label{B7} \chi_n=\frac{2(2+Bi)\sin\frac{\sqrt{\lambda_n}}{2}}{\lambda_n(1-\frac{\sin\sqrt{\lambda_n}}{\sqrt{\lambda_n}})},~~~ \gamma_n=\frac{2(2+Bi)^2\sin^2\frac{\sqrt{\lambda_n}}{2}}{\lambda_n^2(1-\frac{\sin\sqrt{\lambda_n}}{\sqrt{\lambda_n}})}.$$ Equations (\[B6\]) and (\[B7\]), together with equation (\[poro\]), describe the motion of a poroelastic plate with time-dependent plane stress. \[5\] In this section we construct the equilibrium equations for a thin poroelastic rod whose deformation is not necessarily in the plane. The case of a purely elastic filament is treated in Love (1944). 
The configuration is given by the position of the centre line and the orientation of its cross-section at every point along it. At every point along the centre line of the rod ${\bf r}(x,t) = (X(x,t),Y(x,t),Z(x,t))$, where $x$ is the arc-length, we consider the orthogonal triad ${\bf d}_i(x,t)$, $i=1,2,3,$ where ${\bf d}_1$ and ${\bf d}_2$ lie along the principal axes of the cross-section of the rod and $$\label{kine} {\bf d}_3 = \partial_x{\bf r}$$ is the vector tangent to the centre line. The orientation is determined by a body-fixed director frame that allows us to consider finite deformations. In this case we use the director basis to follow the evolution of the fluid pressure field. The vector of strains $ \kappa$ is given by $$\hbox{\boldmath$\kappa$\unboldmath} = \kappa^{(1)}{\bf d}_1 +\kappa^{(2)}{\bf d}_2 + \Omega {\bf d}_3,$$ which defines the rotation of the principal axes along the filament. Here $\kappa^{(1)}$ and $\kappa^{(2)}$ are the projections of the curvature of the centre-line onto the principal axes of the cross-section and $\Omega$ is the twist strain. The directors evolve along the rod according to $$\partial_x {\bf d}_i = \hbox{\boldmath$\kappa$\unboldmath} \times {\bf d}_i.$$ The stress resultant vector ${\bf F}(x,t)$ and the couple resultant vector ${\bf M}(x,t)$ at any cross-section can be written as $$\begin{aligned} {\bf F} = \sum_{i=1}^3F^{(i)}(x,t){\bf d}_i(x,t), ~~~ {\bf M} = \sum_{i=1}^3M^{(i)}(x,t){\bf d}_i(x,t),\end{aligned}$$ where $F^{(1)}$ and $F^{(2)}$ are the shear forces and $M^{(1)}$ and $M^{(2)}$ are the bending moments along the principal axes, $F^{(3)}$ is the tensile force and $M^{(3)}$ is the twisting moment. Since the equation for the diffusion of pressure is linear we consider the bending about the principal axes separately. In light of equation (\[fmom\]) above ($\partial_x\theta$ being the curvature along one of the principal axes) we can write equations for the dimensionless couple resultant vector ${\bf M}$ $$\begin{aligned} \label{MM} M^{(1)} = \kappa^{(1)} + \delta \sum_{n=1}^\infty \gamma_n \int_0^te^{-\lambda_n(t-t')}\partial_{t'}\kappa^{(1)} dt', \nonumber \\ M^{(2)} = \kappa^{(2)} + \delta \sum_{n=1}^\infty \gamma_n \int_0^te^{-\lambda_n(t-t')}\partial_{t'}\kappa^{(2)} dt', \nonumber \\ M^{(3)} = C\,\Omega,\end{aligned}$$ where $C$ ($=\frac{2(\lambda+\mu)}{3\lambda+2\mu}$ for a circular rod) is the dimensionless torsional rigidity (normalised by the bending contribution to $M^{(1)}$), $\Omega$ is the dimensionless twist strain, and the $\lambda_n$ are determined by solving equation (\[lam\]). We note that the twisting moment has no poroelastic contribution because it is purely a shear deformation, and poroelastic effects arise only from volumetric deformations as seen in equation (\[Cont\]). Finally, the local balance of forces and torques gives the equilibrium equations $$\begin{aligned} \label{FFF} \partial_x{\bf F} + {\bf F}_{external} = 0, \\ \label{MMM} \partial_x{\bf M} + {\bf d}_3 \times {\bf F} = 0,\end{aligned}$$ where ${\bf F}_{external}$ is the external body force acting on the cross-section. The complete set of equations that determine the poroelastic behaviour of a filament are (\[kine\]),(\[MM\]),(\[FFF\]) and (\[MMM\]). A small numerical illustration of how the modal quantities entering (\[MM\]) can be evaluated is given below. [^1]: current address: Harvard University, Cambridge, MA 02138, USA; [*Email: lm@deas.harvard.edu*]{}
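Although the analysis above is entirely closed-form, the modal quantities appearing in (\[MM\]) are straightforward to evaluate numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; the Biot number, the stress ratio $\delta$, the number of retained modes and the curvature history are illustrative choices, not values taken from the text). It finds the roots $\lambda_n$ of the plate relation (\[B6\]), forms the corresponding weights $\gamma_n$ from (\[B7\]), and then evaluates a hereditary bending moment of the form appearing in (\[MM\]) for a prescribed curvature history.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import trapezoid

# Illustrative (assumed) parameters -- not taken from the text.
Bi = 1.0       # Biot number appearing in the boundary conditions
delta = 0.3    # ratio of fluid stress to solid stress
N_modes = 20   # number of modes retained in the sums

def char_eq(s):
    # (B6) written in terms of s = sqrt(lambda_n)
    return s * np.cos(0.5 * s) + Bi * np.sin(0.5 * s)

# The n-th positive root lies in ((2n-1)*pi, 2n*pi), where char_eq changes sign.
s_n = np.array([brentq(char_eq, (2 * n - 1) * np.pi, 2 * n * np.pi)
                for n in range(1, N_modes + 1)])
lam = s_n ** 2

# Modal weights gamma_n from (B7).
gamma = (2.0 * (2.0 + Bi) ** 2 * np.sin(0.5 * s_n) ** 2
         / (lam ** 2 * (1.0 - np.sin(s_n) / s_n)))

# Hereditary bending moment, cf. the first line of (MM):
#   M = kappa + delta * sum_n gamma_n * int_0^t exp(-lambda_n (t-t')) dkappa/dt' dt'
# evaluated here for an illustrative smooth curvature ramp kappa(t).
t = np.linspace(0.0, 5.0, 2001)
kappa = 1.0 - np.exp(-2.0 * t)
dkappa = np.gradient(kappa, t)

M1 = np.empty_like(t)
for i, ti in enumerate(t):
    kernels = np.exp(-np.outer(lam, ti - t[:i + 1]))          # shape (N_modes, i+1)
    memory = trapezoid(kernels * dkappa[:i + 1], t[:i + 1], axis=1)
    M1[i] = kappa[i] + delta * np.sum(gamma * memory)

print("early-time moment:", M1[1], "  late-time moment:", M1[-1],
      "  elastic value:", kappa[-1])
```

At late times the memory integrals decay and the computed moment relaxes towards the purely elastic value $\kappa^{(1)}$, consistent with the remark above that the poroelastic contribution is transient and comes only from volumetric deformations.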
--- abstract: 'In this paper we consider a $d$-dimensional ($d=1,2$) parabolic-elliptic Keller-Segel equation with a logistic forcing and a fractional diffusion of order $\alpha \in (0,2)$. We prove uniform in time boundedness of its solution in the supercritical range $\alpha>d\left(1-c\right)$, where $c$ is an explicit constant depending on parameters of our problem. Furthermore, we establish sufficient conditions for $\|u(t)-u_\infty\|_{L^\infty}\rightarrow0$, where $u_\infty\equiv 1$ is the only nontrivial homogeneous solution. Finally, we provide a uniqueness result.' address: - | Institute of Mathematics, Polish Academy of Sciences, Warsaw, Śniadeckich 8, 00-656, Poland\ OxPDE, Mathematical Institute, University of Oxford, UK - 'Univ Lyon, Université Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, 43 blvd. du 11 novembre 1918, F-69622 Villeurbanne cedex, France.' author: - Jan Burczak - 'Rafael Granero-Belinchón' bibliography: - 'bibliografia.bib' title: 'Boundedness and homogeneous asymptotics for a fractional logistic Keller-Segel equations' --- Introduction ============ We consider the following drift-diffusion equation on ${{\mathbb T}}^d=[-\pi,\pi]^d$ with periodic boundary conditions, $d=1,2$ (equivalently, on $\mathbb{S}^d$) $$\begin{aligned} \label{eqDD} {\partial_t}u&=- \Lambda^\alpha u+\chi\nabla\cdot(u \nabla v)+ru(1-u),& \text{ in }(x,t)\in {{\mathbb T}}^d\times (0,\infty)\\ \Delta v -v&=u, &\text{ in }(x,t)\in {{\mathbb T}}^d\times(0,\infty)\label{eqDD3}\\ u(x,0)&=u_0(x)\geq0 &\text{ in }x\in {{\mathbb T}}^d,\label{eqDD2}\end{aligned}$$ where $\Lambda^\alpha =(-\Delta)^{\alpha/2}$ with $0<\alpha< 2$. In this paper we will assume that $r<\chi$, which is the most difficult case from the perspective of our goal: studying the large time behaviour. This is due to the fact that, in this regime, the potentially ‘destabilizing’ term, whose influence is measured by $\chi>0$, is relatively powerful compared to the ‘homeostatic force’ quantified by $r>0$. Let us note that (\[eqDD\])-(\[eqDD2\]) can be written as the following active scalar equation $${\partial_t}u=- \Lambda^\alpha u+\chi\nabla\cdot(u B(u))+ru(1-u) \qquad \text{ in }(x,t)\in {{\mathbb T}}^d\times(0,\infty),$$ where the nonlocal operator $B$ is defined as $$B(u)=\nabla(\Delta-1)^{-1}u.$$ In the remainder of this introduction, let us discuss some of the reasons for dealing with problem (\[eqDD\])-(\[eqDD2\]) and the known results. Classical Patlak-Keller-Segel system ------------------------------------ Our interest in (\[eqDD\])-(\[eqDD2\]) follows from aggregation equations related to the Patlak-Keller-Segel system. The classical (parabolic-elliptic) Patlak-Keller-Segel equation reads $$\label{cKS2} \begin{aligned} {\partial_t}u &= \Delta u + \chi \nabla\cdot(u \nabla v ), \\ \Delta v- \nu v&= u. \end{aligned}$$ This system models *chemotaxis*, *i.e.* a chemically-induced motion of cells and certain simple organisms (e.g. bacteria, slime mold). In its more general version, it was proposed by Patlak [@patlak1953random] (in a different context of mathematical chemistry, hence his name is sometimes not used in the mathematical biology context) and Keller & Segel [@keller1970initiation; @keller1971model; @keller1980assessing], see also reviews by Blanchet [@blanchet2011parabolic] and Hillen & Painter [@Hillen3]. In the biological interpretation, $u$ denotes the density of cells (organisms) and $v$ stands for the density of a chemoattractant. We will restrict ourselves to the (biologically relevant) case of $u \ge 0$, ensured by $u_0 \ge 0$.
The parameter $\chi>0$ quantifies the sensitivity of organisms to the attracting chemical signal and $\nu \ge 0$ models its decay[^1]. Since $\nu>0$ plays the role of a damping, let us for a moment consider (\[cKS2\]) on ${\mathbb R}^2$ with $\nu=0$. Note that equation (\[cKS2\]) preserves the total mass ($\|u(0)\|_{L^1} = \|u(t)\|_{L^1}$). Furthermore, in the case $\nu=0$, the space $L^1$ is invariant under the scaling of the equation. It turns out that, despite its simplicity, the Patlak-Keller-Segel equation reveals in this setting an interesting global smoothness/blowup dichotomy. Namely, for $\|u(0)\|_{L^1} > {8 \pi}{\chi^{-1}}$ the classical solutions blow-up in $L^\infty$-norm in a finite time, for $\|u(0)\|_{L^1} < {8 \pi}{\chi^{-1}}$ they exist for all times (and are bounded), whereas for $\|u(0)\|_{L^1} = {8 \pi}{\chi^{-1}}$ they exist for all times but their $L^\infty$-norm grows to infinity in time. The related literature is abundant, so let us only mention here the seminal results by Jäger & Luckhaus [@jager1992explosions] and Nagai [@nagai1995blow], the concise note by Dolbeault & Perthame [@Dolbeault2], where the threshold mass $8 \pi \chi^{-1}$ is easily traceable, as well as Blanchet, Carrillo & Masmoudi [@BCM], focused precisely on the threshold mass case. Generalisations --------------- Our system (\[eqDD\])-(\[eqDD2\]) differs from (\[cKS2\]) in two aspects: it involves the semilinearity $ru(1-u)$ and the fractional diffusion. We explain below the applicational and analytical reasons to consider each of these modifications separately. ### Motivation for the logistic term {#ssec121} Introduction of the logistic term $r u (1-u)$ in a biology-related equation is the (second) most classical way to take population dynamics into account (after the Malthusian exponential models, that do not cover the full lifespan of a population), compare formula (3) of Verhulst [@Verhulst] and model M8 of [@Hillen3] in the context of chemotaxis. In agreement with the homeostatic character of the logistic function, the equation $$\label{cKS2l} \begin{aligned} {\partial_t}u &= \Delta u + \chi \nabla\cdot(u \nabla v ) +ru(1-u), \\ \Delta v- \nu v&= u, \end{aligned}$$ is less prone to admit solutions that blow-up for $r>0$ than for $r=0$, compare Tello & Winkler [@TelloWinkler]. Interestingly, blowups are in fact excluded for *any initial mass*, regardless of its relation to the parameters $r, \chi$. For further results, including the parabolic-parabolic case, we refer to Winkler [@Winkler4; @winkler2014global]. Let us note that a logistic term appears in the three-component urokinase plasminogen invasion model (see Hillen, Painter & Winkler [@Hillen1]) and in a chemotaxis-haptotaxis model (see Tao & Winkler [@TaoWinkler]). The question of the nonlinear stability of the homogeneous solution $u_\infty\equiv 1$, $v_\infty\equiv-1$ has received a lot of interest recently. For instance, Chaplain & Tello [@chaplain2016stability] and Galakhov, Salieva & Tello [@galakhov2016parabolic] (see also Salako $\&$ Shen [@salako2017global]) studied the parabolic-elliptic Keller-Segel system and proved that if $r>2\chi$ then $ \|u(t)-1\|_{L^\infty({{\mathbb T}}^d)}\rightarrow 0. $ Let us note that the authors in [@chaplain2016stability; @galakhov2016parabolic] did not provide any explicit rate of convergence.
In the case of the doubly parabolic Keller-Segel system, the question of stability of the homogeneous solution was addressed by Lin & Mu [@lin2016global], Winkler [@winkler2014global], Xiang [@xiang2016strong] and Zheng [@zheng2017boundedness], see also Tello & Winkler [@TelloWinkler2]. For conditions forcing the solutions to vanish, compare Lankeit [@Lankeit15]. ### Motivation for the fractional diffusion Since the 1990s, strong theoretical and empirical evidence has appeared for replacing the classical diffusion with a fractional one in Keller-Segel equations: $\Lambda^\alpha$, $\alpha<2$ instead of the standard $-\Delta=\Lambda^{2}$. Namely, in low-prey-density conditions, feeding strategies based on a Lévy process (generated in its simplest isotropic-$\alpha$-stable version by $(-\Delta)^\frac{\alpha}{2} u$) are closer to optimal ones from the theoretical viewpoint than strategies based on the Brownian motion (generated by $-\Delta u$). Furthermore, these strategies based on a Lévy process are actually used by certain organisms. The interested reader can consult Lewandowsky, White & Schuster [@Lew_nencki] for amoebas, Klafter, Lewandowsky & White [@Klaf90] as well as Bartumeus et al. [@Bart03] for microzooplancton, Shlesinger & Klafter [@Shl86] for flying ants and Cole [@Cole] in the context of fruit flies. Surprisingly, even the feeding behavior of groups of large vertebrates is argued to follow Lévy motions, a fact sometimes referred to as the *Lévy flight foraging hypothesis*. For instance, one can read Atkinson, Rhodes, MacDonald & Anderson [@Atk] for jackals, Viswanathan et al. [@Vnature] for albatrosses, Focardi, Marcellini & Montanaro [@deers] for deer and Pontzer et al. [@hadza] for the Hadza tribe. Interestingly, the (fractional) Keller-Segel system can be recovered as limit cases of other equations. In this regard, Lattanzio & Tzavaras [@lattanzio2016gas] considered the Keller-Segel system as a high friction limit of the Euler-Poisson system with attractive potentials (note that the case with fractional diffusion corresponds to the nonlocal pressure law $p(u)=\Lambda^{\alpha-2}u(x)$) while Bellouquid, Nieto & Urrutia [@bellouquid2016kinetic] obtained the fractional Keller-Segel system as a hydrodynamic limit of a kinetic equation (see also Chalub, Markowich, Perthame & Schmeiser [@chalub2004kinetic], Mellet, Mischler & Mouhot [@mellet2011fractional], Aceves-Sanchez & Mellet [@aceves2016asymptotic] and Aceves-Sanchez & Cesbron [@aceves2016fractional]). In view of the last two paragraphs, our aim to consider the combined effect of a (regularizing) logistic term and a (weaker than classical) fractional diffusion is both analytically interesting and reasonable from the viewpoint of applications. Prior results for the Keller-Segel systems with fractional diffusions {#ssec1.5} --------------------------------------------------------------------- Let us now recall certain analytical results for the fractional Keller-Segel systems and their generalisations. The system is part of a larger family of aggregation-diffusion-reaction systems $$\label{eq:1} \left\{\begin{aligned} {\partial_t}u&=-\Lambda^\alpha u-\chi\nabla\cdot (u K(v))+F(u),\\ \tau{\partial_t}v&=\kappa\Delta v+G(u,v), \end{aligned}\right.$$ with $\alpha \in (0,2)$. The system is referred to as a ‘parabolic-parabolic’ one if $\tau,\kappa>0$, ‘parabolic-elliptic’ if $\tau=0$, $\kappa>0$ and ‘parabolic-hyperbolic’ if $\tau>0$, $\kappa=0$.
For a more exhaustive discussion of these models, we refer to the extensive surveys by Hillen & Painter [@Hillen3], Bellomo, Bellouquid, Tao & Winkler [@bellomo2015towards] and Blanchet [@blanchet2011parabolic]. In what follows, let us recall known results, in principle for the following (generic) choices $F(u) = r u (1-u)$, $r \ge 0$ and $G(u,v)=u-v$ or $G(u,v)=u$. The first interaction operator $K$ that one should have in mind is the most classical $K(v) = \nabla v$, but other choices are studied, that critically influence the system’s behavior. ### Case of no logistic term $r=0$ Since $ -\Lambda^\alpha u$ provides for $\alpha<2$ a weaker dissipation than the classical one, it is expected that a blowup may occur. This is indeed the case for the generic fractional parabolic-elliptic cases in $d\geq2$, compare for instance results by Biler, Cie[ś]{}lak, Karch & Zienkiewicz [@biler2014local] and Biler & Karch [@biler2010blowup]. The results for other interaction operators can be found in a vast literature on aggregation equations, not necessarily motivated by mathematical biology, including Biler, Karch & Laurençot [@biler2009blowup], Li & Rodrigo [@li2009finite; @li2010exploding; @li2009refined; @li2010wellposedness]. Naturally, there are small-data global regularity results available, compare e.g. Biler & Wu [@BilerWu] or [@BG2]. To the best of our knowledge, the question of global existence vs. finite time blow up of the fully parabolic Keller-Segel system ($\tau,\kappa>0$) with fractional diffusion and arbitrary initial data remains open (compare with Wu & Zheng [@WuZheng] and [@BG2]). Similarly, as far as we know, the finite time blow up for the parabolic-hyperbolic case (the extreme case $\kappa=0$), remains an open problem even for low values of $\alpha$, compare [@Ghyperparweak; @Ghyperparstrong]. The $1$d case received much attention in the recent years. Let us review here some of the related results - A majority of the currently available results concerns the parabolic-elliptic Keller-Segel system ($\tau=0$, $\kappa\neq0$). In this context it is natural to look for a minimal strength of diffusion that gives rise to global in time smooth solutions. Escudero [@escudero2006fractional] proved that $\alpha>1$ leads to global existence of solutions in the large (i.e. without data smallness). Next, Bournaveas & Calvez [@bournaveas2010one] obtained finite time blow up in the supercritical case $\alpha<1$ and established that for $\alpha=1$ there exists a (non-explicit) constant $K$ such that $\|u_0\|_{L^1}\leq K$ implies global in time solutions. Such a constant was later explicitly estimated as ${2\pi}^{-1}$ in Ascasibar, Granero-Belinchón and Moreno [@AGM] and improved in [@BG]. It was also conjectured that the case $\alpha=1$ is critical, i.e. that a large $\|u_0\|_{L^1}$ leads to a finite-time blowup, see [@bournaveas2010one]. Quite recently we were able to disprove that conjecture in [@BG3] by showing that, regardless of the size of initial data, the smooth solution exists for arbitrary large times (but our global bound is unfortunately not uniform in time yet). - The parabolic-parabolic problem was considered in [@BG2], both without the logistic term and with it. In the former case, beyond a typical short-time existence result and continuation criteria, we showed smoothness and regularity for $\alpha>1$ as well as, under data smallness, for $\alpha=1$. Further results for the logistic case will be recalled in the next section. 
- The parabolic-hyperbolic problem $\tau>0$, $\kappa=0$, was proposed by Othmer & Stevens [@stevens1997aggregation] as a model of the movement of myxobacteria. However, this model has also been used to study the formation of new blood vessels from pre-existing blood vessels (see Corrias, Perthame & Zaag [@corrias2003chemotaxis], Fontelos, Friedman & Hu [@fontelos2002mathematical], Levine, Sleeman, Brian & Nilsen-Hamilton [@levine2000mathematical], Sleeman, Ward & Wei [@sleeman2005existence]). Because of this, the system has captured the interest of numerous researchers (see [@corrias2003chemotaxis; @Ghyperparweak; @Ghyperparstrong; @fontelos2002mathematical; @zhang2013global; @li2010nonlinear; @xie2013global; @zhang2015global; @fan2012blow; @li2015initial; @mei2015asymptotic; @li2015quantitative; @li2009nonlinear; @zhang2007global; @li2014stability; @wang2008shock; @wang2016asymptotic; @li2011hyperbolic; @hao2012global; @li2012global] and the references therein). ### Case with logistic term $r>0$ Let us first quickly recall our $1$d results in [@BG2] for parabolic-parabolic fractional Keller-Segel with logistic term, beyond those holding without it. We obtained global-in-time smoothness for $\alpha \ge 1$. Interestingly, partially due to the logistic term, the considered system shows spatio-temporal chaotic behavior with peaks that emerge and eventually merge with other peaks. In that regard, we studied the qualitative properties of the attractor and obtained bounds for the number of peaks. This number may be related to the dimension of the attractor. Mathematically, this estimate was obtained with a technique applicable to other problems with chaotic behavior, compare for instance [@GH]. The currently available regularity results are much better for the parabolic-elliptic case. It turns out that the logistic term provides enough stabilisation to allow for global-in-time smooth solutions even for a certain ‘supercritical’ regime of diffusions $\alpha<d$. Namely, we have considered the $1$d case in [@BG4] and the $2$d case in [@burczak2016suppression]. Let us recall some of these results, focusing on the potentially most singular case $r< \chi$, since it is within the scope of this note. For any $$\alpha > d\left(1-\frac{r}{\chi}\right)$$ the problem (\[eqDD\])-(\[eqDD2\]) enjoys global in time smooth solutions, but with no uniform-in-time bounds (i.e. without excluding the infinite-time-blowup). More precisely, we obtained in [@BG4; @burczak2016suppression] that $$\begin{aligned} \max_{0\leq t \leq T}\|u\|_{L^{\frac{\chi}{\chi-r}}}&\leq e^{rT}\|u_0\|_{L^{\frac{\chi}{\chi-r}}}\label{bound2}\\ \max_{0\leq t \leq T}\|u\|_{L^{p}}&\leq c_1( e^{c_2 T} +1)\|u_0\|^{c_2}_{L^{p}} \quad \text{for any finite $p$} \label{bound2s}\\ \max_{0\leq t \leq T}\|u\|_{L^{\infty}}&\leq c_2 e^{c_1 T},\label{bound3}\end{aligned}$$ where $c_1 = c_1(\| u_0\|_1, p, r,\chi, \alpha,d)$ and $c_2 = c_2(p, r,\chi, \alpha,d)$ depend only on the indicated quantities. For $d=2$, the estimates (\[bound2\])-(\[bound3\]) are given, respectively, as (4.11) of Lemma 4.3 in [@burczak2016suppression], as Lemma 4.4 there, and by the computations leading to the estimate of Theorem 2 there. For $d=1$ the analogous results come from [@BG4] (some of them are not stated explicitly there, but they follow the lines of the $2$d case). Up to now, the only uniform in time bounds we were able to provide concerned the $1$d case and they were far from satisfactory. They involved either dissipations that clearly outweigh aggregation ($\alpha > d=1$) or certain smallness assumptions.
For instance, for $\alpha=1, d=1$ $$\label{eq:advS} \chi<r+\frac{1}{2\pi\max\{\|u_0\|_{L^1},2\pi\}}$$ implies a uniform-in-time bound $$\label{bound61} \max_{0\leq t \leq T}\|u\|_{L^{\infty}}\leq c_3(r,\chi,u_0),$$ see [@BG4], Proposition 1. Purpose of this note -------------------- In the case $r=0$, the system (\[eqDD\])-(\[eqDD2\]) with $\alpha<d$ develops finite-time blowups. Hence the regime $$d> \alpha > d\left(1-\frac{r}{\chi}\right),$$ where our just-recalled existence result on global-in-time smooth solutions holds, can be seen as an interesting ‘supercritical’ one. However, the non-uniformity in time of our global bounds (\[bound2\])-(\[bound3\]) appeared to us far from optimal. Consequently, in this note, we sharpen the estimates (\[bound2\])-(\[bound3\]) to time-independent ones. Moreover, we provide conditions that ensure the convergence of the solution $u$ towards the only nontrivial homogeneous steady state $u_\infty\equiv1$, including some speed of convergence estimates. We also present a ‘semi-strong’ uniqueness result. For statements of our results, we refer to Section \[sec:mr\]. Notation for functional spaces ------------------------------ Let us write ${\partial}^n,$ $n\in\mathbb{Z}^+$, for a generic derivative of order $n$. Then, the fractional $L^p$-based Sobolev spaces $W^{s,p}({{\mathbb T^d}})$ (also known as Sobolev-Slobodeckii or Besov spaces $B^{s,p}_p({{\mathbb T^d}})$) are $$W^{s,p} ({{\mathbb T^d}})=\left\{f\in L^p({{\mathbb T^d}}) \; | \quad {\partial}^{\lfloor s\rfloor} f\in L^p({{\mathbb T^d}}), \frac{|{\partial}^{\lfloor s\rfloor}f(x)-{\partial}^{\lfloor s\rfloor}f(y)|}{|x-y|^{\frac{d}{p}+(s-\lfloor s\rfloor)}}\in L^p({{\mathbb T^d}}\times{{\mathbb T^d}})\right\},$$ endowed with the norm $$\|f\|_{W^{s,p}}^p=\|f\|_{L^p}^p+\|f\|_{\dot{W}^{s,p}}^p,$$ $$\|f\|_{\dot{W}^{s,p}}^p=\|{\partial}^{\lfloor s\rfloor} f\|^p_{L^p}+\int_{{{\mathbb T^d}}}\int_{{{\mathbb T^d}}}\frac{|{\partial}^{\lfloor s\rfloor}f(x)-{\partial}^{\lfloor s\rfloor}f(y)|^p}{|x-y|^{d+(s-\lfloor s \rfloor)p}}dxdy.$$ In the case $p=2$, we write $H^s({{\mathbb T^d}})=W^{s,2}({{\mathbb T^d}})$ for the standard non-homogeneous Sobolev space with its norm $$\|f\|_{H^s}^2=\|f\|_{L^2}^2+\|f\|_{\dot{H}^s}^2, \quad \|f\|_{\dot{H}^s}=\|\Lambda^s f\|_{L^2}.$$ Next, for $s\in (0,1)$, let us denote the usual Hölder spaces as follows $$C^{s} ({{\mathbb T^d}})=\left\{f\in C({{\mathbb T^d}}) \;| \quad \frac{|f(x)-f(y)|}{|x-y|^{s}}\in L^\infty({{\mathbb T^d}}\times{{\mathbb T^d}})\right\},$$ with the norm $$\|f\|_{C^{s}}=\|f\|_{L^\infty}+\|f\|_{\dot{C}^{s}},\quad \|f\|_{\dot{C}^{s}}=\sup_{(x,y)\in{{\mathbb T^d}}\times{{\mathbb T^d}}}\frac{|f(x)-f(y)|}{|x-y|^{s}}.$$ For brevity, the domain dependance of a function space will be generally suppressed. Finally, we will use the standard notation for evolutionary (Bochner) spaces, writing $L^p(0,T; W^{s,p})$ etc. When the time and space domains are suppressed, the outer space always refers to time, i.e. $L^p(L^q)$ denotes $L^p(0,T; L^q ({{\mathbb T}}^d) )$. Main results and their discussion {#sec:mr} ================================= Classical solvability and uniform-in-time boundedness ----------------------------------------------------- In our first result, we prove that $\|u(t)\|_{L^\infty({{\mathbb T}}^d)}$ remains in fact uniformly bounded.
In order to compute the bound, let us introduce the following numbers $$\mathscr{C}_{d,\alpha}=2\left(\int_{{\mathbb R}^d}\frac{4\sin^2\left(\frac{x_1}{2}\right)}{|x|^{d+\alpha}} dx\right)^{-1}, \qquad \mathscr{P}_{d,\alpha}= \frac{2 \mathscr{C}_{d,\alpha}}{\left(2\pi \right)^{\alpha}d^{\frac{d+\alpha}{2}}},$$ and for any $\epsilon \in (0, r)$, $p=\frac{\chi}{\chi-r+\epsilon}$ $$\mathscr{M}_1(d,p,\alpha)=\left(\frac{\pi^{d/2}}{2^{1+p}}\int_{0}^\infty z^{d/2}e^{-z}dz\right)^{1/p}, \qquad \mathscr{M}_2(d,p,\alpha)=\mathscr{C}_{d,\alpha}\frac{\left(\frac{\pi^{d/2}}{\int_{0}^\infty z^{d/2}e^{-z}dz}\right)^{1+\alpha/d}}{4\cdot 2^{\frac{(p+1)\alpha}{d}}}$$ and quantities $$\mathcal{R}_0(r,\epsilon,\chi,d,\alpha, u_0)= \left(\frac{ r}{\mathscr{P}_{d,\alpha}} \left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}+\max \left\{ (2\pi)^{-d} \|u_0\|^2_{L^1({{\mathbb T}}^d)},(2\pi)^d\right\} \right)^ {1-\frac{r-\epsilon}{\chi}}$$ $$\mathcal{\tilde R}_2(r,\epsilon,\chi,d,\alpha)= \left(\frac{ r}{\mathscr{P}_{d,\alpha}} \left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}+ 3 (2\pi)^d \right)^ {1-\frac{r-\epsilon}{\chi}},$$ with the latter being a data-independent one. They are needed for (uniformly bounded in time) $$\begin{aligned} \mathcal{Q}_0 (t; r,\epsilon,\chi,d,\alpha, u_0) =& \|u_0\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} e^{- \mathscr{P}(d,\alpha) t } + (1- e^{- \mathscr{P}(d,\alpha) t }) \mathcal{R}_0 , \\ \mathcal{\tilde Q}_2 (t; r,\epsilon,\chi,d,\alpha, u_0) =& \|u({t_0= r^{-1} \ln2})\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} e^{- \mathscr{P}(d,\alpha) t } + (1- e^{- \mathscr{P}(d,\alpha) t })\mathcal{\tilde R}_2 \end{aligned}$$ that are involved in $$\mathscr{ R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) = 2 e^{-t} \|u_0\|_{L^\infty({{\mathbb T}}^d)} +2 \mathcal{Q}^{\frac{3}{\sigma}}_0 (\mathscr{M}_1+ {\left(\frac{4 \chi}{\mathscr{M}_2}\right)}^{ \frac{1}{2} +\frac{1}{\sigma}} +1 )$$ and $$\mathscr{ \tilde R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) = 2 e^{-t} \|u({t_0= r^{-1} \ln 2}) \|_{L^\infty({{\mathbb T}}^d)} +2 \mathcal{\tilde Q}^{\frac{3}{\sigma}}_2 (\mathscr{M}_1+ {\left(\frac{4 \chi}{\mathscr{M}_2}\right)}^{ \frac{1}{2} +\frac{1}{\sigma}} +1 )$$ again uniformly bounded in time. Observe that $$\label{Rinf} \mathscr{ \tilde R}_\infty (r,\epsilon,\chi,d,\alpha) = \lim_{t \to \infty} \mathscr{ \tilde R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) = 2 \mathcal{\tilde R}^{\frac{3}{\sigma}}_2 (\mathscr{M}_1+ {\left(\frac{4 \chi}{\mathscr{M}_2}\right)}^{ \frac{1}{2} +\frac{1}{\sigma}} +1 )$$ is additionally $u_0$-independent. Having the above notions, we are ready to state \[thm1\] Let $u_0\in H^{d+2}$ be nonnegative, $\alpha \in (0,2)$ and $\chi>r>0$. Then, as long as $$\alpha>d\left(1-\frac{r}{\chi}\right),$$ the problem - admits a nonnegative classical solution $$u \in C (0, T; H^{d+2} ({{\mathbb T}}^d)) \quad \cap \quad C^{2, 1} ({{\mathbb T}}^d \times (0,T))$$ for any finite $T$ (with nonpositive $v$ solving ). 
Moreover, $u$ is uniformly bounded in time: $$\label{boundn1} \begin{aligned} \|u(t)\|_{L^\infty({{\mathbb T}}^d)} &\leq \mathscr{ R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) \quad \forall_{t \ge 0}, \\ \|u(t)\|_{L^\infty({{\mathbb T}}^d)} &\leq \mathscr{ \tilde R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) \quad \forall_{t \ge r^{-1} \ln 2 }, \end{aligned}$$ hence in particular $$\label{boundninf} \limsup_{t \to \infty} \|u(t)\|_{L^\infty({{\mathbb T}}^d)} \le \mathscr{ \tilde R}_\infty (r,\epsilon,\chi,d,\alpha)$$ no matter what the initial data are. Let us recall that the condition $\chi>r$ was chosen not as a simplification, but to focus ideas. In fact, it is the most demanding case: in the complementary case $r>\chi$ a uniform bound follows at once. Indeed, writing $\bar u (t) = \max_x u (x, t) = u (x_t, t) $, $v_u (t) = v (x_t, t)$ we have the ODI (formally, but easily made rigorous) $$\frac{d}{dt} \bar u \le 0 + \chi (0 + \bar u \Delta v_u) + r \bar u - r \bar u^2 \le \chi \bar u (\bar u + v_u) + r \bar u - r \bar u^2 \le r \bar u - (r- \chi) \bar u^2$$ which is a Bernoulli ODI, so for $r>\chi$ $$\bar u (T) \le \frac{ \bar u (0) }{e^{-rT} + \frac{r- \chi}{r} (1-e^{-rT}) \bar u (0)} \to r/ (r- \chi) \quad \text{ as } T \to \infty.$$ Stability of the homogeneous solution $u_\infty=1$, $v_\infty=-1$ ----------------------------------------------------------------- The first result here ensures the exponential convergence $u\rightarrow u_\infty$, as long as $\chi$ and $r$ are close to each other in terms of the initial data. Let us define the time-independent upper bound for $\mathscr{ R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0)$ of (\[boundn1\]) via $$\mathscr{ \bar R}_3 (r,\epsilon,\chi,d,\alpha, u_0) = 2 \|u_0\|_{L^\infty({{\mathbb T}}^d)} +2 \mathcal{Q}^{\frac{3}{\sigma}}_0 (\mathscr{M}_1+ {\left(\frac{4 \chi}{\mathscr{M}_2}\right)}^{ \frac{1}{2} +\frac{1}{\sigma}} +1 ).$$ \[thm2\] Let $u\in H^{d+2}$ be the classical solution to (\[eqDD\])-(\[eqDD2\]) starting from $u_0\in H^{d+2}$, $u_0 \not\equiv 0$, $u_0 \ge 0$. Assume that $\alpha \in (0,2)$ and $\chi>r>0$. Let $\alpha$ be such that $$\alpha>d\left(1-\frac{r}{\chi}\right).$$ Moreover, assume that $\chi$ and $r$ are close enough in terms of the data, so that $$\label{ssc1} -\gamma:=2\chi-r+2(\chi-r)\left( \mathscr{ \bar R}_3 (r,\epsilon,\chi,d,\alpha, u_0) -1\right)-\frac{(2\pi)^d\mathscr{C}_{d,\alpha}}{({2}\pi\sqrt{d})^{d+\alpha}} < 0.$$ Then $$\|u(t)-1\|_{L^\infty({{\mathbb T}}^d)}\leq \left(\|u(t)\|_{L^\infty({{\mathbb T}}^d)}-\min_{x\in{{\mathbb T}}^d}u(x,t)\right)\leq \left(\|u_0\|_{L^\infty({{\mathbb T}}^d)}-\min_{x\in{{\mathbb T}}^d}u_0(x)\right)e^{-\gamma t}.$$ \[cor:1\] In condition (\[ssc1\]) we can replace $\mathscr{\bar R}_3$ with the initial-data independent $\mathscr{ \tilde R}_\infty$ of (\[Rinf\]), at the cost of having the statement valid only for times $t^* \ge t (r,\epsilon,\chi,d,\alpha, u_0)$. Our second stability result for $u_\infty=1$, $v_\infty=-1$ concerns the ‘critical’ case $\alpha =1$ in $d=1$. We do not obtain a rate of convergence as before, but the conditions on $\chi$ and $r$ are straightforward. \[thm2b\] Assume that $d=\alpha=1$. Let $u\in H^{3}$ be the classical solution to (\[eqDD\])-(\[eqDD2\]) starting from $u_0\in H^{3}$, $u_0 \not\equiv 0$, $u_0 \ge 0$.
Assume that $\alpha \in (0,2)$, $\chi > r >0$ and $$\chi<\frac{1}{8\pi^2}.$$ Then we have that $$\|u(t)-1\|_{L^\infty({{\mathbb T}}^d)}\rightarrow 0.$$ Let us recall from Section \[ssec121\] that the question of the nonlinear stability of the homogeneous solution $u_\infty\equiv 1$, $v_\infty\equiv-1$ in the classical (more diffusive) setting has received a lot of interest recently [@chaplain2016stability; @galakhov2016parabolic; @salako2017global]. Uniqueness ---------- The previous theorems employ notions of solutions with high regularity. However, for the sake of our uniqueness result (Theorem \[thm3\] below), let us introduce \[def:1\] If a function $$u \in L^2(0,T;L^2({{\mathbb T}}^d))$$ satisfies (\[eqDD\])-(\[eqDD2\]), in its active scalar form, in the following sense $$\begin{aligned} {\int_{{{\mathbb T}}^ d} u_0 \varphi(0)dx} -\int_0^T \int_{{{\mathbb T}}^ d} u {\partial_t}\varphi dx ds +\int_0^T \int_{{{\mathbb T}}^ d} u \Lambda^{\alpha} \varphi dxds &= -\chi \int_0^T \int_{{{\mathbb T}}^ d} (u B(u)) \nabla \varphi dxds \\ &\quad+ r \int_0^T \int_{{{\mathbb T}}^ d} u(1-u) \varphi dxds\end{aligned}$$ for a sufficiently smooth $\varphi$, it is called a *distributional solution* to (\[eqDD\])-(\[eqDD2\]). It holds \[thm3\] Let $\alpha>1$ and $r \ge 0$. Nonnegative solutions $u\in L^\infty (L^2) \cap L^2 (H^\alpha)$ to (\[eqDD\])-(\[eqDD2\]) are unique (in the $L^\infty (L^2) \cap L^2 (H^\alpha)$ class). Discussion ---------- In Theorem \[thm1\], we prove the uniform-in-time boundedness of the solution to (\[eqDD\])-(\[eqDD2\]) (regardless of the size of the initial data) when the dissipation strength lies in the regime $$\alpha>d\left(1-\frac{r}{\chi}\right).$$ In particular, the estimate (\[boundn1\]) in Theorem \[thm1\] sharpens (\[bound2\])-(\[bound3\]) both by excluding any (in particular, exponential) dependence on $T$ and by removing additional assumptions such as (\[eq:advS\]). Let us also mention that we provide our results via a new and much shorter reasoning than that of [@BG4; @burczak2016suppression]. In Theorem \[thm2\] and its Corollary \[cor:1\] we prove some conditions (one of them depends on lower norms of $u_0$) that lead to the nonlinear stability of the homogeneous solution $u_\infty\equiv1$. Furthermore, Theorem \[thm2\] also proves that the decay towards the equilibrium state $u_\infty$ is exponential with an explicitly computable rate. In Theorem \[thm2b\], we show for the case $d=\alpha=1$ that $\chi< (8 \pi^2)^{-1}$ suffices for $u_\infty \equiv 1$ to be asymptotically stable. Let us emphasize that it is a phenomenon *independent of* $u_0$. Let us compare our results with the previous ones in [@chaplain2016stability; @galakhov2016parabolic; @salako2017global]: - Both Theorems \[thm2\] and \[thm2b\] cover the case of (weaker) fractional dissipations $0<\alpha<2$, while the previous ones hold for the classical Laplacian $\alpha=2$, but, on the other hand, some of the previous results are valid for an arbitrary space dimension. - Both our Theorems \[thm2\] and \[thm2b\] consider the case $r<\chi$, while the previous ones impose at best $r>2\chi$. - Theorem \[thm2\] provides an exponential decay with a computable rate. As far as we know, the available uniqueness results for Keller-Segel-type systems are standard ones, regarding uniqueness of classical solutions. Theorem \[thm3\] indicates in particular that classical solutions are unique within any $L^\infty (L^2) \cap L^2 (H^\alpha)$ solution, as long as $\alpha>1$. On one hand, it is a relaxation of standard classical uniqueness, but on the other hand it would be more natural to look for a weak-strong uniqueness result, with the ‘weak’ part related to certain simple global energy estimates.
However, since these are actually no more than $L^1$, $L\log L$ ones (or slightly better with the logistic term, compare Lemma 3 of [@BG4]), a satisfactory weak-strong result remains open for now. Proof of Theorem \[thm1\] (classical solvability and uniform-in-time bounds) ============================================================================ The stated regularity $$u \in C (0, T; H^{d+2} ({{\mathbb T}}^d)) \quad \cap \quad C^{2, 1} ({{\mathbb T}}^d \times (0,T))$$ for any finite $T$, follows from the main results of [@BG4; @burczak2016suppression]. For nonnegative data $u_0$, the corresponding solution $u(t)$ is also nonnegative. Furthermore the solution to (\[eqDD3\]) satisfies $$\label{eq:signv} v \le 0.$$ To see this, it suffices to consider $x_t$ such that $$v (t, x_t) = \max_{y} v (t, y),$$ then we have $$0 \le u(x_t) = \Delta v(x_t) - v(x_t) \implies v(x_t) \le \Delta v(x_t) \le 0.$$ Furthermore it holds $$\label{propv} -\min_x u(x,t)\geq v(x,t)\geq -\max_x u(x,t).$$ Therefore, to conclude Theorem \[thm1\] it suffices to prove the uniform estimate (\[boundn1\]). Define $$\label{R1} \mathcal{R}_1(u_0,d)=\max\left\{\|u_0\|_{L^1({{\mathbb T}}^d)},(2\pi)^d\right\}.$$ Then the solution $u$ verifies the following estimates $$\begin{aligned} \sup_{0\leq t<\infty}\|u(t)\|_{L^1({{\mathbb T}}^d)}&\leq \mathcal{R}_1(u_0,d), \label{LinfL1}\\ \limsup_{t\rightarrow\infty}\|u(t)\|_{L^1({{\mathbb T}}^d)}&\leq (2\pi)^d, \label{LinfL12}\\ \int_0^T\|u(s)\|_{L^2({{\mathbb T}}^d)}^2ds&\leq \frac{\|u_0\|_{L^1({{\mathbb T}}^d)}}{r}+\mathcal{R}_1(u_0,d)T, \label{L2L2}\end{aligned}$$ The estimate (\[L2L2\]) follows for $d=2$ from [@burczak2016suppression], Lemma 4.3 and for $d=1$ from [@BG4], Lemma 4. To justify the estimates (\[LinfL1\]), (\[LinfL12\]), we use for $\eta (t) = \|u(t)\|_{L^1({{\mathbb T}}^d)}$ the (Bernoulli) ODI $$\label{odi} \frac{d}{dt} \eta (t) \leq r \eta (t) - r(2\pi)^{-d} \eta^2 (t)$$ following from integration in space of (\[eqDD\]) and the Jensen inequality. Introducing $\kappa$ through $1= \kappa (\delta + \eta)$ we obtain after $\delta \to 0$ that $$\eta (T) \le \frac{\eta (0) }{e^{-rT} + \frac{r(2\pi)^{-d}}{r} (1-e^{-rT}) \eta (0)}.$$ Hence $$\|u(T)\|_{L^1({{\mathbb T}}^d)} \le \frac{ \|u_0\|_{L^1({{\mathbb T}}^d)}}{e^{-rT} + (2\pi)^{-d} (1-e^{-rT}) \|u_0\|_{L^1({{\mathbb T}}^d)}}. \label{LinfL3}$$ Considering (\[LinfL3\]) for large $T$ implies (\[LinfL12\]), while the uniform bound for the r.h.s. of (\[LinfL3\]) implies (\[LinfL1\]). The next lemma, stating the uniform-in-time boundedness in some $L^p$ norm for $p$ very close to $1$, will be useful. For its formulation, let us recall that $\mathscr{P}(d,\alpha)$ comes from Lemma \[lemapoincare\] and let us define $$\label{R2} \mathcal{R}_2 = \mathcal{R}_2(r,\epsilon,\chi,d,\alpha, u_0)= \left(\frac{ r\left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}}{\mathscr{P}(d,\alpha)}+\frac{\mathcal{R}_1(u_0,d)^2}{(2\pi)^d}+\mathcal{R}_1(u_0,d)\right)^ {1-\frac{r-\epsilon}{\chi}},$$ with $\mathcal{R}_1(u_0,d)$ defined in (\[R1\]), as well as recall that $\mathcal{\tilde R}_2$ is defined as $$\label{R2t} \mathcal{\tilde R}_2 = \mathcal{\tilde R}_2(r,\epsilon,\chi,d,\alpha)= \left(\frac{ r\left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}}{\mathscr{P}(d,\alpha)}+ 3 (2\pi)^d \right)^ {1-\frac{r-\epsilon}{\chi}}.$$ Observe that the last quantity is data-independent. \[keylemma\] Let $d=1$ or $2$ and $$u \in C (0, T; H^{d+2} ({{\mathbb T}}^d)) \quad \cap \quad C^{2, 1} ({{\mathbb T}}^d \times (0,T))$$ solve (\[eqDD\])-(\[eqDD2\]) starting from nonnegative $u_0\in H^{d+2}$.
Assume that $\chi>r$, fix any $0<\epsilon<r$. Then $$\label{eqLp} \max_{0\leq t <\infty}\|u\|_{L^{\frac{\chi}{\chi-r+\epsilon}}({{\mathbb T}}^d)}\leq \|u_0\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} e^{- \mathscr{P}(d,\alpha) t } + (1- e^{- \mathscr{P}(d,\alpha) t }) \mathcal{R}_2(r,\epsilon,\chi,d,\alpha, u_0).$$ Furthermore, we have that $$\label{limsupLpa} \max_{r^{-1} \ln2 \leq t <\infty} \|u(t)\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} \le \|u({t_0= r^{-1} \ln2})\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} e^{- \mathscr{P}(d,\alpha) t } + (1- e^{- \mathscr{P}(d,\alpha) t })\mathcal{\tilde R}_2(r,\epsilon,\chi,d,\alpha),$$ which gives in particular that $$\label{limsupLp} \limsup_{t \to \infty} \|u(t)\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} \le \mathcal{\tilde R}_2(r,\epsilon,\chi,d,\alpha).$$ For an $s>0$ (to be fixed below), we compute $$\begin{aligned} \frac{1}{1+s}\frac{d}{dt}\|u\|_{L^{1+s}({{\mathbb T}}^d)}^{1+s}+\int_{{{\mathbb T}}^d} u^s(x)\Lambda^{\alpha}u(x)dx\leq\left(\chi\frac{s}{1+s}-r\right)\int_{{{\mathbb T}}^d} u^{2+s}(x)dx+r\|u\|_{L^{1+s}({{\mathbb T}}^d)}^{1+s},\end{aligned}$$ where $\Delta v = u + v \le u $ was used, see . Using Lemma \[lemapoincare\], we find that $$\begin{aligned} \frac{1}{1+s}\frac{d}{dt}\|u\|_{L^{1+s}({{\mathbb T}}^d)}^{1+s}+\mathscr{P}(d,\alpha)\|u\|_{L^{1+s}({{\mathbb T}}^d)}^{1+s}&\leq \left(\chi\frac{s}{1+s}-r\right)\|u\|_{L^{2+s}({{\mathbb T}}^d)}^{2+s}+r\|u\|_{L^{1+s}({{\mathbb T}}^d)}^{1+s}\\ &\quad+\frac{\mathscr{P}(d,\alpha)}{(2\pi)^d}\|u\|_{L^1({{\mathbb T}}^d)}\left(\int_{{{\mathbb T}}^d}u^s(x)dx\right),\end{aligned}$$ with $\mathscr{P}(d,\alpha)$ the constant in Lemma \[lemapoincare\]. We fix $\epsilon$ such that $0<\epsilon<r$. Utilizing the bounds $$\begin{aligned} ry^{1+s}-\epsilon y^{2+s}&\leq r\left(\frac{r(1+s)}{\epsilon(2+s)}\right)^{1+s}-\epsilon \left(\frac{r(1+s)}{\epsilon(2+s)}\right)^{2+s}\leq r\left(\frac{r(1+s)}{\epsilon(2+s)}\right)^{1+s}\;\forall\, y\geq0,\;\epsilon>0,\end{aligned}$$ $$y^s\leq y+1\;\forall\, y\geq0,\;0<s\leq 1,$$ we have that $$\begin{aligned} \frac{1}{1+s}\frac{d}{dt}\|u\|_{L^{1+s}}^{1+s}+\mathscr{P}(d,\alpha)\|u\|_{L^{1+s}}^{1+s}&\leq \left(\chi\frac{s}{1+s}-r+\epsilon\right)\|u\|_{L^{2+s}}^{2+s}+r\left(\frac{r(1+s)}{\epsilon(2+s)}\right)^{1+s}\\ &\quad+\frac{\mathscr{P}(d,\alpha)}{(2\pi)^d}\mathcal{R}_1(u_0,d)\left(\mathcal{R}_1(u_0,d)+(2\pi)^d\right).\end{aligned}$$ Let us define $$s=\frac{r-\epsilon}{\chi-r+\epsilon}.$$ Recall that $\chi>r$, so $s>0$ and $$\frac{s}{1+s}=\frac{r-\epsilon}{\chi},\;r\left(\frac{r(1+s)}{\epsilon(2+s)}\right)^{1+s}=r\left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}.$$ We obtain that $$\begin{aligned} \frac{1}{\frac{\chi}{\chi-r+\epsilon}}\frac{d}{dt}\|u\|_{L^{\frac{\chi}{\chi-r+\epsilon}}}^{\frac{\chi}{\chi-r+\epsilon}}+\mathscr{P}(d,\alpha)\|u\|_{L^{\frac{\chi}{\chi-r+\epsilon}}}^{\frac{\chi}{\chi-r+\epsilon}}&\leq \frac{\mathscr{P}(d,\alpha)}{(2\pi)^d}\mathcal{R}_1(u_0,d)\left(\mathcal{R}_1(u_0,d)+(2\pi)^d\right)\\ &\quad+ r\left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}.\end{aligned}$$ The previous ODI can be written as $$\frac{d}{dt}Y(t)+\mathcal{A}Y(t)\leq \mathcal{B},$$ for $$Y(t)=\|u(t)\|_{L^{\frac{\chi}{\chi-r+\epsilon}}({{\mathbb T}}^d)}^{\frac{\chi}{\chi-r+\epsilon}},$$ $$\mathcal{A}=\frac{\chi}{\chi-r+\epsilon}\mathscr{P}(d,\alpha)$$ and $$\mathcal{B}=\frac{\chi}{\chi-r+\epsilon}\left( 
r\left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}+\frac{\mathscr{P}(d,\alpha)}{(2\pi)^d}\mathcal{R}_1(u_0,d)\left(\mathcal{R}_1(u_0,d)+(2\pi)^d\right)\right).$$ Integrating in time, we find that $$Y(t)\leq Y(0)e^{-\mathcal{A}t}+\frac{\mathcal{B}}{\mathcal{A}}\left(1-e^{-\mathcal{A}t}\right) $$ hence $$\|u(t)\|_{L^{\frac{\chi}{\chi-r+\epsilon}}({{\mathbb T}}^d)} = Y^{1-\frac{r-\epsilon}{\chi}} (t) \leq Y^{1-\frac{r-\epsilon}{\chi}} (0) e^{-\mathcal{A} ({1-\frac{r-\epsilon}{\chi}} )t}+\left(\frac{\mathcal{B}}{\mathcal{A}} \right)^{1-\frac{r-\epsilon}{\chi}} \left(1-e^{-\mathcal{A}t}\right)^{1-\frac{r-\epsilon}{\chi}}$$ using that $ \left(1-e^{-\mathcal{A}t}\right)^{1-\frac{r-\epsilon}{\chi}} \le 1-e^{-\mathcal{A}({1-\frac{r-\epsilon}{\chi}} )t}$ and the definitions of $\mathcal{B}$ and $\mathcal{A}$ we arrive at $$\|u(t)\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} \le \|u_0\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} e^{- \mathscr{P}(d,\alpha) t } + (1- e^{- \mathscr{P}(d,\alpha) t }) \left(\frac{ r\left(\frac{r}{\epsilon}\frac{\chi}{2\chi-r+\epsilon}\right)^{\frac{\chi}{\chi-r+\epsilon}}}{\mathscr{P}(d,\alpha)}+\frac{\mathcal{R}_1(u_0,d)^2}{(2\pi)^d}+\mathcal{R}_1(u_0,d)\right)^ {1-\frac{r-\epsilon}{\chi}}$$ which is . Recall . It implies that for any $t \ge r^{-1} \ln2$ $$\|u(t)\|_{L^1}\leq 2 (2\pi)^d.$$ We can consider our ODI not starting at the initial time, but at $t= r^{-1} \ln2$. This implies, along the lines leading to , the inequality . Now we can proceed with the proof of Theorem \[thm1\] i.e. with showing the uniform estimate . We define $x_t$ such that $$\max_{y\in{{\mathbb T}}^d} u(y,t)=u(x_t,t)=\|u(t)\|_{L^\infty({{\mathbb T}}^d)}.$$ Then, due to regularity of the solution $u$, we have that $\|u(t)\|_{L^\infty({{\mathbb T}}^d)}$ is Lipschitz: $$|u(x_s,s) - u(x_t,t)| = \begin{cases} u(x_s,s) - u(x_t,t) \le u(x_s,s) - u(x_s,t) \\ u(x_t,t) - u(x_s,s) \le u(x_t,t) - u(x_t,s) \end{cases} \left.\begin{aligned} \end{aligned}\right\rbrace \sup_{y \in [x_t,x_s]^d,\tau \in [t,s]} |\partial_\tau u (\tau, y)| |s-t|.$$ Hence due to the Rademacher theorem $\|u(t)\|_{L^\infty({{\mathbb T}}^d)}$ is differentiable almost everywhere. Moreover, its derivative verifies for almost every $t$ $$\frac{d}{dt}\|u(t)\|_{L^\infty({{\mathbb T}}^d)} \le (\partial_t u)(x_t,t),$$ the precise argument for the above may be found for instance in Cordoba & Cordoba [@cor2], p.522. In what follows we will use a few arguments based on a strict inequality for a pointwise value of $\frac{d}{dt}\|u(t)\|_{L^\infty({{\mathbb T}}^d)}$ at, say, $t^*$. Since it is in fact defined only almost everywhere in time, such an inequality should be understood as the inequality for $\int_{t^*}^{t^*+\delta}\frac{d}{dt}\|u(t)\|_{L^\infty({{\mathbb T}}^d)} dt$. 
Let us fix $0<\epsilon<r$ such that $$\alpha>d\left(1-\frac{r-\epsilon}{\chi}\right).$$ Then, due to Lemma \[keylemma\], we have that $$\label{eq:lem1r} \begin{aligned} \max_{ t \ge 0}\|u\|_{L^{\frac{\chi}{\chi-r+\epsilon}}({{\mathbb T}}^d)} &\leq \|u_0\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} e^{- \mathscr{P}(d,\alpha) t } + (1- e^{- \mathscr{P}(d,\alpha) t }) \mathcal{R}_2 \equiv \mathcal{Q}_2, \\ \max_{t \ge r^{-1} \ln2} \|u(t)\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} &\le \|u({t_0= r^{-1} \ln2})\|_{L^{\frac{\chi}{\chi-r+\epsilon}}} e^{- \mathscr{P}(d,\alpha) t } + (1- e^{- \mathscr{P}(d,\alpha) t })\mathcal{\tilde R}_2 \equiv \mathcal{\tilde Q}_2\end{aligned}$$ with $u_0$ depending $\mathcal{R}_2$ defined in and $u_0$ independent $\mathcal{R}_2$ defined in . Let us take $$p=\frac{\chi}{\chi-r+\epsilon}$$ and with this choice consider the dichotomy of Lemma \[lemaaux3\]. It implies that either $$u(x_t,t) \le \tag{A} \mathscr{M}_1(d,p,\alpha)\|u(t)\|_{L^p} \le \mathscr{M}_1(d,p,\alpha)\mathcal{Q}_2$$ or $$\tag{B} \Lambda^\alpha u(x_t,t)\geq \mathscr{M}_2(d,p,\alpha)\frac{u(x_t,t)^{1+\alpha p/d}}{\|u(t)\|^{\alpha p/d}_{L^p}}.$$ Let us introduce $$\sigma = \frac{\alpha}{d}\frac{\chi}{\chi-r+\epsilon}-1\qquad K=\frac{\mathscr{M}_2(d,p,\alpha)}{\mathcal{Q}^{\frac{\alpha}{d}\frac{\chi}{\chi-r+\epsilon} }_2}, \qquad \tilde K=\frac{\mathscr{M}_2(d,p,\alpha)}{\mathcal{\tilde Q}^{\frac{\alpha}{d}\frac{\chi}{\chi-r+\epsilon} }_2 (r,\epsilon,\chi,d,\alpha)}$$ Assume now that over the evolution of $\|u(t)\|_{L^\infty({{\mathbb T}}^d)}$ it may take values greater than $$\mathscr{M}_3 = \max\left\{ 2\|u_0\|_{L^\infty({{\mathbb T}}^d)} e^{-t}, 2\mathscr{M}_1(d,p,\alpha)\mathcal{Q}_2, \left( \frac{4\chi}{K} \right)^{\frac{1}{2} + \frac{1}{\sigma}}, 1 \right\}$$ Let us consider the first $t^*>0$ such that $$\|u(t^*)\|_{L^\infty}= \mathscr{M}_3 .$$ Observe that $t_*>0$ thanks to the first entry of the formula for $ \mathscr{M}_3 $. Since $\mathscr{M}_3$ exceeds (by the middle entry of its definition) the bound related to the case (A), we find ourselves at the case (B). Consequently $$\frac{d}{dt}\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)} \leq (\chi-r)\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)}^2+r\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)}- \mathscr{M}_2(d,p,\alpha)\frac{\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)}^{1+\alpha p/d}}{\|u(t_*)\|^{\alpha p/d}_{L^p}}.$$ Hence via $$\label{eqM2} \begin{aligned} \frac{d}{dt}\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)}&\leq (\chi-r)\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)}^2+r\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)}-\frac{\mathscr{M}_2(d,p,\alpha)}{\mathcal{Q}^{\frac{\alpha}{d}\frac{\chi}{\chi-r+\epsilon} }_2 (r,\epsilon,\chi,d,\alpha, u_0)}\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)}^{1+\frac{\alpha}{d}\frac{\chi}{\chi-r+\epsilon}}\\ &= (\chi-r) \mathscr{M}_3^2 + r \mathscr{M}_3- K \mathscr{M}_3^{2+\sigma} \le \chi \mathscr{M}_3^2 - K \mathscr{M}_3^{2+\sigma} \end{aligned}$$ since $\mathscr{M}_3 \ge 1$. Observing that $\sigma \in (0, 1]$ with $ \chi \mathscr{M}_3^2 \le \frac{K}{2} \mathscr{M}_3^{2+\sigma} + \frac{\sigma \chi}{2 (1+ \frac{\sigma}{2})^{1 + \frac{2}{\sigma}}} \left(\frac{2 \chi}{K} \right)^\frac{2}{\sigma}$ yields $$\frac{d}{dt}\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)} \le \frac{\sigma \chi}{2 (1+ \frac{\sigma}{2})^{1 + \frac{2}{\sigma}}} \left(\frac{2 \chi}{K} \right)^\frac{2}{\sigma} - \frac{K}{2} \mathscr{M}_3^{2+\sigma} \le \frac{ \chi}{2} \left(\frac{2 \chi}{K} \right)^\frac{2}{\sigma} - \frac{K}{2} \mathscr{M}_3^{2},$$ where we have used $\sigma \le 1$. 
Our choice of $\mathscr{M}_3$ (see the third entry of its definition) hence gives $$\frac{d}{dt}\|u(t_*)\|_{L^\infty({{\mathbb T}}^d)} < 0,$$ which contradicts the assumption that $t_*$ is the first time at which $\|u(t)\|_{L^\infty({{\mathbb T}}^d)}$ takes the value $\mathscr{M}_3$. Consequently $$\sup_{t \ge 0} \|u(t)\|_{L^\infty({{\mathbb T}}^d)} \le \mathscr{M}_3.$$ Let us observe that, analogously to the proof of (\[limsupLpa\]) in Lemma \[keylemma\], we can obtain that $$\sup_{t \ge r^{-1} \ln 2} \|u(t)\|_{L^\infty({{\mathbb T}}^d)} \le \mathscr{\tilde M}_3$$ with $$\mathscr{\tilde M}_3 = \max\left\{ 2\|u({t_0= r^{-1} \ln 2}) \|_{L^\infty({{\mathbb T}}^d)} e^{-t}, 2\mathscr{M}_1(d,p,\alpha)\mathcal{\tilde Q}_2, \left( \frac{4\chi}{\tilde K} \right)^{\frac{1}{2} + \frac{1}{\sigma}}, 1 \right\}.$$ In order to simplify the formula for $ \mathscr{ M}_3 $ we can write $$\mathscr{ M}_3 \le 2 \left( \|u_0\|_{L^\infty({{\mathbb T}}^d)} e^{-t}+ (\mathscr{M}_1+1) \mathcal{Q}_2+ {\left(\frac{4 \chi}{\mathscr{M}_2}\right)}^{ \frac{1}{2} +\frac{1}{\sigma}}\mathcal{Q}^{\frac{3}{\sigma}}_2 \right) \le 2 e^{-t} \|u_0\|_{L^\infty({{\mathbb T}}^d)} +2 \mathcal{Q}^{\frac{3}{\sigma}}_2 (\mathscr{M}_1+ {\left(\frac{4 \chi}{\mathscr{M}_2}\right)}^{ \frac{1}{2} +\frac{1}{\sigma}} +1 )$$ and analogously $$\mathscr{ \tilde M}_3 \le 2 e^{-t} \|u({t_0= r^{-1} \ln 2}) \|_{L^\infty({{\mathbb T}}^d)} +2 \mathcal{\tilde Q}^{\frac{3}{\sigma}}_2 (\mathscr{M}_1+ {\left(\frac{4 \chi}{\mathscr{M}_2}\right)}^{ \frac{1}{2} +\frac{1}{\sigma}} +1 ).$$ These are the bounds $\mathscr{ R}_3, \mathscr{ \tilde R}_3$ from the thesis of Theorem \[thm1\], with the exception that $\mathcal{R}_2$ in $\mathcal{Q}_2$, see (\[eq:lem1r\]), is replaced with its upper bound $\mathcal{R}_0$, which yields $\mathcal{Q}_0$. Theorem \[thm1\] is proved. A simple numerical illustration of the uniform bound just established is provided below.
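The uniform bound of Theorem \[thm1\] can also be probed numerically. The following is a minimal sketch (assuming only NumPy; the parameters $\chi$, $r$, $\alpha$, the resolution, the time step and the initial datum are our own illustrative choices, picked so that $\alpha>d(1-r/\chi)$ with $d=1$; the naive semi-implicit pseudo-spectral scheme below plays no role in the proof). It uses the fact that, in the active scalar form of (\[eqDD\])-(\[eqDD2\]), both $\Lambda^\alpha$ and $B(u)=\nabla(\Delta-1)^{-1}u$ are Fourier multipliers on the torus, and it simply monitors $\|u(t)\|_{L^\infty}$ along the evolution.

```python
import numpy as np

# Illustrative (assumed) parameters: d = 1, chi > r > 0 and alpha > d*(1 - r/chi) = 0.5.
chi, r, alpha = 2.0, 1.0, 0.8
N, dt, T = 256, 1.0e-3, 40.0

x = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)   # integer wavenumbers on T^1

symbol_diff = np.abs(k) ** alpha            # Fourier symbol of Lambda^alpha
symbol_B = 1j * k / (-(k ** 2) - 1.0)       # symbol of B(u) = d/dx (d^2/dx^2 - 1)^{-1} u

u = 1.0 + 0.5 * np.cos(x) + 0.3 * np.cos(3.0 * x)   # smooth, strictly positive initial datum

def nonlinear_terms(u):
    u_hat = np.fft.fft(u)
    Bu = np.real(np.fft.ifft(symbol_B * u_hat))                # B(u)
    flux = np.real(np.fft.ifft(1j * k * np.fft.fft(u * Bu)))   # d/dx (u B(u))
    return chi * flux + r * u * (1.0 - u)

sup_norms = []
denom = 1.0 + dt * symbol_diff      # implicit treatment of the fractional diffusion
for n in range(int(T / dt)):
    u_hat = np.fft.fft(u + dt * nonlinear_terms(u))
    u = np.real(np.fft.ifft(u_hat / denom))
    if n % 500 == 0:
        sup_norms.append(np.max(np.abs(u)))

print("largest recorded sup-norm:", max(sup_norms))
print("final distance to the homogeneous state:", np.max(np.abs(u - 1.0)))
```

For parameter choices inside the regime of Theorem \[thm1\] one expects the recorded sup-norm to settle at an $O(1)$ level rather than grow in time, in line with (\[boundninf\]); whether the solution also approaches $u_\infty\equiv 1$ depends on the stability conditions discussed in the next sections.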
Proof of Theorem \[thm2\] (stability of the homogeneous solution I) and corollary ================================================================================= Let us define the new variables $$U=u-1,\;V=v+1.$$ These new variables solve $$\begin{aligned} \label{eqU} {\partial_t}U&=- \Lambda^\alpha U+\chi\nabla\cdot((U+1) \nabla V)-r(U+1)U,\text{ in }(x,t)\in {{\mathbb T}}^d\times[0,\infty)\\ \Delta V -V&=U,\text{ in }(x,t)\in {{\mathbb T}}^d\times[0,\infty)\label{eqV}\\ U(x,0)&=u_0(x)-1\text{ in }x\in {{\mathbb T}}^d.\label{eqU2}\end{aligned}$$ Furthermore, and imply $$\|U(t)\|_{L^1}\leq (2\pi)^d+\max\left\{(\|u_0\|_{L^1({{\mathbb T}}^d)},(2\pi)^d\right\}.$$ Let us define $$\overline{U}(t)=U(\overline{x}_t,t)=\max_x U(x,t),$$ $$\underline{U}(t)=U(\underline{x}_t,t)=\min_x U(x,t).$$ Due to $$-1\leq U(x,t)\leq (1+\delta) \mathscr{ R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) -1,$$ $$-\overline{U}(t)\leq V(x,t)\leq -\underline{U}(t).$$ Using the pointwise method already described in the proof of Theorem \[thm1\], we have consequently that $$\begin{aligned} \frac{d}{dt}\overline{U}(t)&= -\Lambda^{\alpha}U(\overline{x}_t,t)+\chi(\overline{U}(t)+1)(\overline{U}(t)+V(\overline{x}_t,t))-r(\overline{U}(t)+1)\overline{U}(t)\\ &\leq-\Lambda^{\alpha}U(\overline{x}_t,t)+(\chi-r)\overline{U}(t)^2 +(\chi-r)\overline{U}(t)-\chi\overline{U}(t)\underline{U}(t)-\chi\underline{U}(t),\end{aligned}$$ $$\begin{aligned} \frac{d}{dt}\underline{U}(t)&= -\Lambda^{\alpha}U(\underline{x}_t,t)+\chi(\underline{U}(t)+1)(\underline{U}(t)+V(\underline{x}_t,t))-r(\underline{U}(t)+1)\underline{U}(t)\\ &\geq-\Lambda^{\alpha}U(\underline{x}_t,t)+(\chi-r)\underline{U}(t)^2 +(\chi-r)\underline{U}(t)-\chi\overline{U}(t)\underline{U}(t)-\chi\overline{U}(t).\end{aligned}$$ Collecting both estimates, we obtain $$\begin{aligned} \frac{d}{dt}\left(\overline{U}(t)-\underline{U}(t)\right)&\leq -\Lambda^{\alpha}U(\overline{x}_t,t)+\Lambda^{\alpha}U(\underline{x}_t,t) +(\chi-r)\left(\overline{U}(t)^2 -\underline{U}(t)^2\right)\\ &\quad+(\chi-r)\left(\overline{U}(t)-\underline{U}(t)\right)+\chi\left(\overline{U}(t)-\underline{U}(t)\right)\\ &\leq -\Lambda^{\alpha}U(\overline{x}_t,t)+\Lambda^{\alpha}U(\underline{x}_t,t) +\left(\overline{U}(t)-\underline{U}(t)\right)\left[2\chi-r+(\chi-r)\left(\overline{U}(t)+\underline{U}(t)\right)\right].\end{aligned}$$ Now let us note that $$\begin{aligned} \Lambda^{\alpha}U(\overline{x}_t,t)&\geq\mathscr{C}_{d,\alpha}\text{P.V.}\int_{{{\mathbb T}}^d}\frac{u(\overline{x}_t,t)-u(\overline{x}_t-y,t)dy}{|y|^{d+\alpha}}\\ &\geq\mathscr{C}_{d,\alpha}\text{P.V.}\int_{{{\mathbb T}}^d}\frac{u(\overline{x}_t,t)-u(\overline{x}_t-y,t)dy}{({2}\pi\sqrt{d})^{d+\alpha}}\\ &\geq \mathscr{C}_{d,\alpha}\frac{(2\pi)^d\overline{U}(t)-\int_{{{\mathbb T}}^d}U(y,t)dy}{({2}\pi\sqrt{d})^{d+\alpha}}.\end{aligned}$$ Similarly $$-\Lambda^{\alpha}U(\underline{x}_t,t)\geq\mathscr{C}_{d,\alpha}\text{P.V.}\int_{{{\mathbb T}}^d}\frac{-u(\underline{x}_t,t)+u(\underline{x}_t-y,t)dy}{|y|^{d+\alpha}}\geq \mathscr{C}_{d,\alpha}\frac{-(2\pi)^d\underline{U}(t)+\int_{{{\mathbb T}}^d}U(y,t)dy}{({2}\pi\sqrt{d})^{d+\alpha}}.$$ Thus $$\begin{aligned} \frac{d}{dt}\left(\overline{U}(t)-\underline{U}(t)\right)&\leq -\mathscr{C}_{d,\alpha}\frac{(2\pi)^d\left(\overline{U}(t)-\underline{U}(t)\right)}{(\pi\sqrt{d})^{d+\alpha}}+\left(\overline{U}(t)-\underline{U}(t)\right)\left[2\chi-r+(\chi-r)\left(\overline{U}(t)+\underline{U}(t)\right)\right]\\ &\leq 
\left(\overline{U}(t)-\underline{U}(t)\right)\left[2\chi-r+(\chi-r)2\overline{U}(t)-\frac{(2\pi)^d\mathscr{C}_{d,\alpha}}{({2}\pi\sqrt{d})^{d+\alpha}}\right]-(\chi-r)\left(\overline{U}(t)-\underline{U}(t)\right)^2.\end{aligned}$$ Therefore, if $$-\gamma=2\chi-r+(\chi-r)2\left(\mathscr{ R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) -1\right)-\frac{(2\pi)^d\mathscr{C}_{d,\alpha}}{({2}\pi\sqrt{d})^{d+\alpha}} <0,$$ then there exists $\delta>0$ such that $$2\chi-r+(\chi-r)2\left( (1+\delta) \mathscr{ R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) -1\right)-\frac{(2\pi)^d\mathscr{C}_{d,\alpha}}{({2}\pi\sqrt{d})^{d+\alpha}} \le 0.$$ Hence $$\left(\overline{U}(t)-\underline{U}(t)\right)\leq \left(\overline{U}(0)-\underline{U}(0)\right)e^{-\gamma t}\rightarrow 0.$$ Translating the previous inequality into our original variable $u$, we obtain that $$\left(\|u(t)\|_{L^\infty({{\mathbb T}}^d)}-\min_{x\in{{\mathbb T}}^d}u(x,t)\right)\leq \left(\|u_0\|_{L^\infty({{\mathbb T}}^d)}-\min_{x\in{{\mathbb T}}^d}u_0(x)\right)e^{-\gamma t}\rightarrow 0.$$ This inequality implies that the solution $u$ converges to a constant $c_u$. Similarly, $v$ converges to another constant $-c_u$. There are only two possible steady state solutions that are constants, namely $(1,-1)$ and $(0,0)$. However, it is easy to see that the case $(0,0)$ is unstable. Indeed, the nonlocal fractional diffusion manifests another useful feature here. Namely, for the classical Laplacian one usually discards the possibility of vanishing of a solution (thus of staying in the case $(0,0)$) via assuming that $\min u_0 >0$, since then $\min_x u (t,x) >0$. In our case it suffices to have an initial datum $u_0$ that is not identically zero. Indeed, assume that $0=\min_y u(y,t)$. Then, if we write $\underline{x}_t$ for the point such that $\min_y u(y,t)=u(\underline{x}_t,t)$, we have that for $u(t)$ not identically zero $$\partial_t u(\underline{x}_t,t)=-\Lambda^\alpha u(\underline{x}_t,t)+\chi u(\underline{x}_t,t)(u(\underline{x}_t,t)+v(\underline{x}_t,t))+ru(\underline{x}_t,t)(1-u(\underline{x}_t,t))=-\Lambda^\alpha u(\underline{x}_t,t)>0$$ thanks to $$-\Lambda^\alpha u(\underline{x}_t,t)>0 \; \text{ for $u(t)$ not identically zero}.$$ The only remaining scenario is that the solution vanishes uniformly. But, due to time-continuity, this demands that both $u$ and $v$ be uniformly close to $0$ at earlier times, and then the homeostatic part of the ODI prevents any further approach to zero (as seen by looking again at the minimum of such a nonzero, close-to-zero solution). Hence we can write $$\begin{aligned} \|u(t)-1\|_{L^\infty({{\mathbb T}}^d)}&=\max\{\|u(t)\|_{L^\infty({{\mathbb T}}^d)}-1,1-\min_{x\in{{\mathbb T}}^d}u(x,t)\}\\ &\leq\|u(t)\|_{L^\infty({{\mathbb T}}^d)}-1+1-\min_{x\in{{\mathbb T}}^d}u(x,t)\\ &\leq \left(\|u_0\|_{L^\infty({{\mathbb T}}^d)}-\min_{x\in{{\mathbb T}}^d}u_0(x)\right)e^{-\gamma t}.\end{aligned}$$ Theorem \[thm2\] is therefore shown. In order to prove Corollary \[cor:1\], it suffices to observe that in the above proof one may replace every $\mathscr{ R}_3 (t; r,\epsilon,\chi,d,\alpha, u_0) $ with the data independent $\mathscr{ \tilde R}_\infty (r,\epsilon,\chi,d,\alpha)$, at the cost of considering only sufficiently large times, see (\[Rinf\]). Proof of Theorem \[thm2b\] (stability of the homogeneous solution II) ===================================================================== We consider now the case $d=\alpha=1$. Let us pick a small parameter $\delta$ with $0<\delta\le\min\{ 1,(4 \pi \chi)^{-1} -2 \pi \}$.
As a consequence of (\[LinfL12\]) we know that there exists a transient time $t^*(u_0,r,\delta)$ such that $$\label{t3a} \|u(t)\|_{L^1}\leq 2\pi+\delta\quad\forall\,t\geq t^*.$$ We will restrict our analysis to $t\geq t^*$. Let us denote by $\overline{x}_t$, $\underline{x}_t$ the points such that $$\overline{u}(t)=\max_y u(y)=u(\overline{x}_t),\;\underline{u}(t)=\min_y u(y)=u(\underline{x}_t).$$ Note that we have $$0\leq \underline{u}(t)\leq 1+\frac{\delta}{2\pi}.$$ As in the proof of Theorem \[thm2\], we obtain $$\frac{d}{dt}\overline{u}\leq-\Lambda u(\overline{x}_t)+\chi\overline{u}(\overline{u}+v(\overline{x}_t,t))+r\overline{u}(1-\overline{u}),$$ $$\frac{d}{dt}\underline{u}\geq-\Lambda u(\underline{x}_t)+\chi\underline{u}(\underline{u}+v(\underline{x}_t,t))+r\underline{u}(1-\underline{u}).$$ Using (\[propv\]), we have that $$-\underline{u}\geq v(\overline{x}_t,t),\;v(\underline{x}_t,t)\geq-\overline{u}.$$ Via the pointwise bound in Appendix A we also compute $$\Lambda u(\overline{x}_t)\geq \frac{1}{4\pi}\left(2\pi \overline{u}-\int_{{\mathbb T}}u(y,t)dy\right),\;-\Lambda u(\underline{x}_t)\geq \frac{-1}{4\pi}\left(2\pi \underline{u}-\int_{{\mathbb T}}u(y,t)dy\right)$$ so $$\frac{d}{dt}\overline{u}\leq\frac{-1}{4\pi}\left(2\pi \overline{u}-\int_{{\mathbb T}}u(y,t)dy\right)+\chi\overline{u}(\overline{u}-\underline{u})+r\overline{u}(1-\overline{u}),$$ $$\frac{d}{dt}\underline{u}\geq\frac{-1}{4\pi}\left(2\pi \underline{u}-\int_{{\mathbb T}}u(y,t)dy\right)+\chi\underline{u}(\underline{u}-\overline{u})+r\underline{u}(1-\underline{u}),$$ and together these give $$\label{eq:s3f} \frac{d}{dt}\left(\overline{u} -\underline{u} \right) \leq -\frac{\overline{u}-\underline{u}}{2}+(\chi-r)(\overline{u}-\underline{u})(\overline{u}+\underline{u})+r(\overline{u}-\underline{u}).$$ Lemma \[lemaaux3\] now says that either 1. $ (4 + \frac{2 \delta}{\pi} \ge)\; \frac{2}{\pi}\|u(t)\|_{L^1} \geq \|u(t)\|_{L^\infty} \; (\geq \frac{\|u(t)\|_{L^1}}{2\pi}), $ or 2. $ \Lambda u(\overline{x}_t,t)\geq \frac{1}{4\pi}\frac{u(\overline{x}_t)^{2}}{\|u(t)\|_{L^1}} \;(\geq \frac{1}{4\pi}\frac{u(\overline{x}_t)^{2}}{2\pi+\delta}). $ Let us now argue that there exists a time $t^{**} \ge t^*$ such that $\overline{u} (t^{**} ) \le 4+\frac{2\delta}{\pi}$. Assume otherwise, i.e. for all $t \ge t^*$ it holds $\overline{u} (t) > 4+\frac{2\delta}{\pi}$. This excludes the first case, and hence, as in the proof of Theorem \[thm1\], we have that $$\label{eq:s3g} \frac{d}{dt}\|u(t)\|_{L^\infty} \leq \left(\chi-\frac{1}{4\pi (2\pi+\delta)}\right)\|u(t)\|_{L^\infty}^2+r\|u(t)\|_{L^\infty}(1-\|u(t)\|_{L^\infty}) \le r\|u(t)\|_{L^\infty}(1-\|u(t)\|_{L^\infty}),$$ where the second inequality comes from the assumed $$\chi-(8\pi^2)^{-1}<0$$ and our choice of $\delta \le (4 \pi \chi)^{-1} -2 \pi$. Consequently $\|u(t)\|_{L^\infty}$ approaches $1$, and therefore the assumption $\overline{u} (t) > 4+\frac{2\delta}{\pi}$ for all $t \ge t^*$ is false. There must then be a finite $t^{**}$ at which $\overline{u} (t^{**}) \le 4+\frac{2\delta}{\pi}$.
If for all $t > t^{**}$ it still holds $\overline{u} (t) \le 4+\frac{2\delta}{\pi}$, then $(\overline{u}+\underline{u}) \le 8+\frac{4\delta}{\pi}$ and thanks to $$\frac{d}{dt}\left(\overline{u}-\underline{u}\right) \leq \left[(\chi-r)(8+\frac{4\delta}{\pi})+r-\frac{1}{2}\right](\overline{u}-\underline{u}) = \epsilon (\overline{u}-\underline{u})$$ with $\epsilon \le 8 \chi - \frac{1}{2} + \frac{4\delta}{\pi} \chi < \frac{1}{\pi^2}- \frac{1}{2} + \frac{\delta}{2 \pi^3} \le \frac{1}{2} (\frac{1}{\pi} -1)$, where we have used the previously required $\chi-\frac{1}{8\pi^2}<0$ and $\delta\le 1$. Hence $$\left(\overline{u}(t)-\underline{u}(t)\right) \le (\overline{u}(t^{**})-\underline{u}(t^{**}) ) e^{\frac{1}{2} (\frac{1}{\pi} -1) t} \le (8+\frac{4\delta}{\pi}) e^{\frac{1}{2} (\frac{1}{\pi} -1) t} \le 10 e^{\frac{1}{2} (\frac{1}{\pi} -1) t}.$$ As before, this implies that the solution $u$ converges to a constant. This constant can only be $0$ or $1$, and we have seen that $0$ is unstable. Finally, it could happen that $\overline{u} (t^{**}) \le 4+\frac{2\delta}{\pi}$ but at some later time $\overline{u} (t) > 4+\frac{2\delta}{\pi}$. Then there would exist $t_\dagger$ such that $\overline{u} (t_\dagger) = 4+\frac{2\delta}{\pi}$ and immediately before $t_\dagger$ we have $\overline{u} ({t_\dagger}^-) < 4+\frac{2\delta}{\pi}$: this is merely continuity in the case $\overline{u} (t^{**}) < 4+\frac{2\delta}{\pi}$, together with the observation that $ \frac{d}{dt}\|u(t^{**})\|_{L^\infty} <0 $ provided $\overline{u} (t^{**}) = 4+\frac{2\delta}{\pi}$, compare , so in this case $\overline{u}$ must drop below $4+\frac{2\delta}{\pi}$ immediately after $t^{**}$. But the existence of such a $t_\dagger$ is contradicted again by , which gives $\frac{d}{dt}\|u(t_\dagger)\|_{L^\infty} <0 $, i.e. $\overline{u} ({t_\dagger}^-) > 4+\frac{2\delta}{\pi}$. Proof of Theorem \[thm3\]. Uniqueness ===================================== For a number $p$, let us denote by $p^+$ any number larger than $p$ and by $p^-$ any number smaller than $p$. In particular, $\infty^-$ is any finite number. Let us take two distributional solutions $u_1, u_2$ to starting from the same initial datum $u_0$ and belonging to $L^2 (L^{2^+})$. Consequently, for ${v}=u_1 - u_2$ one has $$\label{eq:ud} {\partial_t}{v}=- \Lambda^\alpha {v}+\chi \nabla\cdot({v}B (u_1) ) +\chi \nabla\cdot(u_2 B ({v}) ) + r {v}- r {v}(u_1 + u_2)$$ in the sense of distributions, where $$B(u)=\nabla(\Delta-1)^{-1}u.$$ Let us multiply by a sufficiently regular $\psi$. Hence $$\label{eq:psi1} \begin{aligned} &\int {\partial_t}{v}\psi = \\ &- \int \Lambda^{\alpha + \rho -1} {v}\Lambda^{1-\rho} \psi + \chi \int \Lambda^{\rho} R \cdot (v B (u_1)) \Lambda^{1-\rho} \psi + \chi \int \Lambda^\rho R \cdot (u_2 B ({v})) \Lambda^{1-\rho} \psi \\ &+ r \int {v}\psi - r \int {v}(u_1 + u_2) \psi, \end{aligned}$$ where the integrals are over space and time. Observe that for a given $\rho \in [0,1)$ and $\psi \in L^2 (H^{1-\rho} )$ the first term on the r.h.s. above is finite provided $u_1, u_2$ belong to $L^2 (H^{\alpha+\rho-1} )$. Let us consider conditions for finiteness of the further terms on the r.h.s. for $\psi \in L^2 (H^{1-\rho} )$. Concerning the integrals involving $\chi$, when the differentiation does not hit $B$, it suffices that $u_1, u_2$ belong to $ L^2 (H^{\rho^+})$ (or $ L^2 (H^{\rho})$ for $d=1$. 
Let us continue with the case $d=2$ only and observe in the analogous computations for the case $d=1$ one does not need $\cdot^+$), since for $d \le2$ $$\label{eq:buq} \|B (u)\|_{L^\infty (L^{\infty^-})}\le {c\|B (u)\|_{L^\infty (\dot{H}^{1})}}\le C\|u\|_{L^\infty (L^2)}.$$ When the differentiation hits $B$, since $$\| {v}(t)\|_{L^{p_3}} \| \Lambda^{\rho} B (u_1 (t)) \|_{L^{p'_3}} \|\Lambda^{1-\rho} \psi (t) \|_{L^2} \le C \| u_2 (t)\|_{H^{\rho}} \|v (t) \|_{L^2} \|\psi (t)\|_{H^{1-\rho}},$$ provided $$\rho -1 - \frac{d}{p'_3} \le -\frac{d}{2}, \quad - \frac{d}{p_3} \le \rho - \frac{d}{2},$$ we need $u_1, u_2 \in L^\infty (L^2) \cap L^2 (H^\rho)$. The same holds for the other integral involving $\chi$. Finally, to deal with quadratic terms involving $r$, we observe that by embedding $$\int |{v}(u_1 + u_2) \psi| \le C (\|u^2_1\|_{L^2 (L^\xi)} + \|u^2_2\|_{L^2 (L^\xi)} ) \|\psi\|_{L^2(H^{1-\rho})},$$ for $\xi = \frac{2d}{2 - 2 \rho +d}$ and for finiteness of $L^2 (L^\xi)$ norms we interpolate $L^\infty (L^2)$ and $L^2 (H^{(\rho -1 + d/2)^+})$. So $u_1, u_2 \in L^2 (H^{\rho^+}) \cap L^\infty (L^2)$ suffices here. Putting together all our requirements, we see that for $\psi \in L^2 (H^{1-\rho} )$ in it is enough to have $$\label{con3} u_1, u_2 \in L^\infty (L^2) \cap L^2 (H^{\rho^+}) \cap L^2 (H^{1-\rho}) \cap L^2 (H^{\alpha+\rho-1} )$$ Then l.h.s. of is finite, i.e. ${\partial_t}{v}\in L^2 (H^{\rho-1})$ and since $u_1, u_2 \in L^2 (H^{1- \rho})$, then by interpolation ${v}\in C(L^2)$. Next, in view of the assumed $u_1, u_2 \in L^2 (H^\alpha)$, we can have $\psi = {v}$, choosing $1-\rho = \alpha/2$. Consequently $$\frac{1}{2} \frac{d}{dt}\|{v}(t)\|^2_{L^2} + \|{v}(t)\|^2_{ \dot{H}^\frac{\alpha}{2}} \leq \chi \left| \langle {v}B (u_1) \nabla v \rangle \right|+ \chi \left| \int \Lambda^{1-\alpha/2} R \cdot (v B (u_1)) \Lambda^{\alpha/2} {v}\right| + r \|{v}(t)\|^2_{L^2},$$ where we have also used nonnegativity and $\langle \cdot \rangle$ denotes the duality pairing $\langle \cdot \rangle_{H^\rho, H^{1- \rho}}$. Hence, using very weak integration by parts $$\frac{1}{2} \frac{d}{dt}\|{v}(t)\|^2_{L^2} + \|{v}(t)\|^2_{\dot{H}^\frac{\alpha}{2}} \leq \frac{\chi}{2} \int \nabla \cdot B (u_1) (t) {v}^2 (t) + \chi \int |\Lambda^{1-\alpha/2} R \cdot (u_2 B ({v})) | |\Lambda^{\alpha/2} {v}| + r \|{v}(t)\|^2_{L^2}$$ and consequently, since $\|\nabla \cdot B f \|_p \le C\| f \|_p$ $$\begin{gathered} \label{eq:ud2} \frac{1}{2} \frac{d}{dt}\|{v}(t)\|^2_{L^2} + \|{v}(t)\|^2_{H^\frac{\alpha}{2}} \leq \\ \frac{\chi}{2} \| u_1 (t) \|_{L^{p_1}} \| {v}(t) \|^2_{L^{2p'_1}} + C \|\Lambda^{1-\alpha/2} u_2 (t)\|_{L^{2^+}} \|B ({v}(t)) \|_{L^{\infty^-}} \|\Lambda^{\alpha/2} {v}(t) \|_{L^{2}} + \\ C \| u_2 (t)\|_{L^{p_3}} \| \Lambda^{1-\alpha/2} B ({v}(t)) \|_{L^{p'_3}} \|\Lambda^{\alpha/2} {v}(t) \|_{L^2} + r \|{v}(t)\|^2_{L^2} \\ =: \frac{\chi}{2} I + C II + C III + r \|{v}(t)\|^2_{L^2}.\end{gathered}$$ Using interpolation and embeddings, we estimate the terms on r.h.s. 
as follows (suppressing $t$ for a moment) $$\begin{aligned} & I \le \| u_1 \|_{L^{p_1}} \| {v}\|^{2\theta}_{L^2} \| {v}\|^{2(1-\theta)}_{H^\frac{\alpha}{2}} \le \epsilon \| {v}\|^2_{H^\frac{\alpha}{2}} +C_\epsilon \| u_1 \|^\frac{1}{\theta} _{L^{p_1}} \|{v}\|^{2}_{L^2}, \qquad \text{ where } \frac{1}{p_1'} = \frac{\alpha}{d} (\theta -1) + 1\\ & II \le C\|\Lambda^{1-\alpha/2} u_2\|_{L^{2^+}} \|v \|_{L^2} \|\Lambda^{\alpha/2} {v}\|_{L^2} \le C\|u_2\|_{H^{(1-\alpha/2)^+} } \|v \|_{L^2} \| {v}\|_{H^{\alpha/2}} \le C\|u_2\|_{H^{\alpha} } \|v \|_{L^2} \| {v}\|_{H^{\alpha/2}}\\ & III \le C_\epsilon\| u_2\|^2_{L^{p_3}} \|v \|^2_{L^2} + \epsilon \|{v}\|^2_{H^\frac{\alpha}{2}} \qquad \text{ for } -\alpha/2 -\frac{d}{p'_3} \le - \frac{d}{2}. \end{aligned}$$ More precisely, the middle inequality involves and the condition $\alpha > (1-\alpha/2)^+$, which holds by the assumed $\alpha>1$. The last inequality uses the fact that $\Lambda^{1-\alpha/2} B$ is on the Fourier side $\sim |\xi|^{1- \alpha/2} \frac{\xi}{|\xi|^2 +1}$, hence there is no problem with null modes for $W^{-\alpha/2, p'_3}$, so by embedding, for $ -\alpha/2 -\frac{d}{p'_3} \le - \frac{d}{2}$, $p'_3 \in (1, \infty)$, $$\| \Lambda^{1-\alpha/2} B ({v}) \|_{L^{p'_3}} \le C \| {v}\|_{L^2}.$$ Consequently $$III \le C_\epsilon \|u_2\|^2_{H^{(1-\alpha/2)} } \|v \|^2_{L^2} + \epsilon \|{v}\|^2_{H^\frac{\alpha}{2}} \le C_\epsilon \|u_2\|^2_{H^{\alpha} } \|v \|^2_{L^2} + \epsilon \|{v}\|^2_{H^\frac{\alpha}{2}} \quad \text{ provided } p_3 = \frac{2d}{d- 2 (1-\alpha/2)}.$$ This last choice of $p_3$ satisfies the previously needed condition $-\alpha/2 -\frac{d}{p'_3} \le - \frac{d}{2}$. Altogether, using the above estimates in , we arrive at $$\frac{d}{dt}\|{v}(t)\|^2_{L^2} + \|{v}(t)\|^2_{H^\frac{\alpha}{2}} \leq C \| u_1 (t) \|^\frac{1}{\theta} _{L^{p_1}} \|{v}(t) \|^{2}_{L^2} + C \|u_2 (t)\|^2_{H^{(1-\alpha/2)^+} } \| {v}(t) \|^2_{L^2} + C \|{v}(t)\|^2_{L^2}.$$ Hence for uniqueness we only need to check whether $$\int_0^T \| u_1 (t) \|^\frac{1}{\theta}_{L^{p_1}} dt < \infty,$$ where $\frac{1}{\theta} = \frac{\alpha p_1}{\alpha p_1 -d}$. Requiring $ \frac{1}{\theta} = 2$, we need $p_1 = 2d/ \alpha$, so $u_1 \in L^2 (H^{\frac{d- \alpha}{2}}) $ is sufficient. 
On the fractional Laplacian =========================== The $d-$dimensional fractional (minus) Laplacian $\Lambda^{\alpha}$ is defined through the Fourier transform as $$\widehat{\Lambda^\alpha u}(\xi)=|\xi|^\alpha \hat{u}(\xi).$$ This operator also enjoys the following representation as a singular integral (see [@Ghyperparweak] for an elementary derivation): $$\label{eq:1b.1.5} \Lambda^{\alpha}u=\mathscr{C}_{d,\alpha} \text{P.V.}\int_{{\mathbb R}^d}\frac{u(x)-u(y)}{|x-y|^{d+\alpha}}dy,$$ where $$\mathscr{C}_{d,\alpha}=2\left(\int_{{\mathbb R}^d}\frac{4\sin^2\left(\frac{x_1}{2}\right)}{|x|^{d+\alpha}} dx\right)^{-1}.$$ In the case of periodic functions, we have the following equivalent representation $$\begin{aligned} \label{eq:9} \Lambda^\alpha u(x)&=\mathscr{C}_{d,\alpha}\bigg{(}\sum_{k\in {{\mathbb Z}}^d, k\neq 0}\int_{{{\mathbb T}}^d}\frac{u(x)-u(x-y)dy}{|y+2k\pi|^{d+\alpha}}+\text{P.V.}\int_{{{\mathbb T}}^d}\frac{u(x)-u(x-y)dy}{|y|^{d+\alpha}}\bigg{)}.\end{aligned}$$ Let us emphasize that in the case $d=1=\alpha$, the previous series can be computed and results in $$\begin{aligned} \label{eq:9b} \Lambda u(x)&=\frac{1}{4\pi}\int_{{{\mathbb T}}}\frac{u(x)-u(x-y)dy}{\sin^2\left(y/2\right)}.\end{aligned}$$ Then we have the following results \[lemaentropy2\] Let $0<s$, $0<\alpha<2$, $0<\delta<\alpha/(2+2s)$ and $d \ge 1$. Then for a sufficiently smooth $u \ge 0$ it holds $$\label{ene:1} \frac{4s}{(1+s)^2} \int_{{\mathbb T^d}}| \Lambda^\frac{\alpha}{2} (u^\frac{s+1}{2}) |^2dx \le \int_{{{\mathbb T^d}}}\Lambda^\alpha u(x) u^s(x)dx.$$ If additionally $s \le 1$, then $$\label{ene:2} \|u\|_{\dot{W}^{\alpha/(2+2s)-\delta,1+s} ({{\mathbb T^d}})}^{2+2s}\leq \mathscr{S}(\alpha,s,\delta, d)\|u\|_{L^{1+s} ({{\mathbb T^d}}) }^{1+s} \int_{{{\mathbb T^d}}}\Lambda^\alpha u(x) u^s(x)dx,$$ where $\mathscr{S}(\alpha,s,\delta, d)$ can be taken as $$\mathscr{S}(\alpha,s,\delta, d)=\frac{2^{2s+1}}{\mathscr{C}_{d,\alpha}s}\sup_{x\in{{\mathbb T}}^d}\int_{{{\mathbb T}}^d}\frac{1}{|x-y|^{d-2(1+s)\delta}}dy$$ Furthermore, for a sufficiently smooth $u \ge 0$, $0<\alpha<2$, $0<\delta<\alpha/2$ and $d \ge 1$, the extremal case $s=0$ holds $$\label{ene:3} \|u\|_{\dot{W}^{\alpha/2-\delta,1}({{\mathbb T}}^d)}^2\leq \mathscr{S}(\alpha,0,\delta, d)\|u\|_{L^1({{\mathbb T}}^d)}\int_{{{\mathbb T^d}}}\Lambda^\alpha u(x)\log(u(x))dx,$$ with $$\mathscr{S}(\alpha,0,\delta, d)=\frac{2}{\mathscr{C}_{d,\alpha}}\sup_{x\in{{\mathbb T}}^d}\int_{{{\mathbb T}}^d}\frac{1}{|x-y|^{d-2\delta}}dy.$$ \[lemapoincare\] Let $0\leq u\in L^{1+s}({{\mathbb T}}^d)$, $0<s<\infty$, be a given function and $0<\alpha<2$, be a fixed constant. 
Then, $$\mathscr{P}(d,\alpha)\|u\|_{L^{1+s}}^{1+s}\le \int_{{{\mathbb T}}^d}\Lambda^\alpha u(x) u^s(x)dx+\frac{\mathscr{P}(d,\alpha)}{\left(2\pi\right)^d}\left(\int_{{{\mathbb T}}^d}u(x)dx\right)\left(\int_{{{\mathbb T}}^d}u^s(x)dx\right),$$ for $$\mathscr{P}(d,\alpha)= \frac{4\left(\int_{{\mathbb R}^d}\frac{4\sin^2\left(\frac{x_1}{2}\right)}{|x|^{d+\alpha}} dx\right)^{-1}}{\left(2\pi \right)^{\alpha}d^{\frac{d+\alpha}{2}}}.$$ Furthermore, in the case $d=\alpha=1$, this constant can be taken $$\mathscr{P}(1,1)=1.$$ Note that $$\sup_{x,y\in{{\mathbb T}}^d}|x-y|=\text{length of $d$-dimensional hypercube's longest diagonal}= 2\pi \sqrt{d}$$ Due to the positivity of the terms $$0\leq \mathscr{C}_{d,\alpha}\sum_{k\in {{\mathbb Z}}^d, k\neq 0}\int_{{{\mathbb T}}^d} u^s(x)\int_{{{\mathbb T}}^d}\frac{u(x)-u(x-y)dy}{|y+2k\pi|^{d+\alpha}}dx,$$ we have that $$\begin{aligned} \int_{{{\mathbb T}}^d}u^s(x)\Lambda^{\alpha}u(x)dx&\geq \mathscr{C}_{d,\alpha}\int_{{{\mathbb T}}^d}\text{P.V.}\int_{{{\mathbb T}}^d}\frac{(u(x)-u(y))(u^s(x)-u^s(y))}{|x-y|^{d+\alpha}}dy dx\\ &\geq \frac{\mathscr{C}_{d,\alpha}}{\left(2\pi \sqrt{d}\right)^{d+\alpha}}\int_{{{\mathbb T}}^d}\int_{{{\mathbb T}}^d}(u(x)-u(y))(u^s(x)-u^s(y))dy dx\\ &\geq \frac{2\mathscr{C}_{d,\alpha}(2\pi)^d}{\left(2\pi \sqrt{d}\right)^{d+\alpha}}\|u\|_{L^{1+s}}^{1+s}-\frac{2\mathscr{C}_{d,\alpha}}{\left(2\pi \sqrt{d}\right)^{d+\alpha}}\left(\int_{{{\mathbb T}}^d}u(x)dx\right)\left(\int_{{{\mathbb T}}^d}u^s(x)dx\right).\end{aligned}$$ Then, we obtain that $$\mathscr{P}(d,\alpha)= \frac{2\mathscr{C}_{d,\alpha}}{\left(2\pi \right)^{\alpha}d^{\frac{d+\alpha}{2}}}.$$ In the case $d=\alpha=1$, we have that $$\Lambda u(x)=\frac{1}{4\pi}\text{P.V.}\int_{{\mathbb T}}\frac{u(x)-u(x-y)}{\sin^2(y/2)}dy.$$ Thus, repeating the argument using $\sin^2\leq 1$, we find that $$\mathscr{P}(1,1)=1.$$ Quite remarkably, the nonlocal character of the fractional Laplacian allows for pointwise estimates: \[lemaaux3\] Let $h\in C^2({{\mathbb T}}^d)$ be a function. Assume that $h(x^*):=\max_x h(x)>0$. Then, there exists two constants $\mathscr{M}_i(d,p,\alpha),$ $i=1,2$ such that either $$\mathscr{M}_1(d,p,\alpha)\|h\|_{L^p} \geq h(x^*),$$ or $$\Lambda^\alpha h(x^*)\geq \mathscr{M}_2(d,p,\alpha)\frac{h(x^*)^{1+\alpha p/d}}{\|h\|^{\alpha p/d}_{L^p}},$$ with $$\mathscr{M}_1(d,p,\alpha)=\left(\frac{\pi^{d/2}}{2^{1+p}}\int_{0}^\infty z^{d/2}e^{-z}dz\right)^{1/p}$$ and $$\mathscr{M}_2(d,p,\alpha)=\mathscr{C}_{d,\alpha}\frac{\left(\frac{\pi^{d/2}}{\int_{0}^\infty z^{d/2}e^{-z}dz}\right)^{1+\alpha/d}}{4\cdot 2^{\frac{(p+1)\alpha}{d}}}.$$ Furthermore, in the case $d=1=\alpha$, we have that $\mathscr{M}_i(1,1,1)$ can be taken as $$\mathscr{M}_1(1,1,1)=\frac{2}{\pi},\;\mathscr{M}_2(1,1,1)=\frac{1}{4\pi}.$$ Acknowledgements {#acknowledgements .unnumbered} ================ RGB was supported by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). [^1]: Observe that the Patlak-Keller-Segel equation is often written for unknowns $(u, -v)$
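As a purely numerical aside (not used anywhere in the proofs above), the Fourier-multiplier definition of $\Lambda^\alpha$ recalled in this appendix is straightforward to implement on a periodic grid. The following minimal Python sketch is illustrative only: the grid size and the test mode are arbitrary choices, and the check simply verifies that $\cos(3x)$ is an eigenfunction of $\Lambda^\alpha$ with eigenvalue $3^\alpha$ on the one-dimensional torus.

```python
import numpy as np

def fractional_laplacian_1d(u, alpha, L=2 * np.pi):
    """Periodic fractional Laplacian Lambda^alpha = (-Delta)^(alpha/2) on [0, L),
    implemented through its Fourier multiplier |k|^alpha."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers (integers when L = 2*pi)
    return np.fft.ifft(np.abs(k) ** alpha * np.fft.fft(u)).real

alpha = 1.0
x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
u = np.cos(3 * x)
error = np.max(np.abs(fractional_laplacian_1d(u, alpha) - 3 ** alpha * u))
print(f"eigenfunction test, max error: {error:.1e}")   # machine precision
```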
--- abstract: | Mean-field theory and scaling arguments are presented to model polyelectrolyte adsorption from semi-dilute solutions onto charged surfaces. Using numerical solutions of the mean-field equations, we show that adsorption exists only for highly charged polyelectrolytes in low salt solutions. Simple scaling laws for the width of the adsorbed layer and the amount of adsorbed polyelectrolyte are obtained. In other situations the polyelectrolyte chains will deplete from the surface. For fixed surface potential conditions, the salt concentration at the adsorption–depletion crossover scales as the product of the charged fraction of the polyelectrolyte $f$ and the surface potential, while for a fixed surface charge density, $\sigma$, it scales as $\sigma^{2/3}f^{2/3}$, in agreement with single-chain results.\ author: - Adi Shafir - David Andelman - 'Roland R. Netz' date: 'April 15, 2003' title: 'Adsorption and Depletion of Polyelectrolytes from Charged Surfaces\' --- Introduction {#Intro} ============ The phenomenon of adsorption of charged polymer chains (polyelectrolytes) to surfaces has generated a great deal of interest due to its numerous industrial applications and relevance to biological systems. The theoretical treatment is not yet well established because of the multitude of length scales involved, arising from different interactions: electrostatic interactions between monomers and counter-ions, excluded volume interactions and entropic considerations. Furthermore, when salt is added to the solution, the interplay between polyelectrolytes (PEs) and salt ions as well as the ion entropy has to be taken into account. The adsorption of PE chains onto charged surfaces has been addressed theoretically in several models in the past. They include among others: solutions of linearized mean-field equations [@Varoqui; @Varoquietal; @Chatellier; @Joanny; @Manghi; @Wiegel; @Muthukumar], numerical solutions of full mean-field equations [@Itamar; @Itamar1; @Itamar2], various scaling theories for single-chain adsorption [@Borisov; @Dobrynin], and formulation of a phenomenological criterion describing the adsorption–depletion transition from charged surfaces [@NetzJoanny; @review; @review1]. Other approaches employed multi-Stern layer models [@vanderSchee; @vandestig; @Bohmer], where a discrete lattice is used and each lattice site can be occupied by either a monomer, a solvent or a small ion. The electrostatic potential can then be calculated self-consistently together with the concentrations of the monomers and counterions. In this article we re-examine the mean-field equations describing the PE adsorption and their numerical solutions, with specific emphasis on the adsorption–depletion transition. The present paper can be regarded as an extension of Ref. [@Itamar]. It agrees with the previously obtained low-salt adsorption regime but proposes a different interpretation of the high-salt regime. We find that the high-salt adsorption regime of Ref. [@Itamar] is pre-empted by an adsorption–depletion transition, in analogy with single-chain results. The mean-field equations and their numerical solutions are formulated in Sec. \[EQEQ\], some simple scaling relationships in Sec. \[SCale\], and the adsorption–depletion transition in Sec. \[Phaset\]. A general discussion and comparison with other models are presented in Sec. \[comparison\]. 
The Mean Field Equations and Their Numerical Solution {#EQEQ} ===================================================== Consider an aqueous solution of infinitely long PEs, together with their counterions and an added amount of salt. Throughout this paper we assume that both the salt ions and counterions are monovalent. Let ${\phi}({\bf r})=\sqrt{c({\bf r})}$ be the square root of $c({\bf r})$, the local monomer concentration, $a$ the monomer size and $f$ the charge fraction on each PE chain. Also let $\phi_{\rm b}=\sqrt{c_{\rm b}}$ be the square root of the bulk monomer concentration $c_{\rm b}$, and $\psi({\bf r})$ the electrostatic potential. The mean-field free energy can be obtained either from phenomenological or field theoretical approaches: $$F= \int {\rm d} {\bf {r}} \left(f_{\rm pol}+f_{\rm ion}+ f_{\rm el}\right) \label{F}$$ $$\begin{aligned} f_{\rm pol}= k_{\rm B} T\left(\frac{a^2}{6}\left|\nabla \phi \right|^2+{\frac{1}{2}}v \left(\phi^4-\phi_{\rm b}^4\right)-\mu \left(\phi^2-\phi_{\rm b}^2\right) \right) \label{fpol}\end{aligned}$$ $$\begin{aligned} f_{\rm ion} = k_{\rm B} T \sum_{i=+,-}\left[c^i\ln c^i-c^i-\mu^i\left (c^i-c_{\rm b}^i\right)\right] \label{fion} \\ f_{\rm el}=\left(c^+ - c^- +f\phi^2\right)e\psi- \frac{ \epsilon}{8\pi}\left|\nabla \psi\right|^2 ~. \label{fel}\end{aligned}$$ While the full details can be found in Refs. [@Itamar; @Itamar1; @Itamar2], here we just briefly explain each of the terms. The first term of $f_{\rm{pol}}$ accounts for chain elasticity, the second describes the excluded volume interaction between monomers, where $\it{v}$ is the second virial coefficient. The third accounts for the coupling with a reservoir with bulk polymer concentration $\phi_{\rm b}^2=c_{\rm b}$ and chemical potential $\mu$. The $f_{\rm{ion}}$ contribution to the free energy takes into account the entropy of small ions and their chemical potential $\mu^\pm$. Lastly, $f_{\rm el}$ is the electrostatic free energy. Its first term is the interaction energy between the electrostatic potential and the charged objects; namely, the small ions and monomers. The last term is the self-energy of the electric field $-\frac{\varepsilon}{8\pi}\int \rm{d{\bf r}}\left|\nabla \psi\right|^2$. Minimizing the free energy with respect to $\psi$, $\phi$, $c^+$, $c^-$, and using the bulk boundary conditions: $\psi \left(x\rightarrow \infty\right)=0$, $\phi \left(x\rightarrow \infty\right) = \phi_{\rm b}$, $c_{\rm b}^-=c_{\rm{salt}}+f\phi_{\rm b}^2$ and $c_{\rm b}^+=c_{\rm{salt}}$, the profile equations of Ref. [@Itamar] are reproduced: $$c^-=\left(c_{\rm{salt}}+f\phi_{\rm b}^2\right){ {\rm e} }^{\beta e \psi} \label{cpluseq}$$ $$c^+=c_{\rm{salt}}{ {\rm e} }^{-\beta e \psi} \label{cminuseq}$$ $$\nabla^2\psi= \frac{8\pi e c_{\rm{salt}}}{\varepsilon} \sinh {\beta e\psi} + \frac {4\pi e}{\varepsilon}\left(\phi_{\rm b}^2 f{ {\rm e} }^{\beta e\psi}- f\phi^2\right) \label{PoissonBoltzmanmod}$$ $$\frac{a^2}{6}\nabla^2\phi={\it{v}}\left(\phi^3-\phi_{\rm b}^2\phi\right)+\beta f e\psi\phi ~, \label{Edwardsmod}$$ where $\beta=1/k_{\rm B}T$ is the inverse of the thermal energy $k_B T$. Equations (\[cpluseq\]) and (\[cminuseq\]) show that the small ions obey Boltzmann statistics, while Eq. (\[PoissonBoltzmanmod\]) is the Poisson equation where the salt ions, counterions and monomers can be regarded as the sources of the electrostatic potential. 
Equation (\[Edwardsmod\]) is the mean-field (Edwards) equation for the polymer order parameter $\phi({\bf r})$, taking into account the excluded volume interaction and external electrostatic potential $\psi({\bf r})$. The adsorption onto a flat, homogeneous and charged surface placed at $x=0$ depends only on the distance $x$ from the surface. In this case the above equations can be reduced to two coupled ordinary differential equations. Defining dimensionless variables $\eta \equiv \phi/\phi_{\rm b}$ and $y\equiv \beta e \psi$, Eqs. (\[PoissonBoltzmanmod\]) and (\[Edwardsmod\]) then read: $$\frac{{\rm d}^2y}{{\rm d}x^2}=\kappa^2\sinh y +k_m^2 \left({ {\rm e} }^y- \eta^2 \right)\\ \label{normPBmod}$$ $$\frac{a^2}{6}\frac{{\rm d}^2\eta}{{\rm d}x^2}=v\phi_{\rm b}^2 \left(\eta^3-\eta \right)+f y \eta ~, \label{normEdwardsmod}$$ where $\kappa^{-1}=\left(8\pi l_{\rm B} c_{\rm{salt}}\right)^{-1/2}$ is the Debye-Hückel screening length, determining the exponential decay of the potential due to the added salt. Similarly, $k_m^{-1}=\left(4\pi l_{\rm B} \phi_{\rm b}^2 f\right)^{-1/2}$ determines the exponential decay due to the counterions. The Bjerrum length is defined as $l_{\rm B}=e^2 / \varepsilon k_{\rm B} T$. For water with dielectric constant $\varepsilon=80$, at room temperature, $l_{\rm B}$ is equal to about $7$Å. Note that the actual decay of the electrostatic potential is determined by a combination of salt, counterions, and polymer screening effects. The solution of Eqs. (\[normPBmod\]) and (\[normEdwardsmod\]) requires four boundary conditions. Two of them are the boundary values in the bulk, $x\rightarrow \infty$: $\eta \left(x\rightarrow\infty\right)=1$ and $ y\left(x\rightarrow\infty\right)=0$, while the other two are the boundary conditions on the $x=0$ surface. In this article we use either constant surface charge density (Neumann boundary conditions) or constant surface potential (Dirichlet boundary conditions). For the former, ${\rm d}y/{\rm d}x\vert_{x=0}=-4\pi\sigma e/\varepsilon k_{\rm B} T=-4\pi(\sigma/e)l_{\rm B}$, where $\sigma$ is the surface charge density. For the latter, the surface potential is held fixed with a value: $y\left(0\right)=y_s$. The other boundary condition for the polymer concentration $\phi$ is taken as a non-adsorbing surface. Namely, $\phi\left(0\right)=0$. Note that far from the surface, $x\rightarrow\infty$, Eqs. (\[normPBmod\]) and (\[normEdwardsmod\]) already satisfy the boundary condition: $y=0$ and $\phi=\phi_{\rm b}$ (or $\eta=1$). Equations (\[normPBmod\]) and (\[normEdwardsmod\]) are two coupled non-linear differential equations that do not have a known analytical solution. The numerical solutions of these equations for low salt conditions were presented in Ref. [@Itamar] and are reproduced here on Figs. 1 and 2, using a different numerical scheme. The numerical results have been obtained using the relaxation method [@NR] based on a linearization procedure done on a discrete one-dimensional grid. Then, the equations are transformed to a set of algebraic equations for each grid point. The sum of the absolute difference between RHS and LHS over all grid points is minimized iteratively until convergence of the numerical procedure is achieved. In calculating the numerical profiles of Figs. 1 and 2 we assume positively charged polymers and a constant negative surface potential. In Fig. 1a the reduced electrical potential $y=\beta e \psi$ profile is shown as a function of the distance from the $x=0$ surface. Similarly, in Fig. 
2a the monomer rescaled concentration profile $c(x)/c_{\rm b}$, is shown. In both figures a constant surface potential boundary condition is imposed. The different curves correspond to different surface potentials $y_s$, monomer charge fractions $f$ and monomer size $a$. From the numerical profiles of the electrostatic potential and monomer concentration it can be clearly seen that there is a distinct peak in both profiles. Although they do not occur exactly at the same distance from the surface, the corresponding peaks in Fig. 1 and 2 vary in a similar fashion with system parameters. The peak in the concentration (Fig. 2) marks a PE accumulation at the surface and is regarded as a signature of adsorption. The peak in the potential (Fig. 1) marks an over-compensation of surface charges. At the peak of $\psi(x)$, the electric field vanishes, $E=-{{\rm d}\psi}/{\rm d}x=0 $, meaning that the integrated charge density from the surface up to this distance exactly balances the surface charge. Scaling Estimate of the Adsorption Layer: Counterion only Case {#SCale} ============================================================== So far numerical solutions within mean-field theory, Eqs. (\[normPBmod\]) and (\[normEdwardsmod\]), have been described. We proceed by presenting simplified scaling arguments, which are in agreement with the numerical mean-field results. Note that the treatment here does not capture any correlation effect which goes beyond mean-field. The concept of polymer “blobs" can be useful in order to describe PE adsorption, where such polymer blob can be regarded as a macro-ion adsorbing on a charge surface. The blob size is determined by taking into account the polymer connectivity and entropy as well as the interaction with the charged surface. A single layer of adsorbing blobs is assumed instead of the full continuous PE profile as obtained from the mean-field equations. Therefore, the blob size characterizes the adsorption layer thickness. Fixed Surface Charge Density {#sigmaads} ---------------------------- The two largest contributions to the PE adsorption free energy are the electrostatic attraction with the surface and the chain entropy loss due to blob formation. For simplicity, the electrostatic attraction of the monomers with the surface is assumed to be larger than the monomer excluded volume and monomer-monomer electrostatic repulsion. With this assumption the chain has a Gaussian behavior inside each surface blob, $D\sim a g^{1/2}$, where $g$ is the number of monomers in a blob of size $D$, as is shown schematically in Fig. 3. The entropy loss of the chain balances the surface-monomer attraction. As a result the blob attraction with the surface is of order $k_B T$. It is now easy to get an estimate of the blob size $D$: $$\begin{aligned} f g \frac{{\tilde{\sigma}}}{\varepsilon} e^2 D \simeq k_{\rm B} T \label{gsigmaeq} \\ D\simeq ag^{1/2} \simeq \left(\frac{a^2}{l_{\rm B} f {\tilde{\sigma}}}\right)^{1/3} \label{Dscharge} \\ g \simeq \left(l_{\rm B} f a {\tilde{\sigma}}\right)^{-2/3}~, \label{gscharge}\end{aligned}$$ using a rescaled surface density ${\tilde{\sigma}}\equiv \left|{\sigma}/{e}\right|$. These results are in agreement with those describing the statistics of single-chain adsorption [@Wiegel; @Borisov]. 
The assumption that the electrostatic attraction to the surface is larger than the monomer-monomer electrostatic repulsion and excluded volume can now be checked self-consistently, yielding two conditions: $ {\tilde{\sigma}}\gg fa^{-2}$ and $ f \gg {\it v}^3/(a^{10}{\tilde{\sigma}}l_B)$. The average monomer concentration (per unit volume) in the adsorption layer $c_m$, is the blob concentration in the adsorption layer $n_0$, times the number of monomers per blob $g$, yielding $c_m=n_0 g$. It is now possible to get an estimate of the blob concentration per unit volume in the adsorption layer, $n_0$, by assuming that the adsorbed layer neutralizes the surface charges up to a numerical prefactor of order unity. Hence, $n_0\simeq {\tilde{\sigma}}/Dfg$. This assumption, which is in agreement with our numerical solutions, leads to $$\begin{aligned} n_0& \simeq& l_{\rm B}{\tilde{\sigma}}^2 \label{n0sig} \\ c_m &=&n_0 g \simeq (l_{\rm B} a^{-2}f^{-2}{\tilde{\sigma}}^{4})^{1/3} \sim {\tilde{\sigma}}^{4/3}f^{-2/3} ~. \label{cmscharge}\end{aligned}$$ Equation (\[n0sig\]) is just the Graham equation [@Israelachvili] relating the surface charge density with the counterion density at the surface vicinity. The only difference is that the counterions are replaced by the charged polymer blobs. Furthermore, Eq. (\[cmscharge\]) is in accord with the results of Ref [@review]. The total amount of PEs in the adsorption layer is $ \Gamma \simeq c_m D ={{\tilde{\sigma}}}/{f}$. In other words, the overall polymer charge in the adsorption layer (up to a numerical prefactor) is $ f\Gamma \simeq {\tilde{\sigma}}$. This is just another way to phrase the charge neutralization by the PEs mentioned above. Fixed Surface Potential {#ysads} ----------------------- Using the boundary condition of a fixed surface potential $\psi=\psi_s$, the scaling laws for $D$ and $g$ can be obtained in a similar fashion as was done in Sec. \[sigmaads\]. Alternatively, one can (in the absence of salt) relate the surface potential to the surface charge density by $\psi_s\simeq \sigma D/\varepsilon$. The adsorption energy of a blob of charge $gfe$ onto a surface held at potential $\psi_s$ is just $gfe\psi_s$. Requiring that this energy is of order of $k_BT$ we obtain in analogy to Eqs. (\[gsigmaeq\])-(\[gscharge\]): $$\begin{aligned} gfe\left|\psi_s\right| \simeq k_{\rm B} T \label{gpsieq} \\ D \simeq ag^{{\frac{1}{2}}}\simeq \frac{a}{\sqrt{f {\left|y_s\right|}}} \label{Dys}\\ g \simeq \frac{1}{f{\left|y_s\right|}} \label{gys}\end{aligned}$$ Together with the neutralization condition $n_0\simeq {\tilde{\sigma}}/(D f g)$ it yields: $$\begin{aligned} n_0 \simeq \frac{f{\left|y_s\right|}^3}{l_{\rm B} a^2} \label{n02} \\ c_m= n_0 g \simeq \frac{{\left|y_s\right|}^2}{l_{\rm B} a^2}. \label{cmys}\end{aligned}$$ Note that the above results are in accord with the ones previously derived in Ref. [@Itamar]. Just like in Sec. III.A the self-consistent check can be repeated here for the dominance of the surface-monomer interactions, yielding ${\left|y_s\right|}\gg f^{1/3}l_B^{2/3}a^{-2/3}$ and $f \gg {\it{v}^2}/{a^6 {\left|y_s\right|}}$. This condition has been verified, in addition, by examining numerically the mean-field adsorbing profiles. 
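For orientation, these scaling estimates are trivial to evaluate numerically; the short sketch below simply tabulates Eqs. (\[Dys\])–(\[cmys\]) with all order-unity prefactors dropped, for parameter values matching the dash-dot curve of Figs. 1 and 2 ($a=5$ Å, $f=0.1$, $|y_s|=0.5$, $l_{\rm B}=7$ Å). It is meant only as a numerical illustration of the blob picture, not as part of the mean-field calculation, so only the orders of magnitude are meaningful.

```python
import numpy as np

# Illustrative parameters (dash-dot curve of Figs. 1-2); lengths in Angstrom.
a, l_B = 5.0, 7.0
f, y_s = 0.1, 0.5          # charge fraction and |y_s|

# Blob estimates for a fixed surface potential, order-unity prefactors dropped.
D   = a / np.sqrt(f * y_s)            # adsorption-layer thickness, Eq. (Dys)
g   = 1.0 / (f * y_s)                 # monomers per blob, Eq. (gys)
n0  = f * y_s ** 3 / (l_B * a ** 2)   # blob density in the layer, Eq. (n02)
c_m = y_s ** 2 / (l_B * a ** 2)       # monomer concentration, Eq. (cmys)

print(f"D ~ {D:.0f} A, g ~ {g:.0f}, n0 ~ {n0:.1e} A^-3, c_m ~ {c_m:.1e} A^-3")
```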
The overall charge of the polymer in the adsorbed layer is then: $$\begin{aligned} \Gamma \simeq c_m D \simeq {\left|y_s\right|}^{3/2}f^{-1/2} l_{\rm B}^{-1}a^{-1} \sim {\left|y_s\right|}^{3/2}f^{-1/2} \label{gammays}\\ f\Gamma \simeq {\left|y_s\right|}^{3/2}f^{1/2} l_{\rm B}^{-1} a^{-1}\simeq \frac{{\left|y_s\right|}}{l_{\rm B} D} \simeq \left|\frac{d\psi}{dx}\right|_{x=0} \simeq {\tilde{\sigma}}\label{delsigsig}\end{aligned}$$ which again verifies that the adsorbed amount scales like the surface charge. The numerical results of the mean-field equations for a constant surface potential $y_s$ in the low salt regime ($c_{\rm salt}=0.1\,$mM) are consistent with this scaling picture, as can be seen in Figs. 1 and 2. In Fig. 1b the rescaled potential $y/|y_s|$ is plotted in terms of a rescaled distance: $x/D$, with $D$ taken from Eq. (\[Dys\]). In Fig. 2b the concentration profile is rescaled by $c_m$, Eq. (\[cmys\]), and plotted in terms of the same rescaled distance $x/D$. The figures clearly show a data collapse of the two profiles, indicating that the characteristic adsorption length $D$ is indeed given by the scaling predictions. Note that the agreement with the scaling argument holds as long as the system stays in the low salt limit. The other limit of high salt is discussed next. The Adsorption–Depletion Transition in the Presence of Added Salt {#Phaset} =============================================================== The same numerical procedure outlined in Sec. II is used to find when the chains stop adsorbing and instead deplete from the surface. This is not a sharp transition but rather a crossover, which is seen by calculating numerically the PE surface excess, as depicted in Fig. 4. The profiles were obtained by solving numerically the differential equations for several values of $f$ near the adsorption–depletion transition using a fixed surface potential boundary condition. For a salt concentration of about $c_{\rm salt}^* \simeq 0.16 {\left|y_s\right|}f/(l_{\rm B}a^2)$ (solid line in Fig. 4), the figure shows the disappearance of the concentration peak, namely a depletion–adsorption crossover. The dependence of $\Gamma=\int_0^\infty {\rm d}x \left(\phi^2-\phi_b^2\right)$ on $c_{\rm salt}$ and $f$ for constant surface potential is presented in Fig. 5. The point where $\Gamma=0$ indicates an adsorption–depletion transition, separating positive $\Gamma$ in the adsorption regime from negative values in the depletion regime. In Fig. 5a the dependence of $\Gamma$ on $f$ is shown for several salt concentrations ranging from low- to high-salt conditions. For low enough $f$, $\Gamma<0$ indicates depletion. As $f$ increases, a crossover to the adsorption region, $\Gamma>0$, is seen. In the adsorption region, a peak in $\Gamma(f)$ signals the maximum adsorption amount at constant $c_{\rm salt}$. As $f$ increases further, beyond the peak, $\Gamma$ decreases as $1/\sqrt{f}$. Looking at the variation of $\Gamma$ with salt, as $c_{\rm salt}$ increases, the peak in $\Gamma(f)$ decreases and shifts to higher values of $f$. For a very large amount of salt, [*e.g.,*]{} $c_{\rm salt}=0.5$M, the peak occurs in the limit $f\to 1$. In Fig. 5b, we plot $\Gamma(c_{\rm salt})$ for several $f$ values. The adsorption regime crosses over to depletion quite sharply as $c_{\rm salt}$ increases, signaling the adsorption–depletion transition. The salt concentration at the transition, $c_{\rm salt}^*$, increases with the charge fraction $f$. 
The dependence of $\Gamma$ on $c_{\rm salt}$ and $f$ for constant surface charge density is plotted in Fig. 6. Both salt and $f$ dependences show a similar behaviour to those shown in Fig. 5 for constant surface potential. The numerical phase diagrams supporting the adsorption–depletion transition are presented in Fig. 7 for constant surface charge conditions. The phase diagrams were obtained by solving numerically the mean-field equations. We scanned the $(f, c_{\rm salt})$ parameter plane for 50 values of $f$ between $0.01<f<1$ (Fig. 7a) and the $({\tilde{\sigma}}, c_{\rm salt})$ plane for 50 values of ${\tilde{\sigma}}=|\sigma/e|$ between $10^{-5}$\[Å$^{-2}$\] $< {\tilde{\sigma}}< 10^{-4}$\[Å$^{-2}$\] (Fig. 7b). From the log-log plots it can be seen that the adsorption–depletion transition is described extremely well by a line of slope $2/3$ in both Fig. 7a and 7b. Namely, at the transition $c_{\rm salt}^*\sim f^{2/3}$ for fixed ${\tilde{\sigma}}$ and $c_{\rm salt}^*\sim {\tilde{\sigma}}^{2/3}$ for fixed $f$. To complete the picture, the adsorption–depletion transition is also presented in Fig. 8 for constant surface potential. The phase diagrams are obtained by solving numerically the differential equations. We scanned the $(f, c_{\rm salt})$ parameter plane for 50 values of $f$ between $0.01<f<1$ (Fig. 8a) and the $({\left|y_s\right|}, c_{\rm salt})$ plane for 50 values of ${\left|y_s\right|}$ between $0.1<{\left|y_s\right|}<1.0$ (Fig. 8b). From the figure it is apparent that the adsorption–depletion transition line fits quite well a line of slope 1.0 in both Fig. 8a and 8b plotted on a log-log scale. Namely, $c_{\rm salt}^*\sim f$ for fixed $y_{\rm s}$, and $c_{\rm salt}^*\sim y_{\rm s}$ for fixed $f$. These scaling forms of $c_{\rm salt}^*$ at the transition can be explained using the simplified scaling arguments introduced in Sec. III. Scaling for Fixed Surface Charge {#sigdepsubsec} -------------------------------- If the blobs are taken as charged spheres, the mere existence of an adsorption process requires that the attraction of the monomers to the surface persists for all charges up to distances $D$ from the charged surface. For high ionic strength solutions, the electrostatic potential at distance $x$ for a charged surface can be approximated by the linearized Debye-Hückel potential: $$y\left(x\right)= 4\pi {\tilde{\sigma}}l_{\rm B} \kappa^{-1} { {\rm e} }^{-\kappa x}~. \label{schargekappa}$$ This is valid as long as the potential is low enough, $y\le 1$. The adsorption picture requires that the exponential decay of the potential will not vary substantially inside a region of size $D$ comparable to the size of surface blobs, $y(D)\simeq y_s$. Then, the exponential decay in Eq. (\[schargekappa\]) yields $$\kappa D < 1 ~. \label{depletion1}$$ Namely, the Debye-Hückel screening length is smaller than the adsorption layer thickness, $D$. Using Eq. (\[Dscharge\]) this yields: $$c_{\rm{salt}} < {\tilde{\sigma}}^{2/3} f^{2/3}l_{\rm B}^{-1/3}a^{-4/3} \label{depletion_condition_scharge}$$ The crossover between adsorption and depletion will occur when $c_{\rm{salt}}^* \simeq ({\tilde{\sigma}}^{2} f^{2}l_{\rm B}^{-1}a^{-4})^{1/3} $, in accord with Refs. [@Wiegel; @Muthukumar; @Dobrynin], and with the numerical results discussed above and presented in Fig. 7. 
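As a rough numerical illustration of this criterion (the scaling argument leaves the order-unity prefactor undetermined, so only the magnitude and the $f$-dependence are meaningful), the crossover salt concentration can be evaluated for the surface charge density used in Fig. 6:

```python
import numpy as np

N_A = 6.022e23                       # Avogadro number (1/mol)
A3_to_mM = 1e24 / (N_A / 1e3) * 1e3  # number density in A^-3  ->  mmol/L

a, l_B = 5.0, 7.0                    # monomer size and Bjerrum length (Angstrom)
sigma = 1e-4                         # |sigma/e| in A^-2, as in Fig. 6

for f in (0.1, 0.2, 0.45, 1.0):
    # c_salt* ~ (sigma^2 f^2 / (l_B a^4))^(1/3), prefactor of order unity omitted
    c_star = (sigma**2 * f**2 / (l_B * a**4)) ** (1.0 / 3.0)
    print(f"f = {f:4.2f}:  c_salt* ~ {c_star * A3_to_mM:5.0f} mM")
```

The resulting values fall in the tens to hundreds of millimolar range for $f$ between 0.1 and 1, the same order of magnitude as the crossovers seen in Fig. 6, as expected for an estimate that ignores numerical prefactors.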
Scaling for Fixed Surface Potential {#ysdepsubsec} ----------------------------------- For the boundary condition, $\psi=\psi_s$, the potential decay from the surface can be approximated to be: $$y\left(x\right) = y_s { {\rm e} }^{-\kappa x} \\ \label{yskappa}$$ and the same consideration as in Eq. (\[depletion1\]) and (\[depletion\_condition\_scharge\]) gives: $$c_{\rm{salt}} < \frac{{\left|y_s\right|}f}{l_{\rm B} a^2}. \label{depletion_condition_ys}$$ Namely, we expect an adsorption–depletion transition to occur for $c_{\rm salt}^* \simeq {\left|y_s\right|}f/(l_{\rm B}a^2)$, in the case of a fixed surface potential. This supports the numerical results as presented in Fig. 8. Discussion {#comparison} ========== We have presented numerical calculations of the mean-field equations describing the adsorption of PE chains onto charged surfaces, including multi-chain interactions. The main finding is the existence of an adsorption–depletion transition in presence of added salt or weakly charged chains. The numerical results are discussed in terms of simple scaling arguments describing the adsorption of PEs. The salt concentration at the adsorption–depletion transition scales like $c_{\rm salt}^* \sim f{\left|y_s\right|}$ for fixed surface potential and $c_{\rm salt}^* \sim \left(f{\tilde{\sigma}}\right)^{2/3}$ for fixed surface charge density. Within the scaling picture, the condition for depletion is the same as for a single chain, in agreement with our mean-field solutions. We briefly summarize the main approximations of our mean-field and scaling results. A non-adsorbing surface is used as the polymer boundary condition. However, if the surface has a strong non-electrostatic affinity for the PE chains, the electrostatic contribution does not have to be the dominant one. The method also assumes Gaussian blobs within mean-field theory. In a more refined theory, excluded volume interactions as well as lateral correlation in the blob-blob interactions will alter the adsorption behavior. When the surface charge (or potential) is high enough, the blob size $D$ can become comparable with the monomer size $a$, and the PE chains will lay flat on the surface. Further investigations might be necessary to address in more detail the above points. It will also be interesting to extend our results to geometries other than the planar charged surface. Several authors have addressed the problem of adsorption onto surfaces either of a single chain [@Borisov] or multiple chains [@Dobrynin] using similar arguments of blobs. In another approach, a Flory-like free energy [@Itamar] was introduced using the assumption of a single characteristic length scale. The latter gave adsorption-layer scaling laws as in Eqs. (\[Dys\]) and (\[cmys\]), but did not find the depletion criterion. Instead, an adsorption length scale and a characteristic concentration were predicted for the high-salt regime. We show here, using both numerical calculations and scaling arguments, that the high-salt regime does not exist because it is preempted by a PE depletion. 2truecm [*Acknowledgments.    *]{} It is a pleasure to thank G. Ariel, I. Borukhov, Y. Burak, E. Katzav and H. Orland for useful discussions and comments. DA acknowledges support from the Israel Science Foundation under grant No. [210/02]{} and the Alexander von Humboldt Foundation. RRN acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German-French Network) and the Fonds der Chemischen Industrie. [99]{} R. Varoqui, J. Phys. II (France) [**3**]{}, 1097 (1993). 
R. Varoqui, A. Johner, A. Elaissari, J. Chem. Phys. [**94**]{}, 6873 (1991). X. Chatellier and J.F. Joanny, J. Phys. II (France) [**6**]{}, 1669 (1996). J.F. Joanny, Eur. Phys. J. B. [**9**]{}, 117 (2000). M. Manghi and M. Aubouy, cond-mat/0202045 preprint. F.W. Wiegel, J. Phys A: Math. Gen. [**10**]{}, 299 (1977). M. Muthukumar, J. Chem. Phys. [**86**]{}, 7230 (1987). I. Borukhov, D. Andelman, H. Orland, Macromolecules [**31**]{}, 1665 (1998); Europhys. Lett. [**32**]{}, 499 (1995). I. Borukhov, D. Andelman, H. Orland, [Eur. Phys. J. B]{} [**5**]{}, 869 (1998). I. Borukhov, D. Andelman, H. Orland, J. Phys. Chem. B [**103**]{}, 5042 (1999). O.V. Borisov, E.B. Zhulina, T.M. Birshtein, J. Phys. II (France) [**4**]{}, 913 (1994). A.V. Dobrynin, A. Deshkovski, M. Rubinstein, Macromolecules [**34**]{}, 3421 (2001). R.R. Netz and J.F. Joanny, Macromolecules [**32**]{}, 9013 (1999). R.R. Netz and D. Andelman, [*Neutral and Charged Polymers at Interfaces*]{}, to be published, Phys. Rep., 2003. R.R. Netz and D. Andelman, in: [*Encyclopedia of Electrochemistry*]{}, Eds. M. Urbakh and E. Giladi, Vol. I, (Wiley-VCH, Weinheim, 2002). H.A. van der Schee and J. Lyklema, J. Phys. Chem. [**88**]{}, 6661 (1984). H.G.M. van de Steeg, M.A. Cohen Stuart, A. de Keizer, B.H. Bijsterbosch, Langmuir [**8**]{}, 2538 (1992). M.R. Böhmer, O.A. Evers, J.M.H.M. Scheutjens, Macromolecules [**23**]{}, 2288 (1990). W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, [*Numerical Recipes in C: The Art of Scientific Computing*]{}, (Cambridge University, Cambridge 1992), Chap. 17, p. 762. D. Andelman, in [*“Handbook of Biological Physics: Structure and Dynamics of Membranes"*]{}, Vol. 1B, edited by R. Lipowsky and E. Sackmann, (Elsevier Science B.V., Amsterdam, 1995), Chap. 12, p. 603. J.N. Israelachvili, [*Intermolecular and Surface Forces*]{}, (Academic Press, London, 1992). Figure Captions {#figure-captions .unnumbered} =============== - \(a) Numerical profiles of the rescaled electrostatic potential $y=\beta e \psi$ as function of the distance from the surface $x$ using Eqs. (\[normPBmod\]) and (\[normEdwardsmod\]), and constant surface potential. The solid line is for $a=5$Å, $f=1, y_s=-1.0$, the dotted line for $a=5$Å, $f=1, y_s=-0.5$, the dashed line is for $a=10$Å, $f=1, y_s=-0.5$, and the dash-dot line for $a=5$Å, $f=0.1, y_s=-0.5$. All profiles have $c_{\rm salt}=0.1$[mM]{}, $\phi_{\rm b}^2=10^{-6}$ Å$^{-3}$, $v=50$Å$^3$, $\varepsilon =80,\, T=300$K. The profiles reproduce those of Ref. [@Itamar] using a different numerical scheme. (b) Same profiles as in part (a) but in rescaled variables: $x/D$ and $y/{\left|y_s\right|}$. - \(a) The concentration profile $c(x)/c_b=\phi^2(x)/\phi_{\rm b}^2$ for the numerical calculations specified in Fig. 1. The profiles reproduce those of Ref. [@Itamar]. (b) Same as in part (a) but in rescaled variables: $x/D$ and $c(x)/c_m$. - A schematic drawing of polyelectrolyte adsorption onto flat surfaces and formation of Gaussian surface blobs each of size $D$ and having $g$ monomers. The monomer size is $a$. - Numerical polyelectrolyte concentration profiles exhibiting the transition from adsorption to depletion. The dashed line corresponds to $f=0.12$, the dot–dash line to $f=0.1$, the solid line to $f=0.09$, and the dotted line to $f=0.08$. All profiles have ${\left|y_s\right|}=0.5$, $\phi_b^2=10^{-6}$Å$^{-3}$, $v=50$Å$^3$, $a=5$Å, $c_{\rm salt}=70\,$mM. 
The adsorption–depletion transition is found to occur for $f=0.09$, corresponding to $c_{\rm salt}^*\simeq 0.16{{\left|y_s\right|}f}/{l_B a^2}$. - \(a) Surface excess of PE adsorption, $\Gamma$, as a function of the chain charged fraction $f$, for several salt concentrations: 1.0mM (solid line), 10mM (dashed line), 0.1M (dash-dot line), 0.5M (dots), and for constant surface potential. As the salt concentration increases, the peak in $\Gamma$ shifts to higher $f$ values and disappears for $c_{\rm salt}=0.5$M. The depletion-adsorption transition occurs for $\Gamma=0$. (b) Surface excess as a function of salt concentration, $c_{\rm salt}$, for several $f$ values: f=0.03 (dots), 0.1 (dashes), 0.3 (dot-dash), 1.0 (solid line). $\Gamma$ is almost independent of $c_{\rm salt}$ for low salt concentrations in the adsorption region. It is then followed by a steep descent into a depletion region at a threshold value. Other parameters used are: $y_s=-1.0$, ${\it v}=50$Å$^3$, $\phi_b^2=10^{-6}$Å$^{-3}$, $a=5$Å, $T=300$K and $\varepsilon=80$. - \(a) Surface excess of PE adsorption, $\Gamma$, as a function of the chain charged fraction $f$, for several salt concentrations: 4.0mM (solid line), 8.0mM (dashed line), 21mM (dash-dot line), 63mM (dots), and for constant surface charge density. As the salt concentration increases, the peak in $\Gamma$ shifts to higher $f$ values and disappears for $c_{\rm salt}=63$mM. The depletion-adsorption transition occurs for $\Gamma=0$. (b) Surface excess as a function of salt concentration, $c_{\rm salt}$, for several $f$ values: f=0.1 (dots), 0.2 (dashes), 0.45 (dot-dash), 1.0 (solid line). $\Gamma$ is almost independent of $c_{\rm salt}$ for low salt concentrations in the adsorption region. It is then followed by a steep descent into a depletion region at a threshold value. Other parameters used are: $\sigma/e=-10^{-4}$Å$^{-2}$, ${\it v}=50$Å$^3$, $\phi_b^2=10^{-6}$Å$^{-3}$, $a=5$Å, $T=300$K and $\varepsilon=80$. - Numerically calculated adsorption–depletion crossover diagram for constant surface charge conditions. In (a) the $(f,c_{\rm salt})$ parameter plane is shown on a log-log scale while ${\tilde{\sigma}}=|\sigma/e|$ is held constant at ${\tilde{\sigma}}=10^{-3}$Å$^{-2}$. The full squares represent the lowest salt concentration for which depletion is detected. The least-mean-square fit to the data points gives a straight line with a slope of $0.69\pm 0.02$. The figure shows that the numerical results agree with a $2/3$ power law as predicted in Sec. IV.A, $c_{\rm salt}^* \sim f^{2/3}$. In (b) the crossover diagram is calculated numerically in the $(|\sigma/e|,c_{\rm salt})$ parameter plane on a log-log scale, while $f$ is fixed to be $f=0.1$. The least-mean-square line has a slope of $0.71\pm0.02$, showing that the numerical results agree with a $2/3$ power law as predicted in Sec. IV.A, $c_{\rm salt}^* \sim \sigma^{2/3}$. - Numerically calculated crossover diagram on a log-log scale for constant surface potential conditions. Notations and symbols are the same as in Fig. 6. In (a) the $(f,c_{\rm salt})$ parameter plane is presented for constant $y_{\rm s}=-1.0$. The least-mean-square fit has a slope of $1.00 \pm 0.02$, in excellent agreement with the scaling arguments, $c_{\rm salt}^*\sim f$. In (b) the $({\left|y_s\right|},c_{\rm salt})$ parameter plane is presented, for constant $f=0.1$. The least-mean-square fit has a slope of $1.04 \pm 0.02$, in agreement with scaling arguments, $c_{\rm salt}^*\sim {\left|y_s\right|}$. 
--- abstract: 'We advance all optical spin noise spectroscopy (SNS) in semiconductors to detection bandwidths of several hundred gigahertz by employing an ingenious scheme of pulse trains from ultrafast laser oscillators as an optical probe. The ultrafast SNS technique avoids the need for optical pumping and enables nearly perturbation free measurements of extremely short spin dephasing times. We employ the technique to highly n-doped bulk GaAs where magnetic field dependent measurements show unexpected large g-factor fluctuations. Calculations suggest that such large g-factor fluctuations do not necessarily result from extrinsic sample variations but are intrinsically present in every doped semiconductor due to the stochastic nature of the dopant distribution.' author: - Fabian Berski - Hendrik Kuhn - 'Jan G. Lonnemann' - Jens Hübner - Michael Oestreich title: | Ultrahigh Bandwidth Spin Noise Spectroscopy:\ Detection of Large g-Factor Fluctuations in Highly n-Doped GaAs --- [^1] Spin noise spectroscopy (SNS) has proven itself as a well-developed experimental technique in semiconductor spin quantum-optronics [@Muller.PhysicaE.2010; @Dahbashi.APL.2012; @Crooker.PRB.2009]. The low perturbing nature of SNS makes the technique an ideal tool to study the unaltered long coherence times of electron spins in semiconductors [@Romer.PRB.2010] and semiconductor nanostructures [@Muller.PRL.2008; @Dahbashi.APL.2012]. However, short spin coherence times require a high detection bandwidth and thus the temporal capabilities of SNS are usually limited by the speed of the electro-optic conversion and subsequent signal processing. A first successful step to overcome the temporal limitation has been made by employing a single ultrafast laser oscillator as a stroboscopic optical sampling tool, which directly enabled spin noise measurements of frequencies up to several GHz, but with a fixed bandwidth of roughly 0.1 GHz [@Muller.PRB.2010]. In this letter we report the first experimental demonstration of spin noise spectroscopy with a full bandwidth that is increased by several orders of magnitude to over one hundred GHz which corresponds to spin dephasing times in the picosecond regime. Thereby, the presented SNS method is ideally suited for systems which intrinsically show a fast decay of spin coherence and are yet susceptible to optical excitation, like hole spin systems with a high degree of spin-orbit interaction [@Wu.PhysicsReports.2010], carrier systems at very low temperatures ($<$100 mK), or Bose-Einstein condensation of magnons [@Demokritov.Nature.2006]. In the following, we employ the technique of ultrafast SNS to highly n-doped bulk GaAs and find in the metallic regime large g-factor fluctuations. Calculations reveal that large g-factor fluctuations are an intrinsic bulk property of doped semiconductors. The effect results from the stochastic distribution of donor atoms and the imperfect local averaging of electrons due to their finite momentum and spin dephasing times [@Dzhioev.PRB.2002]. Ultrafast SNS asserts itself as the perfect tool to measure such an effect since it combines the necessary high temporal resolution, negligible disturbance of the system, and efficient averaging over very large sample volumes compared to other optical methods due to the below-bandgap detection. 
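Before describing the experiment, the pulse-pair detection scheme laid out in the following paragraphs can be illustrated with a small toy Monte-Carlo model; it is not a simulation of the actual setup, only of the statistics. Pairs of Faraday rotation angles are drawn as correlated Gaussian variables with the spin correlation $\propto\cos(\omega_L\Delta t)\,e^{-\Delta t/\tau_s}$, uncorrelated shot noise is added, and the variance of the pair sum, minus its delay-independent background, recovers the spin correlation function. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

tau_s   = 360e-12            # spin dephasing time (s), illustrative
omega_L = 2 * np.pi * 20e9   # Larmor angular frequency; ~20 GHz is roughly |g| = 0.24 at 6 T
sigma_spin, sigma_shot = 1.0, 3.0   # spin-noise and photon shot-noise amplitudes (arb. units)
n_pairs = 200_000            # pulse pairs per delay setting

def pair_sum_variance(dt):
    """Variance of theta(t_i) + theta(t_i + dt) for a stochastically precessing spin ensemble."""
    c = np.exp(-dt / tau_s) * np.cos(omega_L * dt)            # normalized spin correlation
    cov = sigma_spin**2 * np.array([[1.0, c], [c, 1.0]])
    theta = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)
    theta += rng.normal(0.0, sigma_shot, size=theta.shape)    # uncorrelated shot noise
    return np.var(theta.sum(axis=1))

delays = np.arange(0.0, 800e-12, 10e-12)
variance = np.array([pair_sum_variance(dt) for dt in delays])
background = 2 * (sigma_spin**2 + sigma_shot**2)              # delay-independent part
spin_corr = (variance - background) / 2                       # ~ sigma_spin^2 cos(w_L dt) exp(-dt/tau_s)
print(f"recovered correlation at dt = 0: {spin_corr[0]:.2f} (expected {sigma_spin**2:.2f})")
```

In the experiment the delay-independent background is of course not assumed known but measured and removed with the modulation scheme described below.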
The extended measurement principle of ultrafast SNS is based upon the repeated measurement of the correlated Faraday rotation signal $ \theta(t_{i}) \theta(t_{i}+\Delta t) $ of two ultrashort laser probe pulses with a temporal delay of $\Delta t$ [@Starosielec.APL.2008]. The point in time $t_{i}$ is arbitrary for every pulse pair due to the stochastic nature of the spin dynamics if the repetition period between two pulse pairs is much larger than the spin dephasing time. The average Faraday rotation signal $\left\langle \theta \right\rangle (\Delta t)$ vanishes if the non-magnetic sample is in thermal equilibrium. However, the variance $ \sigma _{\theta }^{2} (\Delta t)$ is not zero but is maximal for fully correlated Faraday rotation of the two laser pulses ($\Delta t=0$), decreases with increasing $\Delta t$ to a finite value due to spin dephasing, oscillates with $\Delta t$ in the presence of a transverse magnetic field $B$ due to Larmor precession of the electron spins, and approaches zero for anti-correlation. In this work the sampling pulses are delivered by two synchronized, ultrafast, picosecond laser oscillators with a common repetition rate of 80 MHz. The relative phase between the two emitted pulse trains is adjustable, so that pairs consisting of two laser pulses are formed with a temporal delay $\Delta t$ which can be conveniently tuned between a picosecond and a few nanoseconds. The correlated Faraday rotation signal of both pulses within a pulse pair is measured by a balanced detector which is so slow that it integrates over each pulse pair but is fast enough to distinguish two succeeding pulse pairs. In other words, the Faraday rotation signals $\theta (t)$ of the two pulses of a pulse pair are added up for $\Delta t < 12.5$ ns but the fluctuation from pulse pair to pulse pair is fully resolved. A rectification of the Faraday signal is implemented by taking the square of $\theta (t_i)+\theta (t_i+\Delta t)$ during the data acquisition. The experimental setup is depicted in Fig. \[fig:Aufbau\]. The two degenerate, linearly polarized laser pulses are combined in a polarization maintaining, single mode fiber to ensure a common beam profile in addition to identical pulse length, power, and wavelength. The blended laser light has an average power of 17 mW and is focussed to a spot diameter of about 50 $\mu$m onto the sample surface. After traversing the sample, the spin induced fluctuations of the linear polarization are analyzed by a polarization bridge given by a $1/2 \cdot \lambda $ waveplate for power balancing, a Wollaston prism, and a low noise, differential, optical photoreceiver with a 3 dB bandwidth of 150 MHz. The electrical output of the photoreceiver is passed through a low pass filter with a cut-off frequency of 70 MHz before being amplified in order to suppress any residual voltage peaks arising from the limited common noise rejection of the differential photoreceiver. Finally the filtered signal is digitized by a 180 MSample/s digitizer card and sent to a PC for further processing. The measured signal is not only composed of pure spin noise but also of residual background contributions, which are mainly caused by optical shot noise. The spin noise is extracted by using an electro optical modulator (EOM) before the polarization bridge, which acts either as $1/4 \cdot \lambda $ or as $0\cdot \lambda $ retarder with a square wave modulation of 4 kHz. 
For $0\cdot \lambda $ retardance, the EOM transmits the incoming polarization unchanged (spin noise is detected), but for $1/4\cdot \lambda $ retardance, every off-axes polarization component is transformed into elliptically polarized light and divided into two equal parts at the polarization bridge (no spin noise is detected; background only). This fast background acquisition strongly suppresses any parasitic fluctuations and yields extremely reliable data series. The measurement protocol is depicted in Fig. \[fig:Aufbau\]b: A single measurement window is 100 ms long. The start and end point are set in the presented measurement to 80 ps and 835 ps, respectively. The time delay is increased in 96 steps with step length of 1 ms. The exact time delay for each step has been verified with a calibrated streak camera system. During each step the EOM switches eight times between $1/4 \cdot \lambda $ and $0 \cdot \lambda $ retardance. The sample is Czochralski grown, bulk GaAs with a nominal n-doping concentration of $n_{d} = 8.2\cdot 10^{17}$ cm$^{-3} $. The doping concentration is about 40 times above the metal-to-insulator transition which guarantees that all valence electrons are in the metallic state. In lower doped samples, localized and free electrons can exist simultaneously and strong g-factor variations are consequently trivial. The sample thickness is $300\,\mu$m and both front and back surfaces are anti-reflection coated for optimized transmission. The high doping concentration yields a Fermi-level of $E_{F} =47.7$ meV above the conduction band minimum. The dominating spin dephasing mechanism in this regime is the Dyakonov-Perel [@Dyakonov.SPSS.1972] mechanism since the resulting energy dependent spin-splitting of the conduction band is large. Accordingly, we expect a very fast spin dephasing time on the order of some hundred picoseconds [@Dzhioev.PRB.2002]. Figure \[fig:spectrum\] shows the derivative of the measured (dots) spin correlation ${\frac{d}{dt}} \sigma _{\theta }^{2} $ as a function of the temporal pulse delay $\Delta t$. The derivative has been taken to suppress a slow varying background slope with $\Delta t$ which originates from the coupling of the two independent laser sources by a lock-to-clock system [^2]. Assuming a free induction decay of the free precessing electrons, $\cos(\omega_{L}t') e^{-t'/\tau_{s}}$, the derivative of the autocorrelation is given by: $$\frac{d}{dt} \sigma _{\theta }^{2} \propto \left\{\omega _{L} \tau _{s} \sin (\omega _{L} t)+2\cos (\omega _{L} t)\right\}e^{-t/\tau _{s} } \label{eq:ddtautocorr}$$ where $\omega _{L} =\hbar ^{-1} g^{*} B$ is the Larmor precession frequency with $g^{*} $ as effective electron g-factor and $\tau _{s} $ is the spin dephasing, i.e., spin correlation time. The red line in Fig. \[fig:spectrum\] is a fit with Eq. (\[eq:ddtautocorr\]) which matches with very high accuracy. Figure \[fig:B-Dependence\] shows the extracted dependence of spin dephasing time on the applied transverse magnetic field strength $B$. The spin dephasing time of $\tau _{s} \approx 360$ ps at vanishing magnetic field corresponds well to the expected spin dephasing time limited by the Dyakonov-Perel spin dephasing mechanism for the nominal doping concentration of the investigated sample [@Dzhioev.PRB.2002]. Surprisingly, the spin dephasing time decreases with increasing magnetic field due to a significant inhomogeneous spread of the electron Land[é]{} g-factor. The red line in Fig. 
\[fig:B-Dependence\] is a fit given by the inverse width $w_{v}$ of an approximated Voigt profile according to: $$\tau_{s} =\left(\pi \, w _{v} \right)^{-1} \approx \left(c_{0} \gamma _{h} +\sqrt{c_{1} \gamma _{h}^{2} +\gamma _{i}^{2} } \right)^{-1} \label{eq:voigt}$$ where $\gamma _{h} ,\gamma _{i} $ are the homogeneous and inhomogeneous spin dephasing rates, respectively. The factor $\pi $ arises from the width of the Fourier transformation of a mono-exponential decay which we adopt as a valid approximation for the data analysis [@Romer.RSI.2007]; $c_{0} $ and $c_{1} $ are constants [^3]. The inhomogeneous spin dephasing rate $\gamma _{i} $ is directly linked to the standard deviation of the g-factor spread $\sigma _{g} $ by $\gamma _{i} =\sigma _{g} \mu _{B} B/\hbar $. From the fit with Eq. (\[eq:voigt\]) we obtain a g-factor variation $\sigma _{g} =0.0032$ which is surprisingly large taking into account that all donor electrons are well in the metallic state. We will explain the possible origin of this phenomenon in the next paragraph. The inset of Fig. \[fig:B-Dependence\] depicts the dependence of the Larmor precession frequency on $B$. The relative measurement error of the Larmor frequency is smaller than $10^{-4}$ for $B \ge 2$ T while the absolute error is about $\pm 1$ % due to errors in the absolute calibration of $B$ and $\Delta t$. Please note that the demonstrated full bandwidth of about 120 GHz is only limited by the highest magnetic field of 6 T. The nearly perfect fit to a straight line yields the magnitude of the average free electron Land[é]{} g-factor, $|g^{*}| =0.241$. The negative sign is assigned from the relation $g^{*} =-0.48+\beta \cdot E_{F} $. We determine the factor $\beta $ which reflects the energy dependence of the Land[é]{} g-factor to $\beta \approx 5.1$ eV$^{-1}$ for this doping concentration and attribute the deviation from the commonly known factor of $6.3$ eV$^{-1}$ for slightly doped samples [@Hopkins.SST.1987; @Hubner.PRB.2009] to band gap renormalization arising from the high doping concentration. The deviation of $g^{\ast}$ from being constant is less than $10^{-3}$ T$^{-1}$, which is at least a factor of 5 lower than for low doped GaAs at low temperatures. Next, we discuss the origin of $\sigma_g$. Most interestingly, the measured g-factor variation in metallic bulk semiconductors can be attributed to an intrinsic contribution which arises from the pure thermodynamic distribution of dopant atoms in the material during growth. The Fermi–level is inherently constant over the entire sample, but the stochastic fluctuations of the dopant concentration give rise to local space charge densities [@PhysRevB.81.115332] which in turn shift the band structure with respect to the Fermi–level. To first approximation, an electron propagates in this local inhomogeneity undisturbed over an average distance $\overline r = v_f \cdot \tau_p /2$, where $\tau_p$ is the electron momentum scattering time and $v_f = \sqrt{2 E_F / m^*}$ is the Fermi–velocity. At low temperatures ionized impurity scattering is the main scattering mechanism for highly doped bulk semiconductors. 
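The magnetic-field analysis can likewise be sketched in a few lines: the model below implements the approximated Voigt width of Eq. (\[eq:voigt\]) with $\gamma_i=\sigma_g\mu_B B/\hbar$ and fits it to hypothetical $(B,\tau_s)$ data. It is meant only to show the structure of the fit, not to reproduce the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# tau_s(B) from homogeneous and inhomogeneous rates, Eq. (eq:voigt)
mu_B, hbar = 9.274e-24, 1.055e-34
c0, c1 = 0.5346, 0.2166                       # Voigt-width constants from the text

def tau_s_model(B, gamma_h, sigma_g):
    gamma_i = sigma_g * mu_B * B / hbar
    return 1.0 / (c0 * gamma_h + np.sqrt(c1 * gamma_h**2 + gamma_i**2))

# hypothetical (B, tau_s) data mimicking the reported trend
B = np.array([0.5, 1, 2, 3, 4, 5, 6])                          # Tesla
tau = tau_s_model(B, gamma_h=1 / 360e-12, sigma_g=0.0032)      # seconds
tau *= 1 + 0.03 * np.random.default_rng(2).normal(size=B.size)

popt, _ = curve_fit(tau_s_model, B, tau, p0=(3e9, 1e-3))
print(f"gamma_h = {popt[0]:.3e} 1/s, sigma_g = {popt[1]:.4f}")
```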
The momentum scattering time can easily be extracted from the spin dephasing time measured at zero magnetic field by the relation [@Pikus.SR.1984; @Zutic.RMP.2004]: $$\tau_{s}^{-1} = \frac{32}{105}\,\gamma_{3}^{-1}\alpha^{2}\,\frac{E_{F}^{3}}{\hbar^{2}E_{g}} \cdot \tau_{p} \label{eq:taus}$$ with $\gamma_3=6$ for ionized impurity scattering and $\alpha = 0.07$ [@Marushchak.SPSS.1983]; $E_{g}$ is the energy gap. We determine from the measured $\tau_{s} \approx 360$ ps an average momentum scattering time of 70 fs, which is very reasonable for this kind of sample and scattering mechanism [@Zutic.RMP.2004]. An electron samples an average volume $\overline{V}=4 \pi /3 \, \overline{r}^3 \cdot \tau_s/ \tau_p $ during $\tau_{s}$ on its diffusive scattering path. For each electron the number of donor atoms within this volume fluctuates with $\sqrt{\overline{V} \cdot n_{d}}$. The resulting change in the local doping density is directly linked to the g-factor variation via the energy dependence of the g-factor as shown above. For the given doping density in our sample we calculate an intrinsic g-factor variation due to the local doping density fluctuations of $\sigma_{g} = 5\cdot 10^{-4}$. The experimental value is only a factor of six higher than this calculated value from our stochastic approximation. Certainly, other inhomogeneities may contribute to the measured g-factor fluctuation and the theoretical description is only an order of magnitude estimation. Nevertheless, a statistical distribution of donor atoms is inevitably present in doped semiconductor samples and the estimated effect on $\sigma_g$ is large. In particular, the effect should be orders of magnitude larger than the familiar variable g-factor mechanism due to electrons in different quantum states [@PhysRevB.66.233206]. We want to point out that our estimation fully links $\sigma_g$ to the doping density since the spin dephasing rate is related to the momentum scattering rate by Eq. (\[eq:taus\]) and the momentum scattering rate is in turn related to the doping density by the Brooks-Herring formalism. The Fermi-level and Fermi-velocity are by definition determined by $n_{\rm d}$ and, therefore, the intrinsic inhomogeneous g-factor fluctuation should scale according to this approximation like $\sigma_{g}\propto n_{\rm d}^{2/3}$. The calculations also predict a larger $\sigma_g$ for higher temperatures since the faster spin and momentum relaxation times yield a smaller averaging volume. We measured such an increase of $\sigma_g$ by a factor of two for a temperature increase from 20 K to 200 K. However, we also measured a lateral g-factor fluctuation of about $\sigma _{g}^{{\rm lat}} \approx 0.0015$ by spatially changing the transmission spot on the sample, and our current measurement setup does not guarantee a constant sampling spot if the temperature is increased. Clearly, further measurements with improved spot stability and doping dependent measurements on thick, high quality, molecular beam epitaxy grown GaAs together with sophisticated microscopic calculations are desirable to test and quantify the effect. In conclusion, we successfully demonstrated ultrafast spin noise spectroscopy and increased the state-of-the-art bandwidth by more than two orders of magnitude. The bandwidth is in principle only limited by the pulse width of the laser and should, for femtosecond laser pulses, reach the THz regime [^4]. Already the demonstrated bandwidth of 120 GHz enables SNS measurements on systems with picosecond spin dynamics. 
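The order-of-magnitude chain described above can be checked numerically. The sketch below assumes standard GaAs parameters that are not stated explicitly in the text ($m^*=0.067\,m_e$, $E_g\approx 1.52$ eV at low temperature) and uses $E_F\propto n^{2/3}$ to convert the local density fluctuation into an energy shift and hence a g-factor spread via $\beta$.

```python
import numpy as np

# Back-of-the-envelope check of the estimates in the text (order of magnitude only).
# Inputs not given explicitly in the text: m* = 0.067 m_e, E_g = 1.519 eV (GaAs, low T).
hbar, m_e, e = 1.055e-34, 9.109e-31, 1.602e-19
m_star, E_g = 0.067 * m_e, 1.519 * e
E_F, tau_s = 47.7e-3 * e, 360e-12
n_d = 8.2e17 * 1e6                        # m^-3
gamma3, alpha, beta = 6.0, 0.07, 5.1 / e  # beta in 1/J

# Momentum scattering time from Eq. (eq:taus) (Dyakonov-Perel)
tau_p = 1.0 / (tau_s * (32 / 105) / gamma3 * alpha**2 * E_F**3 / (hbar**2 * E_g))
v_F = np.sqrt(2 * E_F / m_star)
r_bar = v_F * tau_p / 2                   # average undisturbed propagation distance
V_bar = 4 * np.pi / 3 * r_bar**3 * tau_s / tau_p   # volume sampled during tau_s
N = V_bar * n_d
dn_over_n = np.sqrt(N) / N                # Poissonian donor-number fluctuation
dE_F = (2 / 3) * E_F * dn_over_n          # since E_F ~ n^(2/3)
sigma_g = beta * dE_F
print(f"tau_p ~ {tau_p * 1e15:.0f} fs, sigma_g ~ {sigma_g:.1e}")   # ~70 fs, ~5e-4
```

With these inputs the chain returns $\tau_p\approx 70$ fs and $\sigma_g\approx 5\cdot10^{-4}$, consistent with the values quoted in the text.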
This applies to, e.g., magnons in yttrium iron garnet, hole spin systems at very low temperatures, as well as to many-electron systems at room temperature. We applied ultrafast SNS to highly n-doped bulk GaAs well above the metal-to-insulator transition, which is the archetype material for spintronics, and observed, despite being in the metallic regime, a large g-factor variance. Calculations estimate that such large g-factor variances are intrinsic to doped semiconductors and result even in perfect samples from the inevitable stochastic variation of the doping concentration. We acknowledge the financial support by the BMBF joint research project QuaHL-Rep, the Deutsche Forschungsgemeinschaft in the framework of the priority program “SPP 1285—Semiconductor Spintronics,” and the excellence cluster “QUEST—Center for Quantum Engineering and Space-Time Research”. [10]{} G. M. M[ü]{}ller, M. Oestreich, M. R[ö]{}mer, and J. H[ü]{}bner, Physica E **43**, 569 (2010). R. Dahbashi, J. H[ü]{}bner, F. Berski, J. Wiegand, X. Marie, K. Pierz, H. W. Schumacher, and M. Oestreich, Appl. Phys. Lett. **100**, 031906 (2012). S. A. Crooker, L. Cheng, and D. L. Smith, Phys. Rev. B **79**, 035208 (2009). M. R[ö]{}mer, H. Bernien, G. M[ü]{}ller, D. Schuh, J. H[ü]{}bner, and M. Oestreich, Phys. Rev. B **81**, 075216 (2010). G. M[ü]{}ller, M. R[ö]{}mer, D. Schuh, W. Wegscheider, J. H[ü]{}bner, and M. Oestreich, Phys. Rev. Lett. **101** (2008). G. M. M[ü]{}ller, M. R[ö]{}mer, J. H[ü]{}bner, and M. Oestreich, Phys. Rev. B **81**, 121202(R) (2010). M. Wu, J. Jiang, and M. Weng, Physics Reports **493**, 61 (2010). S. O. Demokritov, V. E. Demidov, O. Dzyapko, G. A. Melkov, A. A. Serga, B. Hillebrands, and A. N. Slavin, Nature **443**, 430 (2006). R. I. Dzhioev, K. V. Kavokin, V. L. Korenev, M. V. Lazarev, B. Y. Meltser, M. N. Stepanova, B. P. Zakharchenya, D. Gammon, and D. S. Katzer, Phys. Rev. B **66**, 245204 (2002). S. Starosielec and D. H[ä]{}gele, Appl. Phys. Lett. **93**, 051116 (2008). M. I. Dyakonov and V. I. Perel, Sov. Phys. Solid State **13**, 3023 (1972). M. R[ö]{}mer, J. H[ü]{}bner, and M. Oestreich, Rev. Sci. Instrum. **78**, 103903 (2007). M. A. Hopkins, R. J. Nicholas, P. Pfeffer, W. Zawadzki, D. Gauthier, J. C. Portal, and M. A. DiForte-Poisson, Semicond. Sci. Technol. **2**, 568 (1987). J. H[ü]{}bner, S. D[ö]{}hrmann, D. H[ä]{}gele, and M. Oestreich, Phys. Rev. B **79** (2009). M. M. Glazov, M. A. Semina, and E. Y. Sherman, Phys. Rev. B **81**, 115332 (2010). G. E. Pikus and A. N. Titkov, in *Optical orientation*, edited by F. Meier and B. P. Zakharchenya, p. 109 (North-Holland, Amsterdam, 1984). I. Zutic, J. Fabian, and S. Das Sarma, Reviews of Modern Physics **76**, 323 (2004). V. A. Marushchak, M. N. Stepanova, and A. N. Titkov, Sov. Phys. Solid State pp. 2035–2038 (1983). F. X. Bronold, I. Martin, A. Saxena, and D. L. Smith, Phys. Rev. B **66**, 233206 (2002). J. H[ü]{}bner, J. G. Lonnemann, P. Zell, H. Kuhn, F. Berski, and M. Oestreich, unpublished (2012). J. Olivero and R. Longbothum, Journal of Quantitative Spectroscopy and Radiative Transfer **17**, 233 (1977). [^1]: The authors F. B. and H. K. contributed equally to this work. 
[^2]: A free-running ultrafast rapid temporal delay scanning scheme circumvents this background but raises other constraints, which are discussed in detail in Ref. [@Hubner2012]. [^3]: $c_{0} ={\rm 0.5346}$, $c_{1} =0.2166$ [@Olivero.JQSRT.1977]. [^4]: Without loss of applicability, one ultrafast laser together with a mechanical delay stage can be used instead of two ultrafast lasers.
--- author: - | F. William Townes\ Department of Biostatistics\ Harvard University\ `ftownes@g.harvard.edu` bibliography: - 'refs.bib' title: Generalized Principal Component Analysis --- Introduction ============ Principal component analysis (PCA) [@hotelling_analysis_1933] is widely used to reduce the dimensionality of large datasets. However, it implicitly optimizes an objective function that is equivalent to a Gaussian likelihood. Hence, for data such as nonnegative, discrete counts that do not follow the normal distribution, PCA may be inappropriate. A motivating example of count data comes from single cell gene expression profiling (scRNA-Seq) where each observation represents a cell and genes are features. Such data are often highly sparse ($>90\%$ zeros) and exhibit skewed distributions poorly matched by Gaussian noise. To remedy this, Collins [@collins_generalization_2002] proposed generalizing PCA to the exponential family in a manner analogous to the generalization of linear regression to generalized linear models. Here, we provide a detailed derivation of generalized PCA (GLM-PCA) with a focus on optimization using Fisher scoring. We also expand on Collins’ model by incorporating covariates, and propose post hoc transformations to enhance interpretability of latent factors. Generalized linear models ========================= Generalized linear models (GLMs) are widely used for regression modeling when the outcome variable does not follow a normal distribution. For example, if the data are counts, a Poisson or negative binomial likelihood can be used. Let $Y$ be the outcome variable. A fundamental aspect of GLMs is that the noise model is assumed to follow an exponential family likelihood: $$\log f_Y(y;\theta) = c(y) + y\theta - \kappa(\theta)$$ In this formulation, $\theta$ is called the natural parameter and $\kappa(\theta)$ is called the cumulant function. The natural parameter is implicitly a function of the mean $\theta = \theta(\mu)$. The derivatives of the cumulant function yield moments. The mean is $\kappa'(\theta) = \mu$ and the variance is $\kappa''(\theta)$. Let $\rho(\mu)$ represent the variance function. It can be shown that the derivative of the natural parameter with respect to the mean is the inverse of the variance function: $$\frac{d\theta}{d\mu} = \frac{1}{\rho(\mu)}$$ In regression modeling, the mean is an invertible, nonlinear function of the covariates and coefficients. The inverse of this function is called the link function: $g(\mu) = x'\beta$. Therefore, the GLM framework for regression involves maximizing the likelihood of the data $(y_i,x_i)$ with respect to the unknown vector of regression coefficients $\beta$. The most widely used algorithm for this optimization is a second-order method called Fisher scoring. For more details on GLMs, refer to [@agresti_foundations_2015]. GLM-PCA ======= Suppose we have no covariates ($x$ is unknown) and $y$ is multivariate. Let $y_{ij}$ indicate the outcome of observation $i$ and feature $j$, with $i=1,\ldots,N$ and $j=1,\ldots,J$. In scRNA-Seq $i$ indexes over cells and $j$ indexes over genes. The GLM-PCA model, like PCA, seeks to reduce the dimensionality of the data $y_{ij}$ by representing it with an inner product of real-valued factors $u_i\in\mathbb{R}^L$ and loadings $v_j\in\mathbb{R}^L$. The number of latent dimensions is specified in advance as $L$. Let $r_{ij}=u_i'v_j$ be the real-valued linear predictor, $\mu_{ij} = g^{-1}(r_{ij})$ the mean, and $\theta_{ij} = \theta(\mu_{ij})$ the natural parameter. 
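For concreteness, here is a minimal sketch (in Python, not tied to any particular statistics package) of the exponential-family ingredients in the Poisson case, where $\theta=\log\mu$, $\kappa(\theta)=e^\theta$, $\rho(\mu)=\mu$, and the canonical link is $g(\mu)=\log\mu$; it fits an ordinary Poisson regression by Fisher scoring on simulated data before the same machinery is reused for GLM-PCA below.

```python
import numpy as np

# Toy illustration: Fisher scoring for Poisson regression with the canonical log link.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
beta_true = np.array([0.8, -0.5])
y = rng.poisson(np.exp(x @ beta_true))

beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(x @ beta)                # inverse canonical link
    W = mu                               # working weights: h^2 / rho(mu) = mu here
    score = x.T @ (y - mu)               # gradient of the log-likelihood
    info = x.T @ (x * W[:, None])        # Fisher information
    beta = beta + np.linalg.solve(info, score)
print(beta)                              # close to beta_true
```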
We define the derivative of the inverse link function as $$h_{ij} = h(r_{ij}) = \frac{d\mu_{ij}}{dr_{ij}} = \frac{dg^{-1}(r_{ij})}{dr_{ij}}$$ The likelihood of the data is $$\mathcal{L} = \sum_{i,j} c(y_{ij}) + y_{ij}\theta_{ij}-\kappa(\theta_{ij})$$ For numerical stability, we use a penalized likelihood as the objective function to be maximized: $$\mathcal{Q} = \mathcal{L} -\frac{1}{2}\sum_{i,l}\lambda_{ul}u_{il}^2 -\frac{1}{2}\sum_{j,l}\lambda_{vl}v_{jl}^2$$ where $\lambda_{ul}$ and $\lambda_{vl}$ are small, non-negative penalty terms for $l=1,\ldots,L$. The gradient is given by $$\frac{d\mathcal{Q}}{d u_{il}} = \sum_j \frac{y_{ij}-\mu_{ij}}{\rho(\mu_{ij})}h_{ij}v_{jl} - \lambda_{ul}u_{il}$$ Applying the chain rule, the Fisher information is given by $$\begin{aligned} -\operatorname{\mbox{E}}\left[\frac{d^2\mathcal{Q}}{du_{il}^2}\right] &= -\sum_j\operatorname{\mbox{E}}\left[\frac{\rho(\mu_{ij})(-1)-(y_{ij}-\mu_{ij})\rho'(\mu_{ij})}{\big(\rho(\mu_{ij})\big)^2}\big(h_{ij}^2v_{jl}^2\big)+\frac{y_{ij}-\mu_{ij}}{\rho(\mu_{ij})}\left(\frac{d^2\mu_{ij}}{dr_{ij}^2}v_{jl}^2\right)\right] + \lambda_{ul}\\ &= \sum_j\frac{h_{ij}^2 v_{jl}^2}{\rho(\mu_{ij})} + \lambda_{ul}\end{aligned}$$ where $\rho'$ is the derivative of the variance function and the simplification uses the fact that $\operatorname{\mbox{E}}[y_{ij}]=\mu_{ij}$, so all terms proportional to $y_{ij}-\mu_{ij}$ vanish in expectation. Let $w_{ij} = 1/\rho(\mu_{ij})$. The Fisher scoring update for $u_{il}$ is given by $$u_{il}\gets u_{il} + \frac{\sum_j (y_{ij}-\mu_{ij})w_{ij}h_{ij}v_{jl} - \lambda_{ul}u_{il}}{\sum_j w_{ij}h_{ij}^2v_{jl}^2 + \lambda_{ul}}$$ By a symmetric argument, the update for $v_{jl}$ is given by $$v_{jl}\gets v_{jl} + \frac{\sum_i (y_{ij}-\mu_{ij})w_{ij}h_{ij}u_{il} - \lambda_{vl}v_{jl}}{\sum_i w_{ij}h_{ij}^2u_{il}^2 + \lambda_{vl}}$$ Since this update rule does not take into account any of the mixed second partial derivatives such as $d^2\mathcal{Q}/du_{il}dv_{jl}$ in computing the Fisher information, it is technically not true Fisher scoring but rather a diagonal approximation. This is actually an advantage since the true Hessian’s dimension would be too large to efficiently invert. Note that blockwise coordinate ascent is also possible by vectorizing the updates across rows and/or columns, for example, let $u^{(l)} = (u_{1l},\ldots,u_{Nl})$ and $v^{(l)} = (v_{1l},\ldots,v_{Jl})$. Let $Y$ be the $J\times N$ data matrix with features as rows and observations as columns such that $y_{ij}$ is in column $i$, row $j$. Let $M$, $W$, and $H$ be similarly defined $J\times N$ matrices. $$\begin{aligned} u^{(l)}&\gets u^{(l)} + \frac{\big((Y-M)\odot W\odot H\big)' v^{(l)} - \lambda_{ul}u^{(l)}}{\big(W\odot H^2\big)'\big((v^{(l)})^2\big)+\lambda_{ul}}\\ v^{(l)}&\gets v^{(l)} + \frac{\big((Y-M)\odot W\odot H\big) u^{(l)} - \lambda_{vl}v^{(l)}}{\big(W\odot H^2\big)\big((u^{(l)})^2\big)+\lambda_{vl}}\end{aligned}$$ where $\odot$ indicates elementwise multiplication, division is elementwise, and $H^2=H\odot H$. This is a generic formulation. In special cases the update equations simplify considerably. For example, consider the canonical link function $g(\mu_{ij})=\theta(\mu_{ij})$ which implies $h_{ij}=\rho(\mu_{ij})=1/w_{ij}$. In this case the gradient becomes $$\frac{d\mathcal{Q}}{du_{il}} = \sum_j (y_{ij}-\mu_{ij})v_{jl}-\lambda_{ul}u_{il}$$ and the Fisher information becomes $\sum_j \rho(\mu_{ij})v_{jl}^2 + \lambda_{ul}$. 
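The updates above are easy to implement directly. The following sketch specializes them to the Poisson likelihood with the canonical (log) link, so that $W\odot H$ is the all-ones matrix and $W\odot H^2=M$; it is illustrative code on simulated counts, not an optimized implementation, and the clipping of the linear predictor is only a numerical safeguard.

```python
import numpy as np

# Diagonal Fisher-scoring updates for Poisson GLM-PCA with the canonical log link.
rng = np.random.default_rng(1)
J, N, L, lam = 200, 100, 2, 1e-4
U_true = 0.5 * rng.normal(size=(N, L))
V_true = 0.5 * rng.normal(size=(J, L))
Y = rng.poisson(np.exp(V_true @ U_true.T)).astype(float)   # J x N counts

U = 0.1 * rng.normal(size=(N, L))
V = 0.1 * rng.normal(size=(J, L))

def means(V, U):
    return np.exp(np.clip(V @ U.T, -30, 30))               # J x N, clipped for safety

for _ in range(100):
    for l in range(L):
        M = means(V, U)
        U[:, l] += ((Y - M).T @ V[:, l] - lam * U[:, l]) / (M.T @ V[:, l]**2 + lam)
        M = means(V, U)
        V[:, l] += ((Y - M) @ U[:, l] - lam * V[:, l]) / (M @ U[:, l]**2 + lam)

M = means(V, U)
print(np.sum(Y * np.log(M + 1e-12) - M))    # monitor the Poisson log-likelihood part
```

Each pass touches one latent dimension at a time, exactly as in the blockwise updates above, and convergence can be monitored through the penalized objective $\mathcal{Q}$.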
For example, in scRNA-Seq column covariates could indicate batch membership or cell cycle indicators which we want to regress out. Row covariates could include spline basis functions modeling gene-specific GC bias. Even if no covariates are available, simply incorporating a vector of all ones as a column covariate induces a row-specific intercept term, which is analogous to centering by feature in PCA. Let $\tilde{U}\in\mathbb{R}^{N\times L}$ be the matrix whose columns are $u^{(l)}$ and $\tilde{V}\in\mathbb{R}^{J\times L}$ be the matrix whose columns are $v^{(l)}$. As before let $Y$ be the $J\times N$ data matrix. Suppose we are provided observation (column) covariates as a design matrix $X\in\mathbb{R}^{N\times K_o}$ and feature (row) covariates as a design matrix $Z\in\mathbb{R}^{J\times K_f}$. In addition, we consider the offset vector ${\bm}{\delta}\in\mathbb{R}^N$ (if no offset is needed, set ${\bm}{\delta}={\bm}{0}$). We define the $J\times N$ real-valued linear predictor matrix as $$R = AX'+Z\Gamma'+\tilde{V}\tilde{U}'+{\bm}{1}{\bm}{\delta}'$$ where $A\in\mathbb{R}^{J\times K_o}$ and $\Gamma\in\mathbb{R}^{N\times K_f}$ are matrices of regression coefficients and ${\bm}{1}$ is a vector of length $J$ with all ones. Now define the augmented column and row matrices as $U=\big[X,\Gamma,\tilde{U}\big]\in\mathbb{R}^{N\times(K_o+K_f+L)}$ and $V=\big[A,Z,\tilde{V}\big]\in\mathbb{R}^{J\times(K_o+K_f+L)}$ such that $R=VU'+{\bm}{1}{\bm}{\delta}'$. We define the following sets of dimensionality indices: $\Omega_o=\{1,\ldots,K_o\}$, $\Omega_f=\{K_o+1,\ldots,K_o+K_f\}$, $\Omega_L=\{K_o+K_f+1,\ldots,K_o+K_f+L\}$ and $\Omega=\Omega_o\cup\Omega_f\cup\Omega_L$. The set of column indices in $U$ that can be updated is $\Omega_u=\Omega_f\cup\Omega_L$, and for $V$ the updateable index set is $\Omega_v=\Omega_o\cup\Omega_L$. To update $U$, for all $k\in\Omega_u$ do: $$\begin{aligned} R &\gets VU'+{\bm}{1}{\bm}{\delta}'\\ M &\gets g^{-1}(R)\\ W &\gets \frac{1}{\rho(M)}\\ H &\gets h(R)\\ U_{[:,k]}&\gets U_{[:,k]}+ \frac{\big((Y-M)\odot W\odot H\big)' V_{[:,k]} - \lambda_{uk}U_{[:,k]}}{\big(W\odot H^2\big)'\big(V_{[:,k]}^2\big)+\lambda_{uk}}\end{aligned}$$ In general it is not necessary to penalize the regression coefficients, so if $k\in\Omega_f$, we may set $\lambda_{uk}=0$. To update $V$, for all $k\in\Omega_v$ do: $$\begin{aligned} R &\gets VU'+{\bm}{1}{\bm}{\delta}'\\ M &\gets g^{-1}(R)\\ W &\gets \frac{1}{\rho(M)}\\ H &\gets h(R)\\ V_{[:,k]}&\gets V_{[:,k]}+ \frac{\big((Y-M)\odot W\odot H\big) U_{[:,k]} - \lambda_{vk}V_{[:,k]}}{\big(W\odot H^2\big)\big(U_{[:,k]}^2\big)+\lambda_{vk}}\end{aligned}$$ Where $\lambda_{vk}$ may be set to zero whenever $k\in\Omega_o$. At this point, all unknown parameters have been updated, so the objective function $\mathcal{Q}$ can be evaluated and monitored for convergence. As previously stated, the above procedure is a diagonal approximation to full Fisher scoring. Alternating between full Fisher scoring of $U$ and $V$ is likely to be computationally unstable, since there is feedback between updating the unknown latent factors $\tilde{U}$ and the unknown loadings $\tilde{V}$. However, full Fisher scoring as a subroutine can be used to update $A = V_{[:,\Omega_o]}$ and $\Gamma=U_{[:,\Omega_f]}$, since there is no feedback in updating the corresponding fixed covariate matrices $X=U_{[:,\Omega_o]}$ and $Z=V_{[:,\Omega_f]}$. 
For example, to update $A$, for each $j=1,\ldots,J$ do $$A_{[j,:]}'\gets A_{[j,:]}' + \big(X'\operatorname{diag}\left\{W_{[j,:]}\odot H_{[j,:]}^2\right\}X\big)^{-1}X'\operatorname{diag}\left\{W_{[j,:]}\odot H_{[j,:]}\right\}(Y_{[j,:]}-M_{[j,:]})$$ This can be used to show that ordinary GLM regression is a special case of GLM-PCA with covariates (namely, the case where $J=1$, $Z={\bm}{0}$, and either $\tilde{U}={\bm}{0}$ or $\tilde{V}={\bm}{0}$). However, due to the inversion of a $K_o\times K_o$ matrix separately for all $J$ features, it is computationally demanding. As an illustrative example of using covariates, consider a matrix of count data $Y$ with features in rows and observations in columns where the total counts in each column are not of interest (that is, the counts are only interpretable on a relative scale). We recommend setting the offset ${\bm}{\delta}$ to some constant multiple of the column sums of $Y$ such as the column means. Also recommended is to include feature-specific intercept terms by setting $X={\bm}{1}$. The intercept terms will then be given by the (single column) matrix $A$. The number of latent dimensions $L$ should be chosen by the same methods used to determine the number of principal components in PCA. Rotation of latent factors to orthogonality =========================================== Once the GLM-PCA objective function has been optimized on a dataset, postprocessing can improve interpretability of the latent factors. The first step, which we call the projection step, removes all correlation between latent factors and covariates without changing the predicted mean values $M=g^{-1}(R)$. Let $P_x=X(X'X)^{-1}X'$ and $P_z=Z(Z'Z)^{-1}Z'$ be projection matrices. Then the following reparametrization leaves $R$, and hence $M$ invariant (we omit the offset ${\bm}{\delta}$ for clarity): $$\begin{aligned} R &= A X'+Z\Gamma'+\tilde{V}\tilde{U}'\\ &= Z\Gamma'+A X'+\tilde{V}\tilde{U}'X(X'X)^{-1}X'+\tilde{V}\tilde{U}'(\mathbb{I}-P_x)\\ &= Z\Gamma'+\big(A+\tilde{V}\tilde{U}'X(X'X)^{-1}\big)X'+\tilde{V}\tilde{U}'(\mathbb{I}-P_x)\\ &= \big(A+\tilde{V}\tilde{U}'X(X'X)^{-1}\big)X' + Z\Gamma' + Z(Z'Z)^{-1}Z'\tilde{V}\tilde{U}'(\mathbb{I}-P_x)+(\mathbb{I}-P_z)\tilde{V}\tilde{U}'(\mathbb{I}-P_x)\\ &= \big(A+\tilde{V}\tilde{U}'X(X'X)^{-1}\big)X' + Z\big(\Gamma + (\mathbb{I}-P_x)\tilde{U}\tilde{V}'Z(Z'Z)^{-1}\big)'+(\mathbb{I}-P_z)\tilde{V}\tilde{U}'(\mathbb{I}-P_x)\end{aligned}$$ Based on this, the first step in postprocessing is to set $$\begin{aligned} A&\gets A+\tilde{V}\tilde{U}'X(X'X)^{-1}\\ \Gamma&\gets \Gamma + (\mathbb{I}-P_x)\tilde{U}\tilde{V}'Z(Z'Z)^{-1}\\ \tilde{U}&\gets (\mathbb{I}-P_x)\tilde{U}\\ \tilde{V}&\gets (\mathbb{I}-P_z)\tilde{V}\end{aligned}$$ As an example, consider the case where $X={\bm}{1}$, so $A$ is a vector of feature-specific intercept terms. Then $P_x \tilde{U}$ computes the column means of $\tilde{U}$ and $(\mathbb{I}-P_x)\tilde{U}$ is a matrix whose column means are all zero. In this way, including feature-specific intercepts is analogous to centering the data prior to applying PCA. Both methods produce latent factors whose means are zero. The second step in postprocessing, which we call the rotation step, is to rotate the factors so that the loadings matrix will have orthonormal columns. Let $\tilde{V}'=FD\hat{V}'$ be a singular value decomposition (SVD). By definition, $\hat{V}$ has orthonormal columns and we set this as the updated loadings matrix. 
Since $\tilde{V}\tilde{U}'=\hat{V}\big(DF'\tilde{U}'\big)$, we set $\hat{U}=\tilde{U}FD$ as the updated latent factors matrix. Note that if $\tilde{U}$ has column means of zero, then so does $\hat{U}$. PCA also produces an orthonormal loadings matrix. The final postprocessing step is to rearrange the latent dimensions in decreasing magnitude, just like PCA orders principal components in decreasing variance. The L2 norm of a vector $x\in\mathbb{R}^n$ is defined as $\Vert x \Vert_2 = \sqrt{\sum_{i=1}^n x_i^2}$. Whenever the empirical mean of $x$ is zero, its empirical standard deviation equals its L2 norm divided by the constant $\sqrt{n-1}$. Therefore, ordering dimensions by L2 norm is equivalent to ordering by variance as long as the column means are zero. As a result of the previous step, all columns of $\hat{V}$ have L2 norm of one, so the magnitude of each dimension can be computed solely from the columns of $\hat{U}$. For each $l=1,\ldots,L$, compute $\Vert \hat{U}_{[:,l]}\Vert_2$. Then, arrange the columns of both $\hat{V}$ and $\hat{U}$ in decreasing order according to these L2 norms. The postprocessing steps are computationally efficient so long as the numbers of latent dimensions $L$ and covariates $K_o,K_f$ are not too large. Specifically, the step is $\mathcal{O}\big(\max\{L,K_o,K_f\}^3\big)$ due to the matrix inversions and does not actually instantiate any large dense matrices like $R$ or $M$. Since our proposed Fisher scoring optimizer does not involve momentum terms that span iterations, it would be possible to perform the projection and/or rotation steps prior to convergence of the algorithm. For example, they could be run after every tenth iteration. However, this would reduce computational speed. Also, the postprocessing steps have no effect on predicted mean values $M$, and hence do not improve the theoretical goodness of fit to the data. The only benefit would be if the reduced correlation between dimensions improved numerical stability.
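A compact sketch of the postprocessing described in the last two sections is given below. It implements the projection and rotation formulas derived above, uses a single column of ones for both $X$ and $Z$ purely as a toy example, and is meant as an illustration rather than a reference implementation.

```python
import numpy as np

# Postprocessing: project out covariates, rotate loadings to orthonormality via SVD,
# then order the latent dimensions by the L2 norms of the factors.
def postprocess(U_t, V_t, A, Gamma, X, Z):
    N, J = X.shape[0], Z.shape[0]
    Px = X @ np.linalg.solve(X.T @ X, X.T)                 # projection onto col(X)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)                 # projection onto col(Z)
    A = A + V_t @ (U_t.T @ X) @ np.linalg.inv(X.T @ X)
    Gamma = Gamma + ((np.eye(N) - Px) @ U_t) @ (V_t.T @ Z) @ np.linalg.inv(Z.T @ Z)
    U_t = (np.eye(N) - Px) @ U_t
    V_t = (np.eye(J) - Pz) @ V_t
    F, d, Vhat_T = np.linalg.svd(V_t.T, full_matrices=False)   # V_t' = F D Vhat'
    V_hat = Vhat_T.T                                       # orthonormal loadings
    U_hat = U_t @ F @ np.diag(d)                           # rotated factors
    order = np.argsort(-np.linalg.norm(U_hat, axis=0))     # decreasing L2 norm
    return U_hat[:, order], V_hat[:, order], A, Gamma

# tiny demo with X = Z = column of ones (feature- and observation-level intercepts)
rng = np.random.default_rng(4)
N, J, L = 50, 30, 3
X, Z = np.ones((N, 1)), np.ones((J, 1))
U_t, V_t = rng.normal(size=(N, L)), rng.normal(size=(J, L))
A, Gamma = np.zeros((J, 1)), np.zeros((N, 1))
U_hat, V_hat, A, Gamma = postprocess(U_t, V_t, A, Gamma, X, Z)
print(np.allclose(V_hat.T @ V_hat, np.eye(L)), np.abs(U_hat.mean(axis=0)).max())
```

As expected, the returned loadings have orthonormal columns and the factors have column means of zero, mirroring the behavior of centered PCA.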
--- abstract: 'The effects of Horava-Lifshitz corrections to the gravito-magnetic field are analyzed. Solutions in the weak field, slow motion limit, referring to the motion of a satellite around the Earth are considered. The post-newtonian paradigm is used to evaluate constraints on the Horava-Lifshitz parameter space from current satellite and terrestrial experiments data. In particular, we focus on GRAVITY PROBE B, LAGEOS and the more recent LARES mission, as well as a forthcoming terrestrial project, GINGER.' author: - 'Ninfa Radicella[^1]' - 'Gaetano Lambiase[^2]' - 'Luca Parisi[^3]' - 'Gaetano Vilasi[^4]' title: 'Constraints on Covariant Horava-Lifshitz Gravity from frame-dragging experiment' --- Introduction ============ Tests on gravitational theories alternative to General Relativity, at a fundamental level, are essential to explore possible generalisations of Einstein theory [@will]. General Relativity has been deeply tested at Solar System level hence any alternative theory of gravity should pass such tests, basically reproducing General Relativity at the Solar System scale [@ruggiero07; @smith08; @bohmer10; @harko11; @lambiase13]. Among these theories, Horava-Lifshitz gravity is gaining much attention in recent years. Horava-Lifshitz is a power counting renormalisable theory based on an anisotropic scaling of space and time in the ultraviolet limit [@Horava09]. Diffeomorphism invariance is thus replaced by the so-called foliation-preserving diffeomorphism, and the theory violates the local Lorentz invariance in the ultraviolet. Such a symmetry is expected to be recovered in the infrared (IR) limit thanks to the renormalization group flow for the couplings of the model but still no results support this specific behaviour. Nevertheless, this alternative model of gravity with its healthy ultraviolet behaviour has attracted a big interest and has been analysed both in cosmology and in the weak field limit. Unfortunately, the breaking of general covariance introduces a dynamical scalar mode that may lead to strong coupling problem and instabilities [@wang11].\ Recently, a new Covariant version of Horava Lifshitz gravity has been formulated by Horava and Melby-Thompson [@HL] which includes two additional nondynamical fields $A$ and $\phi$, together with a new $U(1)$ symmetry. The latter eliminates the new scalar degree of freedom thus curing the strong coupling problem in the IR limit.\ In this paper we refer to the covariant version of Horava and Melby-Thompson with the coupling $\lambda$, in the extrinsic curvature term of the action, not forced to be $1$ as presented in [@daSilva11].\ In order to constrain such a theory against Solar System tests we will use the Parametrized Post Newtonian framework (PPN). In this formalism the metric of an alternative metric theory of gravity is analyzed in the weak field and slow motion limit and its deviations from General Relativity are expressed in terms of PPN parameters [@willbook]. Once a metric has been obtained, one can calculate predictions of the alternative theory which actually depend on these PPN parameters. The above-mentioned approximation allows to describe the spacetime around a spinning mass. Here, we look at the effects on the orbits of test particles and the precession of spinning objects in this spacetime. In particular we focus on the results on the de Sitter (geodetic precession) [@desitter16] and the Lense-Thirring (frame dragging) [@lense18] effects. 
Experimental measurements of such physical effects directly lead to constraints on the parameters of Horava-Lifshitz theory [@wheeler]. We will consider the results of two completed space experiments: Gravity Probe B (GP-B) [@gpb] and Lageos [@lageos]. Moreover, we will comment on the expected results of LARES [@lares] and present what will come from GINGER [@ginger], an Earth-based experiment.\ The paper is organized as follows. In section \[HLmodel\] we present the covariant version of the Horava-Lifshitz gravity. Then, in sec. \[Weak field approximation and spherically symmetric solution\] we focus on the weak field and slow motion approximation and present the solution in the PPN formalism, as in [@lin13]. In particular we derive the predictions of the theory for the geodetic precession and the frame-dragging effect, which are compared and contrasted with the experimental results from space experiments in sec. \[Constraints from space experiments\]. In the subsequent section we present the expected results from a terrestrial experiment, GINGER, and finally sec. \[Conclusions\] contains our conclusions. HL model {#HLmodel} ======== One of the candidates for quantum gravity is Horava-Lifshitz theory, which is power-counting renormalizable due to the anisotropic scaling of space and time [@Horava09] in the ultra-violet limit, $$t\rightarrow l^3 t, \quad \quad \vec{x}\rightarrow l\, \vec{x}.$$ According to the Hamiltonian formulation of General Relativity developed by Dirac [@dirac] and Arnowitt, Deser and Misner [@arnowitt59], the suitable variables in this Horava-Lifshitz theory are the lapse function, the shift vector and the spatial metric $N$, $N_i$, $g_{ij}$ respectively, so that $$ds^2=-N^2 dt^2+g_{ij} \left(dx^i+N^i dt\right)\left(dx^j+N^j dt\right).$$ The gauge symmetry of the system is the foliation-preserving diffeomorphism $Diff(M,\mathcal{F})$, $$\label{diff} \tilde{t}=t-f(t),\quad \tilde{x}^i=x^i-\zeta^i(t,x),$$ which introduces an unhealthy new degree of freedom in the gravitational sector, a spin-0 graviton. This scalar mode is not stable on a Minkowski background, neither in the original version of the Horava-Lifshitz theory [@Horava09] nor in the Sotiriou, Visser and Weinfurtner implementation [@SVW09]. The problem is ameliorated when de Sitter spacetime is considered, but the strong coupling problem still exists. In fact, since the extra degree of freedom is always coupled, General Relativity is not recovered in the perturbative limit.\ A generalisation of the original version of the Horava-Lifshitz theory has been recently proposed [@HL; @daSilva11], in which the spin-0 mode has been eliminated from the theory by extending the foliation-preserving diffeomorphism to include a local $U(1)$ symmetry. This approach allows general covariance to be restored.\ In order to heal the scalar graviton problem, two new fields have been introduced: the gauge field $A(t,x)$ and the Newtonian pre-potential $\phi(t,x)$. The theory satisfies the projectability condition, i.e. 
the lapse function only depends on time $N=N(t)$, while the total gravitational action is given by $$\label{HLaction} S_g=\zeta^2\int dt\ d^3x\ N\sqrt{g}\left(\mathcal{L}_K-\mathcal{L}_V+\mathcal{L}_\phi+\mathcal{L}_A\right),$$ where $g=\text{det}(g_{ij})$ and $$\begin{aligned} \mathcal{L}_K&=&K_{ij} K^{ij} -\lambda K^2,\\ \mathcal{L}_\phi&=&\phi\ \mathcal{G}^{ij}\left(2K_{ij}+\nabla_i\nabla_j \phi\right),\\ \mathcal{L}_A&=&\frac{A}{N}\left(2 \Lambda_g-R\right).\end{aligned}$$ Covariant derivatives as well as Ricci terms all refer to the $3$-metric $g_{ij}$.\ $K_{ij}$ represents the extrinsic curvature $$K_{ij}=g_i^k\nabla_k n_j,$$ $n_j$ being a unit normal vector of the spatial hypersurface, and $\mathcal{G}_{ij}$ the $3$- dimensional generalised Einstein tensor $$\mathcal{G}_{ij}=R_{ij}-\frac{1}{2}g_{ij} R+\Lambda_g g_{ij}.$$ We remark that $\lambda$ characterises deviations of the kinetic part of the action from General Relativity. The most general parity-invariant Lagrangian density up to six order in spatial derivatives is $$\begin{aligned} \mathcal{L}_{V}&=&2 \Lambda-R+\frac{1}{\zeta^2}\left(g_2\ R^2+g_3\ R_{ij} R^{ij}\right)\\ &&+\frac{1}{\zeta^4}\left(g_4\ R^3+g_5\ R R_{ij} R^{ij}+g_6\ R^i_j R^j_k R_i^{k}\right.\\ &&\left.+g_7\ R \nabla^2 R+ g_8\ (\nabla_i R_{jk}) (\nabla^i R^{jk})\right),\end{aligned}$$ where in physical units $\zeta^2=(16 \pi G)^{-1}$, $G$ being the Newtonian constant in the Horava-Lifshitz theory.\ Matter coupling for this theory has not been studied systematically; in [@lin12] it has been shown that, in order to be consistent with solar system tests, the gauge field and the Newtonian pre-potential must be coupled to matter in a specific way, but there were no indication on how to obtain the precise prescription from the action principle. Recently such a prescription has been generalised [@lin13] and a scalar-tensor extension of the theory presented above has been developed to allow the needed coupling to emerge in the IR without spoiling the power-counting renormalizability of the theory in the ultraviolet. In details, the matter action term is $$S_M=\int dt d^3 x\tilde{N}\sqrt{\tilde{g}}\ \mathcal{L}_M\ (\tilde{N}, \tilde{N}_i,\tilde{g}_{ij}; \psi_n),$$ where $\mathcal{L}_M$ is the matter Lagrangian and $\psi_n$ stands for matter fields.The metric is given by $$(\gamma_{\mu\nu})=\left( \begin{array}{cc} -\tilde{N}^2+\tilde{N}^i\tilde{N}_i &\tilde{N}_i \\ \tilde{N}_i & \tilde{g}_{ij}. \end{array} \right)$$ This means that matter fields couple to the Arnowitt-Deser-Misner components with the tilde, defined as $$\begin{aligned} \tilde{N}&=&(1-a_1\sigma) N,\\ \tilde{N}^i&=&N^i+N g^{ij}\nabla_j\phi,\\ \tilde{g}_{ij}&=&(1-a_2\sigma)^2 g_{ij},\end{aligned}$$ where $a_1$ and $a_2$ are two arbitrary coupling constants and $$\sigma=\frac{A-\mathcal{A}}{N}, \quad\text{with}\quad\mathcal{A}=-\dot{\phi}+N^i\nabla_i\phi+\frac{1}{2}N\nabla^i\phi \nabla_i\phi.$$ Weak field approximation and spherically symmetric solution {#Weak field approximation and spherically symmetric solution} =========================================================== In order to find a viable solution for solar system constraints, we assume that the influence of the cosmological constant and the space curvature are negligible; hence $$\Lambda=\Lambda_g=0.$$ Furthermore, in the IR all the higher order derivative terms are small, then we can safely set $g_2,\dots, g_8$ to zero in what follows. A caveat must be clarified. 
The coupling $g_1$ cannot be rescaled to $g_1=-1$, that corresponds to the General Relativity value and represents a redefinition of the units of time and space. Here, this freedom has already been used to set to unity other parameters that enter in the matter coupling [@lin13]. In addition, we can use the $U(1)$ gauge freedom to choose $\phi=0$, which uniquely fixes the gauge.\ We consider the metric in the post-Newtonian approximation; it can be written in the form [@will] $$\gamma_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu},$$ where $\eta_{\mu\nu}=\text{diag} (-1,1,1,1)$ and we consider the perturbation up to third order since we are interested in testing post-newtonian effect on a satellite orbiting around the Earth: $$\begin{aligned} h_{00}&\sim& \mathcal{O}(2)\nonumber\\ h_{0i}&\sim&\mathcal{O}(3)\nonumber\\ h_{ij}&\sim&\mathcal{O}(2),\end{aligned}$$ where $\mathcal{O}(n)\equiv\mathcal{O}(v^n)$, $v$ being the three velocity of the objects considered (we are using natural units).\ Following [@lin13], the solution in the Horava-Lifshitz theory, up to the above-mentioned order, is expressed by $$\begin{aligned} \label{PPN} h_{00}&\sim& 2U\nonumber\\ h_{0i}&\sim&c V_i+d \chi_{,0i}\nonumber\\ h_{ij}&\sim&2\gamma U\delta_{ij},\end{aligned}$$ where the gauge freedom has been used to eliminate anisotropic terms in the space-space contribution of the perturbation. The coefficients are found by solving the equations at the appropriate order and can be expressed in terms of the couplings that appear in the matter and gravitational Horava-Lifshitz action. They read $$\begin{aligned} \label{coeff} c&=&-4\frac{G}{G_N}\nonumber\\ d&=&\frac{G}{G_N}\frac{2-a_1-\lambda(4-3a_1)}{2(1-\lambda)}\nonumber\\ \gamma&=&\frac{G}{G_N} a_1-\frac{a_2}{a_1},\end{aligned}$$ where $G_N$ is the Newton constant, that could in principle differ from $G$, introduced through the parameter $\zeta$ in the Horava-Lifshitz action, see eq.(\[HLaction\]).\ Describing the source in terms of the energy density $\rho$ and velocity ${\bf v}$, the potentials appearing in the solution are defined as follows: $$\begin{aligned} U&\equiv&\int\frac{\rho({\bf x'},t)}{|{\bf x}-{\bf x'}|}d^3x',\\ \chi&\equiv&\int\rho({\bf x'},t)\ |{\bf x}-{\bf x'}|\ d^3x'\\ V_j&\equiv&\int\frac{\rho({\bf x'},t)\ v'_j}{|{\bf x}-{\bf x'}|}\ d^3x'.\end{aligned}$$ From field equations, one gets $$\begin{aligned} \partial^2 U&=&-4\pi G_N\rho,\\ \partial^2V_j&=&-4\pi G_N\rho v_j\end{aligned}$$ Moreover, from the continuity equation for the source, one obtains $ \chi_{,0i}=V_i-W_i$, where the potential $W_i$ is now defined as $$W_i\equiv\int\frac{\rho({\bf x'},t)\ {\bf v' \cdot (x-x')}(x-x')_i}{|{\bf x}-{\bf x'}|^3}d^3x'.\\$$ Then, variation from General Relativity solutions arise because the perturbed solution depends on the parameters $c$, $d$ and $\gamma$ that in turn explicitly depend on the Horava-Lifshitz couplings. With this solution in mind, let us specify the source terms for the Earth, considered as a massive homogeneous spherical body with mass $m$ and intrinsic spin-angular momentum ${\bf J}=(2/5)mR^2 \boldsymbol{\omega}$, being $R$ its radius and ${\boldsymbol \omega}$ its angular velocity. It is considered at rest in the origin of the axes. In this approximation the vector potential $V_i$ and $W_i$ coincide and read [@will] $$V^i=W^i=\frac{1}{2}\left({\bf n}\times\frac{ {\bf J}}{r^2}\right)^i,$$ where $r$ is the distance from the Earth and $n^i=x^i/r$ are the components of the unit vector. 
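As a small illustration of Eq. (\[coeff\]), the helper below evaluates the metric coefficients for given couplings; the input values are arbitrary and only meant to show the dependence (note that $\lambda=1$ is excluded since $d$ is singular there).

```python
# Evaluate the coefficients of Eq. (coeff) for given Horava-Lifshitz couplings.
def hl_coefficients(G_over_GN, lam, a1, a2):
    c = -4 * G_over_GN
    d = G_over_GN * (2 - a1 - lam * (4 - 3 * a1)) / (2 * (1 - lam))
    gamma = G_over_GN * a1 - a2 / a1
    return c, d, gamma

# arbitrary illustrative inputs; with G = G_N, a1 = 1, a2 = 0 one recovers gamma = 1
print(hl_coefficients(G_over_GN=1.0, lam=1.1, a1=1.0, a2=0.0))
```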
In such a case the dependence of the solution on the theory parameters reduces to the ratio $G/G_N$ and $\gamma$, as can be seen from eqs. (\[PPN\]) and (\[coeff\]) and the fact that $\chi_{,0i}=0$.\ Let us evaluate such a contribution. We will use the gravito-magnetic formalism, thus introducing the vector potential $$A_{\mu}=-\frac{1}{4}\bar{h}_{0\mu},$$ with $$\bar{h}_{\mu\nu}=h_{\mu\nu}-\eta_{\mu\nu} h/2$$ being the trace-reversed metric perturbation. Then the gravitomagnetic field ${\bf B}={\bf \nabla\times A}$ can be introduced and the contribution coming from a rotating homogeneous sphere can be easily calculated. It turns out to be equivalent to the General Relativity field except for the factor $G/G_N$. Choosing ${\bf J}$ to lie perpendicular to the celestial equator (see fig.\[orbitalelem\]), one infers $$\label{gravitomag} {\bf B}_{GR}=-\frac{1}{2}\frac{{\bf J}}{r^3}+\frac{3}{2}\frac{({\bf J\cdot \hat{r}})\ {\bf \hat{r}}}{r^3}$$ and $${\bf B}_{HL}= \frac{G}{G_N} \ {\bf B}_{GR}.$$ ![Keplerian orbital parameters. The longitude of the ascending node $\Omega$ is defined to be the angle between a stationary reference line, the vernal equinox which is also called the first point of Aries, and the line connecting the origin of the coordinate system and the point where the orbiting body intersects the orbital plane as it is moving upwards [@brouwer61].[]{data-label="orbitalelem"}](Orbitalelem.pdf){width="9cm"} In order to investigate how this field will affect the motion of test particles around the Earth, we will use the Gaussian perturbation equations [@brouwer61] that give the time variation of the Keplerian orbital elements in the presence of a perturbing force. In the case at hand, we treat the gravitomagnetic force ${\bf F}=-4{\bf v \times B}$ as a small perturbation of the otherwise unperturbed Keplerian motion. Here we focus only on the secular variation of the longitude of the ascending node $\Omega$, which is defined to be the angle between a stationary reference line and the line connecting the origin of the coordinate system and the point where the orbiting body intersects the orbital plane as it is moving upwards, see fig.\[orbitalelem\].\ The time variation of $\Omega$ is connected in General Relativity with the Lense-Thirring drag [@lense18]. In particular, averaging over one period one can separate the secular variation of the perturbation, which in General Relativity reads $$\langle\dot{\Omega}_{GR}\rangle=\frac{2 G_N J}{\text{a}^3(1-e^2)^{3/2}},$$ where $\text{a}$ is the semimajor axis of the orbit of the test body and $e$ the orbit eccentricity. The ratio between the Horava-Lifshitz and the General Relativity contributions does not depend on the details of the orbital configuration since their functional dependence is the same. It only constrains the coupling constant $G$ that appears in eq.(\[HLaction\]). Actually, the measurement of $\dot{\Omega}$ by orbiting space experiments and its confrontation with the General Relativity estimation $$\label{lense} \frac{\langle\dot{\Omega}_{HL}\rangle}{\langle\dot{\Omega}_{GR}\rangle}=\frac{G}{G_N}$$ gives an upper bound on the ratio of the coupling constants $G$ and $G_N$. On the other hand, the gravitomagnetic and the gravitoelectric fields will also cause a precession of the spin $\bf{S}$ of gyroscopes orbiting around the Earth. A gyroscope will undergo precession due to two torques. 
One is known as the geodetic precession [@desitter16] and is independent of the Earth's gravitomagnetic field; the other is due to a coupling to the gravitomagnetic field: $$\dot{\bf{S}}={\bf{\Omega}}\times{\bf{S}},\quad\quad{\bf\Omega}={\bf\Omega^G}+{\bf\Omega^{LT}},$$ where ${\bf\Omega^G}$ and ${\bf\Omega^{LT}}$ are the geodetic and Lense-Thirring precession, respectively.\ The value of this precession in Horava-Lifshitz theory can be deduced in terms of the weak field solution written above; in particular, the Lense-Thirring term has been evaluated above and is related to the General Relativity value by means of eq.(\[lense\]). As far as the de Sitter (geodetic) effect is concerned, the General Relativity contribution is $${\bf\Omega}^{G}_{GR}=\frac{3}{2 r^3}GM({\bf r} \times {\bf v})$$ and the ratio between the Horava-Lifshitz value and the General Relativity one turns out to be $$\label{geodetic} \frac{\Omega^{G}_{HL}}{\Omega^{G}_{GR}}=\frac{1}{3}\left(1+2\frac{G}{G_N}a_1-2\frac{a_2}{a_1}\right)$$ We note that the geodetic effect allows us to constrain the matter couplings $a_1$ and $a_2$, even though only through the combination shown above.\ Because of the peculiarity of the situation analysed here (in our case $V_i=W_i$ as mentioned before eq.(\[gravitomag\])), neither effect depends on the parameter $\lambda$, that is the coefficient of the extrinsic curvature term. Constraints from space experiments {#Constraints from space experiments} ================================== The four Gravity Probe B (GP-B) gyroscopes aboard an Earth-orbiting satellite [@gpb] made it possible to measure the frame-dragging effect with an error of about $19\%$ [@everitt11], $\Omega ^{LT}_{obs}=37.2\pm 7.2 \ \ \text{mas/yr}$, while the General Relativity predicted value is $\Omega^{LT}_{GR}=39.2 \ \ \text{mas/yr}$. This result provides a constraint on the difference between the coupling constant $G$ that appears in the Horava-Lifshitz theory and the Newton constant $G_N$. Indeed, $$\left|\frac{\Omega^{LT}_{obs}-\Omega^{LT}_{GR}}{\Omega^{LT}_{GR}}\right|=0.05\ \ \ \Longrightarrow\ \ \ \left|\frac{G}{G_N}-1\right|=0.05.$$ The mission was able to measure the geodetic effect as well. The measured value reads $\Omega^{G}_{obs}=-6601.8\pm18.3\ \ \text{mas/yr}$, at the level of $0.28\%$. The General Relativity predicted value turns out to be $\Omega^{G}_{GR}=-6606.1\ \ \text{mas/yr}$. In this case the observations constrain a combination of parameters, namely $$\left|\frac{\Omega^{G}_{obs}-\Omega^{G}_{GR}}{\Omega^{G}_{GR}}\right|=\left|\frac{2}{3}\left(\frac{G}{G_N}a_1-\frac{a_2}{a_1}-1\right)\right|=0.0006$$ The LAGEOS satellites (LAser GEOdynamics Satellites), launched by NASA (LAGEOS) and NASA-ASI (LAGEOS-2) [@lageos], have recently been able to test the Lense-Thirring effect. This mission has improved the sensitivity thanks to the laser-ranging technique for measuring distances as well as the combination of the two LAGEOS nodal longitudes, which allows one to eliminate the uncertainty in the value of the second degree zonal harmonic describing the Earth’s quadrupole moment.\ The Lense-Thirring effect measured for the combination of the two LAGEOS nodal longitudes reads [@ciufolini04] $$\dot\Omega^{LT}_{obs}=47.9 \ \ \text{mas/yr},$$ with an error of about $10\%$. 
In this case the value predicted by General Relativity is $\dot\Omega^{LT}_{GR}=48.2 \ \ \text{mas/yr}$; then, the constraint on the parameter $c$ reduces to $$\left|\frac{G}{G_N}-1\right|=0.006.$$ Lastly, the Laser Relativity Satellite (LARES) mission [@lares], launched in February 2012 and funded by ASI, aims to achieve an uncertainty of only a few percent. Actually, the three nodes of the LAGEOS, LAGEOS-2 and LARES satellites, together with gravitational field determinations from the GRACE space mission (Gravity Recovery And Climate Experiment) [@grace], will make it possible to improve the previous results by eliminating the uncertainties in the value of the first two even zonal harmonics of the Earth potential.\ At the present stage, Monte Carlo simulations predict the standard deviation of the simulated frame-dragging value to be equal to $1.4\%$ of the frame-dragging effect predicted by General Relativity, with a mean value equal to $100.24\%$ of its general relativistic value [@ciufolini13].\ This prediction would constrain the ratio of the Horava-Lifshitz coupling constant $G$ to the Newton constant $G_N$ to differ from unity by at most $2\cdot10^{-3}$: $$\left|\frac{G}{G_N}-1\right|=0.002.$$ It could be instructive to compare these constraints with the results coming from experimental tests.\ Among the fundamental constants, the gravitational constant $G_N$ is the least accurately measured. Newton’s law of gravitation has been used to test it at laboratory scales but also at geophysical and astronomical scales. Nevertheless, the latter mainly give information only on $GM$: through the measurements of planetary and satellite orbits only the product of Newton’s constant and the masses of the interacting bodies can be directly constrained.\ Quite recently, in [@umezu05] an independent estimation has been derived. In that work the spatial variation of the gravitational constant has been parametrized by $G(r)=\xi G_N$[^5]. At different scales, a change of the Newton constant affects various phenomena. For scales up to $r\sim 10^{10}$ m the analysis of the age of stars in globular clusters constrains $\xi$, since an increasing Newton’s constant causes stars to burn faster. The value obtained in [@umezu05] is $0.93\lesssim\xi\lesssim 1.09$.\ Further, the primordial light-element abundances constrain the value of Newton’s constant during the BBN epoch. In particular, the helium abundance and the deuterium-to-hydrogen abundance give $0.95\lesssim\xi\lesssim1.01$ at $10^{8}\div10^{12}$ m scales.\ Finally, on cosmological scales, a changing Newton’s constant appears in the amplitude of the acoustic peaks in the CMB power spectrum, even though it provides a weaker constraint. By using WMAP data the range, at $95\%$ confidence level, is $0.75\lesssim\xi\lesssim1.66$. Terrestrial experiment: GINGER ============================== GINGER (Gyroscopes IN GEneral Relativity) is a proposed three-dimensional array of laser gyroscopes with the aim of measuring general relativity effects in a terrestrial laboratory [@ginger]. Since the whole proposed experiment is Earth-based, it operates in a constant gravito-electric field and does not require any modelling of the interior of the Earth. 
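Before turning to GINGER in detail, the numbers behind the space-experiment bounds of the previous section are straightforward to reproduce. The sketch below first evaluates the General Relativity nodal drift for a single LAGEOS-like node using nominal orbital and Earth parameters (not taken from the text, with the homogeneous-sphere angular momentum assumed above), and then the fractional deviations that translate into the quoted constraints.

```python
import numpy as np

# Nominal Earth and LAGEOS-like orbit parameters (illustrative, not from the text)
G_N, c = 6.674e-11, 2.998e8
M, R, omega_E = 5.972e24, 6.371e6, 7.292e-5
J = 0.4 * M * R**2 * omega_E                     # homogeneous-sphere approximation
a, e = 12.27e6, 0.0045                           # semimajor axis (m), eccentricity

Omega_dot = 2 * G_N * J / (c**2 * a**3 * (1 - e**2)**1.5)        # rad/s (SI form)
mas_yr = Omega_dot * (180 / np.pi) * 3.6e6 * 3.156e7             # convert to mas/yr
print(f"GR nodal drift for one node ~ {mas_yr:.0f} mas/yr")

# Fractional deviations quoted in the text (values in mas/yr)
print(abs(37.2 - 39.2) / 39.2)           # GP-B frame dragging   -> ~0.05
print(abs(-6601.8 + 6606.1) / 6606.1)    # GP-B geodetic effect  -> ~0.0006
print(abs(47.9 - 48.2) / 48.2)           # LAGEOS frame dragging -> ~0.006
```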
The experiments in space are based on the precession of physical gyroscopes induced by the gravitational field, so that the coupling of the angular momentum of a gyroscope with the gravitational field induces a torque depending on the configuration of the field.\ The GINGER experiment, instead, exploits the anisotropic propagation of light in the skew symmetric space-time associated with rotating bodies [@tartaglia14].\ The key idea of the GINGER experiment is to measure the difference in the times of flight of two beams circulating in a laser cavity in opposite directions. This translates into a time difference $\delta\tau=\tau_{+}-\tau_{-}$ between the right-handed beam propagation time ($\tau_{+}$) and the left-handed one ($\tau_{-}$) $$\delta\tau=-2\sqrt{g_{00}}\oint\frac{g_{0i}}{g_{00}}ds^i.$$ Both a gravitoelectric (Newtonian) field and a gravitomagnetic contribution due to the rotation of the Earth are present. Following the analysis in [@bosi11], the General Relativity calculation, in linear approximation for an instrument with its normal contained in the local meridian plane, gives $$\begin{aligned} c\ \delta\tau&=&\frac{4A}{c}\Omega_{E}\left[\cos{(\theta+\alpha)}-\frac{G_NM}{c^2 R}\sin{\theta}\sin{\alpha}\right.\nonumber\\ &-&\left.\frac{G_NI_E}{c^2 R^3}(2\cos{\theta}\cos{\alpha}+\sin{\theta}\sin{\alpha})\right],\end{aligned}$$ where $A$ is the area encircled by the light beams, $\alpha$ is the angle between the local radial direction and the normal to the plane of the array-laser ring, measured in the meridian plane, and $\theta$ is the colatitude of the laboratory; $\Omega_{E}$ ($I_E$) is the rotation rate (moment of inertia) of the Earth as measured in the local reference frame. The first contribution in the square brackets is the Sagnac term, due to the rotation of the Earth; the other terms encode the relativistic effects: the geodetic and Lense-Thirring terms[^6]. To cancel out the pure kinematic term, an accuracy of $1$ part in $10^{10}$ is necessary. This can be achieved, according to the proposal, with a detector consisting of six large ring lasers arranged along three orthogonal axes and in about two years of measurements. For a theory that differs from General Relativity but is still in the set of metric theories that can be described by the PPN formalism, the contribution to the time shift has been evaluated in [@bosi11] and, for the Horava-Lifshitz theory, the geodetic and Lense-Thirring contributions read: $$\begin{aligned} c\ \delta\tau&=&\frac{4A}{c}\Omega_{E}\left[-\left(1+\frac{G}{G_N} a_1-\frac{a_2}{a_1}\right)\frac{G M}{c^2 R}\sin{\theta}\sin{\alpha}\right.\nonumber\\ &-&\left.\frac{G I_E}{c^2 R^3}\left(2\cos{\theta}\cos{\alpha}+\sin{\theta}\sin{\alpha}\right)\right].\end{aligned}$$ As for the GP-B mission, a measurement of the geodetic and Lense-Thirring effects would give constraints on the parameters of the Horava-Lifshitz theory. Conclusions {#Conclusions} =========== In this paper we have considered the weak field and slow motion limit of a covariant version of Horava-Lifshitz theory in order to constrain the parameters of the model against the results of some recent space experiments. Such an approximation is well suited for describing the field around the Earth (a massive and slowly rotating body). Satellites orbiting our planet experience weak-field dynamics and move within the slow-motion approximation. The Parametrized Post-Newtonian formalism is a good framework for dealing with such a situation. 
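The relative size of the three contributions in the square brackets of the GINGER formula above can be estimated with the short sketch below; the ring area, orientation angle and colatitude are hypothetical placeholders, and $I_E\approx 0.33\,MR^2$ is the standard value for the Earth's moment of inertia.

```python
import numpy as np

# Rough size of the Sagnac, geodetic-like and Lense-Thirring terms in c*delta_tau
G_N, c = 6.674e-11, 2.998e8
M, R, Omega_E = 5.972e24, 6.371e6, 7.292e-5
I_E = 0.33 * M * R**2
A_ring = 36.0                                       # 6 m x 6 m ring (placeholder)
theta, alpha = np.deg2rad(45.0), np.deg2rad(30.0)   # placeholder geometry

prefactor = 4 * A_ring * Omega_E / c                # c*delta_tau = prefactor * [...]
sagnac = np.cos(theta + alpha)
geodetic = -(G_N * M / (c**2 * R)) * np.sin(theta) * np.sin(alpha)
lense_thirring = -(G_N * I_E / (c**2 * R**3)) * (
    2 * np.cos(theta) * np.cos(alpha) + np.sin(theta) * np.sin(alpha))
for name, term in [("Sagnac", sagnac), ("geodetic", geodetic),
                   ("Lense-Thirring", lense_thirring)]:
    print(f"{name:15s} contribution to c*delta_tau ~ {prefactor * term:+.2e} m")
```

With these placeholder values the relativistic terms come out roughly nine to ten orders of magnitude below the Sagnac term, consistent with the accuracy of about one part in $10^{10}$ quoted above.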
Going into details, when considering the solar system scenario, all the terms that come from the cosmological constant and the space curvature are negligible and the solution explicitly depends only on a few parameters. In particular, at the third post-Newtonian order, the potentials depend on $G$, the Newtonian constant in the Horava-Lifshitz theory, on $\lambda$, the coefficient of the extrinsic curvature term that characterises deviations of the kinetic part of the action from General Relativity, as well as on $a_1$ and $a_2$, which are two arbitrary coupling constants in the matter action.\ The difference between the General Relativity and Horava-Lifshitz solutions translates into different predictions for the motion of satellites around the Earth as well as for the precession of gyroscopes. In particular we have deduced the Lense-Thirring precession and the de Sitter effect. The former only depends on the ratio $\frac{G}{G_N}$, that is, on the difference between the coupling constants $G$ and $G_N$. The latter introduces a constraint on the matter couplings $a_1$ and $a_2$ through the combination $\frac{1}{3}\left(1+2\frac{G}{G_N}a_1-2\frac{a_2}{a_1}\right)$.\ We have then considered constraints from observational data. The Gravity Probe B experiment provides a measurement of the frame dragging with an error of about $19\%$ and of the geodetic effect with an error of $0.28\%$. The two induce constraints on $G$, which is forced to be equal to the Newton constant $G_N$ to within five parts in $10^2$, and on the combination $\left(\frac{G}{G_N}a_1-\frac{a_2}{a_1}\right)$, which must be equal to unity.\ When considering the Lense-Thirring effect from the LAGEOS experiment, which reached an accuracy of $10\%$, the constraint on $G$ improves by about an order of magnitude, and the Monte Carlo simulations for LARES suggest that the ratio $G/G_N$ will be constrained to differ from unity by at most $2\cdot 10^{-3}$.\ Finally, we have presented the theoretical result expected from GINGER, an Earth-based experiment that aims to evaluate the response of a ring laser array to the gravitational field. Again, from the forthcoming GINGER data we will get a combination of the de Sitter and Lense-Thirring effects, thus providing independent constraints on the same parameters of the Horava-Lifshitz theory. Acknowledgements {#acknowledgements .unnumbered} ================ The authors wish to thank Agenzia Spaziale Italiana (ASI) for partial support through contract n.I/034/12/0 and Istituto Nazionale di Fisica Nucleare (INFN). [99]{} C.M. Will, Living Rev. Relativity 17, (2014), 4. URL (cited on 1/07/2014): http://www.livingreviews.org/lrr-2014-4. M.L. Ruggiero and L. Iorio, JCAP [**07**]{} (2007) 010. T.L. Smith, A.L. Erickcek, R.R. Caldwell and M. Kamionkowski, Phys. Rev. D [**77**]{}, 024015 (2008). C.G. Böhmer, G. De Risi, T. Harko and F.S.N. Lobo, Class. Quantum Grav. [**27**]{} (2010) 185013. T. Harko, Z. Kovács and F.S.N. Lobo, Proc. R. Soc. A (2011) [**467**]{}, 1390-1407. G. Lambiase, M. Sakellariadou and A. Stabile, JCAP [**12**]{} (2013) 020. P. Horava, Phys. Rev. D [**79**]{}, 084008 (2009). P.A.M. Dirac, Proceedings of the Royal Society of London A, Mathematical and Physical Sciences [**246**]{}, Issue 1246, pp. 333-343 (August 1958). R. Arnowitt, S. Deser, and C. Misner, Phys. Rev. [**116**]{} (5): 1322–1330 (1959). Anzhong Wang and Yumei Wu, Phys. Rev. D [**83**]{}, 044031 (2011). P. Horava and C. Melby-Thompson, Phys. Rev. D [**82**]{}, 064027 (2010). A.M. da Silva, Class. Quantum Grav. [**28**]{}, 055011 (2011). C.M. 
Will, [*Theory and experiments in gravitational physics*]{}, Cambridge: Cambridge University Press, 1981. W. de Sitter, Mon. Not. Roy. Astron. Soc. [**77**]{}, 155–184 (1916). J. Lense and H. Thirring, Phys. Z. [**19**]{}, 156–163 (1918). I. Ciufolini and J.A. Wheeler, [*Gravitation and Inertia*]{}, Princeton University Press, 1996. . . . http://www.df.unipi.it/ginger A. Tartaglia [*et al.*]{}, EPJ Web Conf. [**74**]{} 03001, 2014. Kai Lin, Shinji Mukohyama, Anzhong Wang and Tao Zhu, Phys. Rev. D [**89**]{}, 084022 (2014). T. Sotiriou, M. Visser and S. Weinfurtner, JHEP [**10**]{}, 033 (2009). Kai Lin, Shinji Mukohyama and Anzhong Wang, Phys. Rev. D [**86**]{}, 104024 (2012). D. Brouwer and G.M. Clemence, [*Methods of Celestial Mechanics*]{}, Academic Press, New York, 1961. C.W. Everitt [*et al.*]{}, Phys. Rev. Lett. [**106**]{}, 221101 (2011). I. Ciufolini and E.C. Pavlis, Nature [**431**]{}, 958–960 (2004). . I. Ciufolini [*et al.*]{}, Class. Quant. Grav. [**30**]{} (2013) 235009. Ken-ichi Umezu, Kiyotomo Ichiki and Masanobu Yahiro, Phys. Rev. D [**72**]{}, 044010 (2005). F. Bosi [*et al.*]{}, Phys. Rev. D [**84**]{}, 122002 (2011). [^1]: ninfa.radicella@sa.infn.it [^2]: lambiase@sa.infn.it [^3]: parisi@sa.infn.it [^4]: vilasi@sa.infn.it [^5]: $\xi$ is related to the parameter $c$ of the Horava-Lifshitz theory: $\xi=c/4$ [^6]: Actually, the Thomas precession, related to the angular defect due to the Lorentz boost, also comes into play, but it is two orders of magnitude smaller than the geodetic and Lense-Thirring terms.
--- abstract: 'In this paper we define the Dupled abstract Tile Assembly Model (DaTAM), which is a slight extension to the abstract Tile Assembly Model (aTAM) that allows for not only the standard square tiles, but also “duple” tiles which are rectangles pre-formed by the joining of two square tiles. We show that the addition of duples allows for powerful behaviors of self-assembling systems at temperature $1$, meaning systems which exclude the requirement of cooperative binding by tiles (i.e., the requirement that a tile must be able to bind to at least $2$ tiles in an existing assembly if it is to attach). Cooperative binding is conjectured to be required in the standard aTAM for Turing universal computation and the efficient self-assembly of shapes, but we show that in the DaTAM these behaviors can in fact be exhibited at temperature $1$. We then show that the DaTAM doesn’t provide asymptotic improvements over the aTAM in its ability to efficiently build thin rectangles. Finally, we present a series of results which prove that the temperature-$2$ aTAM and temperature-$1$ DaTAM have mutually exclusive powers. That is, each is able to self-assemble shapes that the other can’t, and each has systems which cannot be simulated by the other. Beyond being of purely theoretical interest, these results have practical motivation as duples have already proven to be useful in laboratory implementations of DNA-based tiles.' author: - 'Jacob Hendricks [^1]' - 'Matthew J. Patitz [^2]' - 'Trent A. Rogers [^3]' - 'Scott M. Summers [^4]' bibliography: - 'tam.bib' title: 'The Power of Duples (in Self-Assembly): It’s Not So Hip To Be Square' --- [^1]: Department of Computer Science and Computer Engineering, University of Arkansas, [jhendric@uark.edu](jhendric@uark.edu) Supported in part by National Science Foundation Grant CCF-1117672. [^2]: Department of Computer Science and Computer Engineering, University of Arkansas, [patitz@uark.edu](patitz@uark.edu) Supported in part by National Science Foundation Grant CCF-1117672. [^3]: Department of Mathematical Sciences, University of Arkansas, [tar003@uark.edu](tar003@uark.edu) Supported in part by National Science Foundation Grant CCF-1117672. [^4]: Department of Computer Science, University of Wisconsin–Oshkosh, Oshkosh, WI 54901, USA. [summerss@uwosh.edu](summerss@uwosh.edu).
--- abstract: | Let $1 \in A \subset B$ be an inclusion of unital C\*-algebras of index-finite type and depth $2$. Suppose that $A$ is infinite dimensional simple with ${{\mathrm{tsr}}}(A) = 1$ and SP-property. Then ${{\mathrm{tsr}}}(B) \leq 2$. As a corollary, when $A$ is a simple [C\*-algebra]{} with ${{\mathrm{tsr}}}(A) = 1$ and the SP-property and ${\alpha}\colon G \rightarrow {{\rm Aut}}(A)$ is an action of a finite group $G$, $ {{\mathrm{tsr}}}(A \rtimes_{\alpha}G) \leq 2. $ address: - | Department of Mathematical Sciences\ Ritsumeikan University\ Kusatsu, Shiga, 520 - 2152 Japan - | Department of Mathematical Sciences\ Ritsumeikan University\ Kusatsu, Shiga, 520 - 2152 Japan author: - 'Hiroyuki Osaka$^{*}$' - 'Tamotsu Teruya$^{**}$' title: 'Stable rank of inclusion of C\*-algebras of depth 2' --- Introduction ============ The notion of topological stable rank for a C\*-algebra $A$, denoted by ${{\mathrm{tsr}}}(A)$, was introduced by Rieffel ([@Rf1]); it generalizes the concept of the dimension of a topological space. He presented its basic properties and a stability theorem related to the K-theory of C\*-algebras. In [@Rf1] he proved that ${{\mathrm{tsr}}}(A \rtimes_\alpha {\mathbb Z}) \leq {{\mathrm{tsr}}}(A) + 1$, and asked if an irrational rotation algebra $A_\theta$ has topological stable rank two. I. Putnam ([@pu]) gave a complete answer to this question, that is, ${{\mathrm{tsr}}}(A_\theta) = 1$. Moreover, using the notion of approximate divisibility and U. Haagerup’s striking result ([@ha], [@ht]), Blackadar, Kumjian, and Rørdam ([@bkr]) proved that every nonrational noncommutative torus has topological stable rank one. Naturally, one may ask how to compute the topological stable rank of $A \rtimes_\alpha G$ for a discrete group $G$. On the other hand, one of the long-standing problems was whether the fixed point algebra of a UHF C\*-algebra under an action of a finite group $G$ is an AF C\*-algebra. In 1988, Blackadar ([@bl3]) constructed a symmetry on the CAR algebra whose fixed point algebra is not an AF C\*-algebra. Note that Kumjian ([@km]) constructed a symmetry on a simple AF C\*-algebra whose fixed point algebra is not an AF C\*-algebra. Blackadar posed the question in [@bl3] of whether ${{\mathrm{tsr}}}(A \rtimes_\alpha G) = 1$ for any unital AF C\*-algebra $A$, a finite group $G$, and an action $\alpha$ of $G$ on $A$. In [@OT] the authors presented a partial answer to an extended version of Blackadar’s question using the C\*-index theory of Watatani ([@wata]), that is: Let $1 \in A \subset B$ be an inclusion of unital C\*-algebras and $E\colon B \rightarrow A$ be a faithful conditional expectation of index-finite type. Suppose that the inclusion $1 \in A \subset B$ has depth $2$ and $A$ is tsr boundedly divisible with ${{\mathrm{tsr}}}(A) = 1$. Then ${{\mathrm{tsr}}}(B) \leq 2$. Here a C\*-algebra $A$ is [*tsr boundedly divisible*]{} ([@rf3 Definition 4.1]) if there is a constant $K$ ($> 0$) such that for every positive integer $m$ there is an integer $n \geq m$ such that $A$ can be expressed as $M_n(B)$ for a C\*-algebra $B$ with ${{\mathrm{tsr}}}(B) \leq K$. A typical example is $B \otimes UHF$ for any unital [C\*-algebra]{} $B$. As a corollary: Let $A$ be a tsr boundedly divisible, unital C\*-algebra with ${{\mathrm{tsr}}}(A) = 1$, $G$ a finite group, and $\alpha$ an action of $G$ on $A$. Then ${{\mathrm{tsr}}}(A \rtimes_\alpha G) \leq 2$. This estimate is best possible. Indeed in [@bl3 Example 8.2.1] B.
Blackadar constructed a symmetry $\alpha$ on $CAR$ such that $ (C[0,1] \otimes CAR) \rtimes_{id \otimes \alpha} Z_2 \cong C[0,1] \otimes B, $ where $B$ is the Bunce-Deddens algebra of type $2^\infty$. Then since $K_1(B)$ is non-trivial, we know that $ {{\mathrm{tsr}}}(C[0,1] \otimes B) = 2. $ In this note we address the generalized Blackadar question and obtain, in some sense, the final estimate: Let $1 \in A \subset B$ be an inclusion of unital C\*-algebras of index-finite type and depth $2$. Suppose that $A$ is infinite dimensional simple with ${{\mathrm{tsr}}}(A) = 1$ and SP-property. Then ${{\mathrm{tsr}}}(B) \leq 2$. In the case of crossed product algebras we conclude that ${{\mathrm{tsr}}}(A \rtimes_\alpha G) \leq 2$ for a simple unital C\*-algebra $A$ with ${{\mathrm{tsr}}}(A) = 1$ and the SP-property, and an action ${\alpha}\colon G \rightarrow {{\rm Aut}}(A)$ of a finite group $G$. We still cannot conclude that ${{\mathrm{tsr}}}(A \rtimes_\alpha G) = 1$, but this suggests that the question should have an affirmative answer. Preliminaries ============= \[dfn2.1\] Let $A$ be a unital C\*-algebra and $Lg_n(A)$ be the set of elements $(b_i)$ of $A^n$ such that $$Ab_1 + Ab_2 + \cdots + Ab_n = A.$$ Then the topological stable rank of $A$, ${{\mathrm{tsr}}}(A)$, is defined to be the least integer $n$ such that the set $Lg_n(A)$ is dense in $A^n$. The topological stable rank of a non-unital C\*-algebra is defined as the topological stable rank of its unitization $\tilde{A}$. Note that ${{\mathrm{tsr}}}(A) = 1$ is equivalent to the set of invertible elements being dense in $\tilde{A}$. The following is a well-known characterization of topological stable rank one. See [@Rf1] and [@OT Remark 2.4]. \[P:Stablerankone\] Let $A$ be a unital [C\*-algebra]{}. 1. Let $p$ be a non-zero projection in $A$. Then ${{\mathrm{tsr}}}(A) = 1$ if and only if ${{\mathrm{tsr}}}(pAp) = {{\mathrm{tsr}}}((1- p)A(1- p)) = 1$. 2. Let ${\mathbb K}$ be the C\*-algebra of compact operators on an infinite dimensional separable Hilbert space. Then $${{\mathrm{tsr}}}(A) = 1 \ \hbox{if and only if} \ {{\mathrm{tsr}}}(A \otimes {\mathbb K}) = 1.$$ \[R:Stablerankone\] Suppose that a [C\*-algebra]{} $B$ is stably isomorphic to a [C\*-algebra]{} $A$, that is, $B \otimes {\mathbb K} \cong A \otimes {\mathbb K}$. Then from Proposition $\ref{P:Stablerankone}(2)$ if ${{\mathrm{tsr}}}(A) = 1$, then ${{\mathrm{tsr}}}(B) = 1$. Let $A$ be a [C\*-algebra]{}. $A$ is said to have the [*SP-property*]{} if any non-zero hereditary C\*-subalgebra of $A$ has a non-zero projection. It is well known that if $A$ has real rank zero, that is, any self-adjoint element can be approximated by self-adjoint elements with finite spectra, then $A$ has the SP-property. (See [@bp0].) Next we summarize the C\*-index theory of Watatani ([@wata]). Let $1 \in A \subset B$ be an inclusion of C\*-algebras, and let $E\colon B \rightarrow A$ be a faithful conditional expectation from $B$ to $A$. A finite family $\{(u_1, v_1), \dots, (u_n, v_n)\}$ in $B \times B$ is called a [*quasi-basis*]{} for $E$ if $$\sum_{i=1}^n u_iE(v_ib) = \sum_{i=1}^nE(bu_i)v_i = b \enskip \hbox{for} \enskip b \in B.$$ We say that a conditional expectation $E$ is of [*index-finite type*]{} if there exists a quasi-basis for $E$. In this case the index of $E$ is defined by $${\rm Index}(E) = \sum_{i=1}^nu_iv_i.$$ (We say also that the inclusion $1 \in A \subset B$ is of [*index-finite type*]{}.)
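To fix ideas, we sketch a standard example of a quasi-basis, stated here only for convenience; it also anticipates the crossed products considered below. Let $G$ be a finite group acting on a unital C\*-algebra $A$ via $\alpha$, let $B = A \rtimes_\alpha G$ with canonical implementing unitaries $\{u_g\}_{g \in G}$, and let $E\colon B \rightarrow A$ be the canonical conditional expectation $E\bigl(\sum_{g} a_g u_g\bigr) = a_e$. Then $\{(u_g, u_g^{*})\}_{g \in G}$ is a quasi-basis for $E$: writing $b = \sum_{h} a_h u_h$, one checks that $$\sum_{g \in G} u_g E(u_g^{*}b) = \sum_{g \in G} u_g\, \alpha_{g^{-1}}(a_g) = \sum_{g \in G} a_g u_g = b,$$ and similarly $\sum_{g \in G} E(bu_g)u_g^{*} = b$, so that $${\rm Index}(E) = \sum_{g \in G} u_g u_g^{*} = |G|.$$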
Note that ${\rm Index}(E)$ does not depend on the choice of a quasi-basis ([@iz3 Example  3.14]) and every conditional expectation $E$ of index-finite type on a C\*-algebra has a quasi-basis of the form $\{(u_1, u_1^*), \dots, (u_n,u_n^*)\}$ ([@wata Lemma  2.1.6]). Moreover ${\rm Index}(E)$ is always contained in the center of $B$, so that it is a scalar whenever $B$ has trivial center, in particular when $B$ is simple ([@wata Proposition  2.3.4]). Let $E\colon B\to A$ be a faithful conditional expectation. Then $B_{A}(=B)$ is a pre-Hilbert module over $A$ with an $A$-valued inner product $$\langle x,y\rangle =E(x^{*}y), \ \ x, y \in B_{A}.$$ Let $\mathcal E$ be the completion of $B_{A}$ with respect to the norm on $B_{A}$ defined by $$\| x\|_{B_{A}}=\|E(x^{*}x)\|_{A}^{1/2}, \ \ x \in B_{A}.$$ Then $\mathcal E$ is a Hilbert $C^{*}$-module over $A$. Since $E$ is faithful, the canonical map $B\to \mathcal E$ is injective. Let $L_{A}(\mathcal E)$ be the set of all (right) $A$-module homomorphisms $T\colon \mathcal E\to \mathcal E$ with an adjoint $A$-module homomorphism $T^{*}\colon \mathcal E\to \mathcal E$ such that $$\langle T\xi,\zeta \rangle = \langle \xi,T^{*}\zeta \rangle \ \ \ \xi, \zeta \in \mathcal E.$$ Then $L_{A}(\mathcal E)$ is a $C^{*}$-algebra with the operator norm $\|T\|=\sup\{\|T\xi \|:\|\xi \|=1\}.$ There is an injective $*$-homomorphism $\lambda \colon B\to L_{A}(\mathcal E)$ defined by $$\lambda(b)x=bx$$ for $x\in B_{A}$ and $b\in B$, so that $B$ can be viewed as a $C^{*}$-subalgebra of $L_{A}(\mathcal E)$. Note that the map $e_{A}\colon B_{A}\to B_{A}$ defined by $$e_{A}x=E(x),\ \ x\in B_{A}$$ is bounded and thus it can be extended to a bounded linear operator, denoted by $e_{A}$ again, on $\mathcal E$. Then $e_{A}\in L_{A}({\mathcal E})$ and $e_{A}=e_{A}^{2}=e_{A}^{*}$; that is, $e_{A}$ is a projection in $L_{A}(\mathcal E)$. The projection $e_A$ is called the [*Jones projection*]{} of $E$. The [*(reduced) $C^{*}$-basic construction*]{} is a $C^{*}$-subalgebra of $L_{A}(\mathcal E)$ defined to be $$C^{*}(B, e_{A}) = \overline{ span \{\lambda (x)e_{A} \lambda (y) \in L_{A}({\mathcal E}): x, \ y \in B \ \} }^{\|\cdot \|}$$ ([@wata Definition  2.1.2]). Since $C^{*}(B, e_{A})$ is isomorphic to $qM_n(A)q$ for some $n \in {{\mathbf{N}}}$ and projection $q \in M_n(A)$ by [@wata Lemma 3.3.4], we conclude that if ${{\mathrm{tsr}}}(A) = 1$, then ${{\mathrm{tsr}}}(C^{*}(B, e_{A})) = 1$ from Proposition $\ref{P:Stablerankone}$ and Remark $\ref{R:Stablerankone}$. The inclusion $1 \in A \subset B$ of unital C\*-algebras of index-finite type is said to have [*finite depth $k$*]{} if the derived tower obtained by iterating the basic construction $$A' \cap A \subset A' \cap B \subset A' \cap B_2 \subset A' \cap B_3 \subset \cdots$$ satisfies $(A' \cap B_k)e_k(A' \cap B_k) = A' \cap B_{k+1}$, where $\{e_k\}_{k \geq 1}$ are the projections obtained by iterating the basic construction, so that $B_{k+1} = C^*(B_{k}, e_k)$ ($k \geq 1$), with $B_1 = B$ and $e_1 = e_A$. Let $E_k : B_{k+1} \rightarrow B_k$ be a faithful conditional expectation corresponding to $e_k$ for $k \geq 1$. When $G$ is a finite group and $\alpha$ an action of $G$ on $A$, it is well known that the inclusion $1 \in A \subset A \rtimes_\alpha G$ has depth 2. (See [@OT Lemma 3.1].) Main result =========== The following result is contained in [@OT Theorem 5.1]. We give a sketch of the proof to keep the paper self-contained.
\[prpOT2\]$($cf. [@OT Theorem 5.1]$)$ Let $1 \in A \subset B$ be an inclusion of unital C\*-algebras of index-finite type and depth $2$. Suppose that ${{\mathrm{tsr}}}(A) = 1.$ Then we have $$\sup_{p\in P(A)}{{\mathrm{tsr}}}(pBp) < \infty,$$ where $P(A)$ denotes the set of all projections in $A$. Let $$1 \in A \subset B \subset B_2 \subset B_3 \subset \cdots$$ be the derived tower of iterating the basic construction and $\{e_k\}_{k\geq1}$ be canonical projections such that $B_{k+1} = C^*(B_k,e_k)$, where $e_1 = e_A$. Since $1 \in A \subset B$ is of depth 2, we have $$(A' \cap B_2)e_2(A' \cap B_2) = A' \cap B_3.$$ (See [@GHJ Definition 4.6.4].) Then there are finitely many elements $\{u_i\}_{i=1}^n$ in $A' \cap B_2$ such that $$\sum_{i=1}^nu_ie_2u_i^* = 1.$$ Since for any $b \in B_2$ $$\begin{array}{ll} be_2 = 1\cdot be_2 &= \sum_{i=1}^nu_ie_2u_i^*be_2\\ &= \sum_{i=1}^nu_iE_2(u_i^*b)e_2, \end{array}$$ from [@wata Lemma 2.1.1 (2)] we have $b = \sum_{i=1}^nu_iE_2(u_i^*b)$. Similarly, we can show that $b = \sum_{i=1}^nE_2(bu_i)u_i^*$ for any $b \in B_2$. It follows that $\{(u_i, u_i^*)\}_{i=1}^n$ is a quasi-basis for $E_2$. Since $pu_i = u_ip$ for $1 \leq i \leq n$ and any projection $p \in A$, a simple calculation shows that $\{(pu_i, pu_i^*)\}_{i=1}^n$ is a quasi-basis for $F_p = E_2|_{pB_2p}$ from $pB_2p$ onto $pBp$. Hence from [@OT Proposition 5.3] we have $$\begin{array}{ll} {{\mathrm{tsr}}}(pBp) &\leq n^2\times{{\mathrm{tsr}}}(pB_2p) - n + 1\\ &= n^2 - n + 1. \end{array}$$ The last equality follows from ${{\mathrm{tsr}}}(B_2) = 1$ and Proposition $\ref{P:Stablerankone}(1)$. Since $n$ is independent of the choice of projections in $A$, we have $$\sup_{p\in P(A)}{{\mathrm{tsr}}}(pBp) \leq n^2 - n + 1 < \infty.$$ The following main theorem is an extended version of [@OT Theorem 5.1]. \[T:Main\] Let $1 \in A \subset B$ be an inclusion of unital C\*-algebras of index-finite type and depth $2$. Suppose that $A$ is infinite dimensional simple with ${{\mathrm{tsr}}}(A) = 1$ and SP-property. Then ${{\mathrm{tsr}}}(B) \leq 2$. Since $1 \in A \subset B$ is an inclusion of unital C\*-algebras of index-finite type and depth $2$ with ${{\mathrm{tsr}}}(A) = 1$, from Proposition $\ref{prpOT2}$ we have $$\sup_{p\in P(A)}{{\mathrm{tsr}}}(pBp) < \infty.$$ Set $K = \sup_{p\in P(A)}{{\mathrm{tsr}}}(pBp)$. Since $A$ is simple with the SP-property, there is a sequence of mutually orthogonal equivalent projections $\{p_i\}_{i=1}^N$ in $A$ such that $N > K$. (For example see [@Hl Lemma 3.5.7].) Set $p = \sum_{i=1}^Np_i$. Then $pBp$ contains a system of matrix units such that $$pBp \cong M_N(p_1Bp_1).$$ Then using [@Rf1 Theorem 6.1] $$\begin{aligned} {{\mathrm{tsr}}}(pBp) &= {{\mathrm{tsr}}}(M_N(p_1Bp_1))\\ &= \{\frac{{{\mathrm{tsr}}}(p_1Bp_1) - 1}{N}\} + 1\\ &\leq \{\frac{K}{N}\} + 1 = 2,\\\end{aligned}$$ where $\{a \}$ denotes the least integer greater than or equal to $a$. Since $A$ is simple, $p$ is a full projection in $A$, and moreover, in $B$. Hence from [@bl4 Theorem 4.5] we have $$\begin{aligned} {{\mathrm{tsr}}}(B) \leq {{\mathrm{tsr}}}(pBp) \leq 2.\end{aligned}$$ Let $A$ be a simple [C\*-algebra]{} with ${{\mathrm{tsr}}}(A) = 1$ and the SP-property and ${\alpha}\colon G \rightarrow {{\rm Aut}}(A)$ an action of a finite group $G$. Then $${{\mathrm{tsr}}}(A \rtimes_{\alpha}G) \leq 2.$$ If $A$ is finite dimensional, then $A \rtimes_{\alpha}G$ is finite dimensional, and ${{\mathrm{tsr}}}(A \rtimes_{\alpha}G) = 1$. Hence we may assume that $A$ is infinite dimensional.
Since $A \subset A \rtimes_{\alpha}G$ is an inclusion of index-finite type and depth 2 by [@OT Lemma 3.1], the conclusion follows from Theorem $\ref{T:Main}$. \[C:Crossed product\] Let $A$ be a simple [C\*-algebra]{} of tracial topological rank zero and ${\alpha}\colon G \rightarrow {{\rm Aut}}(A)$ an action of a finite group $G$. Then $${{\mathrm{tsr}}}(A \rtimes_{\alpha}G) \leq 2.$$ Since $A$ has tracial topological rank zero, $A$ has ${{\mathrm{tsr}}}(A) = 1$ and the SP-property. (For example see [@Hl Lemma 3.6.6 and Theorem 3.6.10].) Hence the conclusion follows from the preceding corollary. \[blackadr’s example\] If a given [C\*-algebra]{} $A$ is only assumed to satisfy ${{\mathrm{tsr}}}(A) = 1$, the estimate in Corollary $\ref{C:Crossed product}$ is best possible. Indeed in [@bl3 Example 8.2.1] Blackadar constructed a symmetry $\alpha$ on $CAR$ such that $$(C[0,1] \otimes CAR) \rtimes_{id \otimes \alpha} Z_2 \cong C[0,1] \otimes B,$$ where $B$ is the Bunce-Deddens algebra of type $2^\infty$. Then since $K_1(B)$ is non-trivial, we know that $${{\mathrm{tsr}}}(C[0,1] \otimes B) = 2.$$ $($See also [@nop Proposition  5.2].$)$ From Corollary $\ref{C:Crossed product}$, if $A$ is an infinite dimensional simple AF [C\*-algebra]{}, we conclude that $$\begin{aligned} {{\mathrm{tsr}}}(A \rtimes_{\alpha}G) \leq 2\end{aligned}$$ for any action ${\alpha}\colon G \rightarrow Aut(A)$ of any finite group $G$. This provides supporting evidence for an affirmative answer to Blackadar’s question in [@bl3], that is, that ${{\mathrm{tsr}}}(A \rtimes_{\alpha}G) = 1$ under the above conditions. We note that under extra conditions on the action this conclusion does hold. Indeed, Phillips proved in [@Ph] that if ${\alpha}$ has the strict Rokhlin property, then $A \rtimes_{\alpha}G$ is again an AF [C\*-algebra]{}. More recently, the first author and Phillips proved in [@OP] that if ${\alpha}$ is an action with the tracial Rokhlin property of a finite group $G$ on a simple C\*-algebra $A$ of tracial topological rank zero, then $A \rtimes_{\alpha}G$ again has tracial topological rank zero. All of this evidence suggests that Blackadar’s question should have an affirmative answer. \[R:Cancellation\] A [C\*-algebra]{} $A$ is said to have [*cancellation of projections*]{} if $p \sim q$ holds whenever $p, q$, and $r$ are projections in $A$ with $p \perp r, \ q \perp r, \ p+r \sim q+r$. If the matrix algebra $M_n(A)$ over $A$ has cancellation of projections for each $n \in {{\mathbf{N}}}$, we simply say that $A$ has [*cancellation*]{}. It is well known that if $A$ has ${{\mathrm{tsr}}}(A) = 1$, then $A$ has cancellation. But it was a long-standing problem, until quite recently, whether cancellation of $A$ implies ${{\mathrm{tsr}}}(A)=1$ for a stably finite simple [C\*-algebra]{} $A$; it has finally been settled in [@To], where a stably finite simple $C^*$-algebra $B$ with cancellation and ${{\mathrm{tsr}}}(B)>1$ is constructed by applying Villadsen’s techniques ([@v]) ($B$ is also unital, simple, separable, and nuclear). Very recently, the authors, Jeong, and Phillips have proved in [@JOTP] that under the same assumptions as in Theorem $\ref{T:Main}$, $B$ has cancellation. Therefore, we predict that ${{\mathrm{tsr}}}(B) = 1$. [99]{} B. Blackadar, [*Symmetries of the CAR algebra*]{}, Annals of Math. [**131**]{}(1990), 589 - 623. B. Blackadar, [*The stable rank of full corners in C\*-algebras*]{}, Proc. Amer. Math. Soc. [**132**]{}(2004), 2945 - 2950. B. Blackadar, A. Kumjian, and M. Rørdam, [*Approximately central matrix units and the structure of noncommutative tori*]{}, K-theory 6(1992), 267 - 284. L. G. Brown and G. K.
Pedersen, [ *C\*-algebras of real rank zero*]{}, J. Funct. Anal. **99**(1991), 131 - 149. F. M. Goodman, P. de la Harpe, and V. F. R. Jones, [*Coxeter Graphs and Towers of Algebras*]{}, Mathematical Sciences Research Institute Publication 14 (1989), Springer-Verlag, New York Berlin Heidelberg London Paris Tokyo. U. Haagerup, [*Quasitraces on exact C\*-algebras are traces*]{}, preprint. U. Haagerup and S. Thorbjørnsen, [*Random matrices with complex Gaussian entries*]{}, Expo. Math. 21 (2003), no. 4, 293–337. M. Izumi, [*Inclusions of simple C\*-algebras*]{}, J. reine angew. Math. 547(2002), 97 - 138. J. A. Jeong, H. Osaka, T. Teruya, and N. C. Phillips, [*Cancellation of crossed product algebras*]{}, in preparation. A. Kumjian, [*An involutive automorphism of the Bunce-Deddens algebra*]{}, C. R. Math. Sci. Canada, [**10**]{}(1988), 217 - 218. H. Lin, [*Introduction to the classification of amenable C\*-algebras*]{}, World Scientific Publishing Co., Inc., River Edge, NJ, 2001. M. Nagisa, H. Osaka, and N. C. Phillips, [*Ranks of algebras of continuous C\*-algebra valued functions*]{}, Canad. J. Math. 53(2001), 979 - 1030. H. Osaka and N. C. Phillips, [*Crossed products of simple C\*-algebras with tracial rank one by actions with the tracial Rokhlin property*]{}, in preparation. H. Osaka and T. Teruya, [*Topological stable rank of inclusions of unital C\*-algebras*]{}, Internat. J. Math. [**17**]{}(2006), 19 - 34. N. C. Phillips, [*Crossed products by finite cyclic group actions with the tracial Rokhlin property*]{}, unpublished preprint (arXiv:math.OA/0306410). I. Putnam, [*The invertible elements are dense in the irrational rotation algebras*]{}, J. Reine Angew. Math. 410(1990), 160 - 166. M. A. Rieffel, [*Dimension and stable rank in the K-theory of C\*-algebras*]{}, Proc. London Math. Soc. [**46**]{}(1983), 301 - 333. M. A. Rieffel, [*The homotopy groups of the unitary groups of non-commutative tori*]{}, J. Operator Theory [**17**]{}(1987), 237 - 254. A. S. Toms, [*Cancellation does not imply stable rank one*]{}, preprint, arXiv:math.OA/0509107. J.  Villadsen, [*On the stable rank of simple $C^*$-algebras*]{}, J. Amer. Math. Soc. [**12**]{}(1999), no. 4, 1091 - 1102. Y. Watatani, [*Index for C\*-algebras*]{}, Memoirs of the Amer. Math. Soc. 424(1990).
--- address: | $^{1}$ Institute of Theoretical Physics, University of Wrocław, PL-50204 Wrocław, Poland\ $^{2}$ Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia\ $^{3}$ National Research Nuclear University, 115409 Moscow, Russia\ $^{4}$ Extreme Matter Institute EMMI, GSI, D-64291 Darmstadt, Germany --- Introduction {#sec:introduction} ============ For the investigation of matter under extreme conditions and the structure of its phase diagram, the equation of state (EoS) is the key target. In the region of finite temperature and vanishing baryon density, ab initio calculations using Monte Carlo simulations of lattice QCD [@Bazavov:2014pvz] provide a benchmark for developing phenomenological approaches that can be tested, e.g., in heavy-ion collision experiments. Up to now, however, the sign problem prevents the application of lattice QCD methods to the region at low temperature and high baryon density where the possible existence of a first-order phase transition with a critical endpoint has been conjectured. To elucidate the QCD phase structure in this domain inaccessible to terrestrial experiments and present techniques of lattice QCD simulations, valuable information comes from progress in observing the mass–radius (M-R) relationship of compact stars, due to its one-to-one correspondence with the EoS of compact star matter [@Lindblom:1998dp] via the solution of the Tolman–Oppenheimer–Volkoff (TOV) equations [@Tolman:1939jz; @Oppenheimer:1939ne]. For the extraction of the compact star EoS via Bayesian analysis techniques using mass and radius measurements as priors, see Refs. [@Steiner:2010fz; @Steiner:2012xt; @Alvarez-Castillo:2016oln]. In particular, in the era of multi-messenger astronomy, it shall soon become possible to constrain the sequence of stable compact star configurations in the mass–radius plane to such an extent that a benchmark for the EoS of cold and dense matter can be deduced from it. Among the modern observatories for measuring masses and radii of compact stars, the gravitational wave interferometers of the LIGO-Virgo Collaboration (LVC) and the X-ray observatory NICER on-board the International Space Station provide new powerful constraints besides those from radio pulsar timing. In this work, we pay special attention to the state-of-the-art results from the recent measurement of the high mass $2.17^{+0.11}_{-0.10}~M_\odot$ for PSR J0740+6620 by the NANOGrav Collaboration [@Cromartie:2019kug] and the compactness derived from the tidal deformability measurement for the binary compact star merger GW170817 [@Abbott:2018exr] in its mass range ($1.16$–$1.60~M_\odot$ for the low-spin prior). In the study of cold and dense QCD and its applications, separate effective models for the nuclear and quark matter phases (two-phase approaches) are commonly used, with an a priori assumed first-order phase transition, typically associated with simultaneous chiral and deconfinement transitions. Within this setting, for a constant-speed-of-sound model of high-density (quark) matter, a systematic classification of hybrid compact star solutions has been given in [@Alford:2013aca], which makes it possible to identify a strong first-order transition in the EoS by the fact that the hybrid star branch in the mass–radius diagram becomes disconnected from the branch of pure neutron stars.
However, already before this occurs, a strong phase transition manifests itself by the appearance of an almost horizontal branch on which the hybrid star solutions lie, as opposed to the nearly vertical branch of pure neutron stars. In the literature, this strong phase transition has been discussed as due to quark deconfinement [@Blaschke:2013ana; @Benic:2014jia; @Alvarez-Castillo:2016wqj]. This conclusion may, however, be premature, since strong phase transitions with a large latent heat also occur within hadronic matter, for instance due to chiral symmetry restoration within the hadronic phase [@Marczenko:2018jui]. In the present work, we employ the hybrid quark–meson–nucleon (QMN) model [@Benic:2015pia; @Marczenko:2017huu; @Marczenko:2018jui] to explore the implications of dynamical sequential phase transitions at high baryon density on the phenomenology of neutron stars. To improve the description of nuclear matter properties at the saturation density, we extend the previous hybrid QMN model by including a six-point scalar interaction. Our main focus is on the role of the chiral symmetry restoration within the hadronic branch of the EoS. This paper is organized as follows. In Section \[sec:hqmn\_model\], we introduce the hybrid quark–meson–nucleon model. In Section \[sec:eos\], we discuss the obtained numerical results on the equation of state under neutron-star conditions. In Section \[sec:mass\_radius\], we discuss the obtained neutron-star mass–radius relations. In Section \[sec:qcd\_phase\_diagram\], we present possible realizations of the low-temperature phase diagram. Finally, Section \[sec:conclusions\] is devoted to summary and conclusions. Hybrid Quark–Meson–Nucleon Model {#sec:hqmn_model} ================================ In this section, we briefly introduce the hybrid QMN model for the QCD transitions at finite temperature, density, and arbitrary isospin asymmetry for the application to the physics of neutron stars [@Benic:2015pia; @Marczenko:2017huu; @Marczenko:2018jui]. The hybrid QMN model is composed of the baryonic parity doublet [@Detar:1988kn; @Jido:1999hd; @Jido:2001nt] and mesons as in the Walecka model, as well as quark degrees of freedom as in the standard quark–meson model. The spontaneous chiral symmetry breaking yields the mass splitting between the two baryonic parity partners, while it generates the entire mass of a quark. In this work, we consider a system with $N_f=2$; hence, relevant for this study are the lowest nucleons and their chiral partners, as well as the up and down quarks. The hadronic degrees of freedom are coupled to the chiral fields $\left(\sigma, \boldsymbol\pi\right)$, the isosinglet vector field ($\omega_\mu$), and the isovector vector field ($\boldsymbol \rho_\mu$). The quarks are coupled solely to the chiral fields. The important concept of statistical confinement is realized in the hybrid QMN model by considering a medium-dependent modification of the particle distribution functions. In the mean-field approximation, the thermodynamic potential of the hybrid QMN model reads [@Marczenko:2018jui] $$\label{eq:thermo_pot_iso} \Omega = \sum_{x}\Omega_x + V_\sigma + V_\omega + V_b + V_\rho \textrm,$$ where the summation goes over the positive-parity nucleons, i.e., proton ($p_+$) and neutron ($n_+$), their negative-parity counterparts, denoted as $p_-$ and $n_-$, and up ($u$) and down ($d$) quarks. The positive-parity nucleons are identified as the positively charged and neutral $N(938)$ states.
The negative-parity states are identified as $N(1535)$ [@Patrignani:2016xqp]. The kinetic part of the thermodynamic potential in Equation , $\Omega_x$, reads $$\label{eq:thermo_potential_all} \Omega_x = \gamma_x \int\frac{{\mathrm{d}}^3p}{\left(2\pi\right)^3} T \left[\ln\left(1-n_x\right) + \ln\left(1-\bar n_x\right)\right]\textrm.$$ The spin degeneracy of the nucleons is $\gamma_\pm=2$ for both positive- and negative-parity states, while the color-spin degeneracy factor for quarks is $\gamma_q=2\times 3 = 6$. The functions $n_x$ are the modified distribution functions for the nucleons $$\label{eq:cutoff_nuc} \begin{split} n_\pm &= \theta \left(\alpha^2 b^2 - \boldsymbol p^2\right) f_\pm \textrm,\\ \bar n_\pm &= \theta \left(\alpha^2 b^2 - \boldsymbol p^2\right) \bar f_\pm \textrm, \end{split}$$ and for the quarks, accordingly $$\label{eq:cutoff_quark} \begin{split} n_q &= \theta \left(\boldsymbol p^2-b^2\right) f_q \textrm,\\ \bar n_q &= \theta \left(\boldsymbol p^2-b^2\right) \bar f_q \textrm, \end{split}$$ where $b$ is the expectation value of the $b$-field, and $\alpha$ is a dimensionless model parameter [@Benic:2015pia; @Marczenko:2017huu]. The hybrid QMN model employs a confinement/deconfinement mechanism in a statistical sense. The approach used in this model is to introduce IR and UV momentum cutoffs to suppress quarks at low momenta and hadrons at high momenta. This notion has been widely used in effective theories and Dyson–Schwinger approaches [@Roberts:2010rn; @Roberts:2011wy]. In the current approach, the cutoff is replaced with a medium-dependent quantity, which is expected from asymptotic freedom. Such an intrinsic modification of the cutoff is determined self-consistently when the cutoff is regarded as a vacuum expectation value of a scalar field (see Equation ). The role of the $\alpha b_0$ parameter is twofold. First, lower values trigger the chiral phase transition at lower densities. Second, the chiral phase transition is stronger and the equation of state becomes stiffer for lower values of the parameter. This can be seen in the equation of state and corresponding speed of sound squared as functions of net-baryon-number density (see Section \[sec:eos\]). The functions $f_x$ and $\bar f_x$ are the standard Fermi–Dirac distributions, $$\begin{split} f_x &= \frac{1}{1+e^{\beta \left(E_x - \mu_x\right)}} \textrm,\\ \bar f_x &= \frac{1}{1+e^{\beta \left(E_x + \mu_x\right)}}\textrm, \end{split}$$ with $\beta$ being the inverse temperature and $E_x = \sqrt{\boldsymbol p^2 + m_x^2}$ the dispersion relation. The effective chemical potentials for $p_\pm$ and $n_\pm$ are defined as[^1] $$\label{eq:u_eff_had_iso} \begin{split} \mu_{p_\pm} &= \mu_B - g_\omega\omega - \frac{1}{2}g_\rho \rho + \mu_Q\textrm,\\ \mu_{n_\pm} &= \mu_B - g_\omega\omega + \frac{1}{2}g_\rho \rho\textrm. \end{split}$$ The effective chemical potentials for up and down quarks are given by $$\label{eq:u_effq} \begin{split} \mu_u &= \frac{1}{3}\mu_B + \frac{2}{3}\mu_Q\textrm,\\ \mu_d &= \frac{1}{3}\mu_B - \frac{1}{3}\mu_Q\textrm. \end{split}$$ In Equations (\[eq:u\_eff\_had\_iso\]) and (\[eq:u\_effq\]), $\mu_B$ and $\mu_Q$ are the baryon and charge chemical potentials, respectively. The constants $g_\omega$ and $g_\rho$ couple the nucleons to the $\omega$ and $\rho$ fields, respectively. The strength of $g_\omega$ is fixed by the nuclear saturation properties, while the value of $g_\rho$ can be fixed by fitting the value of the symmetry energy [@Lattimer:2012xj].
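To make the role of the momentum cutoffs concrete, the short Python sketch below (a minimal illustration, not part of the model implementation; all numerical values are placeholders chosen only for display) evaluates the cutoff-modified particle distributions of Equations (\[eq:cutoff\_nuc\]) and (\[eq:cutoff\_quark\]): nucleon occupations are cut off above the momentum scale $\alpha b$, quark occupations below $b$, with the standard Fermi–Dirac factors otherwise.

```python
import numpy as np

def fermi_dirac(E, mu, T):
    """Standard Fermi-Dirac occupation f = 1 / (1 + exp((E - mu)/T))."""
    x = np.clip((E - mu) / T, -60.0, 60.0)   # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(x))

def n_nucleon(p, m, mu, T, alpha, b):
    """Cutoff-modified nucleon distribution: theta(alpha^2 b^2 - p^2) * f."""
    E = np.sqrt(p**2 + m**2)
    return np.where(p**2 < (alpha * b)**2, fermi_dirac(E, mu, T), 0.0)

def n_quark(p, m, mu, T, b):
    """Cutoff-modified quark distribution: theta(p^2 - b^2) * f."""
    E = np.sqrt(p**2 + m**2)
    return np.where(p**2 > b**2, fermi_dirac(E, mu, T), 0.0)

# Purely illustrative values (MeV); they are not the fitted model parameters.
p = np.linspace(0.0, 1200.0, 7)       # momentum grid
T, b, alpha = 10.0, 400.0, 0.9        # temperature, b-field value, alpha
print(n_nucleon(p, m=939.0, mu=980.0, T=T, alpha=alpha, b=b))
print(n_quark(p, m=300.0, mu=350.0, T=T, b=b))
```

The $\theta$-functions are implemented here simply as momentum masks; the only other inputs are the dispersion relation $E_x$ and the Fermi–Dirac factors defined above.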
The properties are shown in Table \[tab:external\_params\]. $\rho_0$ **\[fm$^{-3}$\]** $E/A$ **\[MeV\]** $K$ **\[MeV\]** $E_{\rm sym}$ **\[MeV\]** ------------ ------------- -------------- ------------- $0.16$ $-16$ 240 31 : Properties of the nuclear ground state at $\mu_B = 923~$MeV and the symmetry energy used in this work.[]{data-label="tab:external_params"} The effective masses of the parity doublers are given by $$\label{eq:mass_had} m_\pm = \frac{1}{2}\left[\sqrt{\left(g_1+g_2\right)^2\sigma^2 + 4m_0^2} \mp \left( g_1 - g_2 \right) \sigma \right] \textrm,$$ and for quarks, $m_u = m_d \equiv m_q$, $$\label{eq:mass_quark} m_q = g_q \sigma \textrm.$$ The parameters $g_1$, $g_2$, and $g_q$ are Yukawa-coupling constants, $m_0$ is the chirally invariant mass of the baryons and is treated as an external parameter (for more details, see [@Marczenko:2017huu; @Marczenko:2018jui]). The values of those couplings can be determined by fixing the fermion masses in the vacuum (see Table \[tab:vacuum\_params\]). The quark mass is assumed to satisfy $m_+ = 3 m_q$ in the vacuum. When the chiral symmetry is restored, the masses of the baryonic parity partners become degenerate with a common finite mass $m_\pm\left(\sigma=0\right) = m_0$, which reflects the parity doubling structure of the baryons. This is in contrast to the quarks, which become massless as the chiral symmetry gets restored. $m_+$ **\[MeV\]** $m_-$ **\[MeV\]** $m_\pi$ **\[MeV\]** $f_\pi$ **\[MeV\]** $m_\omega$ **\[MeV\]** $m_\rho$ **\[MeV\]** ------------- ------------- ------------- ------------- ------------- ------------- 939 1500 140 93 783 775 : Physical vacuum inputs used in this work.[]{data-label="tab:vacuum_params"} The potentials in Equation  are as in the SU(2) linear sigma model, \[eq:potentials\] $$\begin{aligned} V_\sigma &= -\frac{\lambda_2}{2}\left(\sigma^2 + \boldsymbol\pi^2\right) + \frac{\lambda_4}{4}\left(\sigma^2 + \boldsymbol\pi^2\right)^2 - \frac{\lambda_6}{6}\left(\sigma^2 + \boldsymbol\pi^2\right)^3- \epsilon\sigma \textrm,\label{eq:potentials_sigma}\\ V_\omega &= -\frac{1}{2}m_\omega^2 \omega_\mu\omega^\mu\textrm,\\ V_b &= -\frac{1}{2} \kappa_b^2 b^2 + \frac{1}{4}\lambda_b b^4 \textrm,\\ V_\rho &= - \frac{1}{2}m_\rho^2{\boldsymbol \rho}_\mu{\boldsymbol \rho}^\mu \textrm,\label{eq:potentials_b} \end{aligned}$$ where $\lambda_2 = \lambda_4f_\pi^2 - \lambda_6f_\pi^4 - m_\pi^2$, and $\epsilon = m_\pi^2 f_\pi$. $m_\pi$, $m_\omega$, and $m_\rho$ are the $\pi$, $\omega$, and $\rho$ meson masses, respectively. The pion decay constant is denoted as $f_\pi$. Their values are shown in Table \[tab:vacuum\_params\]. The constants $\kappa_b$ and $\lambda_b$ are fixed following Ref. [@Benic:2015pia]. The parameters $\lambda_4$ and $\lambda_6$ are fixed by the properties of the nuclear ground state (see Table \[tab:external\_params\]). We note that the introduction of the six-point scalar interaction term in Equation  is essential in order to reproduce the experimental value of the compressibility. Following the previous studies of the models [@Zschiesche:2006zj; @Benic:2015pia; @Marczenko:2017huu; @Marczenko:2018jui], as well as recent lattice QCD results [@Aarts:2017rrl; @Aarts:2018glk], we choose rather large values, $m_0=700,~800$ MeV. We note that the additional mass, $m_0$, is not associated with spontaneous chiral symmetry breaking. Thus, it has to originate from another mechanism.
Although it is unknown how $m_0$ is expressed in terms of the QCD condensates, the constraint $m_0 \leq 800~$MeV is transmuted into the nucleon mass such that at most $15\%$ of the entire mass is generated by the spontaneous chiral symmetry breaking. This is best seen in the chiral limit, where no dimensionful parameters are present in the QCD Lagrangian, but the appearance of the QCD scale breaks the scale invariance. Thus, one expects that both give rise to the emergence of dynamical hadronic scales at low energies [@Collins:1976yq; @Bardeen:1985sm; @Nielsen:1977sy]. Thus, the chirally invariant mass, $m_0$, can be identified with the gluon condensate $\langle G_{\mu\nu}G^{\mu\nu} \rangle$. The physical inputs and the model parameters used in this work are summarized in Tables \[tab:external\_params\]–\[tab:model\_params\]. In-medium profiles of the mean fields are obtained by extremizing the thermodynamic potential in Equation (\[eq:thermo\_pot\_iso\]). The gap equations are obtained as follows \[eq:gap\_eqs\_iso\] $$\begin{aligned} \frac{\partial\Omega}{\partial\sigma} &= -\lambda_2\sigma + \lambda_4\sigma^3 -\lambda_6\sigma^5 - \epsilon + \sum_{x=p_\pm,n_\pm, u,d}s_x \frac{\partial m_x}{\partial \sigma} = 0 \textrm, \label{eq:gap_eq_sigma}\\ \frac{\partial\Omega}{\partial\omega} &= -m_\omega^2 \omega + g_\omega\sum_{x=p_\pm,n_\pm}\rho_x = 0 \textrm,\label{eq:gap_omega}\\ \frac{\partial\Omega}{\partial b} &= - \kappa_b^2 b + \lambda_b b^3 + \alpha \sum_{x=p_\pm,n_\pm} \hat\omega_x - \sum_{x=u,d}\hat\omega_x = 0 \textrm,\label{eq:gap_b}\\ \frac{\partial\Omega}{\partial\rho} &= -m_\rho^2 \rho + \frac{1}{2}g_\rho\sum_{x=p_\pm}\rho_x - \frac{1}{2}g_\rho\sum_{x=n_\pm}\rho_x = 0 \textrm, \end{aligned}$$ where the scalar and baryon densities are $$\label{eq:scalar_den} s_x = \gamma_x \int\frac{{\mathrm{d}}^3 p}{(2\pi)^3}\; \frac{m_x}{E_x} \left( n_x + \bar n_x \right) \textrm,$$ and $$\label{eq:vector_den} \rho_x = \gamma_x \int\frac{{\mathrm{d}}^3 p}{(2\pi)^3}\; \left( n_x - \bar n_x \right) \textrm,$$ respectively. The boundary terms in the gap Equation  are given as $$\label{eq:boundary_nucleon} \hat\omega_{\pm} = \gamma_{\pm} \frac{(\alpha b)^2}{2\pi^2} T \left[ \ln\left(1 - f_{\pm}\right) + \ln\left(1 - \bar f_{\pm}\right) \right]_{\boldsymbol p^2 = (\alpha b)^2}\textrm,$$ and $$\label{eq:boundary_quark} \hat\omega_q = \gamma_q \frac{b^2}{2\pi^2} T \left[ \ln\left(1 - f_q\right) + \ln\left(1 - \bar f_q\right) \right]_{\boldsymbol p^2 = b^2} \textrm,$$ for the nucleons and quarks, respectively. Note that the terms in Equations (\[eq:boundary\_nucleon\]) and (\[eq:boundary\_quark\]) come into the gap Equation (\[eq:gap\_b\]) with opposite signs. This reflects the fact that nucleons and quarks favor different values of the bag field.  **\[MeV\]** **\[MeV\]** -------------- ------- ------- ------ ------ ------- ------ -- ------------- -- 700 33.74 13.20 5.60 8.10 13.75 7.72 800 21.50 8.25 7.27 7.92 12.91 6.88 : Sets of the model parameters used in this work. The values of $\lambda_4$, $\lambda_6$ and $g_\omega$ are fixed by the nuclear ground state properties, and $g_\rho$ by the symmetry energy (see the text). The remaining parameters, $g_q$, $\kappa_b$, and $\lambda_b$, do not depend on the choice of $m_0$, and their values are taken from Ref. 
[@Marczenko:2017huu].[]{data-label="tab:model_params"} In the grand canonical ensemble, the thermodynamic pressure is obtained from the thermodynamic potential as $P = -\Omega + \Omega_0$, where $\Omega_0$ is the value of the thermodynamic potential in the vacuum. The net-baryon number density for a species $x$ is defined as $$\rho^x_B = -\frac{\partial \Omega^x}{\partial \mu_B} \textrm,$$ where $\Omega^x$ is the kinetic term in Equation  for the species $x$. The total net-baryon number density reads $$\rho_B = \rho_B^{n_+} + \rho_B^{n_-} + \rho_B^{p_+} + \rho_B^{p_-} + \rho_B^{u} + \rho_B^{d} \textrm.$$ In the next section, we discuss the obtained equations of state in the hybrid QMN model and their impact on the chiral phase transition, under the neutron-star conditions of $\beta$ equilibrium and charge neutrality. Equation of State under Neutron-Star Conditions {#sec:eos} =============================================== The neutron-star conditions require additional constraints to be imposed on the EoS under investigation. To this end, electrons and muons are included as gases of free relativistic particles. The first constraint is the $\beta$-equilibrium. This condition is an equilibrium among protons, neutrons, and charged leptons. It assumes that the energy of the system is minimized, the system is electrically neutral, and the total net-baryon number is conserved. The $\beta$-equilibrium condition can be expressed in terms of chemical potentials, $$\mu_{n_+} = \mu_{p_+} + \mu_{e/\mu} \textrm,$$ where $\mu_{n_+}$, $\mu_{p_+}$, $\mu_{e}$, and $\mu_\mu$ are the neutron, proton, electron, and muon chemical potentials, respectively. The electric-charge neutrality constraint dictates that the overall charge density in a neutron star has to be zero, $$\rho_Q^{p_+} + \rho_Q^{p_-} + \frac{2}{3}\rho_Q^u - \frac{1}{3}\rho_Q^d - \rho_Q^e - \rho_Q^\mu = 0 \textrm,$$ where $\rho_Q^x$ is the charge density of a species $x$. In Figure \[fig:eos\], we show the calculated zero-temperature equations of state in the mean-field approximation with $m_0=700~$MeV (Figure \[fig:eos\], left) and $m_0=800~$MeV (Figure \[fig:eos\], right), for different values of the $\alpha b_0$ parameter (red solid, purple dashed, blue dotted, and black dash-dotted lines; the values are listed in Table \[tab:chiral\_transition\]). The value $b_0$ denotes the vacuum expectation value of the $b$-field. The coexistence regions of the chirally broken and restored phases are shown between circles. We stress that the chiral and deconfinement phase transitions are sequential in the current model setup (see [@Marczenko:2017huu]). The latter happen at higher densities and are not shown in the figure. ![Thermodynamic pressure $P$ as a function of the net-baryon number density $\rho_B$, in units of the saturation density, $\rho_0=0.16$ fm$^{-3}$ for $m_0=700~$MeV (**left**) and $m_0=800~$MeV (**right**). The regions between circles correspond to the coexistence of chirally broken and restored phases in the first-order phase transition. For $\alpha b_0=450~$MeV, the transition is a crossover. The deconfinement transitions are triggered at higher densities and are not shown here.[]{data-label="fig:eos"}](Definitions/p_n_700.pdf "fig:"){width="0.497\linewidth"} ![Thermodynamic pressure $P$ as a function of the net-baryon number density $\rho_B$, in units of the saturation density, $\rho_0=0.16$ fm$^{-3}$ for $m_0=700~$MeV (**left**) and $m_0=800~$MeV (**right**). The regions between circles correspond to the coexistence of chirally broken and restored phases in the first-order phase transition.
For $\alpha b_0=450~$MeV, the transition is a crossover. The deconfinement transitions are triggered at higher densities and are not shown here.[]{data-label="fig:eos"}](Definitions/p_n_800.pdf "fig:"){width="0.497\linewidth"} In all cases, the behavior at low densities is similar. In general, for low values of $\alpha b_0$ (except $\alpha b_0=450~$MeV), the chiral transition is of first order, determined as a jump in the $\sigma$-field expectation value. The parity-doublet nucleons become degenerate with mass $m_\pm = m_0$. The chiral phase transition becomes weaker for higher values of the $\alpha$ parameter. For $\alpha b_0=450$ MeV, the transition turns into a smooth crossover, defined as a peak in $\partial \sigma / \partial \mu_B$. This behavior agrees with the case of isospin-symmetric matter, where a higher value of $\alpha$ causes the first-order chiral phase transition to weaken and eventually go through a critical point, and turn into a crossover transition [@Marczenko:2017huu]. The net-baryon density ranges for the coexistence phase of the chirally broken and restored phases are shown in Table \[tab:chiral\_transition\]. -------------- --------------- --------------- --------------- --------- $m_0$ **\[MeV\]** **350** **370** **400** **450** 700 $1.82$–$2.60$ $2.12$–$2.76$ $2.60$–$3.07$ $3.56$ 800 $1.94$–$2.97$ $2.29$–$3.15$ $2.86$–$3.66$ $4.13$ -------------- --------------- --------------- --------------- --------- : Net-baryon density range of the coexistence phase of the chirally broken and restored phases in terms of saturation density units, $\rho_0$, for different values of $m_0$ and $\alpha b_0$ parameters. For the case of $\alpha b_0=450~$MeV, the transitions are smooth crossovers for both values of $m_0$.[]{data-label="tab:chiral_transition"} In Figure \[fig:cs\], we show the speed of sound squared, $c_s^2 = {\mathrm{d}}P / {\mathrm{d}}\epsilon$, in units of the speed of light squared, as a function of the net-baryon number density. The coexistence phases are shown between circles. As seen in the figure, the causality bound is preserved for all of the parameterizations. The apparent stiffening of the EoSs is a result of the modification of the Fermi–Dirac distributions (cf. Equation ) introduced in the hybrid QMN model. We note that it is in general possible to sustain the $2~M_\odot$ constraint and fulfill the conformal bound, i.e., $c_s^2 \leq 1/3$. This can be obtained, e.g., in a class of constant-speed-of-sound equations of state [@Alford:2015dpa]. ![Speed of sound squared as a function of the energy density for $m_0=700~$MeV (**left**) and $m_0=800~$MeV (**right**). The regions between circles correspond to the coexistence of chirally broken and restored phases in the first-order phase transition. For $\alpha b_0=450~$MeV, the transition is a crossover. The deconfinement transitions are triggered at higher densities and are not shown here.[]{data-label="fig:cs"}](Definitions/cs2_700.pdf "fig:"){width="0.497\linewidth"} ![Speed of sound squared as a function of the energy density for $m_0=700~$MeV (**left**) and $m_0=800~$MeV (**right**). The regions between circles correspond to the coexistence of chirally broken and restored phases in the first-order phase transition. For $\alpha b_0=450~$MeV, the transition is a crossover.
The deconfinement transitions are triggered at higher densities and are not shown here.[]{data-label="fig:cs"}](Definitions/cs2_800.pdf "fig:"){width="0.497\linewidth"} TOV Solutions for Compact-Star Sequences {#sec:mass_radius} ======================================== We use the equations of state introduced in the previous section (see Figure \[fig:eos\]) to solve the general-relativistic TOV equations [@Tolman:1939jz; @Oppenheimer:1939ne] for spherically symmetric objects, \[eq:TOV\_eqs\] $$\begin{aligned} \frac{{\mathrm{d}}P(r)}{{\mathrm{d}}r} &= -\frac{\left(\epsilon(r) + P(r)\right)\left(M(r) + 4\pi r^3 P(r)\right)}{r \left(r-2M(r)\right)} \textrm,\\ \frac{{\mathrm{d}}M(r)}{{\mathrm{d}}r} &= 4\pi r^2 \epsilon(r)\textrm, \end{aligned}$$ with the boundary conditions $P(r=R)=0$ and $M(r=R)=M$, where $R$ and $M$ are the radius and the mass of a neutron star, respectively. Once the initial conditions are specified based on a given equation of state, namely the central pressure $P_c$ and the central energy density $\epsilon_c$, the internal profile of a neutron star can be calculated. In general, there is a one-to-one correspondence between an EoS and the mass–radius relation calculated with it. In Figure \[fig:m\_panel\] (left), we show the relationship of mass vs. central net-baryon number density, for the calculated sequences of compact stars, together with the state-of-the-art constraints on the maximum mass for the pulsar PSR J0348+0432 [@Antoniadis:2013pzd] and PSR J0740+6620 [@Cromartie:2019kug]. We point out that the chiral phase transition leads to a softening of the EoS so that it is accompanied by a rapid flattening of the sequence. Notably, for all values of $\alpha b_0$ the chiral transition occurs in the high-mass part of the sequence, but below the $2~M_\odot$ constraint, at around $1.8~M_\odot$. In Figure \[fig:m\_panel\] (left), the three curves for $\alpha b_0=350$, $370$, and $400~$MeV consist of three phases: the chirally broken phase in the low-mass part of the sequence, the chirally restored phase in the high-mass part, and the coexistence phase between filled circles. Similar to the equation of state, increasing the value of $\alpha$ softens the chiral transition, which eventually becomes a smooth crossover for $\alpha b_0 = 450~$MeV and consists only of branches with chiral symmetry being broken and restored, separated by a circle. ![Sequences of mass for compact stars vs. their central net-baryon density (**left**) and vs. radius (**right**) as solutions of the TOV equations for $m_0=700$ MeV (**top**) and $m_0=800$ MeV (**bottom**). The regions between the circles show the coexistence of the chirally broken and chirally restored phases. The gray band shows the $2.17^{+0.11}_{-0.10}~M_\odot$ constraint [@Cromartie:2019kug]. The blue band is the $2.01\pm0.04~M_\odot$ constraint [@Antoniadis:2013pzd]. The green and purple bands in the right panel show $90\%$ credibility regions obtained from the GW170817 event [@Abbott:2018exr] for the low- and high-mass posteriors.[]{data-label="fig:m_panel"}](Definitions/m_panel_700.pdf "fig:"){width="1\linewidth"}\ ![Sequences of mass for compact stars vs. their central net-baryon density (**left**) and vs. radius (**right**) as solutions of the TOV equations for $m_0=700$ MeV (**top**) and $m_0=800$ MeV (**bottom**). The regions between the circles show the coexistence of the chirally broken and chirally restored phases. The gray band shows the $2.17^{+0.11}_{-0.10}~M_\odot$ constraint [@Cromartie:2019kug]. The blue band is the $2.01\pm0.04~M_\odot$ constraint [@Antoniadis:2013pzd].
The green and purple bands in the right panel show $90\%$ credibility regions obtained from the GW170817 event [@Abbott:2018exr] for the low- and high-mass posteriors.[]{data-label="fig:m_panel"}](Definitions/m_panel_800.pdf "fig:"){width="1\linewidth"} In Figure \[fig:m\_panel\], we show the mass–radius relations obtained for different values of the chirally invariant mass $m_0$. What is evident is that increasing the value of $m_0$ strengthens the chiral phase transition. This is seen twofold, as a shrinking of the coexistence phases and as a more abrupt flattening of the chirally restored branches. For a larger $m_0$, the transition becomes strong enough to produce disconnected branches (see, e.g., the red, solid line in the bottom right panel of Figure \[fig:m\_panel\]). These, in turn, cause the maximal mass of the sequences to decrease with increasing value of $m_0$. Eventually, the equations of state are no longer stiff enough to reach the $2~M_\odot$ constraint. We note that such small maximal masses are a result of the additional six-point interaction term considered in the thermodynamic potential of the hybrid QMN model (see Equation ). For $m_0=700~$MeV (Figure \[fig:m\_panel\], top), the most favorable parameterizations are $\alpha b_0=350$–$370~$MeV, while for $m_0=800~$MeV (Figure \[fig:m\_panel\], bottom) none of the EoSs is stiff enough. In Table \[tab:max\_mass\], we show the maximal neutron-star masses and the corresponding radii obtained in each parameterization. In Figure \[fig:profiles\], we show the radial profiles of energy density (Figure \[fig:profiles\], top) and pressure (Figure \[fig:profiles\], bottom), for a $2.01~M_\odot$ neutron star, calculated for $m_0=700~$MeV and $\alpha b_0=370~$MeV. The chiral phase transition happens at roughly $7.4~$km from the center of the star and is reflected in a jump in energy density. -------------- --------------- --------------- --------------- --------------- $m_0$ **\[MeV\]** **350** **370** **400** **450** 700 $2.10,~12.11$ $2.05,~11.91$ $2.01,~11.81$ $1.96,~11.92$ 800 $1.95,~11.64$ $1.88,~11.29$ $1.83,~11.22$ $1.79,~11.25$ -------------- --------------- --------------- --------------- --------------- : Maximal neutron-star masses in units of $M_\odot$ and corresponding radius in km (separated by comma) for different values of $m_0$ and $\alpha b_0$ parameters.[]{data-label="tab:max_mass"} ![Profiles of the energy density (**top**) and pressure (**bottom**) for a neutron star with $M=2.01~M_\odot$ for $m_0 = 700~$MeV and $\alpha b_0=370~$MeV. The red regions show the phase where the chiral symmetry is broken; in the blue regions the chiral symmetry is restored. The phases are separated by the dashed lines.[]{data-label="fig:profiles"}](Definitions/profile_panel_700.pdf){width="0.497\linewidth"} The end points of the mass–radius relation correspond to the onset of quark d.o.f. in each parameterization. This leads to the conclusion that the hadronic matter is not stiff enough to fulfill the two-solar-mass constraint. In general, a possible resolution to this problem could be another phase transition. This is the case in the hybrid QMN model, which features sequential chiral and deconfinement phase transitions. However, in the current model setup, the equation of state in the deconfined phase is not stiff enough to withstand the gravitational collapse and the branches become immediately unstable. This is because the quarks are not coupled to the vector field, which would provide a repulsive force.
On the other hand, it is known that repulsive interactions tend to stiffen the equation of state. Hence, an additional repulsive force in the quark sector could possibly make the branch stiff enough in order to reach the $2~M_\odot$ constraint, and an additional family of stable hybrid compact stars would appear, with the possibility for the twin scenario advocated by other effective models [@Alvarez-Castillo:2017qki; @Ayriyan:2017nby; @Kaltenborn:2017hus]. We note that the obtained mass–radius relations are in good agreement with the low-mass constraints derived from the recent neutron-star merger GW170817 for the low- and high-mass posteriors [@Abbott:2018exr]. In Figure \[fig:m\_panel\] (right), they are shown as green and purple regions, respectively. Isospin-Symmetric Phase Diagram {#sec:qcd_phase_diagram} =============================== The observational neutron-star data provide useful constraints on the structure of strongly interacting matter. Furthermore, they may constrain the phase diagram of isospin-symmetric QCD matter, which is of major relevance for heavy-ion physics. In Figure \[fig:phase\_diag\], we show the low-temperature part of the isospin-symmetric phase diagram obtained in the model in the $(T,\rho_B)$-plane for $m_0=700~$MeV and $m_0=800~$MeV. The liquid–gas phase transition (green line) is common for both values of $m_0$ by construction of the hybrid QMN model [@Benic:2015pia; @Marczenko:2017huu]. Its critical point shows up at around $T=16~$MeV, above which it turns into a crossover. A similar phase structure is developed for the chiral phase transition for low values of $\alpha b_0$. For $m_0=700~$MeV, the critical points appear around $T=19,~9~$MeV for $\alpha b_0=350,~370~$MeV, respectively. On the other hand, for $\alpha b_0=400,~450~$MeV, the chiral transition proceeds as a smooth crossover at all temperatures. For $m_0=800~$MeV, the critical points are developed at $T=36,~26,~15,~1~$MeV for $\alpha b_0=350,~370,~400,~450~$MeV, respectively. The higher critical-point temperatures are essentially a result of the much stronger chiral phase transition at zero temperature. We note that the most favorable parameterizations, i.e., for smaller values of $m_0$, yield a rather low temperature for the critical point of the chiral phase transition. ![The low-temperature part of the phase diagram in the $(T,\rho_B)$-plane for isospin-symmetric matter obtained in the hybrid QMN model for $m_0=700~$MeV (**left**) and $m_0=800~$MeV (**right**). The curves indicate phase boundaries and the colored areas correspond to the density jump associated with the first-order phase transition. The green curve corresponds to the liquid–gas phase transition common for all $\alpha b_0$. The circles indicate critical points on the transition lines above which the first-order transition turns into a crossover. For $m_0=700~$MeV, no critical point is shown for the cases with $\alpha b_0=400~$MeV and $\alpha b_0=450$ MeV, where the chiral phase transition is a smooth crossover at all temperatures.[]{data-label="fig:phase_diag"}](Definitions/phase_diagram700 "fig:"){width=".49\linewidth"} ![The low-temperature part of the phase diagram in the $(T,\rho_B)$-plane for isospin-symmetric matter obtained in the hybrid QMN model for $m_0=700~$MeV (**left**) and $m_0=800~$MeV (**right**). The curves indicate phase boundaries and the colored areas correspond to the density jump associated with the first-order phase transition. The green curve corresponds to the liquid–gas phase transition common for all $\alpha b_0$.
The circles indicate critical points on the transition lines above which the first-order transition turns into a crossover. For $m_0=700~$MeV, no critical point is shown for the cases with $\alpha b_0=400~$MeV and $\alpha b_0=450$ MeV, where the chiral phase transition is a smooth crossover at all temperatures.[]{data-label="fig:phase_diag"}](Definitions/phase_diagram800 "fig:"){width=".49\linewidth"}

Conclusions {#sec:conclusions}
===========

In this work, we investigated the hybrid QMN model for the equation of state of dense matter under neutron-star conditions and the phenomenology of compact stars. In particular, we focused on the implications of including a six-point scalar interaction and studied the consequences of the realization of chiral symmetry restoration within the hadronic phase. We found that the apparent softening of the EoS results in mass–radius relations whose maximal masses within the hadronic branch are in tension with the $2~M_\odot$ constraint, especially if the new PSR J0740+6620 with $2.17~M_\odot$ [@Cromartie:2019kug] is considered. We have shown that parameterizations of the model which yield a large maximal mass (i.e., those with a smaller value of $m_0$) suggest a rather low value of the temperature for the critical end point of the first-order chiral phase transition in the phase diagram, which may also be absent. In view of this, it would be interesting to establish a constraint on the chirally invariant mass $m_0$. Since the hybrid QMN model features sequential chiral and deconfinement phase transitions, one possible resolution to this could be the onset of quark d.o.f. Such a scenario would be even further supported in view of the recent formulation of the three-flavor parity doubling [@Steinheimer:2011ea; @Sasaki:2017glk] and further lattice QCD studies [@Aarts:2018glk], where it was found that, to a large extent, the phenomenon occurs also in the hyperon channels. In general, the inclusion of heavier flavors is known to soften the equation of state, and additional repulsive forces are needed to comply with the $2~M_\odot$ constraint. Additional stiffness from the quark side, which is not included in the current study, would then play a role. Work in this direction is in progress and the results will be reported elsewhere. [999]{} Bazavov, A.; Bhattacharya, T.; DeTar, C.; Ding, H.-T.; Gottlieb, S.; Gupta, R.; Hegde, P.; Heller, U.; Karsch, F.; Laermann, E.; et al. . , [*90*]{}, 094503. Lindblom, L. . , [*58*]{}, 024008. Tolman, R.C. . , [*55*]{}, 364–373. Oppenheimer, J.R.; Volkoff, G.M. . , [*55*]{}, 374–381. Steiner, A.W.; Lattimer, J.M.; Brown, E.F. . , [*722*]{}, 33–54. Steiner, A.W.; Lattimer, J.M.; Brown, E.F. . , [*765*]{}, L5. Alvarez-Castillo, D.; Ayriyan, A.; Benic, S.; Blaschke, D.; Grigorian, H.; Typel, S. . , [*52*]{}, 69. Cromartie, H.T.; Fonseca, E.; Ransom, S.M.; Demorest, P.B.; Arzoumanian, Z.; Blumer, H.; Brook, P.R.; DeCesar, M.E.; Dolch, T.; Ellis, J.A.; et al. *arXiv* [**2019**]{}, arXiv:1904.06759. Abbott, B.P. et al. \[LIGO Scientific Collaboration and Virgo Collaboration\] . , [*121*]{}, 161101. Alford, M.G.; Han, S.; Prakash, M. . , [*88*]{}, 083013. Blaschke, D.; Alvarez-Castillo, D.E.; Benic, S. . , arXiv:1310.3803. Benic, S.; Blaschke, D.; Alvarez-Castillo, D.E.; Fischer, T.; Typel, S. . , [*577*]{}, A40. Alvarez-Castillo, D.; Benic, S.; Blaschke, D.; Han, S.; Typel, S. . , [*52*]{}, 232. Marczenko, M.; Blaschke, D.; Redlich, K.; Sasaki, C. . , [*98*]{}, 103021. Benic, S.; Mishustin, I.; Sasaki, C. . , [*91*]{}, 125034. Marczenko, M.; Sasaki, C. .
, [*97*]{}, 036011. Detar, C.E.; Kunihiro, T. . , [*39*]{}, 2805–2808. Jido, D.; Hatsuda, T.; Kunihiro, T. . , [*84*]{}, 3252–3255. Jido, D.; Oka, M.; Hosaka, A. . , [*106*]{}, 873–908. Patrignani, C. et al. \[Particle Data Group\]. , [*40*]{}, 100001. Roberts, H.L.L.; Roberts, C.D.; Bashir, A.; Gutierrez-Guerrero, L.X.; Tandy, P.C. . , [*82*]{}, 065202. Roberts, H.L.L.; Bashir, A.; Gutierrez-Guerrero, L.X.; Roberts, C.D.; Wilson, D.J. . , [*83*]{}, 065206. Lattimer, J.M.; Lim, Y. . , [*771*]{}, 51. Motohiro, Y.; Kim, Y.; Harada, M. . , [*92*]{}, 025201. Zschiesche, D.; Tolos, L.; Schaffner-Bielich, J.; Pisarski, R.D. . , [*75*]{}, 055202. Aarts, G.; Allton, C.; De Boni, D.; Hands, S.; Jäger, B.; Praki, C.; Skullerud, J.I. . , [*2017*]{}, 034. Aarts, G.; Allton, C.; De Boni, D.; Jäger, B. . , [*99*]{}, 074503. Collins, J.C.; Duncan, A.; Joglekar, S.D. . , [*16*]{}, 438–449. Bardeen, W.A.; Leung, C.N.; Love, S.T. . , [*56*]{}, 1230–1233. Nielsen, N.K. . , [*120*]{}, 212–220. Alford, M.G.; Burgio, G.F.; Han, S.; Taranto, G.; Zappalà, D. . , [*92*]{}, 083002, Antoniadis, J.; Freire, P.C.; Wex, N.; Tauris, T.M.; Lynch, R.S.; van Kerkwijk, M.H.; Kramer, M.; Bassa, C.; Dhillon, V.S.; Driebe, T.; et al. . , [*340*]{}, 6131. Alvarez-Castillo, D.E.; Blaschke, D.B. . , [*96*]{}, 045809. Ayriyan, A.; Bastian, N.U.; Blaschke, D.; Grigorian, H.; Maslov, K.; Voskresensky, D.N. . , [*97*]{}, 045802. Kaltenborn, M.A.R.; Bastian, N.U.F.; Blaschke, D.B. . , [*96*]{}, 056024. Steinheimer, J.; Schramm, S.; Stocker, H. . , [*84*]{}, 045208. Sasaki, C. . , [*970*]{}, 388–397. [^1]: In the mean-field approximation, the non-vanishing expectation value of the $\omega$ field is the time-like component; hence, we simply denote it by $\omega_0 \equiv \omega$. Similarly, we denote the non-vanishing component of the $\rho$ field, time-like and neutral, by $\rho_{03} \equiv \rho $.
--- abstract: 'The supersymmetric sector of the minimal supersymmetric standard model (MSSM) possesses a $U(1)$ R-symmetry which contains $Z_2$ matter parity. Non-zero neutrino masses, consistent with a ‘redefined’ R-symmetry, are possible through the see-saw mechanism and/or a pair of superheavy (mass $M$) $SU(2)_L$ triplets with vev $\sim M^2_W/M$. If this R-symmetry is respected by the higher order terms, then baryon number conservation follows as an immediate consequence. In the presence of right handed neutrinos, the observed baryon asymmetry of the universe arises via leptogenesis. An interplay of R- and Peccei-Quinn symmetry simultaneously resolves the strong CP and $\mu$ problems.' address: - | $^{(1)}$[*Physics Division, School of Technology, Aristotle University of Thessaloniki,\ Thessaloniki GR 540 06, Greece.*]{} - | $^{(2)}$[*Bartol Research Institute, University of Delaware,\ Newark, DE 19716, USA*]{} author: - '[G. Lazarides]{}$^{(1)}$[^1] [and Q. Shafi]{}$^{(2)}$[^2]' title: '[**R-Symmetry in the Minimal Supersymmetric Standard Model and Beyond with Several Consequences**]{} (BA-98-11)' ---

Although quite compelling, the minimal supersymmetric standard model (MSSM) fails to address a number of important challenges. For instance, to explain the apparent stability of the proton, it must be assumed that the dimensionless coefficients accompanying dimension five operators are of order $10^{-8}$ or less. The strong CP and $\mu$ problems loom large in the background, and the observed baryon asymmetry, it appears, cannot be explained within the MSSM framework. Last, but by no means least, there is increasing evidence for non-zero neutrino masses from a variety of experiments. In a recent paper[@dls], we offered one approach for resolving many of the above problems. It relied on extending the gauge symmetry to $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$, with a global $U(1)$ R-symmetry playing an essential role. The magnitude of the supersymmetric $\mu$ term of MSSM was directly related to the gravitino mass $m_{3/2} ~(\sim 1~{\rm{TeV}})$ which, in turn, arises from the hidden sector [*a la*]{} supergravity. The left-right symmetry ensures the presence of right handed neutrino superfields and consequently non-zero neutrino masses, while the R-symmetry implies an accidental global $U(1)_B$ symmetry which explains why the proton is so stable. Note that the R-symmetry is spontaneously and perhaps even explicitly broken by the hidden sector. The soft (quadratic and trilinear) supersymmetry breaking terms in the visible sector are expected to explicitly break the R-symmetry. In this paper, we wish to provide a resolution of the problems listed above without departing from the $SU(3)_c \times SU(2)_L \times U(1)_Y$ framework of MSSM. We observe that the MSSM superpotential $W$ possesses a global $U(1)$ R-symmetry[@hr] in which $Z_2$ matter parity is embedded. We show how neutrino masses can be incorporated while preserving a (redefined) R-symmetry. When extended to higher orders, this symmetry ensures the appearance of a global $U(1)_B$, thereby guaranteeing proton stability. In the case where right handed neutrinos are included, the observed baryon asymmetry of the universe can arise, as we will see, via leptogenesis. The approach followed here also provides the framework for an elegant resolution of the strong CP and $\mu$ problems of MSSM, with the R-symmetry once again playing an essential role.
The MSSM superpotential $W$ contains the following renormalizable terms (we will not distinguish between the generations in this paper): $$H^{(1)}QU^c,~H^{(2)}QD^{c},~H^{(2)}LE^{c},~H^{(1)} H^{(2)}~. \label{eq:superpot}$$ Here $H^{(1)},~H^{(2)}$ are the two higgs superfields, $Q$ denotes the $SU(2)_L$ doublet quark superfields, $U^c$ and $D^c$ are the $SU(2)_L$ singlet quark superfields, while $L~(E^c)$ stands for the $SU(2)_L$ doublet (singlet) lepton superfields. A $Z_2$ matter parity ($Z_2^{mp}$) under which only the ‘matter’ superfields change sign ensures the absence of terms such as $QD^{c}L$ and $U^cD^cD^c$ which lead to rapid proton decay. The superpotential in Eq.(\[eq:superpot\]) possesses three global symmetries, namely, $U(1)_B,~U(1)_L$ and $U(1)_R$. (Except for sphaleron effects in baryogenesis, we will ignore the ‘tiny’ non-perturbative violation of $B$ and $L$ by the $SU(2)_L$ instantons.) The ‘global’ charges of the various superfields are as follows: $$\begin{aligned} B:~H^{(1)}(0),~H^{(2)}(0),~Q(1/3),~U^{c}(-1/3),~D^{c}(-1/3), ~L(0),~E^{c}(0)~;\end{aligned}$$ $$\begin{aligned} L:~H^{(1)}(0),~H^{(2)}(0),~Q(0),~U^c(0),~D^c(0), ~L(1),~E^c(-1)~;\end{aligned}$$ $$R:~H^{(1)}(1),~H^{(2)}(1),~Q(1/2),~U^c(1/2),~D^c(1/2), ~L(1/2),~E^c(1/2)~. \label{eq:sym}$$ We have normalized the $R$ charges such that $W$ carries two units. The introduction of the right handed neutrino superfields, $\nu^c$, gives rise, consistent with $Z_2^{mp}$, to two additional renormalizable superpotential couplings $$H^{(1)}L\nu^c ~,~M^{R} \nu^c \nu^c~, \label{eq:nent}$$ where $M^{R}$ is the Majorana mass matrix of the superheavy right handed neutrinos. The first term in Eq.(\[eq:nent\]) fixes the quantum numbers of the right handed neutrinos, namely, $B(\nu^c)=0$, $L(\nu^c)=-1$, $R(\nu^c)= 1/2$. The second term violates both $U(1)_L$ and $U(1)_R$, but the combination $$R^\prime = R - \frac {1}{2}~L \label{eq:rsym}$$ is now the new R-symmetry of the superpotential. In addition, the $Z^{lp}_2$ (lepton parity) subgroup of $U(1)_L$, under which only the lepton superfields $L,~E^c$ change sign, remains unbroken. Consequently, the global symmetries of the renormalizable superpotential containing all the couplings in Eqs.(\[eq:superpot\]) and (\[eq:nent\]) are $U(1)_{R^{\prime}}$ , $U(1)_B$ and $Z^{lp}_2$. With the couplings in Eq.(\[eq:nent\]), the observed neutrinos acquire masses via the well-known see-saw mechanism[@grs]. It is interesting to note that both $U(1)_B$ and lepton parity are automatically implied by $U(1)_{R^{\prime}}$. Moreover, this remains true even if non-renormalizable terms are included in the superpotential. Indeed, by extending the $U(1)_{R^\prime}$ symmetry to higher order terms, we will first show that $U(1)_B$ follows as a consequence. To see this, note that $U(1)_{R^{\prime}}$ contains $Z_2^{bp}$ (baryon parity) under which only the color triplet, antitriplet $(3,~\bar{3})$ superfields change sign. This means that superpotential couplings containing, in addition to color singlet and $(3 \cdot \bar{3})^m$ (m $\geq$ 0) factors, the $U(1)_B$ violating combinations $(3 \cdot 3 \cdot 3)^n$ or $(\bar{3} \cdot \bar{3} \cdot \bar{3})^n$ with $n$ = odd $\geq 1$ are not allowed. Similarly, analogous couplings but with $n$ = even $\geq 2$ are also not allowed since their $R^\prime$ charge exceeds two units and cannot be compensated. In particular, the troublesome dimension five operators $QQQL$ and $U^cU^cD^cE^c$ are eliminated. 
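The charge bookkeeping behind these statements is elementary and can be verified mechanically. The following short script is only a sketch (it is not part of the original analysis); it uses the $B$, $L$ and $R$ assignments listed above and checks that every allowed coupling carries $R^\prime=2$, while the dangerous dimension five operators do not.

```python
# Check U(1) charges of superpotential couplings (sketch).
# Field charges (B, L, R) as listed in the text; R' = R - L/2.
from fractions import Fraction as F

charges = {                        # field: (B, L, R)
    "H1": (F(0), F(0), F(1)),  "H2": (F(0), F(0), F(1)),
    "Q":  (F(1, 3),  F(0),  F(1, 2)),
    "Uc": (F(-1, 3), F(0),  F(1, 2)),  "Dc": (F(-1, 3), F(0), F(1, 2)),
    "L":  (F(0), F(1),  F(1, 2)),      "Ec": (F(0), F(-1), F(1, 2)),
    "nc": (F(0), F(-1), F(1, 2)),      # right handed neutrino
}

def B(term):  return sum(charges[f][0] for f in term)
def Rp(term): return sum(charges[f][2] - charges[f][1] / 2 for f in term)

allowed   = [("H1", "Q", "Uc"), ("H2", "Q", "Dc"), ("H2", "L", "Ec"),
             ("H1", "H2"), ("H1", "L", "nc"), ("nc", "nc")]
forbidden = [("Q", "Q", "Q", "L"), ("Uc", "Uc", "Dc", "Ec")]   # dimension five

for term in allowed + forbidden:
    verdict = "carries R' = 2" if Rp(term) == 2 else "excluded (R' != 2)"
    print(f"{' '.join(term):15s}  B = {str(B(term)):5s}  R' = {str(Rp(term)):4s}  {verdict}")
```

Every term of Eqs.(\[eq:superpot\]) and (\[eq:nent\]) indeed carries two units of $R^\prime$ charge, whereas $QQQL$ and $U^cU^cD^cE^c$ come out with $R^\prime=3/2$ and $5/2$, respectively.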
One can next show that $U(1)_{R^{\prime}}$ implies $Z^{lp}_2$ (lepton parity) to all orders. Because of $U(1)_B$, the quark superfields must appear in ‘blocks’ $QU^c(1)$ and $QD^c(1)$, where the parenthesis indicates the $R^\prime$ charge. The other non-leptonic ‘blocks’ are $H^{(1)}(1)$ and $H^{(2)}(1)$. The leptonic superfields are $L(0),~E^c(1),~\nu^c(1)$. To violate lepton parity, we need an odd number of lepton superfields. Therefore, we should consider: i) odd number of $L$ ’s together, by $U(1)_{R^\prime}$ symmetry, with two non-leptonic blocks belonging to the four types described above; ii) even number of $L$ ’s and a single $E^c$ or $\nu^c$, together with one non-leptonic block; iii) odd number of $L$ ’s with two out of the $E^c$ ’s and $\nu^c$ ’s. In all three cases, one ends up with an odd number of $SU(2)_L$ doublets which is not gauge invariant. In summary, both lepton parity and $U(1)_B$ are present in the scheme to all orders as mere consequences of the $U(1)_{R^{\prime}}$ symmetry and remain exact, although $U(1)_{R^{\prime}}$ is explicitly broken to its maximal non-R-subgroup $Z_4^{\prime}$ (which includes $Z_2^{bp}$) by the supersymmetry breaking terms in the visible sector. We now present an alternative scheme for introducing non-zero neutrino masses in MSSM . This scheme is, actually, quite familiar[@w] from Grand Unified Theories (GUTs), and was recently considered within the non-supersymmetric standard model framework in Ref.[@ms]. Introduce, in MSSM, an $SU(2)_L$ triplet pair $T,~\bar{T}$, with hypercharges +1, -1 respectively. Consider the renormalizable superpotential couplings $$TLL,~\bar{T} H^{(1)} H^{(1)},~MT\bar{T}~, \label{eq:triplet}$$ such that $B(T)=B(\bar{T})=0$, $L(T)=-2$, $L(\bar{T})=0$, $R(T)=1$, $R(\bar{T})=0$, from the first two couplings, and $M$ is some superheavy scale (taken real and positive by suitable phase redefinitions of the superfields $T$, $\bar{T}$). The supersymmetric mass term in Eq.(\[eq:triplet\]) breaks $U(1)_R$ and $U(1)_L$ but, in analogy with the previous discussion involving the $\nu^c$ superfields, the superpotential defined by the terms in Eqs.(\[eq:superpot\]) and (\[eq:triplet\]) possesses a redefined R-symmetry generated by $$R^{\prime \prime} = R - \frac {1}{2}~L~. \label{eq:rrsym}$$ The $R^{\prime \prime}$ charges of the various superfields are: $$H^{(1)}(1),~H^{(2)}(1),~Q(1/2),~U^c(1/2),~D^c(1/2), ~L(0),~E^{c}(1),~T(2),~\bar{T}(0)~. \label{eq:charge}$$ Both $U(1)_B$ and lepton parity remain unbroken in this case too. Finally, as with the $U(1)_{R^\prime}$ symmetry, one can readily show that $U(1)_{R^{\prime \prime}}$ implies conservation of $B$ and $Z^{lp}_{2}$ to all orders despite its explicit breaking to its maximal non-R-subgroup $Z_4^{\prime\prime}$ (including $Z_2^{bp}$) by the supersymmetry breaking terms in the visible sector. It is readily checked that the scalar component of $T$ acquires a non-zero vacuum expectation value (vev) $\sim M^2_W/M~(\ll M_W)$, with the electroweak breaking playing an essential role in the generation of this vev. This is due to the fact that the two last terms in Eq.(\[eq:triplet\]), after electroweak breaking, give rise to a term linear with respect to $T$ in the scalar potential of the theory. The vev of $T$ leaves $U(1)_B$ , $Z^{lp}_{2}$ and the $Z_4^{\prime\prime}$ subgroup of $U(1)_{R^{\prime\prime}}$ unbroken and generates non-zero neutrino masses. Note that $T$, $\bar{T}$ are superheavy fields, so that the low energy spectrum is given by the MSSM. 
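An analogous check (again only an illustrative sketch) applies to the triplet alternative, using the $R^{\prime \prime}$ charges of Eq.(\[eq:charge\]).

```python
# R'' charges from Eq. (charge); every superpotential coupling must carry R'' = 2.
Rpp = {"H1": 1, "H2": 1, "Q": 0.5, "Uc": 0.5, "Dc": 0.5,
       "L": 0, "Ec": 1, "T": 2, "Tbar": 0}

terms = [("T", "L", "L"), ("Tbar", "H1", "H1"), ("T", "Tbar"),      # Eq. (triplet)
         ("H1", "Q", "Uc"), ("H2", "Q", "Dc"), ("H2", "L", "Ec"), ("H1", "H2")]
for t in terms:
    print(" ".join(t), "->", sum(Rpp[f] for f in t))   # each line prints 2
```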
It should be clear that the coexistence of all the superpotential couplings in Eqs.(\[eq:superpot\]), (\[eq:nent\]) and (\[eq:triplet\]) provides us with a scheme where the light neutrino masses acquire contributions from the see-saw mechanism as well as the triplet vev. It is important to note that, in this ‘combined’ case, $W$ possesses a $U(1)$ R-symmetry, $U(1)_{\hat{R}}$ , which coincides with $U(1)_{R^{\prime}}$ or $U(1)_{R^{\prime\prime}}$ when restricted to the superfields where these symmetries are defined. (Note that $U(1)_{R^{\prime}}$ and $U(1)_{R^{\prime\prime}}$ become identical when restricted to the MSSM superfields.) This R-symmetry, just as in the previous cases, implies $U(1)_B$ and $Z^{lp}_{2}$ to all orders. It is, finally, interesting to notice that baryon number and lepton parity conservation is a consequence of the ‘redefined’ R-symmetries in Eqs.(\[eq:rsym\]), (\[eq:rrsym\]) or $U(1)_{\hat{R}}$ , in the ‘combined’ case, and not of the original $U(1)_R$ which allows couplings like $QQQL$ and $U^cU^cD^cE^c$. The two mechanisms considered above for generating masses for the neutrinos have an additional far reaching consequence. This has to do with the generation of the observed baryon asymmetry in the universe. The basic idea is to generate an initial lepton asymmetry which is partially transformed through the non-perturbative electroweak sphaleron effects, that ‘actively’ violate $B+L$ at energies above $M_W$, to the observed baryon asymmetry. Actually, this is the only way to generate baryons in the present scheme, since baryon number is otherwise exactly conserved. This mechanism has been well documented[@lepto] when the lepton asymmetry is created by a decaying massive Majorana neutrino (say from the $\nu^c$ superfields) and exploits the couplings given in Eq.(\[eq:nent\]). If the $T,~\bar {T}$ superfields with the couplings given in Eq.(\[eq:triplet\]) are also present, then additional diagrams must be considered. The complete set of double-cut diagrams for leptogenesis from a decaying fermionic $\nu^c$, which is the relevant case for inflationary models where the inflaton predominantly decays to a fermionic right handed neutrino, is displayed in Fig.\[fig:graph\]. The resulting lepton asymmetry, in this case, can be estimated[@lss] to be $$\frac {n_{L}}{s} \approx \frac{3}{16 \pi}~\frac {T_{r}} {m_{infl}}~M^{R}_{i}~\frac{{\rm{Im}}(M^{D}~m^{\dagger} ~\tilde{M}^{D})_{ii}}{|\langle H^{(1)} \rangle |^{2} (M^{D}~M^{D}\,^{\dagger})_{ii}}~\cdot \label{eq:genlept}$$ Here $T_{r}$ is the ‘reheat’ temperature, $m_{infl}$ the inflaton mass, $M^{D}$ the neutrino ‘Dirac’ mass matrix in the basis where the Majorana mass matrix of right handed neutrinos, $M^R$, is diagonal with positive entries, and $M^{R}_{i}$ is the mass of the decaying $\nu^{c}_{i}$. Also $$m \approx -\alpha~ t~ \frac{\langle H^{(1)}\rangle^{2}}{M} -\tilde{M}^{D}\ \frac{1}{M^{R}}\ M^{D}~ \label{eq:mass}$$ is the light neutrino mass matrix in the same basis, with $t$ (a complex symmetric matrix) and $\alpha$ being the coefficients of the first and second terms in Eq.(\[eq:triplet\]) respectively. It should be noted that this estimate holds provided that $M^{R}_i$ is much smaller than the mass of the other $\nu^c$ ’s and the mass $M$ of the triplets . 
Eq.(\[eq:genlept\]) gives[@lss] the bound $$\left |{n_L\over s}\right| \stackrel{_<}{_\sim}{3\over 16 \pi} ~{T_r\over m_{infl}}~{M^{R}_{i}~m_{\nu_{\tau}}\over |\langle H^{(1)}\rangle|^2}~, \label{eq:bound}$$ which, for $T_{r}\approx 10^{9}~{\rm{GeV}}$ (consistent with the gravitino constraint), $m_{infl}\approx 3\times 10^{13} ~{\rm{GeV}}$, $M^{R}_{i}\approx 10^{10}~{\rm{GeV}}$ (see Ref.[@lss]), $|\langle H^{(1)}\rangle|\approx 174~{\rm{GeV}}$, and $m_{\nu_{\tau}}\approx 5~{\rm{eV}}$ (providing the hot dark matter of the universe), gives $|n_L/s|\stackrel{_<}{_\sim} 3 \times 10^{-9}$. This is large enough to account for the observed baryon asymmetry of the universe. It is important though to ensure that the lepton asymmetry is not erased by lepton number violating $2\rightarrow 2$ scatterings at all temperatures between $T_r$ and 100 GeV. This requirement gives[@ibanez] $m_{\nu_{\tau}}\stackrel{_<}{_\sim} 10~{\rm{eV}}$. We pointed out that, in non-inflationary (and perhaps some inflationary) models, leptogenesis from the decay of bosonic $\nu^c$ ’s as well as bosonic and fermionic $T$, $\bar{T}$ ’s may be present too. Most of the relevant double-cut diagrams can be obtained from the ones in Fig.\[fig:graph\] by breaking up the $\nu^c$, $T$, $\bar{T}$ internal lines and joining the external $\nu^c$ lines. The only extra diagram (not obtainable this way) is a diagram of the same type with bosonic $\nu^c$ external lines and a fermionic $T$, $\bar{T}$ internal line. The important observation is that diagrams of the type in Fig.\[fig:graph\] with no $\nu^c$ internal or external lines cannot be constructed. Thus, efficient leptogenesis can take place only in the presence of $\nu^c$ ’s. We have seen how $U(1)_B$ arises as a consequence of requiring the superpotential $W$ (including higher order terms) to respect a $U(1)$ R-symmetry. Among other things, this explains why the proton is so stable. However, the learned reader may be concerned that requiring the non-renormalizable terms in the superpotential to respect a continuous R-symmetry may not be a reasonable thing to do. Indeed, one may wonder if continuous global symmetries such as $U(1)_{\hat{R}}$ or the Peccei-Quinn[@pq] ($U(1)_{PQ}$) symmetry, rather than being imposed, can arise in some more ‘natural’ manner. One way how this may occur was pointed out in Ref.[@lps]. Here, discrete (including R-) symmetries that typically arise after compactification could effectively behave as if they are continuous. Furthermore, such ‘continuous’ symmetries can be very useful in resolving problems other than the one of proton stability. To see this, let us now address the strong CP and $\mu$ problems of MSSM. It has been noted by earlier authors[@np] that a continuous $U(1)$ R-symmetry can be relevant for the solution of the $\mu$ problem. By invoking $U(1)_{PQ}$ and combining it with the $U(1)$ R-symmetry above, we will provide a resolution of both the strong CP and $\mu$ problems, with the $U(1)$ R-symmetry playing an essential role in controlling the structure of the terms that are permitted at the non-renormalizable level. It has been recognized for some time that, within the supergravity extension of MSSM, the existence of D- and F-flat directions in field space can generate an intermediate scale $M_I$ which, in the simplest case, is given by $$M_I \sim \sqrt{m_{3/2}M_P} \sim 10^{11}-10^{12}~{\rm{GeV}}~, \label{eq:inter}$$ where $m_{3/2} \sim 1~{\rm{TeV}}$ is the supersymmetry breaking scale and $M_P = 1.22 \times 10^{19}$ GeV is the Planck mass. 
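Both numerical estimates quoted in this section, the lepton asymmetry bound and the intermediate scale $M_I$, are quick to reproduce; the following sketch simply evaluates them with the parameter values given in the text.

```python
# Numerical cross-check of the estimates quoted in the text (all inputs from the text).
import math

# lepton asymmetry bound of Eq. (bound)
T_r, m_infl = 1.0e9, 3.0e13            # GeV
MR_i        = 1.0e10                   # GeV
vev_H1      = 174.0                    # GeV
m_nu_tau    = 5.0e-9                   # 5 eV expressed in GeV
nL_over_s = 3.0 / (16.0 * math.pi) * (T_r / m_infl) * MR_i * m_nu_tau / vev_H1**2
print(f"|n_L/s| <~ {nL_over_s:.1e}")                   # ~3e-9, as quoted

# intermediate scale of Eq. (inter)
m_32, M_P = 1.0e3, 1.22e19             # GeV
print(f"M_I ~ {math.sqrt(m_32 * M_P):.1e} GeV")        # ~1e11 GeV
```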
It seems ‘natural’ to try and identify $M_I$ with the symmetry breaking scale $f_{a}$ of $U(1)_{PQ}$, such that $\mu \sim m_{3/2} \sim f^2_{a}/M_P$[@kn]. We will now see how this idea, which simultaneously resolves the strong CP and $\mu$ problems, can be elegantly realized in the presence of the $U(1)$ R-symmetry. Note that the resolution of the $\mu$ problem forces us to consider non-renormalizable terms. We supplement the MSSM spectrum with a pair of superfields $N,~\bar{N}$ whose vevs will break $U(1)_{PQ}$ at an intermediate scale. $W$ contains[@dvali] the following terms: $$H^{(1)}QU^c,~H^{(2)}QD^c,~H^{(2)}LE^c,~N^{2}H^{(1)}H^{(2)}, ~N^{2} \bar{N}^2~. \label{eq:pquinn}$$ The global symmetries of this superpotential are $U(1)_B$, $U(1)_L$ (with the new superfields $N$, $\bar{N}$ being neutral under both), an anomalous Peccei-Quinn symmetry $U(1)_{PQ}$, and a non-anomalous R-symmetry $U(1)_{\tilde{R}}$. The $PQ$ and $\tilde{R}$ charges are as follows: $$\begin{array}{rcl} PQ:~H^{(1)}(1),~H^{(2)}(1),~Q(-1/2),~U^c(-1/2),~D^c(-1/2), ~~~~~~~~~~\\ ~L(-1/2),~E^c(-1/2),~N(-1),~\bar{N}(1)~, ~~~~~~~~~~~~~~~~~~~~\\ & & \\ \tilde{R}:~H^{(1)}(0),~H^{(2)}(0),~Q(1),~U^c(1),~D^c(1), ~L(1),~E^c(1),~N(1),~\bar{N}(0)~. \end{array} \label{eq:pqr}$$ Note that the quartic terms in Eq.(\[eq:pquinn\]) carry a coefficient proportional to $M^{-1}_P$ which has been left out. The R-symmetry ensures that undesirable terms such as $N \bar{N}$, which otherwise spoil the flat direction in the supersymmetric limit, are absent from Eq.(\[eq:pquinn\]). After taking the supersymmetry breaking terms into account, one finds[@dvali] that, for suitable choice of parameters, a solution with $$|\langle N \rangle|~=~|\langle\bar{N}\rangle| ~\sim \sqrt{m_{3/2}M_{P}} \label{eq:vev}$$ is preferred over the one with $\langle N\rangle=\langle\bar{N}\rangle=0$. To see this, let us consider the relevant part of the scalar potential: $$V=\left(m_{3/2}^2+\lambda^2\left|\frac{N\bar{N}}{M_P}\right|^2 \right)(|N|^2+|\bar{N}|^2)+\left(Am_{3/2}\lambda \frac{N^2\bar{N}^2}{2M_P}+h.c \right)~, \label{eq:potential}$$ where $\lambda/(2M_P)$ is the coefficient of the last superpotential term in Eq.(\[eq:pquinn\]) and $A$ the dimensionless coefficient of the corresponding soft supersymmetry breaking term ($\lambda$ is taken real and positive by appropriately redefining the phases of $N,~\bar{N}$). This potential can be rewritten as $$V=\left(m_{3/2}^2+\lambda^2\left|\frac{N\bar{N}}{M_P}\right|^2 \right)\left[(|N|-|\bar{N}|)^2+2|N||\bar{N}|\right]+|A|m_{3/2} \lambda\frac{|N\bar{N}|^2}{M_P}{\rm{cos}}(\epsilon+2\theta +2\bar{\theta})~, \label{eq:npotential}$$ where $\epsilon,~\theta,~\bar{\theta}$ are the phases of $A,~N,~\bar{N}$ respectively. Minimization of $V$ then requires $|N|=|\bar{N}|$, $\epsilon+2\theta+2\bar{\theta}=\pi$ and $V$ takes the form $$V=2|N|^2m_{3/2}^2\left(\lambda^2\frac{|N|^4}{m_{3/2}^2M_P^2}- |A|\lambda\frac{|N|^2}{2m_{3/2}M_P}+1\right)~. \label{eq:nnpotential}$$ It is now obvious that, for $|A|>4$, the absolute minimum of the potential is at $$|\langle N\rangle|=|\langle\bar{N}\rangle|= (m_{3/2}M_P)^{\frac{1}{2}} \left(\frac{|A|+(|A|^2-12)^{\frac{1}{2}}} {6\lambda}\right)^{\frac{1}{2}}\sim (m_{3/2}M_{P}) ^{\frac{1}{2}}~. \label{eq:solution}$$ Note that the $\langle N \rangle,~\langle\bar{N}\rangle$ vevs together break $U(1)_{\tilde{R}} \times U(1)_{PQ}$ down to $Z_2^{mp}$. Substitution of these vevs in Eq.(\[eq:pquinn\]) shows that the $\mu$ parameter of MSSM is of order $m_{3/2}$ as desired. 
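The location of the minimum in Eq.(\[eq:solution\]) is also easy to confirm numerically. The sketch below is not part of the original derivation; $\lambda=1$ and $|A|=5$ are merely illustrative choices satisfying $|A|>4$. It compares a brute-force minimization of Eq.(\[eq:nnpotential\]) with the closed-form result and shows that the induced $\mu$ parameter is indeed of order $m_{3/2}$.

```python
# Numerical check of the flat-direction minimum, Eqs. (nnpotential) and (solution).
import numpy as np

m32, MP = 1.0e3, 1.22e19        # GeV: m_{3/2} and the Planck mass
lam, A  = 1.0, 5.0              # illustrative values with |A| > 4

def V(N):                       # Eq. (nnpotential) with |N| = |Nbar| = N
    x = N**2 / (m32 * MP)
    return 2.0 * N**2 * m32**2 * (lam**2 * x**2 - 0.5 * A * lam * x + 1.0)

grid  = np.sqrt(m32 * MP) * np.linspace(1.0e-3, 3.0, 200001)
N_num = grid[np.argmin(V(grid))]
N_ana = np.sqrt(m32 * MP * (A + np.sqrt(A**2 - 12.0)) / (6.0 * lam))   # Eq. (solution)

print(f"numerical minimum : |<N>| = {N_num:.3e} GeV")
print(f"analytic  minimum : |<N>| = {N_ana:.3e} GeV")
print("deeper than the origin:", V(N_ana) < 0.0)
print(f"mu ~ <N>^2/M_P = {N_ana**2 / MP:.0f} GeV   (of order m_3/2)")
```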
This discussion is readily extended to include either massive right handed neutrino superfields $\nu^c$ or the $SU(2)_L$ triplet higgs superfields $T,~\bar{T}$. In the $\nu^c$ case, the new terms in the superpotential $W$ are $H^{(1)}L\nu^c$ and $N\nu^c\nu^c$. The first term yields $B(\nu^c)=0$, $L(\nu^c)=-1$, $PQ(\nu^c)=-1/2$, $\tilde{R}(\nu^c)=1$. The second term leaves $U(1)_{B}$ unbroken but breaks $U(1)_{L}$, $U(1)_{PQ}$ and $U(1)_{\tilde{R}}$ to a ‘redefined’ Peccei-Quinn symmetry $U(1)_{PQ^{\prime}}$ and a ‘redefined’ R-symmetry $U(1)_{\tilde{R}^{\prime}}$ with $PQ^{\prime}=PQ-L$ and $\tilde{R}^{\prime}=\tilde{R}+PQ-(1/2)L$. Thus, the strong CP problem can be resolved. It should be noted that $U(1)_{\tilde{R}^{\prime}}$ coincides with $U(1)_{R^\prime}$ in Eq.(\[eq:rsym\]) when restricted to the superfields where $U(1)_{R^\prime}$ is defined. Moreover, just as $U(1)_{R^\prime}$, the R-symmetry $U(1)_{\tilde{R}^{\prime}}$ contains $Z_2$ baryon parity as a subgroup and implies $U(1)_{B}$ to all orders. $Z_2^{mp}$ is contained in $U(1)_{PQ^{\prime}}$ and, thus, $Z_2^{lp}$ is also present but not as an automatic consequence of $U(1)_{\tilde{R}^{\prime}}$ in this case. A similar discussion applies if the triplets $T,~\bar{T}$ are introduced in the scheme of Eq.(\[eq:pquinn\]). The new superpotential terms, in this case, are $TLL$, $\bar{T}H^{(1)}H^{(1)}$ and $NT\bar{T}$. The first two couplings give $B(T)=B(\bar{T})=0$, $L(T)=-2$, $L(\bar{T})=0$, $PQ(T)=1$, $PQ(\bar{T})=-2$, $\tilde{R}(T)=0$ and $\tilde{R}(\bar{T})=2$. The last coupling leaves unbroken the symmetries $U(1)_{B}$, $U(1)_{PQ^{\prime\prime}}$ and $U(1)_{\tilde{R}^{\prime\prime}}$ with $PQ^{\prime\prime}=PQ-L$ and $\tilde{R}^{\prime\prime}=\tilde{R}+PQ-(1/2)L$. The symmetry $U(1)_{\tilde{R}^{\prime\prime}}$ is an extension of $U(1)_{R^{\prime\prime}}$ in Eq.(\[eq:rrsym\]), contains $Z_2$ baryon parity and implies $U(1)_{B}$ to all orders. Finally, it should be pointed out that $\nu^c$ and $T$, $\bar{T}$ can coexist with all the couplings mentioned being present. In this ‘combined’ case, $W$ possesses a $U(1)$ Peccei-Quinn (R-) symmetry, $U(1)_{\hat{PQ}}$ ($U(1)_{\hat{\tilde{R}}}$), which coincides with $U(1)_{PQ^{\prime}}$ ($U(1)_{\tilde{R}^{\prime}})$ or $U(1)_{PQ^{\prime\prime}}$ ($U(1)_{\tilde{R}^{\prime\prime}}$) when restricted to the superfields where these symmetries are defined. The R-symmetry $U(1)_{\hat{\tilde{R}}}$ implies $U(1)_B$ to all orders. We have focused in this paper on MSSM and its extensions, with $Z_2$ matter parity embedded, to begin with, in a $U(1)$ R-symmetry. Non-zero neutrino masses, consistent with a redefined $U(1)$ R-symmetry, can be introduced in at least two ways. By requiring the higher order superpotential couplings to respect this redefined R-symmetry, one can i) explain proton stability to be a consequence of an automatic $U(1)_B$ , and ii) show that the observed baryon asymmetry can arise via a primordial lepton asymmetry provided right handed neutrinos are present. Finally, simultaneous resolutions of the strong CP and $\mu$ problems, with $\mu \sim f_{a}^2/M_{P}$, can be elegantly accommodated in this scheme. One of us (G. L.) would like to thank G. Dvali for drawing his attention to the resolution of the $\mu$ problem via a PQ-symmetry and for important discussions on this point. We acknowledge the NATO support, contract number NATO CRG-970149. One of us (Q.S.) would also like to acknowledge the DOE support under grant number DE-FG02-91ER40626. \#1\#2\#3[[ Int. Jour. Mod. Phys. 
]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Phys. Lett. ]{}[**B\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Z. Phys. ]{}[**C\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Phys. Rev. Lett. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Rev. Mod. Phys. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Phys. Rep. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Phys. Rev. ]{}[**D\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Nucl. Phys. ]{}[**B\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Mod. Phys. Lett. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Annu. Rev. Nucl. Part. Sci. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Sov. J. Nucl. Phys. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ JETP Lett. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Acta Phys. Polon. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Riv. Nuovo Cim. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Ann. Phys. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Prog. Theor. Phys. ]{}[**\#1 **]{}(19\#2) \#3]{} \#1\#2\#3[[ Phys. Lett. ]{}[**\#1B **]{}(19\#2) \#3]{} G. Dvali, G. Lazarides and Q. Shafi, . For another approach to MSSM with R-symmetry see L. J. Hall and L. Randall, . M. Gell-Mann, P. Ramond and R. Slansky, in [*Supergravity, Proceedings of the Workshop*]{}, Stony Brook, New York, 1979, eds. P. Van Nieuwenhuizen and D. Z. Freedman (North Holland, Amsterdam, 1979), p. 315; T. Yanagida, in [*Proceedings of the Workshop on Unified Theories and Baryon Number in the Universe*]{}, Tsukuba, Japan, 1979, eds. A. Sawada and A.Sugamoto (KEK Rep. No. 79-18, Tsukuba, Japan, 1979). C. Wetterich, ; G. Lazarides, Q. Shafi and C. Wetterich, ; R. N. Mohapatra and G. Senjanovic, ; R. Holman, G. Lazarides and Q. Shafi, . E. Ma and U. Sarkar, hep-ph/9802445. M. Fukugita and T. Yanagida, ; W. Buchmüller and M. Plümacher, . In the context of inflation see G. Lazarides and Q. Shafi, ; G. Lazarides,  C. Panagiotakopoulos and Q. Shafi, . G. Lazarides, R. Schaefer and Q. Shafi, ; G. Lazarides, hep-ph/9802415 (to appear in the proceedings of the 6th BCSPIN Summer School). L. E. Ibáñez and F. Quevedo, . R. Peccei and H. Quinn, ; S. Weinberg, ; F. Wilczek, . G. Lazarides, C. Panagiotakopoulos and Q. Shafi, . For a recent discussion see H. P. Nilles and N. Polonsky, . J. E. Kim and H. P. Nilles, . G. Dvali, private communication. [^1]: lazaride@eng.auth.gr [^2]: shafi@bartol.udel.edu
--- abstract: | We have identified a ring-shaped emission-line nebula and a possible bipolar outflow centered on the B1.5 supergiant Sher \#25 in the Galactic giant H[ii]{} region NGC 3603 (distance 6 kpc). The clumpy ring around Sher \#25 appears to be tilted by 64$^\circ$ against the plane of the sky. Its semi-major axis (position angle $\approx$ 165$^\circ$) is 6.9$''$ long, which corresponds to a ring diameter of 0.4 pc. The bipolar outflow filaments, presumably located above and below the ring plane on either side of Sher \#25, show a separation of $\approx$ 0.5 pc from the central star. High-resolution spectra show that the ring has a systemic velocity of V$_{\rm LSR}$ = +19 km s$^{-1}$ and a de-projected expansion velocity of 20 km s$^{-1}$, and that one of the bipolar filaments has an outflow speed of $\sim$83 km s$^{-1}$. The spectra also show a high \[N[ii]{}\]/H$\alpha$ ratio, suggestive of strong N enrichment. Sher \#25 must be an evolved blue supergiant (BSG) past the red supergiant (RSG) stage. We find that the ratio of equatorial to polar mass-loss rate during the red supergiant phase was $\approx$ 16. We discuss the results in the framework of RSG–BSG wind evolutionary models. We compare Sher \#25 to the progenitor of SN1987A, which it resembles in many aspects. author: - Wolfgang Brandner, Eva K. Grebel, You-Hua Chu, Kerstin Weis title: 'Ring Nebula and Bipolar Outflows Associated with the B1.5 Supergiant Sher \#25 in NGC 3603[^1]' ---

The Blue Supergiant Sher \#25 in NGC 3603
=========================================

Sher \#25 (Sher 1965) is a B1.5Iab supergiant (Moffat 1983) similar to Sk$-$69 202, the progenitor of SN1987A. Sher \#25 has a visual magnitude of V $\approx$ 12.2–12.3 (e.g., van den Bergh 1978). It is located $\approx$20$''$ north of HD 97950, the core of the $\approx$ 4 Myr old cluster at the center of the Galactic giant H[ii]{} region NGC 3603 at a distance of 6–7 kpc (Clayton 1986; Melnick et al. 1989). Based on UBV CCD photometry of Sher \#25, Melnick et al. derived a visual extinction A$_V \approx 5^m$ and a distance modulus consistent with Sher \#25 being associated with NGC 3603. In a recent search for emission-line objects in NGC 3603 (Brandner et al. 1997), we found a clumpy ring and a bipolar nebula around Sher \#25. This ring is similar to that around SN1987A in both size and morphology. Follow-up spectroscopy shows N enrichment in the nebula around Sher \#25, suggesting that Sher \#25 is at a similar evolutionary stage as Sk$-$69 202. In this letter, we report our observations (§2), discuss the physical structure of the nebula around Sher \#25 (§3) and the evidence for Sher \#25 being associated with NGC 3603 (§4), and compare it to SN1987A (§5).

Observations
============

H$\alpha$ and R images of an 8$'$ $\times$ 8$'$ field centered on NGC 3603 were obtained at the ESO New Technology Telescope (NTT) on 1995 February 8 with the red arm of the ESO Multi-Mode Instrument and a 2k Tek CCD (ESO \#36). We used a narrow-band H$\alpha$ filter ($\Delta \lambda$=1.8 nm) and a broadband R filter for continuum subtraction. Photometric VRI observations and an H$\alpha$+\[N[ii]{}\] image ($\Delta \lambda$=6.2 nm) were obtained on 1995 March 2 with the CCD camera at the Danish 1.54m telescope. The H$\alpha$+\[N[ii]{}\] image of the central cluster in NGC 3603 and its surroundings is displayed in Figure \[fig1\]. Figure \[fig2\] shows a continuum-subtracted H$\alpha$ NTT image centered on Sher \#25.
The residuals (bright features) are due to charge bleeding as the brightest stars were saturated in the R image. Figure \[fig3\] shows the location of the blue supergiant Sher \#25 above and to the red of the main sequence turn-off. The stars well above the main-sequence are (blended) Wolf-Rayet and early type O stars located in the very center of the cluster. Long-slit echelle spectra of the eastern cap and the ring were obtained on 1996 January 10 at the CTIO 4m telescope. The echelle spectrograph was equipped with a 2k Tek CCD. The slit orientation was east-west (dashed line in Figure \[fig2\]) and the slit width was 1.6$''$. We adopted rest wavelengths in air of 654.81 nm and 658.36 nm for the two forbidden \[N[ii]{}\] lines $^3$P$_1$–$^1$D$_2$ and $^3$P$_2$–$^1$D$_2$ (e.g., Moore 1959). Relative velocities are accurate to about 0.2 km s$^{-1}$. The absolute calibration with respect to the local standard of rest (LSR) has an uncertainty of about 2–4 km s$^{-1}$. Figure \[fig4\] shows the 2D spectra in the region of the H$\alpha$ and \[N[ii]{}\] lines. At the position where the slit intersects the ring, the two distinct velocity components are clearly visible.

Inner Ring and Outflow Filaments around Sher \#25
=================================================

A tilted ring around Sher \#25 can be clearly seen in Figures \[fig1\] and \[fig2\]. The semi-major and semi-minor axes are 6.9$''$ and 3.05$''$, respectively. Assuming a circular ring geometry, we derive an inclination angle of 64$^\circ$ with respect to the sky plane along the position angle $\approx$ 165$^\circ$. The linear diameter of the ring is 0.4 pc, for a distance of 6 kpc. The relative velocity difference between the two ring components intersected by the slit is 32 km s$^{-1}$ (cf. Figure \[fig5\]). Taking projection effects into account, we derive a de-projected expansion velocity of 20 km s$^{-1}$ for the ring and a systemic velocity of v$_{\rm LSR}$ = +19 km s$^{-1}$. This value agrees well with radial velocities of molecular cloud cores south of NGC 3603 (+12 km s$^{-1}$ to +16 km s$^{-1}$, Nürnberger, priv. comm.) and supports Sher \#25’s association with NGC 3603. Bipolar filaments to the northeast and to the southwest of Sher \#25 are also clearly visible in Figures \[fig1\] and \[fig2\]. The northeast filament is not resolved into substructures. It is blue-shifted by 36.2 km s$^{-1}$ from the systemic velocity, indicating an outflow nature. The southwestern filament shows a complex structure with two apparent shock fronts. Lacking kinematic information, we cannot determine whether the southwestern filament is part of a larger 3D structure, i.e., something hourglass-like. Nevertheless, it is very likely that these bipolar filaments are physically produced by a bipolar outflow. If the bipolar filaments of Sher \#25 are located along an axis perpendicular to the plane defined by the ring, their physical separation from Sher \#25 is about 0.5 pc (15$''$/$\cos 26^\circ$ at 6 kpc). The de-projected expansion velocity of the northeastern filament is $\approx$ 83 km s$^{-1}$. The echelle spectra of the ring and the northeast outflow filament show a high \[N[ii]{}\]/H$\alpha$ ratio, compared to the background H[ii]{} region (see Table 1 and Figure \[fig5\]). Low-resolution spectra of the north-eastern outflow filament indicate T$_{\rm eff} \approx$ 7000$\pm$1000 K and $\log ({\rm N}_{\rm e}/$m$^{-3}) \approx$ 9.8$\pm$0.3. In a \[O[iii]{}\]/H$\beta$ vs.
\[N[ii]{}\]/H$\alpha$ diagram, the north-eastern outflow filament is situated clearly outside the location of H[ii]{} regions or supernova remnants (Brandner et al. 1997). Thus, the high \[N[ii]{}\]/H$\alpha$ ratio is caused by an enhanced N abundance, indicating that at least the bipolar filaments around Sher \#25 consist of stellar material enriched by the CNO cycle. Therefore, Sher \#25 very likely is an evolved post-red supergiant. Given the evolutionary stage of Sher \#25, the surrounding nebula may be explained in the framework of interaction between the red supergiant (RSG) wind and the blue supergiant (BSG) wind. A 2D ring-like structure (as opposed to 3D shells) can be produced if the density of the RSG wind is a strong function of polar angle, peaking along the equatorial plane and decreasing toward the poles (Blondin & Lundqvist 1993; Martin & Arnett 1995). A ring develops as the fast BSG wind sweeps up the dense RSG wind material. At the same time the density gradient in the RSG wind allows the fast BSG wind to expand more easily in polar directions. This process might lead to an hourglass-shaped emission nebula as has been observed in the young planetary nebula MyCn18 (Sahai et al. 1995). The clumpy structure of the ring (see Figure \[fig2\]) very likely originates in Rayleigh-Taylor instabilities at the interface between the swept-up slow RSG wind and the fast BSG wind. The expansion velocities and physical dimensions yield a dynamical age of $\approx$ 9000 yr for the ring and $\approx$ 6000 yr for the polar outflows. Applying a self-similarity solution for the interaction regions of colliding winds (e.g., Chevalier & Imamura 1983) we can carry out a crude analysis of the observed velocities. In the following we assume constant mass loss rates and constant wind velocities for the slow wind and the fast wind, a stalled shock (pressure equilibrium) in a spherically symmetric fast wind, and an adiabatic (non-radiative) shock. The shock then expands with a velocity of $${\rm v_{shock} \approx \left(\frac{\dot{M}_{fw}v_{fw}v_{sw}}{\dot{M}_{sw}} \right)^{1/2} }$$ where ${\rm \dot{M}_{fw}}$, ${\rm \dot{M}_{sw}}$, ${\rm v_{fw}}$, and ${\rm v_{sw}}$ are the mass-loss rates and wind velocities of the fast wind (fw) and the slow wind (sw), respectively. If the fast wind is isotropic and does not show any variation in density as a function of polar angle, then, through this relation, the ratio of the expansion velocity of the polar outflow (83 km s$^{-1}$) to that of the ring (20 km s$^{-1}$) gives the ratio of RSG mass loss in the two directions: M$_{\rm{SW}}$(0$^\circ$)/M$_{\rm{SW}}$(90$^\circ$) $\approx (83/20)^2 \approx$ 16:1. This is in reasonable agreement with the ratios of 20:1 and 10:1 computed for the progenitor of SN1987A by Blondin & Lundqvist (1993) and by Martin & Arnett (1995), respectively. Assuming a fast wind velocity of 800 km s$^{-1}$, a slow wind velocity of 50 km s$^{-1}$, and a ratio of the slow wind and fast wind mass-loss rates along the stellar equator of 90:1 (6:1 in polar direction), one is able to reproduce the observed shock velocities in the framework of this very simplified model. The center of the inner ring does not coincide with the position of Sher \#25 (offset 1$''$ to 2$''$). This may be due to a movement of Sher \#25 relative to the surrounding ISM. Martin & Arnett (1995) pointed out that such a movement would produce asymmetric polar outflow structures, which in turn might explain the different appearance of the northeastern and southwestern polar outflow filaments of Sher \#25.
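As a brief aside, the geometrical and kinematical numbers quoted in this section can be verified with a few lines of arithmetic; the following sketch (all inputs are the values given above, and it is not part of the original analysis) reproduces the inclination, ring size, de-projected velocities, dynamical ages, and the equator-to-pole mass-loss ratio.

```python
# Back-of-the-envelope checks of the numbers quoted for Sher #25 (values from the text).
import numpy as np

d_pc = 6000.0                           # adopted distance, 6 kpc
incl = np.radians(64.0)                 # ring inclination against the sky plane

# ring size and inclination from the fitted ellipse
a, b = 6.9, 3.05                        # semi-major / semi-minor axes in arcsec
print("inclination [deg]:", np.degrees(np.arccos(b / a)))        # ~64
ring_diam_pc = 2.0 * a * d_pc / 206265.0                         # 1'' = 1/206265 rad
print("ring diameter [pc]:", ring_diam_pc)                       # ~0.4

# de-projected outflow velocity and separation of the NE filament
v_los = 36.2                                                     # km/s blue-shift
print("outflow velocity [km/s]:", v_los / np.cos(incl))          # ~83
sep_pc = 15.0 * d_pc / 206265.0 / np.cos(np.radians(26.0))
print("filament separation [pc]:", sep_pc)                       # ~0.5

# dynamical ages
pc_km, yr_s = 3.086e13, 3.156e7
print("ring age [yr]:", 0.5 * ring_diam_pc * pc_km / 20.0 / yr_s)   # ~9e3
print("outflow age [yr]:", sep_pc * pc_km / 83.0 / yr_s)            # ~6e3

# equator-to-pole RSG mass-loss ratio from v_shock ~ Mdot_sw^(-1/2)
print("Mdot(equator)/Mdot(pole):", (83.0 / 20.0) ** 2)              # ~16-17
```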
However, the interpretation of the filaments may be complicated by Sher \#25’s apparent location at the edge of a wind-blown cavity excavated by the central cluster of NGC 3603 (Figure \[fig1\]). This cavity has a diameter of 2 pc, a dynamical age of 10$^4$ yr (Clayton 1986) and may have been created by the onset of the Wolf-Rayet phase of the three central stars of NGC 3603 (Drissen et al. 1995).

Sher \#25 and its relation to NGC 3603
======================================

What evidence do we have that Sher \#25 is indeed a member of the giant H[ii]{} region NGC 3603? Firstly, as discussed above, the systemic velocity of the ring is in good agreement with the line-of-sight velocities of the cloud cores south of HD 97950. Secondly, Sher \#25 is not the only BSG in the NGC 3603 region. Spectroscopy by Moffat (1983) revealed two other BSGs in the vicinity of the cluster core (cf. Figure \[fig1\]). The locations of Sher \#18 (O6If) and Sher \#23 (O9.5Iab) are also indicated in our V vs. V-I CMD (Figure \[fig3\]). The apparent lack of BSGs with similar reddening among the “field stars” (thin dots in Figure \[fig3\]) lends additional weight to the assumption that Sher \#25 is associated with NGC 3603. Could Sher \#25 then have been born at the same time as the massive central stars of the cluster? This would require that Sher \#25 originally had been at least as massive as these central stars and had gone through a violent Luminous Blue Variable (LBV) phase with a total mass-loss of more than 50% of its initial mass (i.e., M $\ge$ 25 M$_\odot$) before it became a BSG. Indeed, the kinematical age, expansion velocity, and abundances of Sher \#25’s nebula are comparable to those of the AG Car nebula (cf. e.g., Leitherer et al. 1994). With M$_{\rm bol} \approx -9.1$, Sher \#25’s luminosity is in the range of luminosities observed in other LBVs. Thus, an LBV evolutionary scenario for Sher \#25 and its circumstellar surroundings cannot be entirely excluded. The simultaneous presence of BSGs and stars of MK type O3V (cf. Drissen et al. 1995) requires at least two distinct episodes of star formation in NGC 3603 separated by $\approx$ 10 Myr. Moffat (1983) and Melnick et al. (1989) already suggested that star formation in NGC 3603 might not have been coeval. The starburst in the dense cluster of NGC 3603 might have been initiated by the first generation of massive stars through their interaction with a dense cloud core. Subsequently, this cloud core developed into the present-day starburst. A similar evolutionary scenario has been suggested by Hyland et al. (1992) in order to explain the starburst in the 30 Dor region.

Sher \#25 and SN1987A
=====================

Sher \#25’s circumstellar nebula resembles that of SN1987A in many aspects. Both objects have an equatorial ring and bipolar nebulae, and show high \[N[ii]{}\]/H$\alpha$ ratios, indicating an enhanced N abundance. The N enrichment indicates that the rings and the bipolar nebulae consist of mass lost from the progenitor at an earlier evolutionary stage and swept up by the fast BSG wind. Surface enrichment with material processed by the CNO cycle typically occurs at the very end of the RSG phase within the last 10$^4$ yr of RSG evolution. Yet, differences exist. As shown in Table 1, the expansion velocities and \[N[ii]{}\]/H$\alpha$ ratios are different. The \[N[ii]{}\]/H$\alpha$ ratio of the nebula around Sher \#25 is lower than the ratio observed in the outer northern ring around SN1987A despite the lower N abundance in the LMC.
Furthermore, Sher \#25 exhibits a higher \[N[ii]{}\]/H$\alpha$ ratio in the bipolar nebulae than in the ring, while SN1987A shows the opposite trend. The similarities between the nebulae seem to suggest that Sher \#25 is at a similar evolutionary stage as the late progenitor of SN1987A, Sk$-$69 202. However, the differences between the nebulae imply that the evolutionary history differs. This may be due to the abundance differences between the young population in the LMC and in the Milky Way, and to mass differences between the two stars. Sher \#25 appears to have been in a rather stable BSG evolutionary phase during the past decades covered by photometric measurements. Its photometry (Table 2) is quite inhomogeneous owing to the variety of different measurement techniques used, crowding problems, and spatial variations in the strength of the nebular background emission. The overall amplitude of variation in V is less than 0.25 mag within the last 35 yr, and less than 0.1 mag over the last 25 yr. The progenitor of SN1987A did not stand out as a variable star, either. Will Sher \#25 explode like Sk$-$69 202 in the near future? The presence of the ring around Sher \#25, the N enrichment in the outflows, and the surface enrichment in metals all suggest that Sher \#25 has passed at least once through the RSG phase and is now well within its final BSG phase, which may last a few 10$^4$ yr in total depending on the initial stellar mass (see Martin & Arnett 1995). Evolutionary models for massive stars, however, still suffer from many unsolved problems, such as the amount of overshooting, semiconvection, mixing, and mass loss, the choice of convection criteria, and metallicity effects (see, e.g., Langer & Maeder 1995). At present, it is premature to predict whether Sher \#25 will succeed Sk$-$69 202 in providing another spectacular supernova in the southern sky soon. WB acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) under grant Yo 5/16-1. EKG and YHC were partially supported by the NASA grants STI6122.01-94A, NAGW-4519, and NAG 5-3256. EKG acknowledges support by the German Space Agency (DARA) under grant 05 OR 9103 0. We thank our referee Laurent Drissen for helpful comments.

Blondin, J.M., & Lundqvist, P. 1993, ApJ, 405, 337
Brandner, W., Dottori, H., Grebel, E.K., et al. 1997, A&A, [*Stellar and non-stellar emission line objects in NGC 3603*]{}, to be submitted
Chevalier, R.A., & Imamura, J.N. 1983, ApJ, 270, 554
Clayton, C. 1986, MNRAS, 219, 895
Drissen, L., Moffat, A.F.J., Walborn, N.R., & Shara, M.M. 1995, AJ, 110, 2235
Hyland A.R., Straw S., Jones T.J., Gatley I. 1992 MNRAS 257, 391
Jakobsen, P., Albrecht, R., Barbieri, C., et al. 1991, ApJ, 369, L63
Langer, N., & Maeder, A. 1995, A&A, 295, 685
Leitherer C., Allen R., Altner B. et al. 1994 ApJ 428, 292
Martin, C.L., & Arnett, D. 1995, ApJ, 447, 378
Melnick, J., Tapia, M., & Terlevich, R. 1989, A&A, 213, 89
Moffat, A.F.J. 1974, A&A, 35, 315
Moffat, A.F.J. 1983, A&A, 124, 273
Moffat, A.F.J., Drissen L., & Shara M.M. 1994 ApJ 436, 183
Moore, C.E. 1959, [*A Multiplet Table of Astrophysical Interest*]{}, United States Department of Commerce, Washington, D.C.
Panagia, N., Gilmozzi, R., Macchetto, F. et al. 1991, ApJ, 380, L23
Panagia, N., Scuderi, S., Gilmozzi, R., et al. 1996, ApJ, 459, L17
Plait, P.C., Lundqvist, P., Chevalier, R.A., & Kirshner, R.P. 1995, ApJ, 439, 730
Sahai, R., Trauger, J.T., & Evans, R.W. 1995, BAAS, 27, 1344
Sher, D. 1965, MNRAS, 129, 237
van den Bergh, S.
1978, A&A, 63, 275

[lcc]{}
 & Sher \#25 & SN1987A\
d$_{\rm (inner)\,ring}$ & 0.4 pc & 0.4 pc\
v$_{\rm (inner)\,ring}$ & 20 km s$^{-1}$ & 10 km s$^{-1}$\
v$_{\rm poles}$ & 83 km s$^{-1}$ & \
(\[N[ii]{}\]/H$\alpha)_{\rm ring}$ & 0.9–1.2 : 1 & 4.2 : 1\
(\[N[ii]{}\]/H$\alpha)_{\rm poles}$ & 2.1 : 1 & 2.5 : 1\
(\[N[ii]{}\]/H$\alpha)_{\rm background}$ & 0.15 : 1 & 0.09 : 1\

[lcccc]{}
Reference & Epoch & V & B$-$V & U$-$B\
Sher (1965) & $\approx$1962 & 12.07 & +1.59 & +0.19\
van den Bergh (1978) & & [*12.08*]{} & [*+1.59*]{} & [*+0.19*]{}\
Moffat (1974) & $\approx$1972 & 12.27 & +1.36 & +0.10\
van den Bergh (1978) & & [*12.38*]{} & [*+1.40*]{} & [*+0.29*]{}\
van den Bergh (1978) & 1976/77 & 12.28 & +1.38 & +0.25\
Melnick et al. (1989) & 1985 Feb. & 12.20 & +1.42 & +0.13\
Moffat et al. (1994) & 1991 Feb. & — & — & —\
This paper & 1995 March 2 & 12.31 & & \

[^1]: Based on observations obtained at the European Southern Observatory, La Silla
--- abstract: | In this paper we re-investigate the core of Schrödinger’s ’cat paradox’. We argue that one has to distinguish clearly between superpositions of macroscopic cat states $|\smiley \rangle + |\frowney \rangle$ and superpositions of entangled states $|\smiley, {\uparrow}\rangle + |\frowney, {\downarrow}\rangle$ which comprise both the state of the cat ($\smiley$=alive, $\frowney$=dead) and the radioactive substance (${\uparrow}$=not decayed, ${\downarrow}$=decayed). It is shown that, in the case of the cat experiment, recourse to decoherence or other mechanisms is not necessary in order to explain the absence of macroscopic superpositions. Additionally, we present modified versions of two quantum optical experiments as [*experimenta crucis*]{}. Applied rigorously, quantum mechanical formalism reduces the problem to a mere pseudo-paradox. author: - Stefan Rinner - Ernst Werner date: 'Received: date / Accepted: date' title: 'On the Role of Entanglement in Schrödinger’s Cat Paradox' ---

Introduction {#intro}
============

Recently, there have been a number of reports on cooling of micromirrors [@Zeilinger], [@Heidman] and micromechanical resonators [@Bouwemeester] down to such low temperatures that quantum effects such as superposition and entanglement at a macroscopic scale come into reach. Also, photoassociative formation of macroscopic atom-molecule superpositions in Bose–Einstein condensates has lately been considered theoretically [@Mackie]. Almost all works dealing with macroscopic superposition of one kind or another refer to the cat paradox, claiming that the cat itself is in a superposition state. Yet, as mentioned by Leggett [@Leggett] “the conceptual status of the theory is still a topic of lively controversy” and we would like to contribute to this controversy an alternative point of view which quite naturally explains the suppression of interference effects in macroscopic objects already at the level of isolated systems.\ For the sake of completeness we briefly give the basic ingredients of the [*Gedankenexperiment*]{}. The proposal involves a cat (macroscopic), a vial of cyanide and a radioactive atom (microscopic) initially prepared in a metastable state. All three components are placed inside a closed box. The radioactive atom has a probability of $1/2$ of decaying within one hour. If it decays, the cyanide is released and kills the cat via some mechanism. In Schrödinger’s own words [@Schroedinger]: > If one has left this entire system to itself for an hour, one would say that the cat still lives [*if*]{} meanwhile no atom has decayed. The first atomic decay would have poisoned it. The $\psi$-function of the entire system would express this by having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts. The first sentence of this quotation emphasizes the entangled character of the system’s state by stressing the word “if”.\ In the third sentence Schrödinger refers to the $\psi$-function of the [**entire**]{} system. Since Schrödinger neither claims the cat to be in a superposition state nor even uses the term ’paradox’ anywhere in the article, succeeding interpretations of his [*Gedankenexperiment*]{} can only be regarded as having misconstrued Schrödinger’s intention.
In fact, a paradox could only arise when from claiming the nucleus to be in a superposition state one concludes that, due to the entanglement between states of the atom and states of the cat, the cat is in a superposition of its two possible states, too, [*a minore ad maius*]{}, so to speak. Exemplary for this attitude we quote [@Burnett]: > Quantum mechanics tells us that at any time the nucleus involved is in a superposition of the decayed and original state. Because the fate of the cat is perfectly correlated with the state of the nucleus undergoing decay, we are forced to conlude that the cat must also be in a superposition state, this time of being alive and dead. This assessment is wide-spread in the literature (see [*e.g.*]{} [@Omnes], [@Auletta] and citations therein). In this paper we investigate a different proposition that has the advantage of yielding non-paradoxical predictions. Contrary to the statement cited above, it asserts that at any time neither the nucleus (if the initial state is eq.(\[Anfang\])) nor the cat are in a superposition.\ The fact of the matter is that already three years before Schrödinger’s article von Neumann treating the properties of composite systems made the point clear [@Neumann]: > Auf Grund der obigen Resultate heben wir noch hervor: Ist I im Zustande $\phi(q)$ und II im Zustande $\xi(r)$, so ist I+II im Zustande $\Phi(q,r)=\phi(q)\xi(r)$. Ist dagegen I+II in einem Zustande $\Phi(q,r)$, der kein Produkt $\phi(q)\xi(r)$ ist, so sind I und II Gemische, aber $\Phi$ stiftet eine ein-eindeutige Zuordnung zwischen den möglichen Werten gewisser Größen in I und in II. In English and contemporary diction, the main result of his analysis of composite systems is the following: if a composite system is in an entangled state, each of its subsystems is in a mixed state. Thus, for the case in question here, the subsystem ’cat’ is described by a mixed state, as well, and consequently is not in a superposition state. Superposition and Entanglement {#sec:1} ============================== Since there is no correlation between cat and radioactive material in the very beginning of the [*Gedankenexperiment*]{} the state vector of the combined system may be written as a tensor product in the following way $$\label{Anfang} |\Psi(0) \rangle = |{\uparrow}\rangle \otimes |\smiley \rangle.$$ In the course of time both subsystems become entangled and the system’s state can be written $$\label{unter} | \Psi(t) \rangle = \Big(e^{-\frac{1}{2} \lambda t}|{\uparrow}, \smiley \rangle+\sqrt{1-e^{-\lambda t}}|{\downarrow}, \frowney \rangle\Big).$$ Two peculiarities of the given setup should be noted. First, the Hilbert-space for the combined system is spanned by the four basis states $\left\{|{\uparrow}, \smiley \rangle, |{\downarrow}, \frowney \rangle, |{\downarrow}, \smiley \rangle, |{\uparrow}, \frowney \rangle \right\}$. Due to the initial condition of eq.(\[Anfang\]) only the subspace spanned by the vectors given in eq.(\[unter\]) is accessible. 
Secondly, for $t>0$ the superposition of eq.(\[unter\]) will decay in time leading to a final state $|\Psi(t \rightarrow \infty)\rangle=|{\downarrow}, \frowney \rangle$ even without the impact of an external environment.\ The assumed half-life of one hour gives for the decay constant $\lambda=\frac{\ln(2)}{3600 \:s}$ and for the corresponding state after one hour $$|\Psi'\rangle:=| \Psi(1 \:h) \rangle = \frac{1}{\sqrt{2}}\Big(|{\uparrow}, \smiley \rangle+|{\downarrow}, \frowney \rangle\Big).$$ Note that this statement is about the whole system being in a superposition state; it is not at the same time a statement about the subsystems. In order to gain information about the state of subsystem [**A**]{} of a combined system [**AB**]{}, the rules of quantum mechanics tell us that one should consider the system’s density matrix rather than the state vector description and take the partial trace over the degrees of freedom of subsystem [**B**]{}.\ Thus, the density matrix of the evolved state reads $$\begin{aligned} \nonumber \hat{\rho'}_{sys.}&=&\frac{1}{2}\Big(|{\uparrow}, \smiley \rangle \langle \smiley, {\uparrow}|+|{\downarrow}, \frowney \rangle \langle \frowney, {\downarrow}|+ \\ \nonumber &+&|{\uparrow}, \smiley \rangle \langle \frowney, {\downarrow}|+|{\downarrow}, \frowney \rangle \langle \smiley, {\uparrow}|\Big),\end{aligned}$$ and the density matrix describing the cat alone results from taking the partial trace $$\begin{aligned} \nonumber \hat{\rho'}_{cat}&=&\langle {\uparrow}| \hat{\rho'}_{sys.} |{\uparrow}\rangle + \langle {\downarrow}| \hat{\rho'}_{sys.} |{\downarrow}\rangle \\ &=& \frac{1}{2}\Big(|\smiley \rangle \langle \smiley|+|\frowney \rangle \langle \frowney|\Big).\end{aligned}$$ This means that within the framework of quantum mechanics there actually is no paradox, since the above reduced density matrix for the subsystem ’cat’ is a statistical mixture of the states [*dead*]{} and [*alive*]{} with equal probability $1/2$. The situation is the same as in classical statistics when one describes the unknown outcome ([*head*]{} or [*tail*]{}) of tossing a coin. No superposition state of the cat is present which would give rise to non-diagonal entries in the cat’s density matrix $\hat{\rho'}_{cat}$.\ At first sight, introducing the partial trace in such a way and declaring it a rule of quantum mechanics might seem just a clever trick in order to circumvent the interpretational difficulties posed by the paradox. Indeed, why should one choose to define the reduced state of a subsystem just in that way?\ In order to justify this, consider a composite system [**AB**]{} whose state space is described by a tensor product of Hilbert spaces ${\cal{H}}_{AB}={\cal{H}}_{A} \otimes {\cal{H}}_{B}$ with ${\cal{H}}_{A} \cap {\cal{H}}_{B}=\emptyset$. Then, if ${\cal{O}}_{A}$ is some observable of subsystem [**A**]{} acting on ${\cal{H}}_{A}$, the corresponding observable acting on ${\cal{H}}_{AB}$ is consistently defined by ${\cal{O}}={\cal{O}}_{A} \otimes \hat{{\bf{1}}}_{B}$, where $\hat{{\bf{1}}}_{B}$ is the identity operator on ${\cal{H}}_{B}$. When subsystem [**A**]{} is prepared in a state described by $\rho_{A}$ the expectation value of ${\cal{O}}_{A}$ should equal the expectation value of ${\cal{O}}_{A} \otimes \hat{{\bf{1}}}_{B}$ when we prepare the combined system in $\rho=\rho_{A} \otimes \rho_{B}$.
That is, consistency of measurement statistics demands the following equality to hold: $$Tr({\cal{O}}_{A} \rho_{A})=Tr([{\cal{O}}_{A} \otimes \hat{{\bf{1}}}_{B}] \rho).$$ It can be shown that this equation can only be satisfied if the state of the subsystem $\rho_{A}$ is defined via the partial trace: $$\begin{aligned} \langle {\cal{O}} \rangle&=&Tr_{A,B}\Big(\rho\, {\cal{O}}\Big)=\sum_{a,b} \langle a,b|\, \rho \,\big({\cal{O}}_{A} \otimes \hat{{\bf{1}}}_{B}\big)\,|a,b \rangle \\ &=&\sum_{a} \langle a| \Big(\sum_{b} \langle b| \rho |b \rangle\Big) {\cal{O}}_{A} |a \rangle= Tr_{A}\Big(\rho_{A}{\cal{O}}_{A}\Big)=\langle {\cal{O}}_{A} \rangle,\end{aligned}$$ with $\rho_{A}\equiv\sum_{b}\langle b|\rho|b \rangle=Tr_{B}(\rho)$. This shows that in fact there is no freedom of choice in the way one defines the state of a subsystem.\ Note that in this deduction no assumption was made about the size of the quantum subsystems. That is, one does not need to emphasize the macroscopic size of the cat and interpret the cat itself as some measurement apparatus or even to call for some sort of consciousness of the cat. In particular, the same still holds if the two subsystems are two-level systems like one atom with states $|e \rangle$, $|g \rangle$ and the radiation field inside a cavity with number states $|0 \rangle$ and $|1 \rangle$. If the system is in a superposition state $$\frac{1}{\sqrt{2}} \Big(|e,0 \rangle + |g,1 \rangle \Big)$$ neither the atom nor the cavity field alone are in a superposition. In the same line of reasoning, and going from Fock states further to coherent field states (Glauber states) of mesoscopic size, the state in eq.(1) of [@Haroche] of the form $$| \Psi \rangle=\frac{1}{\sqrt{2}} \Big(|e, \alpha e^{i \phi} \rangle + |g, \alpha e^{-i \phi} \rangle \Big)$$ actually does not describe a superposition of the coherent field states $|\alpha e^{i \phi} \rangle$ and $|\alpha e^{-i \phi} \rangle$.\ Here, some words about change of basis seem to be in order. It is clear that a mere rotation of axes will not change the situation, [*e.g.*]{} consider the case of symmetric (S) and antisymmetric (A) linear combinations of the “old” basis states defined in the usual way: $$\begin{aligned} |S \rangle &=& \frac{1}{\sqrt{2}}\Big(| \smiley \rangle + | \frowney \rangle\Big), \: \:|+ \rangle = \frac{1}{\sqrt{2}}\Big(| {\uparrow}\rangle + | {\downarrow}\rangle\Big) \\ |A \rangle &=& \frac{1}{\sqrt{2}}\Big(| \smiley \rangle - | \frowney \rangle\Big), \: \: |- \rangle = \frac{1}{\sqrt{2}}\Big(| {\uparrow}\rangle - | {\downarrow}\rangle\Big).\end{aligned}$$ Then $$\frac{1}{\sqrt{2}}\Big(| \smiley, {\uparrow}\rangle + | \frowney, {\downarrow}\rangle\Big)=\frac{1}{\sqrt{2}}\Big(|S,+ \rangle + |A,- \rangle\Big).$$ Again, after tracing over the states $|+ \rangle$ and $|- \rangle$ the reduced density matrix is given by $$\nonumber \hat{\rho'}_{cat}=\frac{1}{2}\Big(|S \rangle \langle S|+|A \rangle \langle A|\Big)=\frac{1}{2}\Big(|\smiley \rangle \langle \smiley |+ |\frowney \rangle \langle \frowney |\Big)$$ where the last equality is obtained by transforming $|S \rangle$ and $|A \rangle$ back to the “old” basis states. So, in both bases the reduced density matrix is diagonal, which does not pose any kind of interpretational problem.\ Rather, one could object that in quantum mechanics a measurable state should be an eigenstate of some observable (operator), and for the (anti-)symmetric combination states a corresponding observable could be difficult to define. At least, it is not obvious what this would look like. This is, by the way, already the case for the “alive” and “dead” states introduced by Schrödinger.
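As an aside, the partial-trace reasoning above is easy to check numerically. The following sketch (Python/NumPy; purely illustrative, with generic two-level systems standing in for the atom and the cat) reproduces the reduced density matrix $\hat{\rho'}_{cat}$ and confirms that the result does not depend on the basis in which the atom is traced out:

```python
import numpy as np

# Basis vectors of the two two-level subsystems (the labels are stand-ins):
# atom: up = "not decayed", down = "decayed"; cat: alive, dead.
up, down = np.array([1., 0.]), np.array([0., 1.])
alive, dead = np.array([1., 0.]), np.array([0., 1.])

# Entangled state after one half-life: (|up,alive> + |down,dead>)/sqrt(2)
psi = (np.kron(up, alive) + np.kron(down, dead)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())           # density matrix of the composite system

def trace_out_atom(rho_ab):
    """Partial trace over the first (atom) factor of a 2x2 (x) 2x2 system."""
    rho_ab = rho_ab.reshape(2, 2, 2, 2)   # indices: atom, cat, atom', cat'
    return np.einsum('icid->cd', rho_ab)  # sum over the atom index

rho_cat = trace_out_atom(rho)
print(rho_cat)       # -> [[0.5, 0.], [0., 0.5]]: a mixture, no off-diagonal terms

# Tracing over the atom in the rotated basis |+>, |-> (columns of U) yields the
# same reduced matrix for the cat: the partial trace is basis independent.
U = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
rho_cat_rot = trace_out_atom(np.kron(U.conj().T, np.eye(2)) @ rho @ np.kron(U, np.eye(2)))
print(np.allclose(rho_cat, rho_cat_rot))  # -> True
```

The off-diagonal elements of the reduced matrix vanish identically, not merely approximately, which is precisely the point of the argument given above.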
For the “alive” and “dead” states just mentioned, it is easy to find some alternative system to replace the cat, [*e.g.*]{} some mass suspended on a thread that is cut if the atom decays. Thus, the macroscopically distinct states $|\top \rangle$ (=mass hangs on the thread) and $|\bot \rangle$ (=mass fallen to the floor) correspond to the liveliness of the cat. If the thread is attached at some place outside the box and screened from view in one way or the other, the weight of the box could be an appropriate observable that allows one to discriminate both states from each other. Experimental Test {#sec:2} ================= Whether the reduced density matrix of a subsystem is sufficient to describe the state of the subsystem completely and correctly should not be a question of philosophical taste but should be decided in the first instance by experiments. Paris experiment {#sec:3} ---------------- The first of such tests consists in the modification of an experiment Brune [*et al.*]{} [@Haroche] have carried out in quantum optics. It constitutes the experimental implementation of an earlier theoretical proposal by Schaufler [*et al.*]{} [@Schleich]. The setup essentially consists of a high-Q microwave resonator $C$ containing a coherent field $|\alpha \rangle$ and Rydberg atoms with excited state $|e \rangle$ and ground state $|g \rangle$ that are used both to manipulate and probe the field. To this end, before entering $C$ the atom is prepared in a superposition of $|e \rangle$ and $|g \rangle$ in a low-Q cavity $R_1$ by a resonant $\pi/2$ pulse. This superposition state enters $C$ and, being detuned from resonance, interacts dispersively with the cavity field in $C$. This interaction produces a phase shift of the cavity field that depends on the atomic level, and leads to the following entangled atom-field state: $$\label{inkoh} | \Psi \rangle_{R_1C}=\frac{1}{\sqrt{2}}\Big(|e, \alpha \rangle + |g, -\alpha \rangle\Big)$$ where the subscript on the left hand side indicates that the atom has already passed $R_1$ and $C$. Now, following the widespread opinion one would say that the field is already in a superposition.\ Contrary to accepted opinion, we hold a different view that is based on the importance we ascribe to the reduced density matrix. After leaving $C$ the atom undergoes another $\pi/2$ pulse in a second resonator $R_2$ leading to the system state: $$\label{koh} | \Psi \rangle_{R_1CR_2}={{\cal{N}}}\left[\Big(|-\alpha \rangle -|\alpha \rangle \Big)|e \rangle + \Big(|-\alpha \rangle +|\alpha \rangle \Big)|g \rangle \right]$$ with some normalisation constant ${{\cal{N}}}$. Behind $R_2$ the atom is detected state-selectively. This projects the field state onto $|-\alpha \rangle +e^{i \psi}|\alpha \rangle$ with $\psi=0$ or $\psi=\pi$, according to whether the state of the atom was $|g \rangle$ or $|e \rangle$ respectively.\ In the original version of the experiment this state is subsequently probed by a second atom that is sent into the setup after a variable time interval $\tau$ in order to monitor decoherence. The signature of progressive loss of coherence is the decay of the two-atom correlation signal as a function of the preparation-probing interval $\tau$.\ If eq.(\[inkoh\]) were already a Schrödinger cat state, as is usually argued, the second Ramsey zone in the setup of Brune [*et al.*]{} would be needless. Indeed, the quantum interference signal is explained through erasing [*welcher-Weg-*]{}information in [@Davidovich].
Therefore, repeating the measurements on this apparatus while leaving out both the interaction in $R_2$ during the preparation process and the state-selective detection of the preparing atom would decide whether the two-atom correlation signal can be measured at all in this modified version. Garching experiment {#sec:4} ------------------- In this paragraph we propose a modification of another experimental setup in order to show that the absence of quantum interference in subsystem states, when the entangled system is in a superposition, is not a peculiarity of one of the subsystems being (quasi-)macroscopic as in eq.(\[inkoh\]). A preparation scheme for superposition states of highly non-classical photon number states of a radiation field inside a high-Q cavity was proposed in [@Rinner1] for one cavity and in [@Rinner2] for two coupled micromasers. To this end, the coherent exchange of energy between Rydberg atoms sent through the cavity and the radiation field is used. Intriguingly, following the Rabi oscillations of the Jaynes-Cummings model of quantum optics, quantum interference effects will be observable only if both the atom and the field are in coherent superpositions at the beginning of the interaction.\ Suppose we start with the atom in the excited state and the cavity mode in the vacuum: $$|\psi(0) \rangle=|e \rangle |0\rangle.$$ The time structure of the Rabi oscillation leads to $$|\psi(t) \rangle=\cos(g t)|e,0 \rangle+i \sin(g t) |g,1 \rangle$$ where $g$ denotes the vacuum Rabi frequency. At some point of the Rabi oscillation two lasers are applied that induce transitions of the atom from both state $|e \rangle $ and state $|g \rangle$ to one and the same lower lying state $|a \rangle$. Note that $|g \rangle$ is some highly excited Rydberg state and only the ground state of the maser transition, not the “real” ground state of the atom. Hereby, the [*welcher-Weg-*]{}information is erased, giving rise to observable quantum interference effects, since the state of the system after this procedure is given by $$|\psi(t') \rangle = \Big( \cos(gt')|0 \rangle + i \sin(gt')|1 \rangle \Big)|a \rangle.$$ Detection of the atom in $|a \rangle$ leaves the field in a coherent superposition.\ This suggests that the general recipe for the generation of (particularly macroscopic) superposition states is the following: at first, create entanglement with another (microscopic) system. This leads to superpositions of entangled states. Yet, in order to transfer the coherence to one of the subsystems alone, one has to deliberately disentangle the two systems by erasing the [*welcher-Weg-*]{}information in one subsystem, thus enabling quantum interference effects in the other one. Measurement Problem {#sec:5} =================== Since any physical property finally has to be measured in order to gain information about its value, the act of measurement plays a decisive role both in the formulation and interpretation of theories in physics. The proposed interpretation in terms of density matrices and partial trace operations for subsystems of composite systems also obviates the so-called measurement problem (at least the part of it dealing with the problem of definite outcomes).
If the measurement device $\cal{M}$ allows for two readings $|\nearrow \rangle, \: |\nwarrow \rangle$ correlated with the cat’s state of liveliness, then the extended density matrix reads $$\begin{aligned} \hat{\rho}&=&\frac{1}{2}\Big(|\nwarrow, {\uparrow}, \smiley \rangle \langle \smiley, {\uparrow}, \nwarrow|+|\nearrow, {\downarrow}, \frowney \rangle \langle \frowney, {\downarrow}, \nearrow|+ \\&+&|\nwarrow, {\uparrow}, \smiley \rangle \langle \frowney, {\downarrow}, \nearrow|+|\nearrow, {\downarrow}, \frowney \rangle \langle \smiley, {\uparrow}, \nwarrow|\Big),\end{aligned}$$ which gives for the measurement device’s reduced density matrix $$\hat{\rho}_{{\cal{M}}}=\frac{1}{2}\Big(|\nwarrow \rangle \langle \nwarrow|+|\nearrow \rangle \langle \nearrow|\Big).$$ The same is still true if the measurement apparatus allows for more than two pointer states, as in the original formulation of the problem by von Neumann [@Neumann]. There, one considers a (microscopic) system ${\cal{S}}$ with Hilbert space ${\cal{H_S}}$ and basis vectors $|s_n \rangle$ together with a (macroscopic) measurement apparatus ${\cal{A}}$ with Hilbert space ${\cal{H_A}}$ and basis vectors $|a_n \rangle$ that are supposed to correspond to macroscopically distinguishable pointer states. Further, it is assumed that a pointer reading of $|a_n \rangle$ corresponds to the state $|s_n \rangle$ of system ${\cal{S}}$. If $|a_0 \rangle$ denotes the ready-position of the apparatus, the following evolution will take place: $$\left( \sum_{n} c_n(0) |s_n \rangle \right)|a_0 \rangle \longrightarrow \sum_{n} c_n(t) |s_n \rangle |a_n \rangle.$$ The reduced density matrix of the apparatus has only diagonal entries: $$\left( \hat{\rho}_{\cal{A}}\right)_{nn}(t)=|c_n(t)|^2.$$ Consequently, the outcomes of measurements are statistically distributed, yet definite. Conclusion {#sec:6} ========== First, in the preceding it was shown that neither decoherence, [*i.e.*]{} entanglement with some environment, nor other ideas like superpositions of space-time geometries [@Penrose] need to be invoked in order to arrive at a classical picture of the Schrödinger cat scenario. This is not meant to diminish the clarification of the role of the environment accomplished by the decoherence program. In the case in question here, however, resorting to decoherence in order to arrive at classicality of the cat is not necessary, and it still faces interpretational problems, [*e.g.*]{} in explaining how small the off-diagonal elements of the density matrix must be before one may call it a statistical mixture, since they vanish only in the limit $t \longrightarrow \infty$. Hence, the transition from the (alleged) macroscopic superposition state to the familiar statistical mixture would still necessitate the existence of some observer and would depend on his ability to resolve the “distance” of the individual components of the superposition state.\ Quite to the contrary, we argue that for the generation of (macroscopic) superposition states of the Schrödinger cat kind some initial entanglement with a microscopic system has to be removed from the composite system later on by performing a transformation on the microscopic system that erases the [*welcher-Weg-*]{}information. In fact, the signature of coherent superposition states is the interference pattern of some proper measurable quantity. This interference arises if the system starting out from its initial state A has two or more possibilities B, C, D ... to end up in one final state Z.
Yet, in the case of Schrödinger’s cat there is no such final state to which two or more different paths would have been open. Where, then, should interference come from?\ Secondly, it is more satisfying to have a self-consistent interpretation which does not contradict everyday experience ([*i.e.*]{} no superposed cats), but still is able to fully reproduce measurements performed on intentionally prepared superposition states, as exemplarily shown for [@Haroche], [@Rinner1]. This is guaranteed by interpreting the reduced density matrix as a quantity that completely describes the state of a given subsystem.\ To summarize, Schrödinger’s cat paradox in our opinion has its roots in the state vector description of the composite system, which indeed shows a superposition of states (i.e. an entangled state). In order to come from this entangled state to a superposition of the two cat states one has to ignore the non-identity of the two nuclear states. This step is much less innocent than it might look: it changes, by hand, an entangled state into a coherent superposition. In physical reality such a transmutation could only be achieved by erasing “which-path information” (see the discussion in Sec. 3.2), which is not the case here.\ We have shown that, at least in the two cases discussed above, a well-chosen transformation on one of the subsystems can lead to a disentangled state which indeed leaves the other subsystem in a superposition state.\ Although the interpretation of the mathematical formalism underlying a physical theory is, to a certain extent, legitimate in its own right, it holds the dangerous tendency to misconceive itself as the ’philosophy of nature’ in the sense that the elements of the theory are taken to correspond to essential properties of reality. Interpretations of quantum mechanics are particularly prone to this ontological persuasion. Yet, the relation between the formalism and the supposedly underlying reality it tries to describe cannot be treated within the formalism itself. Since physical theories are not part of the objects investigated by quantum mechanics, quantum mechanics itself is not an object the theory makes statements about. The connection between theory and reality has to be established axiomatically in the formulation of the theory and the theory has to be checked for consistency henceforth.\ Gigan, S., Böhm, H. R., Paternostro, M., Blaser, F., Langer, G., Hertzberg, J. B., Schwab, K. C., Bäuerle, D., Aspelmeyer, M. and Zeilinger, A., [*Nature*]{} [**444**]{}, 67 (2006). Arcizet, O., Cohadon, P.-F., Briant, T., Pinard, M. and Heidmann, A., [*Nature*]{} [**444**]{}, 71 (2006). Kleckner, D. and Bouwmeester, D., [*Nature*]{} [**444**]{}, 75 (2006). Marshall, W., Simon, C., Penrose, R. and Bouwmeester, D., [*Phys. Rev. Lett.*]{} [**91**]{}, 130401 (2003). Dannenberg, O. and Mackie, M., [*Phys. Rev. A*]{} [**74**]{}, 053601 (2006). Leggett, A.J., [*Science*]{} [**307**]{}, 871 (2005). Schrödinger, E., [*Naturwissenschaften*]{} [**23**]{} 807; 823; 844 (1935). Einstein, A., Podolsky, B. and Rosen, N., [*Phys. Rev.*]{} [**47**]{} 777-780 (1935). Dunningham, J., Rau, A. and Burnett, K., [*Science*]{} [**307**]{} 872 (2005). Omnès, R., [*The Interpretation of Quantum Mechanics*]{} [Princeton: Princeton University Press]{} p 91 (1994). Auletta, G., [*Foundations and Interpretation of Quantum Mechanics*]{} [Singapore: World Scientific Publishing Co. Pte. Ltd.]{} p 361-362 (2000). Monroe, C. [*et al.*]{}, [*Science*]{} [**272**]{} 1131 (1996). Brune, M., Hagley, E., Dreyer, J., Maître, X.
Maali, A., Wunderlich, C., Raimond, J.M. and Haroche, S., [*Phys. Rev. Lett.*]{} [**77**]{} 4887-4890 (1996). Schaufler, S., Freyberger, M. and Schleich, W., [*J. Mod. Opt.*]{} [**41**]{} 1765-1779 (1994). Davidovich, L., Brune, M., Raimond, J.M. and Haroche, S., [*Phys. Rev. A*]{} [**53**]{} 1295-1309 (1996). Rinner, S., Walther, H. and Werner, E., [*Phys. Rev. Lett.*]{} [**93**]{} 160407 (2004). Rinner, S., Werner, E., Becker, T. and Walther, H., [*Phys. Rev. A*]{} [**74**]{} 041802(R) (2006). Penrose, R., [*The Large, the Small and the Human Mind*]{} [Cambridge: Cambridge University Press]{} p 74;82 (1997). von Neumann, J., [*Mathematische Grundlagen der Quantenmechanik*]{} [Berlin: Springer Verlag]{} p 232 (1932).
--- abstract: 'This work considers a system with two energy harvesting (EH) nodes transmitting to a common destination over a random access channel. The amount of harvested energy is assumed to be random and independent over time, but correlated among the nodes, possibly depending on their relative position. A threshold-based transmission policy is developed for the maximization of the expected aggregate network throughput. Assuming that there is no a priori channel state or EH information available to the nodes, the aggregate network throughput is obtained. The optimal thresholds are determined for two practically important special cases: i) at any time only one of the sensors harvests energy due to, for example, physical separation of the nodes; ii) the nodes are spatially close, and at any time, either both nodes or none of them harvests energy.' author: - - - '[^1]' bibliography: - 'Bibliography.bib' title: Energy Harvesting Wireless Networks with Correlated Energy Sources --- Introduction ============ Due to the tremendous increase in the number of battery-powered wireless communication devices over the past decade, harvesting of energy from natural resources has become an important research area as a means of prolonging the lifetime of such devices [@Paradiso; @Niyato]. Typical sources for energy harvesting (EH) include wind turbines, photovoltaic cells, thermoelectric generators, and mechanical vibration devices such as piezoelectric and electromagnetic devices [@EHsource]. EH technology is considered a promising solution especially for large-scale wireless sensor networks (WSNs), where the replacement of batteries is often difficult or cost-prohibitive [@Anthony]. However, due to the random nature of the harvested energy from ambient sources, the design of the system requires a careful analysis. In particular, depending on the spatial distribution of EH devices, the amount of energy harvested by different devices is typically correlated. For example, consider EH devices harvesting energy from tidal motion [@book]. The locations of two EH devices may be such that one is located at a tidal crest, while the other one is located in a tidal trough. In such a case, there may be a time delay, related to the wave period, between the generation of energy at the two devices. ![System Model[]{data-label="sysmod"}](sysmod.eps) In this paper, we aim to investigate the effects of the correlation between the EH processes at different EH devices in a wireless network. To this end, we consider a network with two EH nodes transmitting data to a common base station over a random access channel as shown in Fig. \[sysmod\]. Random channel access is a frequently used technique preferred for its distributed and stateless implementation, which is particularly suitable for low-power and low-duty-cycle sensor networks. In random channel access, the nodes transmit probabilistically over time, resulting in occasional packet collisions. However, packet collisions are especially harmful in EH networks due to scarce resources, and should be avoided as much as possible. In this work, we develop and analyze a simple threshold-based transmission policy which grants access to an EH node only when its battery state exceeds a given threshold value. Threshold values are selected based on the battery capacities and the correlation among the EH processes of the nodes to maximize the long-term throughput of the system.
To illustrate the importance of choosing these threshold values intelligently, consider the following example. Let both EH nodes have a battery capacity of two energy units. Suppose that the EH nodes are spatially close, so they harvest energy simultaneously when energy is available. If the transmission thresholds are such that both nodes transmit a packet whenever they have one unit of energy, transmissions always result in a collision, and thus, the total network throughput is essentially zero. Meanwhile, if the thresholds are selected such that one EH node transmits a packet whenever it has one unit of energy, and the other node transmits a packet whenever it has two units of energy, there will be a collision once every two transmissions. Hence, with the latter choice of thresholds the throughput increases to $0.5$ packets per energy arrival. We first derive the average throughput of the network by modeling the system as a discrete time Markov chain (DTMC) and obtaining its steady-state distribution. We then investigate two important special cases to obtain further insights into the selection of optimal transmission thresholds. In the first special case, only one node harvests energy at any time, while in the second case the nodes always harvest energy simultaneously. These two cases demonstrate completely different optimal threshold characteristics. Early research in the design of optimal energy management policies for EH networks considers an offline optimization framework [@off1; @off2], in which non-causal information on the exact realization of the EH processes is assumed to be available. In the online optimization framework [@on1; @on2; @on3], the statistics governing the random processes are assumed to be available at the transmitter, while their realizations are known only causally. The EH communication system is modeled as a Markov decision process [@on1], and dynamic programming can be used to optimize the throughput numerically. In the learning optimization framework, knowledge about the system behavior is further relaxed: not even the statistical knowledge about the random processes governing the system is assumed, and the optimal scheduling policy is learned over time [@learning]. In this paper we assume that EH nodes have no knowledge about the EH processes, and can only observe the amount of harvested energy in their own battery. Optimal threshold policies for an EH network are considered in [@game] based on a game theoretic approach. In [@dos], the authors optimize the throughput of a heterogeneous *ad hoc* EH network by formulating it as an optimal stopping problem. In [@basco2015] multiple energy harvesting sensor nodes are scheduled by an access point which does not know the energy harvesting process and battery states of the nodes. However, in these works the EH processes at different devices are assumed to be independent. System Model {#sec:SystemModel} ============ We adopt an interference model, where the simultaneous transmissions of the two EH nodes result in a collision, and the eventual loss of the transmitted packets at the base station. Each node is capable of harvesting energy from an ambient resource (solar, wind, vibration, RF, etc.), and storing it in a finite-capacity rechargeable battery. EH nodes have no additional power supplies. The nodes are data backlogged, and once they access the channel, they transmit until their battery is completely depleted. Note that assuming that the nodes are always backlogged allows us to obtain the saturated system throughput.
In the following, we neglect the energy consumption due to the generation of data to better illustrate the effects of correlated EH processes[^2]. Time is slotted into intervals of unit length. In each time slot, energy is harvested in units of $\delta$ joules. Let $E_{n}(t)$ be the energy harvested in time slot $t$ by node $n=1,2$. We assume that $E_{n}(t)$ is an independent and identically distributed (i.i.d.) Bernoulli process with respect to time $t$. However, at a given time slot $t$, $E_{1}(t)$ and $E_{2}(t)$ may not be independent. The EH rates are defined as follows: $$\begin{aligned} {\ensuremath{\mathsf{Pr}\left(E_{1}(t)=\delta,E_{2}(t)=\delta\right)}}=p_{11},\nonumber\\ {\ensuremath{\mathsf{Pr}\left(E_{1}(t)=\delta,E_{2}(t)=0\right)}}=p_{10}, \nonumber\\ {\ensuremath{\mathsf{Pr}\left(E_{1}(t)=0,E_{2}(t)=\delta\right)}}=p_{01},\nonumber\\ {\ensuremath{\mathsf{Pr}\left(E_{1}(t)=0,E_{2}(t)=0\right)}}=p_{00},\end{aligned}$$ where $p_{00}+p_{10}+p_{01}+p_{11}=1$[^3]. We assume that the transmission time $\varepsilon$ is much shorter than the time needed to harvest a unit of energy, i.e., $\varepsilon\ll 1$, and the nodes cannot simultaneously transmit and harvest energy. Transmissions take place at the beginning of time slots, and the energy harvested during time slot $t$ can be used for transmission in time slot $t+1$. The channel is non-fading, and has unit gain. Given transmission power $P$, the transmission rate, $r_n(t)$, $n=1,2$, is given by the Shannon rate, i.e., $r_n(t)=\log\left(1+P/N\right)$ (nats/sec/Hz), where $N$ is the noise power. We consider a deterministic transmission policy which only depends on the state of the battery of an EH node. Each EH node independently monitors its own battery level, and when it exceeds a pre-defined threshold, the node accesses the channel. If more than one node accesses the channel, a collision occurs and both packets are lost. Note that, by considering such an easy-to-implement and stateless policy, we aim to keep the computational burden on the EH devices low. The battery of each EH node has a finite capacity of $\bar{B}_{n}$, $n=1,2$. Let $B_n(t)$ be the state of the battery of EH node $n=1,2$ at time $t$. Node $n$ transmits whenever its battery state reaches $\gamma_n\leq \bar{B}_{n}$ joules, $n=1,2$. When node $n$ accesses the channel, it transmits at power $\frac{B_n(t)}{\varepsilon}$, i.e., the battery is completely depleted at every transmission. Hence, the time evolution of the battery states is governed by the following equation: $$\begin{aligned} B_{n}(t+1)=&\min\left\{\bar{B}_{n},\right. \nonumber\\ &\,\left. B_{n}(t)+E_{n}(t)\mathds{1}_{\left\{B_{n}(t)<\gamma_{n}\right\}}-\mathds{1}_{\left\{B_{n}(t)\geq\gamma_{n}\right\}}B_{n}(t) \right\}, \label{Bi}\end{aligned}$$ where $\mathds{1}_{a<b}=\begin{cases} 1 & \mbox{if }a<b\\ 0 & \mbox{if } a\geq b \end{cases}$ is the indicator function. Let $R_n(t)$ be the rate of [*successful*]{} transmissions, i.e., $$\begin{aligned} R_{1}(t)=&\log\left(1+\frac{B_1(t)/\varepsilon}{N}\right)\mathds{1}_{\left\{B_{1}(t)\geq\gamma_{1},B_{2}(t)<\gamma_{2}\right\}},\label{R1t}\\ R_{2}(t)=&\log\left(1+\frac{B_2(t)/\varepsilon}{N}\right)\mathds{1}_{\left\{B_{1}(t)<\gamma_{1},B_{2}(t)\geq\gamma_{2}\right\}}.\label{R2t}\end{aligned}$$ Maximizing the Throughput {#sec:ThroughputMaximization} ========================= We aim at maximizing the long-term average total throughput by choosing the transmission thresholds intelligently, taking into account the possible correlation between the EH processes.
Let $\bar{R}_n(\gamma_1,\gamma_2)$ be the long-term average throughput of EH node $n$ when the thresholds are selected as $\gamma_1,\gamma_2$, i.e., $$\bar{R}_n(\gamma_1,\gamma_2)=\lim_{T\rightarrow\infty}\frac{1}{T} \sum^{T}_{t=1}R_{n}(t), \,\, n=1,2.\label{avgRnt}$$ Then, the optimization problem of interest can be stated as $$\begin{aligned} \max_{\gamma_{1},\gamma_{2}}\ &\sum_{n}{\bar{R}_n(\gamma_1,\gamma_2)},\label{opt1}\\ &\text{s.t.}\ \ \ 1\leq\gamma_{n}\leq \bar{B}_{n}\ \ n=1,2.\label{opt2}\end{aligned}$$ In order to solve the optimization problem (\[opt1\])-(\[opt2\]), we first need to determine the long-term average total throughput in terms of the thresholds. Note that for given $\gamma_1,\gamma_2$, the battery states of the EH nodes, i.e., $\left(B_{1}(t),\ B_{2}(t)\right)\in \left\{0,\ldots, \gamma_1-1\right\}\times\left\{0,\ldots, \gamma_{2}-1\right\}$, constitute a finite two-dimensional discrete-time Markov chain (DTMC), depicted in Fig. \[markov\]. Let $\pi\left(i,\ j\right)={\ensuremath{\mathsf{Pr}\left(B_{1}(t)=i,\ B_{2}(t)=j\right)}}$ be the steady-state distribution of the Markov chain for $i=0,\ldots,\gamma_{1}-1$ and $j=0,\ldots,\gamma_{2}-1$. ![Associated DTMC with joint battery states[]{data-label="markov"}](markov.eps) \[M\] The steady state distribution of the DTMC associated with the joint battery state of the EH nodes is $\pi\left(i,\ j\right) = \frac{1}{\gamma_{1}\gamma_{2}},$ for $i=0,\ldots,\gamma_{1}-1$ and $j=0,\ldots,\gamma_{2}-1$. The balance equations for $i=1,\ldots,\gamma_{1}-1$ and $j=1,\cdots,\gamma_{2}-1$ are: $$\begin{aligned} \pi\left(i,\ j\right)(1-p_{00})=&\pi\left(i-1,\ j-1\right)p_{11}\nonumber\\ &+\pi\left(i-1,\ j\right)p_{10}+\pi\left(i,\ j-1\right)p_{01}. \label{eq:m1}\end{aligned}$$ Whenever the battery state of node $n$ reaches $\gamma_{n}-1$, in the next state transition, given that it harvests energy, there is a transmission. Since the transmission time is much shorter than a time slot, i.e., $\varepsilon\ll 1$, after reaching state $\gamma_{n}$, node $n$ immediately transmits and transitions back to state $0$. Thus, the balance equations for the states with a zero component are given as: $$\begin{aligned} \pi\left(i,\ 0\right)(1-p_{00}) = &\pi\left(i-1,\ 0\right)p_{10}+\pi\left(i,\ \gamma_{2}-1\right)p_{01}\nonumber\\ +&\pi\left(i-1,\ \gamma_{2}-1\right)p_{11},\ \ 1\leq i\leq\gamma_{1}-1\label{eq:m2},\end{aligned}$$ $$\begin{aligned} \pi\left(0,\ j\right)(1-p_{00})= &\pi\left(0,\ j-1\right)p_{01}+\pi\left(\gamma_{1}-1,\ j\right)p_{10}\nonumber\\ +&\pi\left(\gamma_{1}-1,\ j-1\right)p_{11},\ \ 1\leq j\leq\gamma_{2}-1\label{eq:m3},\\ \pi\left(0,\ 0\right)(1-p_{00})&=\pi\left(\gamma_{1}-1,\ \gamma_{2}-1\right)p_{11}\nonumber\\ +&\pi\left(\gamma_{1}-1,\ 0\right)p_{10}+\pi\left(0,\ \gamma_{2}-1\right)p_{01}.\label{eq:m4}\end{aligned}$$ From (\[eq:m1\])-(\[eq:m4\]), it is clear that if $p_{01},p_{10}\neq 0$ then $\pi\left(i,\ j\right)\neq 0$ for all $i=1,\ldots,\gamma_{1}-1$ and $j=1,\ldots,\gamma_{2}-1$. Then, it can be verified that $\pi\left(i,\ j\right) = \pi\left(l,\ k\right)$ satisfies (\[eq:m1\])-(\[eq:m4\]) for all $i,j,k$, and $l$. Hence, the theorem is proven since $\sum^{\gamma_{2}-1}_{j=0}\sum^{\gamma_{1}-1}_{i=0}\pi\left(i,\ j\right)=1$. Once the steady-state distribution of the DTMC is available, we can obtain the average throughput values. Let $\delta'=\frac{\delta/\varepsilon}{N}$.
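As a purely numerical aside before stating the throughput expressions, Theorem \[M\] is also easy to verify directly: each of the four joint-arrival events permutes the state space, so the transition matrix of the joint battery-state DTMC is doubly stochastic and its stationary distribution is uniform. A minimal sketch (Python/NumPy; the threshold and probability values are arbitrary examples) is given below.

```python
import numpy as np

g1, g2 = 4, 6                           # example thresholds gamma_1, gamma_2
p11, p10, p01 = 0.2, 0.3, 0.4           # example joint EH probabilities
p00 = 1.0 - p11 - p10 - p01

def step(i, j, e1, e2):
    """Next joint battery state when node n harvests e_n in {0, 1} units;
    reaching gamma_n triggers an immediate transmission, i.e. a wrap to 0."""
    return (i + e1) % g1, (j + e2) % g2

n = g1 * g2
P = np.zeros((n, n))                    # transition matrix over the g1*g2 states
for i in range(g1):
    for j in range(g2):
        for (e1, e2), p in [((1, 1), p11), ((1, 0), p10), ((0, 1), p01), ((0, 0), p00)]:
            ni, nj = step(i, j, e1, e2)
            P[i * g2 + j, ni * g2 + nj] += p

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print(np.allclose(pi, 1.0 / n))         # -> True: pi(i, j) = 1/(gamma_1 * gamma_2)
```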
\[lem\_avg\_thr\] The average throughput of EH nodes 1 and 2 for $p_{01},p_{10}\neq 0$ are given as $$\begin{aligned} \bar{R}_{1}\left(\gamma_{1},\gamma_{2}\right)=&\log(1+\gamma_{1}\delta')\nonumber\\ &\times\left(\left(p_{10}+p_{11}\right)\sum^{\gamma_{2}-2}_{j=0}\pi\left(\gamma_{1}-1,\ j\right)+p_{10}\pi\left(\gamma_{1}-1,\ \gamma_{2}-1\right)\right)\nonumber\\ =&\frac{\log(1+\gamma_{1}\delta')\left[(\gamma_{2}-1)\left(p_{10}+p_{11}\right)+p_{10}\right]}{\gamma_{1}\gamma_{2}},\\ \bar{R}_{2}\left(\gamma_{1},\gamma_{2}\right)=&\log(1+\gamma_{2}\delta')\nonumber\\ &\times\left(\left(p_{01}+p_{11}\right)\sum^{\gamma_{1}-2}_{i=0}\pi\left(i,\ \gamma_{2}-1\right)+p_{01}\pi\left(\gamma_{1}-1,\ \gamma_{2}-1\right)\right)\nonumber\\ =&\frac{\log(1+\gamma_{2}\delta')\left[(\gamma_{1}-1)\left(p_{01}+p_{11}\right)+p_{01}\right]}{\gamma_{1}\gamma_{2}}.\end{aligned}$$ Consider node 1. Note that whenever the batteries are in one of the states $\left(\gamma_{1}-1,\ j\right)$ for $j=0,\ldots,\gamma_{2}-2$, a unit of energy (of $\delta$ joules) is harvested at node $1$ with probability of $p_{10}+p_{11}$, and it transmits in the subsequent transition. Meanwhile, whenever the batteries are in state $\left(\gamma_{1}-1,\ \gamma_{2}-1\right)$, both nodes harvest a unit energy with probability $p_{11}$, and transmit in the subsequent transition resulting in a collision. Thus, in state $\left(\gamma_{1}-1,\ \gamma_{2}-1\right)$, EH node $1$ successfully transmits with probability $p_{10}$. Similar arguments apply for node 2. The following optimization problem is equivalent to (\[opt1\])-(\[opt2\]). $$\begin{aligned} \max_{\gamma_{1},\gamma_{2}}\ z(\gamma_{1},\gamma_{2})&\triangleq \frac{\log(1+\gamma_{1}\delta')\left[(\gamma_{2}-1)\left(p_{10}+p_{11}\right)+p_{10}\right]}{\gamma_{1}\gamma_{2}}\nonumber\\ +&\frac{\log(1+\gamma_{2}\delta')\left[(\gamma_{1}-1)\left(p_{01}+p_{11}\right)+p_{01}\right]}{\gamma_{1}\gamma_{2}},\label{optC1}\\ &\text{s.t.}\ \ \ 1\leq\gamma_{n}\leq\bar{B}_{n},\ \ n=1,2.\label{optC2}\end{aligned}$$ Note that (\[optC1\])-(\[optC2\]) is an integer program. Since our main motivation is to investigate the effects of the correlated energy arrivals on the operation of EH networks, rather than to obtain exact optimal thresholds, we may relax the optimization problem by omitting the integrality constraints. Nevertheless, the resulting relaxed optimization problem is still difficult to solve since the objective function is non-convex. Hence, in the following, we obtain the optimal solution for two important special cases. Special Cases {#sec:SpecialCases} ============= Depending on the energy source and relative locations of the nodes, correlation among their EH processes may significantly vary. For example, if mechanical vibration is harvested, and the nodes are located far from each other, e.g., one EH device on one side of the road whereas the other one on the other side of a two-lane road, only the EH device on the side of the road where a car passes may generate energy from its vibration. This is a case of [*high negative correlation*]{}. Meanwhile, if solar cells are used as an energy source, EH processes at nearby nodes will have [*high positive correlation*]{}. The Case of High Negative Correlation {#sec:HighNegativeCorrelation} ------------------------------------- We first analyze the case of high negative correlation. In particular, we have $p_{00}=p_{11}=0$, $p_{10}=p$ and $p_{01}=1-p$ with $0<p<1$. Note that only one EH device generates energy at a given time. 
Let $z^{(-)}\left(\gamma_{1},\gamma_{2}\right)$ be the total throughput of the EH network when the thresholds are $\gamma_{1},\gamma_{2}$, obtained by inserting the values of $p_{00},p_{11},p_{10},p_{01}$ in (\[optC1\]). We have $$\begin{aligned} z^{(-)}\left(\gamma_{1},\gamma_{2}\right) = &\frac{\log(1+\gamma_{1}\delta')p}{\gamma_{1}}+\frac{\log(1+\gamma_{2}\delta')(1-p)}{\gamma_{2}}. \label{optN}\end{aligned}$$ The following lemma establishes that an EH device transmits whenever it harvests a single unit of energy. Interestingly, the optimal thresholds prevent any collisions between transmissions of the EH devices, since at a particular time slot only one EH device has sufficient energy to transmit. \[HNC\] The optimal solution of (\[optC1\])-(\[optC2\]) when $p_{00}=p_{11}=0$, $p_{10}=p$ and $p_{01}=1-p$ with $0<p<1$, is $\gamma^{*}_{1}=1$, $\gamma^{*}_{2}=1$. Assume that $\gamma_{1}$ and $\gamma_{2}$ are non-negative continuous variables. Then, the gradient of $z^{(-)}\left(\gamma_{1},\gamma_{2}\right)$ is: $$\begin{aligned} \nabla z^{(-)}\left(\gamma_{1},\gamma_{2}\right) = \left[\frac{p \left(\delta'\gamma_{1}-\left(1+\delta'\gamma_{1}\right)\log \left(1+\gamma _1\delta'\right)\right)}{\gamma _{1}{}^2\left(1+\delta'\gamma_{1}\right)}\right.,\nonumber\\ \left.\frac{(1-p) \left(\delta'\gamma_{2}-\left(1+\delta'\gamma_{2}\right)\log \left(1+\gamma _2\delta'\right)\right)}{\gamma _{2}{}^2\left(1+\delta'\gamma_{2}\right)}\right].\end{aligned}$$ Note that $\nabla z^{(-)}\left(\gamma_{1},\gamma_{2}\right)<0$ for all $\gamma_{1}> 0$, $\gamma_{2}> 0$ and $p$. Since $\nabla z^{(-)}<0$, we have $z^{(-)}\left(\gamma_{1},\gamma_{2}\right)>z^{(-)}\left(\hat{\gamma}_{1},\hat{\gamma_{2}}\right)$ for every $\gamma_{1}<\hat{\gamma}_{1}$ and $\gamma_{2}<\hat{\gamma}_{2}$, so $z^{(-)}$ is maximized at the smallest feasible thresholds, and the lemma follows. The Case of High Positive Correlation {#sec:HighPositiveCorrelation} ------------------------------------- ![Transitions of joint battery states for the high positive correlation case.[]{data-label="HPCex"}](HPCex.eps) Now, we consider the case of high positive correlation. In particular, we investigate the optimal solution when the EH process parameters are $p_{01}=p_{10}=0$, $p_{11}=p$ and $p_{00}=1-p$ with $0<p<1$; that is, either both EH devices generate energy or neither of them does. Note that in Theorem \[M\] the steady state distribution of the DTMC is derived assuming that all of the states are visited. However, in the case of high positive correlation, only a part of the state space is visited. In order to better illustrate this case, consider an EH network with thresholds $\gamma_{1}=4$ and $\gamma_{2}=6$. The state space of the corresponding DTMC is given in Fig. \[HPCex\]. Large solid and empty circles represent visited and unvisited battery states, respectively. The solid lines represent the transitions of battery states when the thresholds are not yet reached, and the dotted lines represent transitions when at least one of the nodes transmits. Also, arrows show the direction of transitions between the states. Since only a subset of the state space is visited infinitely often, the average throughputs given in Lemma \[lem\_avg\_thr\] are no longer valid. We establish the average throughput of the EH network with high positive correlation by the following lemma.
The average throughput $\bar{R}_{n}^{(+)}(\gamma_{1},\ \gamma_{2})$ of node $n=1,2$ for $p_{01}=p_{10}=0$, $p_{11}=p$ and $p_{00}=1-p$ is given as $$\begin{aligned} \bar{R}_{n}^{(+)}(\gamma_{1},\ \gamma_{2})=&p\cdot\frac{\left[\frac{LCM(\gamma_{1},\ \gamma_{2} )}{\gamma_{n}}-1\right]}{LCM(\gamma_{1},\ \gamma_{2} )}\cdot\log(1+\gamma_{n}\delta'),\,\,n=1,2 \label{eq:avg_thr_positive}\end{aligned}$$ where $LCM(\gamma_{1}, \gamma_{2})$ is the [least common multiple]{} of $\gamma_{1}$ and $\gamma_{2}$. Due to our transmission policy, EH node $n$ transmits whenever its battery level reaches $\gamma_{n}$, $n=1,2$. Note that both nodes reach their respective thresholds simultaneously every $LCM(\gamma_{1},\ \gamma_{2})$ EH events. Since they transmit simultaneously, a collision occurs, and they both exhaust their batteries, i.e., the joint battery state transitions into state $(0, 0)$. The process repeats afterwards. Hence, the renewal period of this random process is $LCM(\gamma_{1},\ \gamma_{2})$. In every renewal period, EH node $n=1,2$ makes $\frac{LCM(\gamma_{1},\ \gamma_{2} )}{\gamma_{n}}-1$ successful transmissions. Hence, by using renewal reward theory, and noting that on average a unit of energy is harvested in a fraction $p<1$ of the time slots, we obtain (\[eq:avg\_thr\_positive\]). Let $z^{(+)}(\gamma_{1},\ \gamma_{2})=\bar{R}_{1}^{(+)}(\gamma_{1},\ \gamma_{2})+\bar{R}_{2}^{(+)}(\gamma_{1},\ \gamma_{2})$ be the total throughput of a system with high positive correlation. Note that $z^{(+)}(\gamma_{1},\ \gamma_{2})$ is a non-convex function with respect to $\gamma_{1}$ and $\gamma_{2}$. Hence, in the following, we analyze the system in two limiting cases, i.e., when the normalized energy harvested per slot, $\delta'$, is either very small or very large. ### Small Values of $\delta'$ {#sec:SmallValuesOfDelta} For small values of $\delta'$, $\log(1+\gamma_{n}\delta')$ can be approximated by $\gamma_{n}\delta'$. Let $GCD(\gamma_1, \gamma_2)$ be the [*greatest common divisor*]{} of $\gamma_1$ and $\gamma_2$. By substituting $LCM(\gamma_{1},\ \gamma_{2} )=\frac{\gamma_{1}\gamma_{2} }{GCD(\gamma_{1},\ \gamma_{2} )}$ we obtain $$\begin{aligned} z^{(+)}\left(\gamma_{1},\gamma_{2}\right) &= 2\delta'p-GCD(\gamma_{1},\ \gamma_{2} )\left(\frac{1}{\gamma_{1}}+\frac{1}{\gamma_{2}}\right)\delta'p.\label{optPL}\end{aligned}$$ Note that maximizing (\[optPL\]) is equivalent to minimizing $GCD(\gamma_{1},\ \gamma_{2} )\left(\frac{1}{\gamma_{1}}+\frac{1}{\gamma_{2}}\right)$. Lemma \[HPCL\] establishes that it is optimal to choose the thresholds as large as possible as long as the greatest common divisor of the two thresholds is equal to $1$. This is due to the fact that the objective function in (\[optPL\]) is linear, and the optimum thresholds minimize the number of collisions. \[HPCL\] The optimal thresholds for the case of high positive correlation for small values of $\delta'$, and for $\bar{B}_{2}>\bar{B}_{1}$, are $\gamma^{*}_{1}=\bar{B}_{1}$ and $\gamma^{*}_{2}$ equal to the largest $j\in\{1,\ldots,\bar{B}_{2}\}$ such that $GCD(\bar{B}_{1},j)=1$. Note that $0<\frac{1}{\gamma_{1}}+\frac{1}{\gamma_{2}}\leq 2$, for $1\leq\gamma_n\leq \bar{B}_n$, $n=1,2$. Let $\Gamma=\{(\gamma_1,\gamma_2): GCD(\gamma_1,\gamma_2)=1\}$. Note that if $(\gamma_1,\gamma_2)\notin \Gamma$, then $GCD(\gamma_1,\gamma_2)\geq 2$. Hence, it can be shown that $z^{(+)}\left(\gamma_{1},\gamma_{2}\right)\geq z^{(+)}\left(\gamma_{1}',\gamma_{2}'\right)$, for all $(\gamma_1,\gamma_2)\in \Gamma$, and $(\gamma_1',\gamma_2')\notin \Gamma$.
Among $(\gamma_1,\gamma_2)\in\Gamma$, we choose the one that minimizes $\frac{1}{\gamma_{1}}+\frac{1}{\gamma_{2}}$, which proves the lemma. ### Large Values of $\delta'$ {#sec:LargeValuesOfDelta} For large values of $\delta'$, $\log(1+\gamma_{n}\delta')$ can be approximated by $\log(\gamma_{n}\delta')$. Also, by substituting $LCM(\gamma_{1},\ \gamma_{2} )=\frac{\gamma_{1}\gamma_{2} }{GCD(\gamma_{1},\ \gamma_{2} )}$ in $z^{(+)}(\gamma_{1},\ \gamma_{2})$ we have: $$\begin{aligned} z^{(+)}\left(\gamma_{1},\gamma_{2}\right) =& \frac{\left(\gamma_{2}-GCD(\gamma_{1},\ \gamma_{2} )\right)\log(\gamma_{1}\delta')p}{\gamma_1\gamma_2}\nonumber\\ &+\frac{\left(\gamma_{1}-GCD(\gamma_{1},\ \gamma_{2} )\right)\log(\gamma_{2}\delta')p}{\gamma_1\gamma_2}.\label{optPH}\end{aligned}$$ The optimal thresholds for this case are established in Lemma \[HPCH\]. Since the objective function in (\[optPH\]) has the property of *diminishing returns*, i.e., the rate of increase in the function decreases for higher values of its parameters, each device would prefer to transmit more often, equivalently shorter messages, using less energy. However, transmissions are scheduled every time a node exceeds its threshold, which dictates small thresholds. When both EH devices transmit with small thresholds, there will be a large number of collisions, so the following lemma suggests that the aggregate throughput is maximized when one EH device transmits short messages, whereas the other transmits long messages. \[HPCH\] The optimal thresholds for the case of high positive correlation for large values of $\delta'$ are $\gamma^{*}_{1}=\bar{B}_{1}$, $\gamma^{*}_{2}=1$ for $\bar{B}_{1}>\bar{B}_{2}$, and they are $\gamma^{*}_{1}=1$, $\gamma^{*}_{2}=\bar{B}_{2}$ for $\bar{B}_{2}>\bar{B}_{1}$. Let ${\hat z}$ be an upper envelope function for $z^{(+)}$, obtained by substituting $GCD(\gamma_{1},\ \gamma_{2} )= 1$ in (\[optPH\]): $$\begin{aligned} {\hat z}\left(\gamma_{1},\gamma_{2}\right) = \frac{\left(\gamma_{2}-1\right)\log(\gamma_{1}\delta')p}{\gamma_1\gamma_2}+\frac{\left(\gamma_{1}-1\right)\log(\gamma_{2}\delta')p}{\gamma_1\gamma_2}.\end{aligned}$$ Note that since $GCD(\gamma_{1},\ \gamma_{2} )\geq 1$, for every value of $\gamma_{1}$ and $\gamma_{2}$, we have ${\hat z}\left(\gamma_{1},\gamma_{2}\right)\geq z^{(+)}\left(\gamma_{1},\gamma_{2}\right)$. First, we maximize ${\hat z}$ for a given $\gamma_{2}$ by obtaining the corresponding optimal $\gamma_{1}$. Taking the partial derivative of ${\hat z}$ with respect to $\gamma_1$, we obtain: $$\begin{aligned} \frac{\partial {\hat z} }{\partial \gamma_{1}} = \frac{p}{\gamma _1^2 \gamma _2} \left[ \log \left(\gamma _1 \delta' \right)+\log \left(\gamma _2 \delta' \right)-\gamma _2 \left(\log \left(\gamma _1 \delta' \right)-1\right)-1 \right]. \label{eq:par_derv_zup}\end{aligned}$$ Note that $\gamma_2\in \{1,\ldots, {\bar B}_2\}$. If $\gamma_2=1$, (\[eq:par\_derv\_zup\]) reduces to $$\begin{aligned} \frac{\partial {\hat z}\left(\gamma_{1},1\right) }{\partial \gamma_{1}} =& \frac{p}{\gamma _1^2 \gamma _2}\log\delta'>0.\end{aligned}$$ Since $\frac{\partial {\hat z}\left(\gamma_{1},1\right) }{\partial \gamma_{1}}>0$, the maximum value of ${\hat z}$ is attained when $\gamma_1=\bar{B}_{1}$.
For $\gamma_2=2$, (\[eq:par\_derv\_zup\]) reduces to $$\begin{aligned} \frac{\partial {\hat z}\left(\gamma_{1},2\right) }{\partial \gamma_{1}} = &\frac{p}{\gamma _1^2 \gamma _2}\left(-\log \left(\gamma _1 \delta' \right)+\log (2 \delta' )+1\right)\nonumber\\ &= \left\{ \begin{array}{rl} <0 & \text{if } \gamma_{1} > 2e,\\ \geq 0 & \text{if } \gamma_{1} \leq 2e, \end{array} \right.\end{aligned}$$ where $e$ is Euler’s number. Since $\frac{\partial^{2} {\hat z}\left(2e,2\right) }{\partial \gamma_{1}{}^2}=-\frac{1}{16 e^3}<0$, the maximum value of ${\hat z}$ is attained when $\gamma_1=2e$. Finally, if $\gamma_2\geq 3$, it can be shown that (\[eq:par\_derv\_zup\]) is always negative as long as $\delta'>3e^2$. Hence, the maximum value of ${\hat z}$ is attained for $\gamma_1=1$, if $\gamma_{2}\geq 3$. By comparing the optimal values of $\hat z$ for all $\gamma_2\in \{1,\ldots, {\bar B}_2\}$, one can show that $\hat z$ is maximized for $(\gamma_1,\gamma_2)=\left(\bar{B}_{1},\ 1\right)$ when $\bar{B}_1>\bar{B}_2$ and $(\gamma_1,\gamma_2)=\left(1,\ \bar{B}_2\right)$ when $\bar{B}_2>\bar{B}_1$. Since $GCD(1,\ \bar{B}_2)=GCD(\bar{B}_1,\ 1)=1$, and $\hat z=z^{(+)}$ when $GCD(\gamma_1,\gamma_2)=1$, it follows that the optimal points for ${\hat z}$ are also optimal for $z^{(+)}$. Numerical Results {#sec:NumericalResults} ================= We first verify (\[optC1\]) and (\[eq:avg\_thr\_positive\]) by Monte Carlo simulations. In the simulation, we model the battery states using equation (\[Bi\]). At each time slot $t$, we generate the joint EH process $(E_1(t),E_2(t))$ randomly. We run the simulation for $10^4$ time slots and calculate the expected throughput by evaluating the time average of the instantaneous rates as in (\[avgRnt\]). Fig. \[figerr\] depicts the reliability of our analytical derivations. In particular, we measure both the percent relative error (%RE), which is defined as $\%\text{RE} = \frac{\text{Analytical value}-\text{Simulation value}}{\text{Analytical value}}\times 100$, and the absolute error (%AE), which is defined as %AE = $(\text{Analytical value}-\text{Simulation value})\times 100$, for $\gamma_2=9$ versus $\gamma_{1}$. The results show a good match between the analytical and simulation results. ![%AE and %RE versus $\gamma_{1}$ with $\gamma_2=9$ and $\delta'=30$.[]{data-label="figerr"}](err.eps) Next, we verify the optimal thresholds by numerically evaluating (\[optC1\]) and (\[eq:avg\_thr\_positive\]) for the cases of high negative and high positive correlation. We assume that $\bar{B}_{1}=\bar{B}_{2}=10$ and $p=0.5$. The aggregate throughput of the network with respect to the thresholds $\gamma_{1}$ and $\gamma_{2}$ for the case of high negative correlation is depicted in Fig. \[figHNC\]. It can be seen that the optimal thresholds are $\gamma^{*}_{1}=1$, $\gamma^{*}_{2}=1$, which is in accordance with Lemma \[HNC\]. ![Expected total throughput for high negative correlation with $\delta'=5$.[]{data-label="figHNC"}](figHNC.eps) Fig. \[figHPCL\] illustrates the aggregate throughput of the network for the case of high positive correlation with respect to $\gamma_{1}$ and $\gamma_{2}$ for $\delta'=0.04$. The abrupt drops in the value of the aggregate throughput occur where $GCD(\gamma_{1},\ \gamma_{2})$ jumps by at least a factor of two, which is consistent with Lemma \[HPCL\]. ![Expected total throughput for high positive correlation with $\delta'=0.04$.[]{data-label="figHPCL"}](figHPCL.eps) In Fig. \[figHPCH\], the aggregate throughput is depicted for the case of high positive correlation with respect to $\gamma_{1}$ and $\gamma_{2}$ for $\delta'=30$.
As expected from the results established in Lemma \[HPCH\], the optimal thresholds are either $(\gamma^{*}_{1},\ \gamma^{*}_{2})=(1,\ 10)$ or $(\gamma^{*}_{1},\ \gamma^{*}_{2})=(10,\ 1)$. ![Expected total throughput for high positive correlation with $\delta'=30$.[]{data-label="figHPCH"}](figHPCH.eps) Conclusion {#sec:Conclusion} ========== We have investigated the effects of correlation among the EH processes of different EH nodes as encountered in many practical scenarios. We have developed a simple threshold based transmission policy to coordinate EH nodes’ transmissions in such a way to maximize the long-term aggregate throughput of the network. In the threshold policy, nodes have no knowledge about each other, and at any given time they can only monitor their own battery levels. Considering various assumptions regarding the EH statistics and the amount of the harvested energy, the performance of the proposed threshold policy is studied. The established lemmas in Section \[sec:ThroughputMaximization\] show that different assumptions about the underlying EH processes and the amount of the harvested energy demonstrate completely different optimal threshold characteristics. As our future work, we will investigate the cases when data queues are not infinitely backlogged and when the channels exhibit fading properties. [^1]: This work was in part supported by EC H2020-MSCA-RISE-2015 programme under grant number 690893, by Tubitak under grant number 114E955 and by British Council Institutional Links Program under grant number 173605884. [^2]: For example, data may be generated by a sensor continuously monitoring the environment. Then, the energy consumption of a sensor may be included as a continuous drain in the energy process, but due to possible energy outages, the data queues may no longer be backlogged. We leave the analysis of this case as a future work. [^3]: Note that if $p_{00}=p_{10}=p_{01}=p_{11}=1/4$, then EH nodes generate energy independently from each other.
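For reference, the Monte Carlo procedure described in Section \[sec:NumericalResults\] is straightforward to reproduce. The sketch below (Python/NumPy; the parameter values are illustrative choices, not the exact settings used for the figures) iterates the battery recursion (\[Bi\]) and time-averages the successful rates as in (\[avgRnt\]):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(g1, g2, p11, p10, p01, B1=10, B2=10, delta_p=30.0, T=10_000):
    """Monte Carlo estimate of the saturated throughputs (nats/s/Hz) of the two
    nodes for thresholds (g1, g2); delta_p plays the role of delta'."""
    p00 = 1.0 - p11 - p10 - p01
    caps, gammas = np.array([B1, B2]), np.array([g1, g2])
    B = np.array([0, 0])                          # battery states in units of delta
    R = np.zeros(2)
    for _ in range(T):
        tx = B >= gammas                          # nodes whose battery reached the threshold
        if tx[0] and not tx[1]:
            R[0] += np.log(1.0 + B[0] * delta_p)  # successful transmission of node 1
        elif tx[1] and not tx[0]:
            R[1] += np.log(1.0 + B[1] * delta_p)  # successful transmission of node 2
        k = rng.choice(4, p=[p11, p10, p01, p00]) # joint energy arrival in this slot
        E = np.array([(1, 1), (1, 0), (0, 1), (0, 0)][k])
        # eq. (Bi): transmitting nodes deplete their battery and do not harvest
        B = np.where(tx, 0, np.minimum(caps, B + E))
    return R / T

# high positive correlation, large delta': (1, 10) clearly outperforms (2, 2)
print(sum(simulate(1, 10, p11=0.5, p10=0.0, p01=0.0)))   # roughly 1.5
print(sum(simulate(2, 2,  p11=0.5, p10=0.0, p01=0.0)))   # all transmissions collide -> ~0
```

In line with Lemma \[HPCH\], the asymmetric thresholds avoid most collisions, whereas equal small thresholds lead to simultaneous transmissions and a vanishing aggregate throughput.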
--- abstract: 'We characterize the kinematic and chemical properties of $\sim$3,000 Sagittarius (Sgr) stream stars, including K-giants, M-giants, and BHBs, selected separately from SEGUE-2, LAMOST, and SDSS in Integrals-of-Motion space. The orbit of the Sgr stream is quite clear from the velocity vectors in the $X$-$Z$ plane. The K-giants and M-giants show that the apogalacticon of the trailing stream is $\sim$ 100 kpc. The metallicity distributions of the Sgr K-, M-giants, and BHBs show that the M-giants are on average the most metal-rich population, followed by the K-giants and BHBs. All of the K-, M-giants, and BHBs indicate that the trailing arm is on average more metal-rich than the leading arm, and the K-giants show that the Sgr debris is the most metal-poor part. The $\alpha$-abundance of Sgr stars exhibits a trend similar to that of the Galactic halo stars at lower metallicity (\[Fe/H\] $<\sim$ $-$1.0 dex), and then evolves down to lower \[$\alpha$/Fe\] than disk stars at higher metallicity, which is close to the $\alpha$-element evolution pattern of Milky Way dwarf galaxies. We find that the $V_Y$ and metallicity of the K-giants have gradients along the line-of-sight direction from the Galactic center in the $X$-$Z$ plane, and the K-giants show that $V_Y$ increases with metallicity at \[Fe/H\] $>\sim-$1.5 dex. After dividing the Sgr stream into the bright and faint streams according to their locations in equatorial coordinates, the K-giants and BHBs show that the bright and faint streams have different $V_Y$ and metallicities; the bright stream is on average higher in both $V_Y$ and metallicity than the faint stream.' author: - Chengqun Yang - 'Xiang-Xiang Xue' - Jing Li - Chao Liu - Bo Zhang - 'Hans-Walter Rix' - Lan Zhang - Gang Zhao - Hao Tian - Jing Zhong - Qianfan Xing - Yaqian Wu - Chengdong Li - 'Jeffrey L. Carlin' - Jiang Chang bibliography: - 'Bibtex.bib' title: 'Tracing Kinematic and Chemical Properties of Sagittarius Stream by K-Giants, M-Giants, and BHB stars' --- Introduction {#sec:intro} ============ The disrupting Sagittarius (Sgr) dwarf spheroidal galaxy (dSph) was discovered in the work on the Galactic bulge of @Ibata94; it has a heliocentric distance of $\sim$25 kpc and is centered at coordinates of $l=5.6^\circ$ and $b=-14.0^\circ$ [@Ibata97]. For a dwarf galaxy, such a close distance to the Galactic center means it is suffering a huge tidal force from the Milky Way. Subsequently, the Sgr stream was found [@Yanny00; @Ibata01] and has been traced over 360$^\circ$ on the sky [@Majewski03; @Belokurov06], which indicates that it is a powerful tool for exploring the Milky Way [@Ibata97; @Majewski99]. Thanks to the early detections of the Sgr tidal stream from the Two Micron All Sky Survey [2MASS; @Skrutskie06] and the Sloan Digital Sky Survey [SDSS; @York00], the morphology of the Sgr stream is known in detail [@Newberg02; @Newberg03; @Majewski03; @Majewski04]. Earlier models and observations predicted the Galactocentric distance of the Sgr stream to be about $\sim$20-60 kpc [@LM10; @Majewski03]. Recently, @Belokurov14, @Koposov15 and @Hernitschek17 found that the Sgr trailing stream reaches $\sim100$ kpc from the Sun. @Sesarl17 and @Li19 used RR Lyrae stars and M-giants to find that the trailing stream even extends to a heliocentric distance of $\sim$ 130 kpc at $\widetilde{\Lambda}_\odot \sim170^\circ$[^1]. Additionally, @Belokurov06 and @Koposov12 found that the Sgr stream has a faint bifurcation called the faint stream, which is always on the same side of the bright stream at a nearly constant angular separation and without crossing it [@NC16].
It has been recognized that the Sgr dSph has a complex star formation history. @Ibata95 showed that Sgr contains a strong intermediate-age population with age $\sim4$ - 8 Gyr and metallicity $\sim-$0.2 to $-0.6$ dex, as well as its own globular cluster system. @Siegel07 demonstrated that Sgr has had at least 4 - 5 star formation bursts, including an old population (13 Gyr and \[Fe/H\] = $-1.8$ dex, from main sequence (MS) and red-giant branch (RGB) stars); at least two intermediate-age populations (4 - 6 Gyr with \[Fe/H\] = $-0.4$ to $-0.6$ dex, from RGB stars); and a 2.3 Gyr population near solar abundance (\[Fe/H\] = $-0.1$ dex, from main sequence turn-off (MSTO) stars). @Carlin18 picked out 42 Sgr stream stars from the LAMOST M-giants and obtained high-resolution observations of them; they found that stars in the trailing and leading streams show systematic differences in \[Fe/H\], and that the $\alpha$-abundance patterns of the Sgr stream are similar to those observed in the Sgr core and in other dwarf galaxies such as the Large Magellanic Cloud and the Fornax dwarf spheroidal galaxy. With the second data release of the $Gaia$ mission [@Gaia18], it is becoming possible to search for Galactic halo substructures in 6D phase space. Xue, X.-X et al. (2019, in preparation, X19 hereafter) took advantage of the 6D information to obtain about 3,000 Sgr stream members with high reliability in Integrals-of-Motion (IoM) space, which is the largest spectroscopic Sgr stream sample obtained yet. Based on this sample, we characterize the properties of the Sgr stream in more detail. This paper is structured as follows: in Section \[sec:data&method\], we describe our Sgr sample and the method X19 used for selecting the Sgr members. In Section \[sec:Sgr\], we present the kinematic and chemical properties of the Sgr sample. Finally, a brief summary is given in Section \[sec:summary\]. DATA and METHOD {#sec:data&method} =============== Data ---- The Sgr stream sample consists of K-giants, M-giants, and blue horizontal branch stars (BHBs). The K-giants are from the Sloan Extension for Galactic Understanding and Exploration 2 [SEGUE-2; @Yanny09] and the fifth data release of the Large Sky Area Multi-Object Fibre Spectroscopic Telescope [LAMOST DR5; @Zhao12; @Cui12; @Luo12], and their distances were estimated with the Bayesian method of @Xue14. The M-giants are selected from LAMOST DR5 through 2MASS+WISE photometric selection criteria, and their distances were calculated through the $(J-K)_0$ color-distance relation of @Li16 [@Li19; @Zhong19]. The BHBs are chosen from SDSS by color and Balmer-line cuts, and their distances are easy to estimate because of the nearly constant absolute magnitude of BHB stars [@Xue11]. We calibrated the distances of the K-giants, M-giants, and BHBs with $Gaia$ DR2 parallaxes rather than with the $Gaia$ distances estimated by @BJ18, because @BJ18 caution that their mean distances to distant giants are underestimated: such stars have very large fractional parallax uncertainties, so their estimates are prior-dominated, and the prior is dominated by the nearer dwarfs in the model. Only stars with good parallaxes ($\delta \varpi/\varpi <20\%$) and good distances ($\delta d/d <20\%$) are used in the calibration, which allows us to compare the parallax with $1/d$ and minimize the possible bias from inverting the parallax. The Sgr stream members themselves are of little use here, because they are too faint to have good parallaxes; we therefore used the halo stars from which the streams were identified.
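The calibration described above amounts to fitting a single scale factor between $1/d$ and $\varpi$ for the subset of stars with small fractional errors. The sketch below illustrates one simple way this could be implemented (a slope through the origin); it is a minimal illustration under stated assumptions, not the exact procedure of the paper, and the catalogue column names in the usage comment are placeholders.

```python
import numpy as np

def distance_scale_factor(parallax_mas, parallax_err, dist_kpc, dist_err):
    """Least-squares scale between 1/d and the Gaia parallax, using only
    stars with <20% fractional errors in both quantities.

    Since a parallax of 1 mas corresponds to a distance of 1 kpc, 1/d in
    kpc^-1 is directly comparable to the parallax in mas.  A scale s > 1
    means the spectro-photometric distances are underestimated."""
    par = np.asarray(parallax_mas, dtype=float)
    per = np.asarray(parallax_err, dtype=float)
    d = np.asarray(dist_kpc, dtype=float)
    de = np.asarray(dist_err, dtype=float)
    inv_d = 1.0 / d
    good = (per / np.abs(par) < 0.2) & (de / d < 0.2)
    # Slope of inv_d versus parallax through the origin (least squares).
    return np.sum(inv_d[good] * par[good]) / np.sum(par[good] ** 2)

# Hypothetical usage with a pre-loaded table of halo K-giants:
# s = distance_scale_factor(kg["parallax"], kg["parallax_error"],
#                           kg["d_kpc"], kg["d_err"])
# kg["d_kpc_cal"] = kg["d_kpc"] * s   # multiply by s to match the parallax scale
```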
We found that the distances of the K-giants are underestimated by 15% and those of the M-giants overestimated by 30%, while the BHBs show no bias, as shown in the left panels of Figure \[sys\]. However, these systematic biases do not apply to the Sgr stream members, because the difference between parallax and $1/d$ decreases with $G$ for both K-giants and M-giants, and most Sgr stream members are fainter than $G\sim15^{\rm{m}}$ (right panels of Figure \[sys\]). Besides the distance $d$, our sample also includes the equatorial coordinates $(\alpha, \delta)$, heliocentric radial velocities $hrv$, and proper motions ($\mu_{\alpha}, \mu_{\delta}$). The $hrv$ of the LAMOST K-giants are obtained with ULySS [@Wu11], the $hrv$ of the LAMOST M-giants are calculated by @Zhong19, and the $hrv$ of the SEGUE K-giants and SDSS BHBs are from the SEGUE Stellar Parameter Pipeline [SSPP; @Lee08a; @Lee08b]. The proper motions ($\mu_{\alpha}, \mu_{\delta}$) are from $Gaia$ DR2, obtained by cross-matching with a radius of $1\arcsec$. The chemical abundances (the overall metallicity \[M/H\] and the $\alpha$-element abundance $[\alpha$/M\]) of the LAMOST K- and M-giants are from @Zhang19, who introduced a machine learning program called the Stellar LAbel Machine (SLAM) to transfer the APOGEE DR15 [@Majewski17] stellar labels to LAMOST DR5 stars. The metallicities \[Fe/H\] of the SDSS BHBs and SEGUE K-giants are estimated by the SSPP. Since in the APOGEE data \[M/H\] and \[Fe/H\] are calibrated using the same method [@Holtzman15; @Feuillet16], we use \[Fe/H\] to represent the metallicity of all stars and do not distinguish between the \[M/H\] of LAMOST stars and the \[Fe/H\] of SDSS/SEGUE stars hereafter. As for the measurement errors of our sample, the LAMOST K-giants have a median distance precision of 13%, a median radial velocity error of 7 km s$^{-1}$, a median metallicity error of 0.14 dex, and a median $[\alpha$/Fe\] error of 0.05 dex. The SEGUE K-giants have a median distance precision of 16% [@Xue14], a median radial velocity error of 2 km s$^{-1}$, and a typical metallicity error of 0.12 dex. The SDSS BHBs do not have individual distance errors, but their distances are expected to be good to better than 10% owing to their nearly constant absolute magnitude [@Xue08]. The median radial velocity error of the BHBs is 6 km s$^{-1}$, and their typical metallicity error is 0.22 dex. Individual distance errors are not available for the LAMOST M-giants either; @Li16 report a typical distance precision of 20%. The LAMOST M-giants have a typical radial velocity error of about 5 km s$^{-1}$ [@Zhong15], a median metallicity error of 0.17 dex, and a median $[\alpha$/M\] error of 0.06 dex. The proper motions of the K-giants, M-giants, and BHBs are derived from $Gaia$ DR2, which is good to 0.2 mas yr$^{-1}$ at G = 17$^{\rm{m}}$. Additionally, there are about 400 K-giants in common between the LAMOST and SEGUE samples, of which about 100 belong to the Sgr streams. Using these common K-giants, we find that the LAMOST K-giants have a $-$8.1 km s$^{-1}$ offset in radial velocity relative to the SEGUE K-giants, while the two surveys have consistent metallicities and distances. In this paper, we have therefore added 8.1 km s$^{-1}$ to the LAMOST K-giant radial velocities to remove this systematic offset with respect to the SEGUE K-giants. In the analysis of the Sgr streams, the duplicate K-giants are removed. See Table \[t\_catalog\] for an example of the measurements and corresponding uncertainties.
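The $-$8.1 km s$^{-1}$ LAMOST$-$SEGUE offset is derived from the common K-giants; the short sketch below shows how such an offset could be measured and applied. The paper does not state whether a mean or a median was used, so the median here is an assumption, and the array names are placeholders.

```python
import numpy as np

def rv_offset(hrv_lamost, hrv_segue):
    """Radial-velocity offset (LAMOST minus SEGUE) from stars observed by
    both surveys; a median is used as a robust estimator (an assumption)."""
    return np.median(np.asarray(hrv_lamost) - np.asarray(hrv_segue))

# Hypothetical usage with the ~400 common K-giants:
# offset = rv_offset(common["hrv_lamost"], common["hrv_segue"])  # about -8.1 km/s
# lamost_kg["hrv"] = lamost_kg["hrv"] - offset                   # i.e. add +8.1 km/s
```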
Integrals of Motion and Friends-of-Friends Algorithm ---------------------------------------------------- To search for stars with similar orbits using a friends-of-friends (FoF) algorithm, X19 defined five IoM parameters: the eccentricity $e$, the semimajor axis $a$, the direction of the orbital pole $(l_{\rm{orb}}, b_{\rm{orb}})$, and the angle between the apocenter and the projection of the $X$-axis on the orbital plane, $l_{\rm{apo}}$. They then calculated the “distance" between any two stars in the normalized space of $(e, a, l_{\rm{orb}}, b_{\rm{orb}}, l_{\rm{apo}})$ and used FoF to find groups of stars with similar orbits according to this “distance". The five IoM parameters $(e,a,l_{\rm{orb}}, b_{\rm{orb}}, l_{\rm{apo}})$ are computed from the 6D information $(\alpha, \delta,d,hrv,\mu_{\alpha}, \mu_{\delta})$ under the assumption that the Galactic potential is composed of a spherical @Hernquist90 bulge, an exponential disk, and an NFW halo [@NFW96]. See Table \[t\_orbs\] for an example of the orbital parameters and corresponding uncertainties. By comparing the FoF groups with observations and simulations of Sgr [@LM10; @Koposov12; @Belokurov14; @DL17; @Hernitschek17], X19 identified 3028 Sgr stream members, including 2626 K-giants (among them 102 suspected duplicate stars), 158 M-giants, and 224 BHBs, which is the largest spectroscopic sample obtained in the Sgr stream yet. In the next section, we present the Sgr members in detail, including their spatial, kinematic, and abundance features. THE PROPERTIES of SAGITTARIUS STREAM {#sec:Sgr} ==================================== The Cartesian reference frame used in this work is centered at the Galactic center: the $X$-axis is positive toward the Galactic center, the $Y$-axis is along the rotation of the disk, and the $Z$-axis points toward the North Galactic Pole. We adopt a solar position of $(-8.3,0,0)$ kpc [@de_Grijs16], a local standard of rest (LSR) velocity of 225 km s$^{-1}$ [@de_Grijs17], and a solar motion of $(+11.1,+12.24,+7.25)$ km s$^{-1}$ [@Schonrich10]. Figure \[pm\] presents the proper motions ($\mu_{\alpha}, \mu_{\delta}$) of the Sgr stream stars. The colors represent the longitude in the Sgr coordinate system, $\widetilde{\Lambda}_\odot$, and help to identify the stars belonging to the different Sgr streams. In this figure, we can easily see the variation of the proper motion along the leading and trailing streams. Figure \[obs\] shows that the Sgr streams traced by the K-giants, M-giants, and BHBs are consistent with previous observations in both line-of-sight velocities [@Belokurov14] and distances [@Koposov12; @Belokurov14; @Hernitschek17]. The comparison with simulations is presented in Figure \[sims\]. In the range $100^\circ < \widetilde{\Lambda}_\odot < 200^\circ$ and $d > 60$ kpc, neither the velocities nor the distances match the @LM10 model, shown in the left panel of Figure \[sims\]. The right panel of Figure \[sims\] shows that the Sgr streams traced by the K-giants, M-giants, and BHBs are roughly in good agreement with @DL17 in both velocities and distances. In the range $\sim100^\circ < \widetilde{\Lambda}_\odot < 150^\circ$, where $V_{\rm{los}}$ is around 130 km s$^{-1}$, the observed velocities are slightly lower than in the simulation. Furthermore, we have fewer stars beyond 100 kpc than the model predicts, which we attribute to the limiting magnitude of LAMOST (r $\sim17.8^{\rm{m}}$). On the Sgr orbital plane, the regions with $\widetilde{\Lambda}_\odot$ $< 50^\circ$ and $\widetilde{\Lambda}_\odot$ $> 300^\circ$, which lie around the Sgr dSph, are outside the sky coverage of LAMOST and SDSS/SEGUE.
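The grouping step of X19 can be illustrated schematically. The sketch below links stars whose normalized IoM parameters lie within a fixed linking length, using a union-find structure over the close pairs; the normalization and the linking length are placeholders rather than the values adopted by X19, and the wrap-around of the angular parameters is ignored for simplicity.

```python
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(iom, linking_length):
    """Friends-of-friends grouping in a normalized integrals-of-motion space.

    iom : (N, 5) array of (e, a, l_orb, b_orb, l_apo), already normalized
          (e.g. each column divided by its dispersion).
    Returns an array of group labels; linked stars share a label."""
    n = len(iom)
    parent = np.arange(n)

    def find(i):
        # Union-find root lookup with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # All pairs closer than the linking length are "friends".
    pairs = cKDTree(iom).query_pairs(linking_length)
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return np.array([find(i) for i in range(n)])

# Hypothetical usage: normalize each IoM column by its scatter, then link.
# iom_norm = (iom - iom.mean(0)) / iom.std(0)
# labels = fof_groups(iom_norm, linking_length=0.1)   # placeholder length
```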
Kinematics of Sagittarius Stream -------------------------------- Figure \[xz\] illustrates the spatial distribution of Sgr in the $X$-$Z$ plane, which is close to Sgr’s orbital plane. In the top panel, we show our Sgr sample with the model as background. We mark the position of each Sgr component (Sgr dSph, Sgr leading, Sgr trailing, and Sgr debris) and the direction of motion of the Sgr dSph. The panel exhibits the position of each Sgr component in the spatial distribution, and our sample agrees well with the model. In the bottom panel, the arrows indicate the direction and amplitude of the velocities in the $X$-$Z$ plane, and every star is color-coded according to its velocity component perpendicular to the $X$-$Z$ plane ($V_{Y}$). This panel illustrates the defining kinematic feature of a stream, namely that stream stars move together in phase space. In addition, the arrows and the low-latitude M-giants (red circles in the top panel) imply that the Sgr debris is actually the continuation of the Sgr trailing stream, where the trailing-stream stars return from their apocenter. Thus, the apogalacticon of the Sgr trailing stream could reach $\sim100$ kpc from the Sun (see $\widetilde{\Lambda}_\odot \sim170^\circ$ in Figure \[obs\]). This apogalacticon is consistent with the work of @Belokurov14, @Koposov15, @Sesarl17, @Hernitschek17, and @Li19. The panel also presents an obvious gradient in $V_Y$ along the line-of-sight direction from the Galactic center in both the leading and trailing streams. In Figure \[le\], we examine the angular momentum ($L$) and energy ($E$) of the Sgr member stars. The left panel shows the Sgr K-giants, M-giants, and BHBs in $E$-$L$ space, and there is no tangible difference among them. The right panel illustrates the stars from the different Sgr streams. The energies of the streams are quite different: the Sgr debris and trailing stream are significantly higher in energy than the leading stream. Metallicities of Sagittarius Stream {#sub_sec:feh_vy} ----------------------------------- Figure \[feh\_hist\] presents the metallicity distribution of our Sgr sample. In the top left panel, we show the sample’s metallicity distribution for the K-giants, M-giants, and BHBs. The panel shows that the M-giants are the most metal-rich population, with mean metallicity $<\rm{[Fe/H]}>$ = $-0.69$ dex and scatter $\sigma_{\rm{[Fe/H]}}$ = 0.36 dex; the BHBs are the most metal-poor population, with $<\rm{[Fe/H]}>$ = $-1.98$ dex and $\sigma_{\rm{[Fe/H]}}$ = 0.47 dex; and for the K-giants, these values are $<\rm{[Fe/H]}>$ = $-1.31$ dex and $\sigma_{\rm{[Fe/H]}}$ = 0.58 dex. The mean metallicity of the Sgr M-giants is close to the high-resolution result of @Carlin18, based on 42 Sgr stream stars in common with the LAMOST DR3 M-giants ($-0.68$ dex for the trailing stream and $-0.89$ dex for the leading stream). This implies that the metallicity of our LAMOST sample is reliable. In the other panels, we use the K-giants, M-giants, and BHBs to show the metallicities of the Sgr leading, trailing, and debris components separately. The top right panel (K-giants) shows that the Sgr leading stream has $<\rm{[Fe/H]}>$ = $-1.35$ dex with $\sigma_{\rm{[Fe/H]}}$ = 0.54 dex, the Sgr trailing stream has $<\rm{[Fe/H]}>$ = $-1.21$ dex and $\sigma_{\rm{[Fe/H]}}$ = 0.58 dex, and the Sgr debris has $<\rm{[Fe/H]}>$ = $-1.89$ dex and $\sigma_{\rm{[Fe/H]}}$ = 0.54 dex. Thus, the Sgr trailing stream is on average the most metal-rich Sgr stream, followed by the Sgr leading stream and the debris. In the bottom panels, the M-giants and BHBs present a similar feature, i.e., the trailing stream is more metal-rich than the leading stream.
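For reference, the conversion from the observables $(\alpha, \delta, d, hrv, \mu_{\alpha}, \mu_{\delta})$ to the Galactocentric positions and to $V_Y$ used throughout this section can be sketched with `astropy`, adopting the solar parameters quoted above. This is only an illustrative sketch (the paper does not state which tool was used), and the input values in the usage comment are invented.

```python
import astropy.units as u
import astropy.coordinates as coord

# Galactocentric frame with the solar parameters adopted in the text:
# R_sun = 8.3 kpc, LSR velocity 225 km/s, solar motion (11.1, 12.24, 7.25) km/s.
# astropy's Galactocentric frame uses the same convention as this work,
# with the Sun near (-8.3, 0, 0) kpc and X positive toward the Galactic center.
galcen = coord.Galactocentric(
    galcen_distance=8.3 * u.kpc,
    galcen_v_sun=coord.CartesianDifferential(
        [11.1, 225.0 + 12.24, 7.25] * u.km / u.s),
    z_sun=0.0 * u.pc,
)

def observables_to_galactocentric(ra, dec, dist, pmra, pmdec, hrv):
    """Convert (alpha, delta, d, mu_alpha*, mu_delta, hrv) to Galactocentric
    Cartesian positions and velocities; V_Y is then the v_y component."""
    icrs = coord.SkyCoord(
        ra=ra * u.deg, dec=dec * u.deg, distance=dist * u.kpc,
        pm_ra_cosdec=pmra * u.mas / u.yr, pm_dec=pmdec * u.mas / u.yr,
        radial_velocity=hrv * u.km / u.s,
    )
    return icrs.transform_to(galcen)

# Hypothetical usage for a single (invented) star:
# c = observables_to_galactocentric(213.0, -5.0, 40.0, -1.2, -0.9, 150.0)
# print(c.x, c.z, c.v_y)   # position in the X-Z plane and V_Y
```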
This difference among the Sgr streams was noted by @Carlin18, who suggested that it might be caused by the stars’ different times of becoming unbound from the Sgr core. In Figure \[feh\_xz\], we present the metallicity distribution of the Sgr stars in the $X$-$Z$ plane. Similar to Figure \[xz\], the K-giants in the top left panel show that the metallicity also has a gradient along the line-of-sight direction, which indicates that the inner-side stars (closer to the Galactic center) differ from the outer-side stars (farther from the Galactic center) not only in kinematics ($V_Y$) but also in metallicity. In the top right panel, we plot the K-giants in \[Fe/H\] versus $V_Y$ space. The panel shows that $V_Y$ increases with metallicity at \[Fe/H\] $>\sim -$1.5 dex, which implies a correlation between $V_Y$ and metallicity in the Sgr stream. In the distributions of the M-giants and BHBs, we do not see as clear a feature as in the K-giants. Alpha-Abundances of Sagittarius Stream -------------------------------------- It is well established that dwarf galaxies have chemical-evolution paths different from that of the Milky Way [@Tolstoy09; @Kirby11]. In Figure \[feh\_alpha\], we present the $\alpha$-element abundances of the LAMOST Sgr stars obtained by SLAM [@Zhang19]. In the top panel, we compare the Sgr sample with Milky Way stars, including the Galactic disk and halo. For the disk, we choose stars with $|Z|<$ 3 kpc (blue density map), and for the halo, we plot stars with $|Z|>$ 5 kpc that do not belong to any substructure (blue dots; X19). The top panel shows that the trend of \[$\alpha$/Fe\] is similar to that of the halo stars at lower metallicity, but the ratio then evolves down to lower values than the disk stars at higher metallicity. In addition, there might be a hint of a knee at \[Fe/H\] $\sim -2.3$ dex, but it is not very clear in our data. If the knee is very metal-poor (or non-existent), then Sgr must have had a very low star-formation efficiency at early times (similar to, e.g., the Large Magellanic Cloud; @Nidever19). In the bottom panel, we compare the $\alpha$-abundance (\[Mg/Fe\]) with previous work on Sgr, including M54 [@Carretta10], the Sgr core [@Monaco05; @Sbordone07; @Carretta10; @McWilliam13], and the Sgr stream [@Hasselquist19]. In this panel, our Sgr stream sample mainly follows the stars in M54 and the Sgr core, but is slightly higher in $\alpha$-abundance than the Sgr stream stars from @Hasselquist19 in the same range of metallicity ($-1.2 < {\rm[Fe/H]} < -0.2$ dex). We also include \[Mg/Fe\] versus \[Fe/H\] for some other dwarf galaxies, namely Draco [@Shetrone01; @CH09], Sculptor [@Shetrone03; @Geisler05], Carina [@Koch08; @Lemasle12; @Shetrone03; @Venn12], and Fornax [@Letarte10; @Lemasle14], and the panel shows a similar $\alpha$-element evolution pattern between our Sgr stream sample and these dwarf galaxies. Bifurcations in Sagittarius Stream ---------------------------------- In Figure \[bif\_div\], we show the Sgr bifurcation in a density map built from our sample. To identify the faint and bright streams of the bifurcation, we add the coordinates of the faint and bright streams defined by @Belokurov06 and @Koposov12 (see the squares in Figure \[bif\_div\]). In addition, we extend the coordinates of the Sgr bifurcation in the trailing stream based on our sample (see Table \[t\_bif\] and the triangles in Figure \[bif\_div\]).
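The assignment of stars to the faint or bright stream, described in the next paragraph, reduces to checking on which side of the dividing line a star falls in equatorial coordinates. A minimal sketch is given below; the polyline node coordinates in the usage comment are placeholders, not the values of Table \[t\_bif\].

```python
import numpy as np

def classify_bifurcation(ra, dec, line_ra, line_dec):
    """Assign 'faint' to stars above the dividing line in (RA, Dec) and
    'bright' to stars below it, following the convention of this work.

    line_ra must be sorted in increasing order for np.interp to apply."""
    dec_line = np.interp(ra, line_ra, line_dec)   # Dec of the line at each star's RA
    return np.where(np.asarray(dec) > dec_line, "faint", "bright")

# Hypothetical usage with placeholder node coordinates (not the Table values):
# line_ra  = np.array([120.0, 150.0, 180.0, 210.0])
# line_dec = np.array([ 20.0,  25.0,  28.0,  30.0])
# labels = classify_bifurcation(stars["ra"], stars["dec"], line_ra, line_dec)
```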
The dash-dotted line between the faint and bright streams is used to separate the faint- and bright-stream stars: stars above the dash-dotted line belong to the faint stream, and stars below it belong to the bright stream. In previous studies, the Sgr bifurcation was identified from density maps of photometric data [@Koposov12; @Belokurov14]; the bright stream is so named because it is denser than the faint stream in these maps. Because spectroscopic data for the Sgr streams have been scarce, it has been hard to analyze the kinematics and chemistry of the bifurcation statistically. The LAMOST and SEGUE samples provide many spectra of Sgr stream stars in both the bright and faint streams, which allows us to analyze the properties of the bifurcation in detail. In Section \[sub\_sec:feh\_vy\], we found that the inner and outer sides of the Sgr stream differ in $V_Y$ and metallicity. Therefore, in Figures \[bif\_vy\] and \[bif\_feh\], we present density maps of the bifurcation color-coded by $V_Y$ and metallicity, respectively. From the top panels of Figures \[bif\_vy\] and \[bif\_feh\], the K-giants show that the faint and bright streams also differ in $V_Y$ and metallicity, with the bright stream clearly higher in both $V_Y$ and metallicity than the faint stream; in the bottom panels of Figures \[bif\_vy\] and \[bif\_feh\], the BHBs show a similar result. The M-giant members lie almost exclusively on the bright stream, which is consistent with the result found by @Li16. To examine the differences in $V_Y$ and metallicity between the faint and bright streams seen in the K-giant and BHB samples, in Figures \[bif\_kg\] and \[bif\_bhb\] we divide the bifurcation into leading bright/faint and trailing bright/faint streams according to the dash-dotted line in Figure \[bif\_div\]. Both the K-giants and BHBs give the same result: the leading and trailing bright streams are on average higher in $V_Y$ and metallicity than the corresponding faint streams. However, the difference in metallicity is not as obvious as that in velocity, especially for the trailing stream. In Figure \[bif\_xyz\], we plot the divided bifurcation in the $X$-$Z$ and $Y$-$Z$ planes. The figure shows that the faint and bright streams are two parallel streams along their direction of motion. Thus, it remains uncertain whether the $V_Y$ and metallicity differences between the inner and outer sides of the Sgr stream are related to the Sgr bifurcation. Summary {#sec:summary} ======= By combining the IoM parameters with an FoF algorithm, X19 selected about 3,000 Sgr stream members from LAMOST, SDSS, and SEGUE-2, including K-giants, M-giants, and BHBs, which is the largest spectroscopic Sgr stream sample obtained yet. Based on this sample, we have presented the features of the Sgr stream that we find. We compared our Sgr sample with the numerical simulations of @LM10 and @DL17, and with observational data from @Koposov12, @Belokurov14, and @Hernitschek17. We find that our sample is broadly consistent with the @DL17 model and with the observational data from @Koposov12, @Belokurov14, and @Hernitschek17. The velocity vector directions of the Sgr debris and the low-latitude M-giants in the $X$-$Z$ plane indicate that the debris is actually the continuation of the Sgr trailing stream, where the trailing-stream stars return from the apocenter. Therefore, our sample shows that the apogalacticon of the Sgr trailing stream may reach $\sim$100 kpc from the Sun, which is in agreement with previous observations such as @Belokurov14, @Koposov15, @Hernitschek17, and @Li19.
In addition, the energy versus angular momentum distribution of the Sgr K-giants, M-giants, and BHBs shows no clear difference among them, but among the Sgr streams, the debris and trailing stream are clearly higher in energy than the leading stream. We also presented the metallicity distributions of the Sgr K-giants, M-giants, and BHBs. The M-giants are the most metal-rich population, followed by the K-giants and BHBs. Additionally, the metallicities of the Sgr leading, trailing, and debris components are also different. The K-giants, M-giants, and BHBs all indicate that the Sgr trailing stream is on average more metal-rich than the leading stream, and the K-giants show that the Sgr debris is the most metal-poor population, which reflects their different times of becoming unbound from the Sgr core. Comparing the $\alpha$-abundance of Sgr stars with the Galactic components and with dwarf galaxies of the Milky Way, the \[$\alpha/$Fe\] trend of the Sgr stream is close to that of the Galactic halo at lower metallicity and then evolves down to lower \[$\alpha/$Fe\] than the disk stars; this evolution pattern is quite similar to that of Milky Way dwarf galaxies. The $V_Y$ and metallicity distributions of the Sgr stream in the $X$-$Z$ plane show that the Sgr stream has a gradient along the line-of-sight direction from the Galactic center, with the inner side of the stream higher in both $V_Y$ and metallicity, and $V_Y$ versus \[Fe/H\] shows that $V_Y$ increases with metallicity, which means there indeed exists a correlation between $V_Y$ and metallicity. In addition, the Sgr bright and faint streams also exhibit different $V_Y$ and metallicity, with the bright stream higher in $V_Y$ and metallicity than the faint stream. However, it is still hard to conclude whether the $V_Y$ and metallicity differences between the inner and outer sides of the Sgr stream are related to the Sgr bifurcation. This study is supported by the National Natural Science Foundation of China (NSFC) under grants Nos. 11873052, 11890694, 11573032, and 11773033. J.L. acknowledges the NSFC under grant 11703019. L.Z. acknowledges support from NSFC grant 11703038. J.Z. acknowledges the NSFC under grant U1731129. Q.F.X. thanks the NSFC for support through grant 11603033. J.L.C. acknowledges support from HST grant HST-GO-15228 and NSF grant AST-1816196. This project was developed in part at the 2018 $Gaia$-LAMOST Sprint workshop, supported by the NSFC under grants 11333003 and 11390372. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{} (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement.
![Top two panels: the inverse distance $1/d$ and $Gaia\ G$ magnitude distributions of Sgr stars; histograms with different colors represent the different kinds of stars. Lower left three panels: density-map comparisons of the $Gaia$ DR2 parallax ($\varpi$) with the inverse distance ($1/d$) for K-giants, M-giants, and BHBs; the black dashed lines mark the 1:1 relation, and the red lines represent the systematic bias between them, fitted by least squares. Lower right three panels: density-map comparisons of the parallax bias ($1/d-\varpi$) with $G$ magnitude for K-giants, M-giants, and BHBs.[]{data-label="sys"}](fig1_sys.pdf){width=".50\textwidth"} ![](fig1_g_bias.pdf){width=".50\textwidth"}

![Distribution of the Sgr stream in proper-motion ($\mu_{\alpha}, \mu_{\delta}$) space, color-coded by $\widetilde{\Lambda}_\odot$. The colors help to identify the Sgr leading (blue cluster) and trailing (red cluster) streams. A mean error bar is shown in the lower right corner.[]{data-label="pm"}](fig2_pm.pdf){width=".9\textwidth"}

![Comparisons with observed Sgr data in the $(\widetilde{\Lambda}_\odot, d)$ and $(\widetilde{\Lambda}_\odot, V_{\rm{los}})$ coordinates. Yellow, red, and blue circles represent the K-giants, M-giants, and BHBs, respectively. In the top panel, the black dots with error bars are from tables 3-5 of @Belokurov14, estimated from their Sgr giant stars. In the bottom panel, magenta and green dots represent BHBs and RCs from tables 1-2 of @Belokurov14 and table 2 of @Koposov12, and the black dots are from tables 4-5 of @Hernitschek17, obtained from RR Lyrae stars. The error bars denote the distance uncertainties, and the gray bars represent the 1$\sigma$ line-of-sight depth of the Sgr stream.[]{data-label="obs"}](fig3_obs.pdf){width=".85\textwidth"}

![Comparisons with Sgr simulations in the $(\widetilde{\Lambda}_\odot, d)$ and $(\widetilde{\Lambda}_\odot, V_{\rm{los}})$ coordinates. Grey dots in the left and right panels indicate the @LM10 and @DL17 models, respectively. Yellow, red, and blue circles represent the K-giants, M-giants, and BHBs, respectively.[]{data-label="sims"}](fig4_sims.pdf){width="110.00000%"}

![Top panel: distribution of each Sgr component (Sgr dSph, Sgr leading, Sgr trailing, and Sgr debris) in the $X$-$Z$ plane. Yellow, red, and blue circles represent the K-giants, M-giants, and BHBs, and grey dots indicate the model. Bottom panel: arrows represent the Sgr stars’ directions of motion and velocity amplitudes, and every star is color-coded according to its velocity along the $Y$-axis ($V_Y$). Because of the coverage limits of LAMOST and SDSS/SEGUE, we have no Sgr stars around the Sgr dSph.[]{data-label="xz"}](fig5_xz.pdf){width=".95\textwidth"}

![Sgr stars in angular momentum ($L$) versus energy ($E$) space. The left and right panels respectively show the distributions of the different kinds of stars and of the different streams in $E$-$L$ space.[]{data-label="le"}](fig6_le_1.pdf){width=".5\textwidth"} ![](fig6_le_3.pdf){width=".5\textwidth"}

![The metallicity distributions of the Sgr K-giants, M-giants, and BHBs (top left panel) and of the different streams (top right and bottom panels). Histograms with different colors represent different kinds of stars or streams, and each histogram has a corresponding Gaussian distribution obtained from the mean metallicity $<\rm{[Fe/H]}>$ and scatter $\sigma_{\rm{[Fe/H]}}$. In the bottom panels, there are few K-giants and BHBs that can be used to plot the histogram of the Sgr debris.[]{data-label="feh_hist"}](fig7_feh_pops.pdf){width=".5\textwidth"} ![](fig7_feh_streams_kg.pdf){width=".5\textwidth"} ![](fig7_feh_streams_mg.pdf){width=".5\textwidth"} ![](fig7_feh_streams_bhb.pdf){width=".5\textwidth"}

![Left panels: the metallicity distribution of the Sgr K-giants, M-giants, and BHBs in the $X$-$Z$ plane. Right panels: distribution of the Sgr K-giants, M-giants, and BHBs in \[Fe/H\] versus $V_Y$ space; red dots with error bars represent the mean of $V_Y$ and its dispersion.[]{data-label="feh_xz"}](fig8_xz_feh.pdf){width=".55\textwidth"} ![](fig8_feh_vy.pdf){width=".55\textwidth"}

![Comparisons of the LAMOST Sgr stars with the Milky Way components (upper panel) and dwarf galaxies (lower panel) in \[$\alpha$/Fe\] versus \[Fe/H\] space. In the upper panel, red stars are our Sgr sample, the blue density map shows disk stars, and blue triangles show halo stars. The disk stars are selected by $|Z|<3$ kpc, and the halo stars by $|Z|>$ 5 kpc and not belonging to any substructure. A typical error bar of our Sgr stars is shown in the top-left corner. In the lower panel, our Sgr stars are shown as red stars, together with the Milky Way dwarf galaxies Draco [yellow dots; @Shetrone01; @CH09], Sculptor [green dots; @Shetrone03; @Geisler05], Carina [magenta dots; @Koch08; @Lemasle12; @Shetrone03; @Venn12], and Fornax [blue dots; @Letarte10; @Lemasle14], as well as the Sagittarius core (black dots), M54 (black circles), and stream (black triangles) [@Monaco05; @Sbordone07; @Carretta10; @McWilliam13; @Hasselquist19].[]{data-label="feh_alpha"}](fig9_feh_alpha.pdf){width="90.00000%"}

![Density map of the Sgr bifurcation in equatorial coordinates. The squares and dashed lines are defined in @Belokurov06 and @Koposov12, the triangles are defined in Table \[t\_bif\], and the dash-dotted line between the faint and bright streams is used to distinguish faint- and bright-stream stars.[]{data-label="bif_div"}](fig10_bif_div.pdf){width="110.00000%"}

![Density map of the Sgr bifurcation from K-giants, M-giants, and BHBs in equatorial coordinates, color-coded by the velocity along the $Y$-axis, $V_Y$. Dashed lines represent the center lines of the faint and bright streams.[]{data-label="bif_vy"}](fig11_bif_vy.pdf){width="110.00000%"}

![Density map of the Sgr bifurcation from K-giants, M-giants, and BHBs in equatorial coordinates, color-coded according to metallicity. Dashed lines represent the center lines of the faint and bright streams.[]{data-label="bif_feh"}](fig12_bif_feh.pdf){width="110.00000%"}

![Left and right panels respectively show the $V_Y$ and metallicity distributions of the Sgr leading bright and faint streams and of the trailing bright and faint streams, obtained from K-giants. Red histograms represent the bright stream and blue histograms the faint stream. Each histogram has a corresponding Gaussian distribution obtained from the mean value and scatter. In the left panels, $<V_Y>$ of the leading bright and faint streams are 42.1 km s$^{-1}$ and 32.5 km s$^{-1}$, and those of the trailing bright and faint streams are 43.0 km s$^{-1}$ and 18.3 km s$^{-1}$. In the right panels, $<\rm{[Fe/H]}>$ of the leading bright and faint streams are $-$1.30 dex and $-$1.43 dex, and those of the trailing bright and faint streams are $-$1.19 dex and $-$1.27 dex.[]{data-label="bif_kg"}](fig13_bif_vy_sgrl_kg.pdf){width=".5\textwidth"} ![](fig13_bif_feh_sgr_l_kg.pdf){width=".5\textwidth"} ![](fig13_bif_vy_sgrt_kg.pdf){width=".5\textwidth"} ![](fig13_bif_feh_sgr_t_kg.pdf){width=".5\textwidth"}

![$V_Y$ and metallicity distributions of the Sgr bifurcation obtained from BHBs. In the left panels, $<V_Y>$ of the leading bright and faint streams are 11.2 km s$^{-1}$ and $-$12.6 km s$^{-1}$, and those of the trailing bright and faint streams are 14.0 km s$^{-1}$ and 7.7 km s$^{-1}$. In the right panels, $<\rm{[Fe/H]}>$ of the leading bright and faint streams are $-$2.01 dex and $-$2.10 dex, and those of the trailing bright and faint streams are $-$1.82 dex and $-$1.86 dex.[]{data-label="bif_bhb"}](fig14_bif_vy_sgrl_bhb.pdf){width=".5\textwidth"} ![](fig14_bif_feh_sgr_l_bhb.pdf){width=".5\textwidth"} ![](fig14_bif_vy_sgrt_bhb.pdf){width=".5\textwidth"}
[]{data-label="bif_bhb"}](fig14_bif_feh_sgr_t_bhb.pdf "fig:"){width=".5\textwidth"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- -- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Sgr bifurcation from K-, M-giants and BHBs in $X$-$Z$ and $Y$-$Z$ plane, red and blue dots respectively indicate the bright stream stars and faint stream stars. []{data-label="bif_xyz"}](fig15_bif_xyz.pdf "fig:"){width="70.00000%"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [@cclrr ccr@r@c ccccc cc]{} & & & & & & & & & & & & & & & &\ & & & & & & & & & & & & & & & &\ 120814182 & 3967103412613001728 & LAMOST KG & 170.253884 & 15.573170 & 24.4 & 3.7 & $ 1.4$ & 5.0 & $-1.605$ & 0.095 & $-1.413$ & 0.069 & $-1.18$ & 0.17 & $-0.02$ & 0.06\ 121008028 & 1267711072598547072 & LAMOST KG & 221.696725 & 26.496090 & 16.6 & 1.4 & $-136.1$ & 10.6 & $ 0.429$ & 0.079 & $-1.754$ & 0.094 & $-1.81$ & 0.09 & $ 0.44$ & 0.04\ 121905054 & 3938773327292638848 & LAMOST KG & 198.322328 & 18.246219 & 20.4 & 1.0 & $ -59.2$ & 6.9 & $-0.089$ & 0.062 & $-1.353$ & 0.049 & $-2.01$ & 0.09 & $ 0.31$ & 0.04\ 121905244 & 3937264934779562624 & LAMOST KG & 198.274650 & 18.005971 & 20.0 & 1.3 & $ -60.4$ & 6.1 & $-0.108$ & 0.051 & $-1.337$ & 0.044 & $-1.62$ & 0.11 & $ 0.37$ & 0.05\ 121905245 & 3937271497489649792 & LAMOST KG & 198.268435 & 18.148787 & 20.5 & 1.3 & $ -60.5$ & 7.8 & $-0.211$ & 0.078 & $-1.434$ & 0.060 & $-1.94$ & 0.15 & $ 0.39$ & 0.06\ 123011177 & 1442795139541724544 & LAMOST KG & 200.612511 & 22.731429 & 21.1 & 0.8 & $ -44.8$ & 9.5 & $-0.188$ & 0.055 & $-1.384$ & 0.051 & $-2.11$ & 0.10 & $ 0.31$ & 0.04\ 125604054 & 3882438096696454528 & LAMOST KG & 157.016266 & 10.747924 & 28.4 & 1.1 & $ 22.8$ & 7.8 & $-1.207$ & 0.099 & $-0.999$ & 0.124 & $-1.62$ & 0.15 & $ 0.24$ & 0.06\ 132101163 & 3918753557013284608 & LAMOST KG & 181.490768 & 11.441141 & 31.8 & 0.9 & $ -31.8$ & 8.6 & $-1.545$ & 0.061 & $-1.028$ & 0.036 & $-1.61$ & 0.08 & $ 0.40$ & 0.04\ 132105204 & 3918794788699300992 & LAMOST KG & 181.092298 & 11.555653 & 43.1 & 2.0 & $ -29.7$ & 7.7 & $-1.041$ & 0.097 & $-0.785$ & 0.063 & $-1.59$ & 0.15 & $ 0.19$ & 0.06\ 132109226 & 
3920550124653418624 & LAMOST KG & 183.333534 & 13.041779 & 29.4 & 2.0 & $ -28.5$ & 6.2 & $-1.697$ & 0.074 & $-1.017$ & 0.055 & $-1.73$ & 0.12 & $ 0.19$ & 0.05\ \ \ [ccccc ccrrc ccccc]{} & & & & & & & & & & & & & &\ & & & & & & & & & & & & & &\ 120814182 & 0.46 & 0.10 & 19.79 & 5.02 & 277.27 & 13.35 & 106.41 & 7.23 & 115.22 & 12.80 & $-76520.25$ & 6485.40 & 3212.90 & 975.30\ 121008028 & 0.47 & 0.03 & 15.34 & 1.68 & 159.99 & 6.49 & 104.02 & 1.19 & 311.62 & 8.98 & $-85054.91$ & 3722.07 & 2548.25 & 234.91\ 121905054 & 0.55 & 0.03 & 16.45 & 0.77 & 157.44 & 3.91 & 100.70 & 1.51 & 304.42 & 4.87 & $-82276.59$ & 1607.59 & 2499.43 & 123.88\ 121905244 & 0.55 & 0.03 & 16.21 & 0.84 & 159.80 & 4.27 & 101.28 & 1.91 & 306.31 & 4.62 & $-82777.67$ & 1830.66 & 2465.24 & 106.50\ 121905245 & 0.61 & 0.04 & 15.63 & 0.88 & 157.13 & 5.66 & 100.45 & 2.03 & 309.11 & 6.35 & $-83764.70$ & 1969.44 & 2206.07 & 149.70\ 123011177 & 0.51 & 0.03 & 15.63 & 0.58 & 151.47 & 3.85 & 101.95 & 0.99 & 308.07 & 5.24 & $-84242.28$ & 1270.22 & 2487.49 & 124.20\ 125604054 & 0.38 & 0.07 & 26.05 & 2.14 & 268.87 & 7.24 & 121.74 & 4.24 & 100.11 & 12.84 & $-67939.22$ & 2322.00 & 4318.00 & 500.08\ 132101163 & 0.37 & 0.04 & 28.84 & 2.13 & 283.61 & 3.34 & 104.90 & 1.05 & 90.38 & 5.61 & $-64795.94$ & 2170.06 & 4785.16 & 412.04\ 132105204 & 0.49 & 0.09 & 35.77 & 4.40 & 283.98 & 8.01 & 105.67 & 2.11 & 94.79 & 10.74 & $-57819.74$ & 3114.20 & 5274.65 & 991.59\ 132109226 & 0.31 & 0.06 & 26.13 & 3.52 & 278.76 & 6.32 & 104.13 & 1.84 & 88.22 & 7.14 & $-68086.86$ & 3893.76 & 4517.30 & 667.87\ -------- --------- --------- $-5.0$ ... $-10.0$ $ 0.0$ $-20.0$ $ -8.0$ $ 5.0$ $-17.0$ $ -5.0$ $10.0$ $-14.0$ $ -2.0$ $15.0$ $-11.0$ $ 2.5$ $20.0$ $ -8.5$ $ 5.0$ $25.0$ $ -5.5$ $ 8.0$ $30.0$ $ -3.0$ $ 10.0$ $35.0$ $ 0.5$ $ 12.5$ $40.0$ $ 2.5$ $ 15.5$ $45.0$ $ 5.0$ $ 18.5$ $50.0$ $ 7.5$ $ 20.5$ $55.0$ $ 10.5$ $ 23.0$ $60.0$ $ 12.5$ $ 25.5$ $65.0$ $ 14.5$ ... $70.0$ $ 17.0$ ... $75.0$ $ 19.5$ ... -------- --------- --------- : Coordinates of the Sgr Trailing Stream Bifurcation[]{data-label="t_bif"} [^1]: ($\widetilde{\Lambda} _\odot, \widetilde{B}_\odot)$ used in this paper is a heliocentric coordinate system defined by @Belokurov14. The longitude $\widetilde{\Lambda}_\odot$ begins at the Sgr core and increases in the direction of the Sgr motion. The equator (latitude $\widetilde{B}_\odot$ = 0$^\circ$) is aligned with the Sgr trailing stream.
---
abstract: 'We consider a new stratification of the space of configurations of $n$ marked points on the complex plane. Recall that this space can be differently interpreted as the space $\dpol_{n}$ of degree $n>1$ complex, monic polynomials with distinct roots, the sum of which is 0. A stratum $A_{\sigma}$ is the set of polynomials having $P^{-1}(\mathbb{R}\cup\imath\mathbb{R})$ in the same isotopy class, relative to their asymptotic directions. We show that this stratification is a Goresky–MacPherson stratification and that, by thickening strata, a [*good cover*]{} in the sense of Čech can be constructed, allowing an explicit computation of the cohomology groups of this space.'
address: |
    Sorbonne Université\
    4 Place Jussieu\
    75005 Paris
author:
- 'N.C. Combe'
title: |
    Čech cover of the complement of the discriminant variety.\
    Part I: Goresky–MacPherson stratification.
---

Introduction
============

The decomposition of the space of configurations of $n$ marked points on the complex plane has been considered for more than fifty years [@FoN62] and has led to many important works. The braid group $\mathcal{B}_n$ with $n$ strands and the space $\dpol_{n}$ of degree $n$ complex monic polynomials with distinct roots are objects which are deeply connected: the space $\dpol_{n}$ is a $K(\pi, 1)$ with fundamental group $\mathcal{B}_n$. The richness of their interactions allows each object to provide information on the other. In particular, one may use the space $\dpol_{n}$ to give a description of braids with $n$ strands. By means of a spectral sequence, V. Arnold [@Ar69] gives a method to calculate the integral cohomology of the braid group. Another version was given by D.B. Fuks [@Fu70], which yields the cohomology of the braid group with values in $\mathbb{Z}_{2}$. This latter approach was later developed in a more general way by F.V. Vainshtein [@Va78] using Bockstein homomorphisms. Many other results followed more recently, notably by V. Lin, F. Cohen [@Coh73A] and A. Goryunov [@Gor78]. On closely related subjects we can also cite works of De Concini, D. Moroni, C. Procesi and M. Salvetti [@CPS01; @CMS08].

In spite of an abundant literature concerning this subject, we give a new approach to the configuration space of $n$ marked points on ${\mathbb{C}}$ and thus to the moduli space ${\overline{\mathcal{M}}}_{0,n}$ of $n$ marked points on the Riemann sphere $\mathbb{P}^1$. We develop in this paper the [*real*]{} geometry of the canonical stratifications of the spaces ${\overline{\mathcal{M}}}_{0,n}$, which gives a new insight on [@CoMa; @De; @Ke; @KoMa; @Ma99]. To be more precise, the aim is to present a real-algebraic stratification of the parametrizing ${\mathbb{C}}$–scheme of ${\overline{\mathcal{M}}}_{0,n}$. We show that it is a [*Goresky–MacPherson stratification*]{}. This highlights real-geometry properties of ${\overline{\mathcal{M}}}_{0,n}$ and thus gives a different approach to this object. The notion of Goresky–MacPherson stratification can be found in [@GM0; @GM1; @GM2].

Let us recall the notion of stratification. A stratification of a topological space $X$ is a locally finite partition $\mathcal{S}=(X^{(\sigma)})_{\sigma\in S}$ of $X$ into locally closed subsets, called strata, verifying the condition: for all $\tau,\sigma\in S,$ $$X^{(\tau)}\cap \overline{X}^{(\sigma)}\neq \emptyset \iff X^{(\tau)}\subset \overline{X}^{(\sigma)}.$$ The boundary condition can be reformulated as follows: the closure of a stratum is a union of strata.
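As a simple illustration of this boundary condition (an example added here only for orientation, in the spirit of the quadrant decomposition used later in the paper), stratify $\mathbb{C}$ by the origin, the four open semi-axes and the four open quadrants, writing $A$ for the open first quadrant. Then
$$\overline{A}=A\,\cup\,\mathbb{R}^{+}\,\cup\,\imath\mathbb{R}^{+}\,\cup\,\{0\},$$
a union of strata; accordingly $\mathbb{R}^{+}\cap\overline{A}\neq\emptyset$ and indeed $\mathbb{R}^{+}\subset\overline{A}$, as required by the condition above.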
A different interpretation of this would be to define a stratification as a filtration $$X=X_{(n)}\supset X_{(n-1)}\supset \dots \supset X_{(1)}\supset X_{(0)}\supset X_{(-1)}=\emptyset,$$ where the $X_{(i)}$ are closed, and where, defining $X^{(i)}:=X_{(i)}\setminus X_{(i-1)},$ we have $\overline{X^{(i)}}=X_{(i)}$.

A Goresky–MacPherson stratification is defined as follows. An $n$-dimensional topological stratification of a paracompact Hausdorff topological space $X$ is a filtration by closed subsets $$X=X_{(n)}\supset X_{(n-1)}\supset \dots \supset X_{(1)}\supset X_{(0)}\supset X_{(-1)}=\emptyset,$$ such that for each point $p\in X_{(i)}-X_{(i-1)}$ there exists a distinguished neighborhood $N$ of $p$ in $X$, a compact Hausdorff space $L$ with an $(n-i-1)$-dimensional topological stratification $$L=L_{n-i-1}\supset \dots \supset L_1\supset L_0\supset L_{-1}=\emptyset$$ and a homeomorphism $$\phi:\Re^i\times cone^\circ(L)\to N,$$ which takes $\Re^i \times cone^{\circ} (L_j)$ to $N\cap X_{(i+j+1)}$. The symbol $cone^{\circ}(L)$ denotes the open cone $L\times [0,1)/(l,0)\sim(l',0)$ for all $l, l'\in L$.

Focusing on the case where the marked points on ${\mathbb{C}}$ are pairwise distinct, we show that a [*good Čech cover*]{} can be constructed from this stratification. This paper includes the explicit construction of the open sets of the Čech cover. A good Čech cover of a topological space $X$ is a cover whose components are open and whose multiple intersections are contractible.

We prove the following theorems.

Consider the configuration space of $n$ marked points on the complex plane, where the points are pairwise distinct. Then, there exists a real algebraic stratification $\mathcal{S}$ of this space, where strata are indexed by oriented and bicolored forests verifying the following properties:

- there exist $n$ vertices of valency 4, incident to edges of alternating color and orientation;

- there exist at most $n-1$ vertices of even valency, incident to edges of only one color and of alternating orientations;

- there are $4n$ leaves (i.e. vertices incident to one edge), where the colors and orientations of the incident edges alternate.

Let $A_\sigma$ be a generic stratum (i.e. of codimension 0). Then, the topological closure $\overline{A}_\sigma$ defines a Goresky–MacPherson stratification.

Thickening these strata prepares the ground for defining a good Čech cover, which leads to the following statement:

Consider the configuration space of $n$ marked points on the complex plane, where the points are pairwise distinct, and the stratification $\mathcal{S}$ defined above. Then, the thickened strata $\underline{A_{\sigma}}^+$ form a good Čech cover.

More globally, what we have in mind is a new manner of defining the generators of $\Gamma_{0,[n]}$, where $\Gamma_{0,[n]}$ is nothing but the orbifold fundamental group of the moduli space of smooth curves of genus 0 with $n$ unordered marked points [@Gro84; @LS]. This new cell decomposition turns out to have many interesting symmetries (this is the subject of the paper [@CO2], where we aim at making the polyhedral symmetries explicit in the presentation). In this paper we show the existence of a good cover in the sense of Čech, which we will use in further investigations to calculate the cohomology groups explicitly [@CO1].
Namely, to stratify the configuration space $\dpol_n$, we consider [*drawings of polynomials*]{}, objects reminiscent of Grothendieck’s [*dessins d'enfants*]{} in the sense that we consider the inverse image of the real and imaginary axes under a complex polynomial. The [*drawing*]{} associated to a complex polynomial is a system of blue and red curves properly embedded in the complex plane, namely the inverse image under a polynomial $P$ of the union of the real axis (colored red) and the imaginary axis (colored blue) [@CO1]. For a polynomial of degree $d$, the drawing contains $d$ blue and $d$ red curves, each blue curve intersecting exactly one red curve exactly once. The entire drawing forms a forest whose leaves (terminal edges) go to infinity in the $4d$ asymptotic directions of angle $k \pi/(2d)$. Polynomials belong to a given stratum if their drawings are isotopic, relative to the $4d$ asymptotic directions. An element of the stratification is indexed by an equivalence class of drawings which we call a signature $\sigma$. A stratum shall be denoted by $A_{\sigma}$. Each stratum of this decomposition is attributed to a decorated graph, i.e. a graph whose edges are oriented and colored in red or blue, and whose complementary faces in the plane are colored by the labels $A,B,C,D$.

This paper is organized as follows. In Section 2, we present the definition and construction of the real-algebraic stratification. We introduce the notions of drawings and signatures and present some of their properties. In Section 3 we introduce Whitehead moves on signatures, which allow a definition of the combinatorial closure of a signature. An important result of this section is that the topological closure of a stratum is given by its combinatorial closure, i.e. the union of all the signatures incident to $\sigma$. Finally, we prove that this stratification is a Goresky–MacPherson stratification. Section 4 shows the construction of the Čech cover. In particular, we define the components of the cover and prove that multiple intersections between those components are contractible. Finally, in an appendix, we discuss the multiple intersections of closures of generic strata and classify the possible diagrams in order to have non-empty multiple intersections.

A new point of view on ${\overline{\mathcal{M}}}_{0,T}$
=======================================================

Moduli spaces of genus 0 curves with marked points
--------------------------------------------------

Let ${\mathbb{C}}$ be the fixed base field and $\bf{T}$ a finite set. Consider the moduli space (generally a stack) ${\overline{\mathcal{M}}}_{0,\bf{T}}$ of stable genus zero curves with a finite set of smooth pairwise distinct closed points defined over ${\mathbb{C}}$ and bijectively marked (labeled) by $\bf{T}$. For a ${\mathbb{C}}$–scheme $B$, a $B$–family of such curves is represented by a fibration $C_B \to B$ with genus zero fibres of dimension 1, endowed with pairwise distinct sections $s_i : B \to C_B$ labeled by $\bf{T}$ as above (cf. [@Ma99]). From the known complex results, the combinatorial type of each such curve over a closed point is encoded by a stable tree $\sigma$. Leaves (or else, tails) of such a tree are labeled by the elements of a subset $\bf{T}_\sigma \subseteq \bf{T}$.
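As a standard example (recalled here only for orientation, and not needed in what follows): for $\bf{T}=\{1,2,3,4\}$ one has ${\overline{\mathcal{M}}}_{0,4}\cong\mathbb{P}^{1}$, and the boundary consists of three points, corresponding to the three one-edged stable trees, i.e. to the three ways of splitting the labels into pairs: $\{1,2\}\cup\{3,4\}$, $\{1,3\}\cup\{2,4\}$ and $\{1,4\}\cup\{2,3\}$.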
The spaces $\mathcal{M}_{0,\boldsymbol{\sigma}}$ and their natural closures $\mathcal{M}_{0,\boldsymbol{\sigma}} \subset {\overline{\mathcal{M}}}_{0,\boldsymbol{\sigma}}$ have been considered in [@KoMa] and later in [@Ma99] and [@CoMa]. One point of ${\overline{\mathcal{M}}}_{0,n+1}({\mathbb{C}})$ “is” a stable complex curve of genus 0 with $n + 1$ points labeled by $\{1,2,...,n-2, 0, 1, \infty\}$. These points are distributed along different locally closed strata that are naturally labeled by stable trees. The complex codimension of a stratum is the number of edges of the respective tree. We will refer to this number as the level. Each stratum is reduced and irreducible, and two different strata do not intersect. At level zero, all these pairwise different points live on a fixed ${\mathbb{P}}^1$ endowed with a fixed real structure, with respect to which $(0,1,\infty)$ are real. Equivalently, this projective line is endowed with a fixed homogeneous coordinate system $(y : z)$ such that $0=(0:1)$, $1=(1:1)$, $\infty=(1:0)$.

Definition-construction
-----------------------

Given a stratification $\mathcal{S}=(X^{(\sigma)})_{\sigma \in S}$, one can define an incidence relation on the set $S$ of indices by: $$\tau \dot{\preceq} \sigma\quad \text{if and only if} \quad X^{(\tau)}\cap\overline{X}^{(\sigma)}\neq \emptyset.$$ In other words, in a stratification the incidence relation gives a [*poset*]{}.

#### 

We consider the set of $n+1$ marked points on ${\mathbb{P}}^1$ as a configuration space of $n$ points on the complex plane modulo the group $PSL_2({\mathbb{C}})$. This configuration space $Conf_{n}({\mathbb{C}})$ can itself be considered as the set of monic complex polynomials with $n$ distinct roots, which we denote $\dpol_{n}$. The quotient of this space by $PSL_2({\mathbb{C}})$ allows one to map three of the marked points to $\{0,1,\infty\}$.

#### 

For the construction of this new stratification, it is very important to have a fixed real structure on ${\mathbb{P}}^1$. The main idea of our construction is to take the inverse image, under a polynomial $P$ in $\dpol_n$, of the real and imaginary axes, i.e. $P^{-1}(\Re\cup\imath \Re)$. This inverse image forms a system of oriented curves in the complex plane. To distinguish $P^{-1}(\Re)$ from $P^{-1}(\imath\Re)$, we color in red the pre-image of $\Re$ and in blue the pre-image of $\imath\Re$. The orientation of the curves is inherited from the natural orientation on the real and imaginary axes (see Figure \[F:patdisc\]).

An $n$-drawing $\mathcal{C}_{P}$ of a degree $n$ polynomial $P\in \dpol_n$ is a system of curves properly embedded in the complex plane given by $P^{-1}(\Re\cup\imath \Re)$.

*(Figure \[F:patdisc\]: the complex plane with the real axis (Re) and the imaginary axis (Im); the four open quadrants are labeled $A$, $B$, $C$, $D$.)*

Using the coloring convention of the curves above, we have three families of intersections in a drawing:

1. The roots of the polynomial, given by the intersection points of a red curve with a blue curve.\
2. The critical points $z_0$ of $P$ such that the associated critical value $P(z_0)$ is purely imaginary, i.e. $Re(P(z_0))=0$; these are the intersection points between blue curves.\
3. The critical points $z_0$ of $P$ such that the associated critical value $P(z_0)$ is real, i.e. $Im(P(z_0))=0$; these are the intersection points between red curves.\

We define isotopy classes of drawings.

\[D:equi\] Two $n$-drawings $\mathcal{C}_{P_{1}}$ and $\mathcal{C}_{P_{2}}$ are equivalent if and only if there exists an isotopy $h$ of $\mathbb{C}$ (a continuous family of homeomorphisms of $\mathbb{C}$), such that $h:\mathcal{C}_{P_{1}}\to\mathcal{C}_{P_{2}}$; $h$ preserves the $4n$ asymptotic directions, the colouring and the orientation of the curves.

We denote the equivalence class of isotopic drawings by $[\mathcal{C}_{P}]$. This definition serves to construct the decomposition of the configuration space of marked points on the complex plane. Consider the space of configurations of $n$ marked points on ${\mathbb{C}}$. Let $A_{[\mathcal{C}_{P}]}$ be the set of polynomials with drawings in the isotopy class $[\mathcal{C}_{P}]$. The family $(A_{[\mathcal{C}_{P}]})_{[\mathcal{C}_{P}] \in S}$, where $S$ is the set of isotopy classes of $n$-drawings, partitions the configuration space. The component $A_{[\mathcal{C}_{P}]}$ is called a stratum.

\[D:cod\] Let $P$ be a degree $n$ complex polynomial in $\dpol_n$.

1. A polynomial $P$ with no critical values on $\mathbb{R}\cup \imath \mathbb{R}$ is called generic; such a polynomial is of codimension 0.

2. The [special]{} critical points of $P$ are the critical points $z$ such that $P(z) \in \mathbb{R}\cup \imath \mathbb{R}$. The local index at a special critical point $z$ with $P(z)\in \mathbb{R}$ (resp. $P(z)\in \imath \mathbb{R}$) is equal to $2m-3$, where $m$ is the number of red (or blue) diagonals crossing at the point $z$. The [real codimension]{} of $P$ is the sum of the local indices of all the special critical points.

Figure \[SSSS6\] illustrates a generic (codimension 0) polynomial of degree 6.

![$P(z)= (z+1.54*(0.98+i*0.2))*(z+0.68*(0.98+i*0.2))*(z+0.4*(0.98+i*0.2))*(z-1.54*(0.98+i*0.2))*(z-0.68*(0.98+i*0.2))*(z-0.4*(0.98+i*0.2)).$[]{data-label="SSSS6"}](SSSS6.pdf)

Embedded forests
----------------

Consider the category $\Gamma$ of graphs. Objects of this category are graphs, i.e. objects defined by a set $V$ of elements called vertices, equipped with a symmetric, reflexive relation $E$. For given vertices $x,y\in V$ there is an edge $(x,y)$ if and only if $(x,y)\in E$ and $x\ne y$. We define a morphism $(V,E)\to (W,F)$ as a function $f: V\to W$ such that $(x,y)\in E$ implies $(f(x),f(y))\in F$. We equip this category $\Gamma$ with a coproduct $\sqcup$, being here the disjoint union of graphs. In the category $\Gamma$, we are interested only in those graphs which are trees, i.e. acyclic connected graphs. By convention, a tree has at least one edge. Vertices of valency one are called [*leaves*]{}. We equip the standard definition of trees with a colouring and an orientation on the set of edges. Each edge is mapped to the set $\{R^{+},B^{-},B^{+},R^{-}\}$, where the capital letter $R$ (resp. $B$) corresponds to a red (resp. blue) colouring and the sign “+” or “-” corresponds to the orientation of the edge. We will write $R^{-}$ (or $B^{-}$) if at a given vertex the direction points inwards, i.e. towards the chosen vertex. Similarly, if we have the opposite sign, then the direction points in the opposite direction, i.e. outwards. Concerning the notation, we use the symbol $(i,j)_{R}$ (resp. $(i,j)_{B}$) for a red (blue) diagonal in the forest, connecting the leaf $i$ to the leaf $j$. We define the following object.
An $n$-signature is an object of the category $(\Gamma,\sqcup)$ which is a disjoint union of trees (i.e. a forest) such that:

1. There exist $k$ vertices of valency four. The incident edges are in bijection with the set $\{R^{+},B^{-},B^{+},R^{-}\}$. Orientation and colors alternate.

2. There exist $n-k$ vertices of valency a multiple of four. The incident edges are in bijection with the set $\{R^{+},B^{-},B^{+},R^{-}\}$. Orientation and colors alternate.

3. There exist at most $k-1$ vertices of even valency (at least 4), incident to edges of the same color. Orientations alternate.

4. There exist $4n$ leaves. The edges incident to those leaves are of alternating color and orientation.

Note that for the case of pairwise distinct points on the complex plane we have exactly $n$ vertices incident to four edges of alternating colors/orientations and at most $n-1$ vertices incident to monochromatic edges.

A geometric realisation of a signature is a (union of) 1-dimensional CW-complex(es), contractible and properly embedded in the complex plane. It is important for the construction presented in this note to insist on the difference between the geometric realisation of a given graph and the graph itself, which is a purely [*combinatorial*]{} notion. An embedded forest is a subset of the plane which is the image of a proper embedding of a forest minus its leaves into the plane.

From real curves to forests
---------------------------

An equivalent way of describing the orientation of the edges of the graphs is to consider the complementary parts of the real and imaginary axes in ${\mathbb{C}}$ and to color them. Let us label the quadrants by the colors $A,B,C,D$ as in Figure \[F:patdisc\].

\[T:1\] The set of $n$-signatures is in bijection with the set of isotopy classes of drawings relative to the $4n$ asymptotic directions.

One direction is easy. The other direction is proved as follows. Let $\gamma$ be the embedding in ${\mathbb{C}}$ of a signature $\sigma$, which satisfies the asymptotic directions. Then, we can construct a function $f$ such that:

- $f: {\mathbb{C}}\to {\mathbb{C}}$ is a smooth function,

- $f^{-1}(\Re\cup\imath\Re)=\gamma$,

- the function $f$ is a bijection between each region of $\gamma$ and a quadrant colored $A, B, C, D$ in the complex plane,

- $f$ is injective and regular on the edges of $\gamma$.

Let $J_{0}$ be the standard conformal structure on ${\mathbb{C}}$ and set $J= f^{*}(J_{0})$, so that $f:({\mathbb{C}},J)\to ({\mathbb{C}},J_{0})$ is holomorphic. By Riemann’s mapping theorem there exists a biholomorphic function $\rho: ({\mathbb{C}}, J ) \to ({\mathbb{C}}, J_0)$. The classical theorem of complex analysis by Rouché implies that $f \circ \rho^{-1}$ is a polynomial.

Let $\gamma\in [\mathcal{C}_{P}]$; then $\gamma$ is a forest. Consider an embedded forest $F_R$ (resp. $F_B$) with $2n$ leaves, whose edges are colored in $R$ (resp. $B$) and such that all vertices in the plane are of even valencies. Then, according to theorem 1.3 in [@Er], there exists a harmonic polynomial of degree $n$ whose zero set is equivalent to $F_R$ (resp. $F_B$). The class $[\mathcal{C}_{P}]$ is a forest, since it is easy to see that the level set of the degree $n$ harmonic polynomials $\{(x,y)\in \Re^2: Re P(x,y)=0\}$ (resp. $\{(x,y)\in \Re^2: ImP(x,y)=0\}$) is an embedded forest (the existence of a cycle would contradict the maximum principle).

\[T:Contract\] Let $\sigma$ be an $n$-signature. Then, the set $A_{\sigma}$ is contractible.

We need to use a theorem of J. Cerf, which we recall.
Let $V$ be a compact manifold with boundary, let $W$ be a “target” manifold, and let $Pl(V,W)$ be the space of smooth embeddings of $V$ into $W$.

\[T:C\] Let $G$ be a subgroup of the group of all diffeomorphisms of $V$, where $G$ acts on $Pl(V,W)$ and determines a structure of principal bundle. If $G$ is open, then the canonical map: $$Pl(V,W)\to Pl(V,W)/G$$ is a locally trivial fibration.

For an exposition on principal bundles, we refer to [@Ca]. We recall a lemma (cited as lemma 1 in [@Ce62]), useful for the application of this result to our case.

\[L:1\] Let $(E,B, p)$ be a fiber bundle where $E$ and $B$ are topological spaces and $p: E \to B$ is a continuous map. A sufficient condition to have a locally trivial fibration is that for $x_{0}\in B$ there exists a topological group $G$ operating (on the left) on the total space $E$ and on the base space $B$ so that the following diagram is commutative: $$\begin{tikzcd} G \times E \arrow{r}{\phi} \arrow[swap]{d}{Id \times p} & E \arrow{d}{p} \\ G \times B \arrow{r}{\Phi} & B \end{tikzcd}$$ and the map $g \mapsto g x_{0}$ from $G$ into $B$ has a continuous section in a neighborhood of $x_{0}$.

\[R:1\] Let us comment on this result. Let $\gamma$ be the embedding in ${\mathbb{C}}$ of a signature $\sigma$, i.e. $\gamma \in Pl(\sigma,{\mathbb{C}})$, which satisfies the asymptotic directions. Let $T$ be a closed tubular neighbourhood of $\gamma$ in ${\mathbb{C}}$ and denote by $H$ the group of diffeomorphisms inducing the identity on ${\mathbb{C}}-T$. From \[L:1\], it is shown in [@Ce62] that there exists a neighborhood $\mathcal{V}$ of $\gamma$ in $Pl(\sigma,{\mathbb{C}})$ and a continuous map $s: \mathcal{V} \to H$ such that:

- $s(\gamma)=e;$

- for all $\gamma'\in \mathcal{V}$, $s(\gamma')\gamma'$ is identified with a diffeomorphism of $\sigma$ in the neighborhood of the identity, which is of the form $\gamma g$ where $g$ is a diffeomorphism of $\sigma$;

- for all $\gamma'\in \mathcal{V}$ and any diffeomorphism $g$ of $\sigma$ such that $\gamma' g\in \mathcal{V}$, $s(\gamma' g)=s(\gamma')$.

We will mainly use Cerf’s theorem and Remark \[R:1\] above. The embedded graphs of $\sigma$ in ${\mathbb{C}}$ correspond to a class of drawings. We know that this class is contractible, by theorems 5.3 and 5.4 of Epstein in [@Ep]. For each $\gamma$ in the neighborhood $\mathcal{V}$, we can construct the function $f$, as mentioned above in theorem \[T:1\]. Define $E_{\gamma}$ to be the space of those functions corresponding to the embedded graph $\gamma$ and verifying the properties described in the proof of theorem \[T:1\]. This space is contractible. Define the Cerf fibration $$E_\sigma=\{ (\gamma,f) \mid \gamma \in \sigma,\ f \in E_{\gamma}\}\to \sigma, \qquad (\gamma,f )\mapsto \gamma.$$ By the result of Cerf in Theorem \[T:C\], we have a structure of principal bundle on $E_\sigma \to E_\sigma/G$, where $G$ is the group of orientation-preserving diffeomorphisms of ${\mathbb{C}}$ which are homotopic to the identity. This group is contractible, by the theorem of Earle and Eells [@Ee]. Now, by a theorem of Cerf [@Ce61] the total space $E_{\sigma}$ is contractible. Note that $G$ acts without fixed points on $E_\sigma$ and that orbits are closed. Using the Riemann mapping theorem and the classical Rouché theorem, one sees that each class modulo $G$ has a representative which is a complex polynomial.
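To make the notion of drawing more concrete, here is a minimal computational sketch (an illustration added here, assuming the standard numpy and matplotlib libraries; the polynomial is chosen arbitrarily and is not part of the construction) that plots the drawing $P^{-1}(\Re\cup\imath\Re)$ of a polynomial as the zero level sets of $Re\,P$ and $Im\,P$:

```python
import numpy as np
import matplotlib.pyplot as plt

# Arbitrary degree-3 polynomial with distinct roots, normalized so the roots sum to 0.
roots = np.array([1.0 + 0.5j, -1.2 + 0.3j, 0.2 - 0.8j])
roots -= roots.mean()

# Sample P on a grid of the complex plane.
x, y = np.meshgrid(np.linspace(-3, 3, 800), np.linspace(-3, 3, 800))
z = x + 1j * y
P = np.prod([z - r for r in roots], axis=0)

# Red curves: P^{-1}(R) = {Im P = 0}; blue curves: P^{-1}(iR) = {Re P = 0}.
plt.contour(x, y, P.imag, levels=[0.0], colors="red")
plt.contour(x, y, P.real, levels=[0.0], colors="blue")
plt.scatter(roots.real, roots.imag, color="black", zorder=3)  # roots = red/blue crossings
plt.gca().set_aspect("equal")
plt.title("Drawing of P: inverse image of the real and imaginary axes")
plt.show()
```

The $4d$ leaves of the resulting forest are visible as rays leaving the plotting window in the asymptotic directions described above.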
From real curves to stable trees
================================

Stratification on ${\overline{\mathcal{M}}}_{0,n}$
--------------------------------------------------

Let $\boldsymbol{\tau}$ be the (labeled) graph of a stable connected curve $C_B$ with $n$ marked points. Denote by $\mathcal{M}_{0, {\boldsymbol{\tau} }} \subset {\overline{\mathcal{M}}}_{0,n}$ the moduli submanifold (or generally, substack) parametrizing all curves having the same graph ${\boldsymbol{\tau}}$. Its closure will be denoted ${\overline{\mathcal{M}}}_{0,\boldsymbol{\tau}}$. A morphism $f:\, {\boldsymbol{\tau}} \to {\boldsymbol{\sigma}}$ is determined by its covariant surjective action upon vertices $f_v:\, V_{{\boldsymbol{\tau}}}\to V_{ {\boldsymbol{\sigma}}}$ and contravariant injective actions upon tails and edges: $$f^t:\, \bf{T}_{{\boldsymbol{\sigma}}}\to \bf{T}_{\boldsymbol{\tau}}, \quad f^e:\, E_{\boldsymbol{\sigma}}\to E_{\boldsymbol{\tau}}.$$ Geometrically, $f$ contracts edges from $E_{{\boldsymbol{\tau}}}\setminus f^e(E_{{\boldsymbol{\sigma}}})$ and tails from $\bf{T}_{{\boldsymbol{\tau}}}\setminus f^t(\bf{T}_{{\boldsymbol{\sigma}}})$, compatibly with its action upon vertices. Similarly, given a stratum $D({\boldsymbol{\tau}})$ in $B$, its closure $\overline{D}({\boldsymbol{\tau}})$ is formed from the union of the subschemes $D({\boldsymbol{\sigma}})$ such that ${\boldsymbol{\tau}}>{\boldsymbol{\sigma}}$, where ${\boldsymbol{\tau}}$ and ${\boldsymbol{\sigma}}$ have the same set of tails.

In our case, where the genus of the curve is zero, the condition ${\boldsymbol{\tau}}> {\boldsymbol{\sigma}}$ is uniquely specified by the [*splitting*]{} data, which can be described as a certain type of Whitehead move. The splitting data is as follows. Choose a vertex $v$ of the set of vertices of $\boldsymbol{\tau}$ and a partition of the set of flags incident to $v$: $F_{{\boldsymbol{\tau}}}(v)=F'_{{\boldsymbol{\tau}}}(v)\cup F''_{{\boldsymbol{\tau}}}(v)$, such that both subsets are invariant under the involution $j_{\boldsymbol{\tau}}: F_{{\boldsymbol{\tau}}}\to F_{{\boldsymbol{\tau}}}$. To obtain $\boldsymbol{\sigma}$, replace the vertex $v$ by two vertices $v'$ and $v''$ connected by an edge $e$, where the flags verify $F_{{\boldsymbol{\sigma}}}(v')=F'_{{\boldsymbol{\tau}}}(v)\cup \{e'\}$, $F_{{\boldsymbol{\sigma}}}(v'')=F''_{{\boldsymbol{\tau}}}(v)\cup \{e''\}$, where $e',e''$ are the two halves of the edge $e$. The remaining vertices, flags and incidence relations stay the same for ${\boldsymbol{\tau}}$ and $ {\boldsymbol{\sigma}}$. For more details, see [@Ma99] ch.III $\S$ 2.7, p.90.

Finally, the scheme ${\overline{\mathcal{M}}}_{0,n}$ is decomposed into pairwise disjoint locally closed strata, indexed by the isomorphism classes of $n$-graphs. As described in [@Ma99] ch.III $\S 3$, the stratification of the scheme is given by trees. If the set $\bf{T}$ is finite, then $\mathcal{T}((\bf{T}))$ is the set of isomorphism classes of trees ${\boldsymbol{\tau}}$ whose external edges are labeled by the elements of $\bf{T}$. The set of trees is graded by the number of edges: $$\mathcal{T}((\bf{T}))=\bigcup_{i=0}^{|T|-3}\mathcal{T}_{i}(({\bf T})),$$ where $\mathcal{T}_{i}((\bf{T}))$ is the set of trees with $i$ edges. The set $\mathcal{T}_{0}((\bf{T}))$ consists of the tree with one vertex, whose set of flags equals $\bf{T}$.

Real stratification of the parametrizing space
----------------------------------------------

Compared to the previous exposition, some additional material is necessary to define the real stratification of ${\overline{\mathcal{M}}}_{0,n}$.
Bridging strata between each other requires the introduction of Whitehead moves. Let us discuss the case of a forest of one color. This forest has $2d$ leaves and vertices of even valency. Suppose that in this forest there exist $m\geq2$ one-edged trees, occurring as part of the boundary of a given cell. Add one vertex (different from the leaves) to each of these trees, and add a polygon joining these vertices. Contract this polygon to a point. This contracting morphism gives a new tree, locally star-like. We call this a [*complete contracting Whitehead move*]{}. This operation also holds in the case of a tree where two vertices are connected to each other by an edge. In this situation, the edge can be contracted, leaving only one vertex, exactly as in the contracting morphism discussed above. We call this a [*partial contracting Whitehead move*]{}.

The opposite of this operation exists too. It is called a [*smoothing operation*]{}. Geometrically speaking, a smoothing is applied to the vertex (not a leaf) of a tree having $m\geq2$ edges. This smoothing is obtained by ungluing those edges, giving $m$ one-edged trees. This is called a [*complete smoothing Whitehead move*]{}. In this smoothing process, we also include the splitting operation above, i.e. for a given tree with $m\geq3$ edges, replace one vertex by two vertices connected by an edge, such that the valency of the new vertices remains even, and that the condition for flags holds (where ”flags” are replaced here by incident half-edges). This is called the [*partial smoothing Whitehead move*]{}.

*(Two figures: pairs of local diagrams of drawings related by contracting/smoothing Whitehead moves, each pair connected by an arrow.)*

We now introduce the deformation retract lemma.

\[L:DeR\] Consider a signature $\sigma$. Suppose that in $\sigma$ there exist edges of $m \geq 2$ trees, occurring in the boundary of a cell of the plane ${\mathbb{C}}$. Applying a complete contracting Whitehead move to these edges corresponds to a deformation retract of $\sigma$ onto a new signature $\tau$.

To prove this lemma, we essentially need some notions from Morse theory (see [@Mi1]). From the Morse lemma we know the following. Let $p$ be a non-degenerate critical point of $f : M \to \Re$. Then there exists a chart $(x_1, x_2, \dots, x_n)$ in a neighborhood $U$ of $p$ such that $$f(x)=f(p)-x_{1}^{2}-\cdots -x_{\alpha}^{2}+x_{\alpha +1}^{2}+\cdots +x_{n}^{2}$$ throughout $U$. Here $\alpha$ is equal to the index of $f$ at $p$. A corollary of the Morse lemma is that non-degenerate critical points are isolated. A smooth real-valued function on a manifold $M$ is a Morse function if it has no degenerate critical points. Let $M^a=f^{-1}(-\infty, a]=\{x\in M: f(x)\leq a\}$. We recall two important results.

\[Th:M1\] Let $M$ be a differentiable manifold. Suppose $f$ is a smooth real-valued function on $M$, $a < b$, $f^{-1}[a, b]$ is compact, and there are no critical values between $a$ and $b$. Then $M^a$ is diffeomorphic to $M^b$, and $M^b$ deformation retracts onto $M^a$.

\[Th:M2\] Suppose $f$ is a smooth real-valued function on $M$ and $p$ is a non-degenerate critical point of $f$ of index $k$, and that $f(p) = q$. Suppose $f^{-1}[q - \epsilon, q + \epsilon]$ is compact and contains no critical points besides $p$. Then $M^{q+\epsilon}$ is homotopy equivalent to $M^{q-\epsilon}$ with a $k$-cell attached.

We are now able to prove the lemma, i.e. to show that a Whitehead contraction is a deformation retract.
Consider the simplest case, i.e. two curves of the same color lying in a cell of ${\mathbb{C}}$ which intersect after the contracting Whitehead move. Applying the Morse lemma, we take a coordinate system $x,y$ in a neighborhood $U$ of a critical point $p$, such that $f(p)=c$ and so that the identity $$f=c-x^2+y^2$$ holds throughout $U$, and the critical point $p$ has coordinates $x(p)=y(p)=0$. Choose $\epsilon>0$ sufficiently small so that:

1. the region $f^{-1}[c-\epsilon, c+\epsilon]$ is compact and contains no critical point other than $p$;

2. the image of $U$ under the diffeomorphic embedding $(x,y): U\to \Re^2$ contains the closed ball $\{(x,y): x^2+y^2\leq 2\epsilon\}$.

Coordinate lines are $x=0, y=0$; the hyperbolas represent $f^{-1}(c-\epsilon)$ and $f^{-1}(c+\epsilon)$. A slight modification of the proof of theorem \[Th:M2\] in [@Mi1], using the Morse function defined above, shows that the set $M^c$ is a deformation retract of $M^{c+\epsilon}$ (respectively, a deformation retract of $M^{c-\epsilon}$). The generalisation to more than two intersecting curves follows from a modification of the function $f$ to degrees higher than 2.

Using the two previous results of Morse theory, i.e. Theorems \[Th:M1\] and \[Th:M2\], and the fact that there exists a Morse function on any differentiable manifold, one can prove that any differentiable manifold is a CW complex with an $n$-cell for each critical point of index $n$.

*(Figure: a sequence of three local diagrams related by Whitehead moves.)*

\[L:Mil\] Consider a tree (or a subtree of a tree) containing a vertex of valency greater than or equal to 8, whose incident edges have alternating colors. The smoothing Whitehead move applied to this vertex forms a Milnor fiber. This Milnor fiber is homotopy equivalent to a circle.

To prove this let us note the following. Milnor introduced the Milnor fibration for any holomorphic germ $(X_0,0)$ in ${\mathbb{C}}^N$ and proved that the Milnor fiber is always a CW-complex of dimension at most $(n-1)$ (see [@Mi2]). This tool can be used when $X_0$ is smoothable. Let us recall this notion. Consider an open ball $B$ in ${\mathbb{C}}^N$ with centre zero, a small disk ${\mathbb{D}}$ in ${\mathbb{C}}$ with centre zero, and a closed subspace $X$ of $B\times {\mathbb{D}}$. The flat holomorphic mapping $f:X \to {\mathbb{D}}$ (which is the restriction to $X$ of the projection $p: B\times {\mathbb{D}}\to {\mathbb{D}}$) is a smoothing of $X_0$ when the pre-image of 0 under $f$ is the germ $(X_0,0)$ and the pre-image $f^{-1}(t)$ is smooth for $t \in {\mathbb{D}}$ different from zero. In the case in which $f$ has an isolated singularity at the origin, Milnor proved that the Milnor fiber is homotopy equivalent to a bouquet of $(n-1)$-spheres.
The number of spheres is equal to the Milnor number, which is given by $dim(K[[x_1,\dots,x_N]]/jacob(f)),$ where $jacob(f)$ is the ideal generated by the partials of $f$. This case appears precisely when the polynomial in $\dpol_n$ has a root of multiplicity 2, i.e. when we have a pair of colliding points in the configuration space. This corresponds to a germ of the type $(z^2,0)$. Now, here $N=1$ and the Milnor number is 1. Applying Milnor’s result, the fiber, for $t\neq 0$, is equivalent to a circle. Remark that, for this lemma, the Milnor fiber is locally illustrated by the drawings in Figure \[F:d=2\].

Some properties of the stratification
-------------------------------------

Let $P\in \dpol_{n}$, explicitly: $P(z)=z^{n}+a_{n-1} z^{n-1}+\ldots + a_0$. We denote the critical points by $\ur=(r_1,\ldots,r_{n-1})$ ($P'(r_i)=0$) and the critical values by $\uv=(v_1,\ldots ,v_{n-1})$, so that for any $i$ there is a $j$ such that $P(r_i)=v_j$. There are no constraints on the $r_i$’s and $v_j$’s other than $v_j\ne 0$ (for all $j$) because the roots of $P$ are assumed to be simple. Let $\mathcal{V}_n$ denote the (affine) space of the $\uv$’s and recall that there is a ramified cover $$\pi_w :\dpol_{n} \rightarrow \mathcal{V}_n$$ of degree $n^{n-2}$ [@Myc69]. Let $\mathbb{C}_r^{n-1}$ denote the affine space of the critical points $\ur$ and let $p : \mathbb{C}_r\rightarrow \mathbb{C}_v$ denote the natural map given by $P$ ($P(r)=v)$. Denote by $c(v_k)$ (or just $c(k)$) the number of distinct critical points above $v_k$ and by $\um=(m_1,\ldots,m_{c(k)})$ their multiplicities ($m_i>1$). Generically, above a point $v$ there is just one critical point of multiplicity 2 and $n-2$ simple non-critical points, i.e. $c(v)=1$, $m_1=2$. We call such a point a [*simple*]{} critical point.

\[L:smo\] Suppose $P$ is a polynomial with signature $\tau$ and a critical point of multiplicity $m$ at $z_0$, and let $\sigma$ be the signature obtained by smoothing $\tau$. Then $A_\tau \subset \overline{A}_\sigma$ and there exists a neighborhood $U$ of $P$ in $A_\sigma$ such that $U\cong V \times (\mathbb{D})^m$, where $V$ is a neighborhood of $p\in A_\sigma$ and the polydisk $(\mathbb{D})^m$ corresponds to a canonical local perturbation of $(z - z_0)^m$.

Let us work in the fiber given by $\pi_w$. Now, return to a given polynomial $P$ and its critical values $\uv$. It is [*generic*]{} if $v_j^2\notin \mathbb{R}$ and $v_j^2\notin \imath\mathbb{R}$ for all $j$. Assume it is [*not*]{} generic and let $v_0$ ($=v_j$ for some $j$) be such that $v_0^2\in \mathbb{R}$; purely for notational simplicity, we suppose that $v_0$ is real (rather than pure imaginary). Let $c=c(v_0)$ be the number of distinct critical points above $v_0$, with multiplicities $\um=(m_1,\ldots,m_{c})$, and denote these points $(r_1, \ldots , r_c)$ (here the $r_i$’s are distinct). So, choose one of the $r_i$ ($1\le i\le c$), and call it $r_0$, with multiplicity $m=m_0$. First, this is shown via holomorphic surgery and we cannot explicitly write down a universal family in terms of the coefficients $a_i$ of the polynomial $P$. However, we do know that there exists such a local universal family and that it is biholomorphically equivalent to the one obtained by completing the polynomial $(z-r_0)^m$ into the generic polynomial in $(z-r_0)$ of degree $m$ near $r_0$ (for instance, see [@V90], chapter 2, paragraph 3 for more details).
In other words, let $\ueps=(\eps_1,\ldots, \eps_m)$ be complex numbers with $\vert \eps_i \vert <\eps<\!< 1$ for all $i$ (with some $\eps >0$) and let $$p_\eps(z) = (z-r_0)^m+\eps_1(z-r_0)^{m-1}+\ldots+\eps_m \ .$$ Then, there is a biholomorphic map between the set of the $\ueps$’s (i.e. $(0,\eps)^m$), or equivalently the family $p_\eps$, and a universal family $P_\eps(z)$ with $P_0=P$ (where $0=(0,\ldots, 0)$). By varying $\ueps$, let the critical point vary into $r_0(\eps)$, leading to a deformation $v_0(\eps)$. The function $v_0(\eps)$ has nonzero derivative at $\eps=0$ (otherwise, the polynomial would have multiple roots, which we exclude). By the implicit function theorem, $v_0(\eps)$ covers a neighborhood of $v_0$ in the $v$-plane, and a Whitehead move is nothing but the result of what happens when $v_0(\eps)$ moves into the upper or lower half-plane.

Strata in the topological closure $\overline{A_{\sigma}}$ are indexed by signatures obtained from a contracting Whitehead move on $\sigma$.

Let us suppose first that we apply to a generic signature $\sigma$ a complete contracting Whitehead move. Let $P$ be a polynomial in $A_{\sigma}$ and $\tilde{P}$ a polynomial in $A_{\tilde{\sigma}}$, where $\sigma\prec \tilde{\sigma}$. We first use the Whitehead move of the first type. A contracting Whitehead move on $\sigma$ corresponds to a path of the critical values of $P$ in the space of critical values. We will show that a contracting Whitehead move on $\sigma$ corresponds to defining a convergent sequence of critical values in $ \mathbb{C}_w^{n-1}$. In $\tilde{\sigma}$ the intersection point of a set of $m$ curves of the same color lies on a critical point $r_i$ of $\tilde{P}$. Suppose that there exists a set $I$ (where $|I|<n$) of such intersections. Therefore, for $i\in I$ we have $\tilde{P}'(r_i)=0$ and $Re(\tilde{P}(r_i))=0$ (resp. $Im(\tilde{P}(r_i))=0$), and the critical value $\tilde{P}(r_i)=v_i \in \imath\mathbb{R}$ (resp. $\tilde{P}(r_i)=v_i \in \mathbb{R}$). So, this indicates a sequence of critical values converging to $(v_1,\ldots,v_{n-1})$, where the $v_i$, $i \in I$, lie on the imaginary (resp. real) axis. Hence it indicates a topological closure.

Consider the case of a partial contracting Whitehead move. In this case, the initial signature $\sigma$ has a set of critical values lying on the real or imaginary axis, and the partial contracting Whitehead operation merges a subset of those critical values together. A partial contracting Whitehead move corresponds to a converging sequence of critical values in the space $\mathbb{C}_v^{n-1}$ and hence it indicates a topological closure of the stratum $A_{\sigma}$.

$\dpol_{n}$ as a covering of a non-compact stratified space {#S:crit}
-----------------------------------------------------------

### Critical values

Consider the space $V_{n}=(\mathbb{C}^{n-1}\setminus 0)/S_{n-1}$, where $S_{n-1}$ is the group of permutations of $n-1$ elements. If $X$ denotes an equivalence class of points in $\mathbb{C}^{n-1}$, we can associate a unique $\sigma$-sequence $(a,b,c,d,e,f,g,h)$ of non-negative integers to $X$, enumerating the number of points in $X$ in the quadrants $A,B,C,D$ and on the semi-axes.
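For instance (an illustrative example added here, with the entries ordered as in the formula \[E:poly\] below, i.e. quadrants $A,B,C,D$ followed by the semi-axes $\mathbb{R}^{+},\mathbb{R}^{-},\imath\mathbb{R}^{+},\imath\mathbb{R}^{-}$): for $n=4$, a class $X$ of three critical values with one value in the open quadrant $A$ and two values in $B$ has $\sigma$-sequence $(1,2,0,0,0,0,0,0)$, while a class with one value in $A$, one on the positive real semi-axis and one on the negative imaginary semi-axis has $\sigma$-sequence $(1,0,0,0,1,0,0,1)$.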
The set of points $X$ in $V_{n}$ having a given $\sigma$-sequence $(a,b,c,d,e,f,g,h)$ forms a polygonal cell in $V_{n}$ isomorphic to $$\label{E:poly} A^{a} /S_{a} \times B^{b}/ S_{b}\times C^{c}/ S_{c} \times D^{d}/ S_{d} \times (\mathbb{R}^{+})^{e} /S_{e} \times (\mathbb{R}^{-})^{f}/ S_{f}\times (\imath\mathbb{R}^{+})^{g}/ S_{g} \times (\imath\mathbb{R}^{-})^{h}/ S_{h}.$$ The real dimension of this cell is equal to $2(a+b+c+d)+ (e+f+g+h)$. The cells are disjoint and thus form a stratification of $V_{n}$.

A subset $V$ inside $\mathbb{C}^{n-1}/S_{n-1}$ for $n>2$ is said to be a $\mathbf{non\text{-}compact\ stratification}$ if it is equipped with a stratification by a finite number of open cells of varying dimensions having the following properties:

- the relative closure of a $k$-dimensional cell of $V$ is a union of cells in the stratification;

- the relative closure of a $k$-dimensional cell of $V$ is a “semi-closed” polytope, i.e. the union of the interior of a closed polytope in $\mathbb{C}^{n-1}/S_{n-1}$ with a subset of its faces.

Let $n>2$, and let $\mathcal{V}_{n}$ denote the space $V_{n}$ equipped with the stratification by $\sigma$-sequences. Then $\mathcal{V}_{n}$ is a non-compact stratification.

The closure of the region of $\mathcal{V}_{n}$ described by \[E:poly\] is given by $$\overline{A}^{a} /S_{a} \times \overline{ B}^{b}/ S_{b}\times \overline{ C}^{c}/ S_{c} \times \overline{D}^{d}/ S_{d} \times (\mathbb{R}^{+})^{e} /S_{e} \times (\mathbb{R}^{-})^{f}/ S_{f}\times (\imath\mathbb{R}^{+})^{g}/ S_{g} \times (\imath\mathbb{R}^{-})^{h}/ S_{h},$$ where $\overline{A}$ denotes the closure in $V_1=\mathbb{C}\setminus 0$ of the quadrant $A$, namely the union of $A$ with $\mathbb{R}^{+}$ and $\imath\mathbb{R}^{+}$, and similarly for $\overline{B}, \overline{C}, \overline{D}$. The direct product of semi-closed polytopes is again a semi-closed polytope, as is the quotient of a semi-closed polytope by a sub-group of its symmetry group.

\[polyhedral\_complex\] The map $\nu$ that sends a polynomial in $\dpol_{n}$ to its critical values realizes $\dpol_{n}$ as a finite ramified cover of $\mathcal{V}_{n}$.

The image of $\nu$ contains only unordered tuples of $n-1$ complex numbers different from zero, since a polynomial can have 0 as a critical value if and only if it has multiple roots. Therefore, the image of $\nu$ lies in $\mathcal{V}_{n}$. To show that $\nu$ is surjective, we use a theorem of R. Thom [@Th63] (1963), stating that given $n-1$ complex critical values, there exists a complex polynomial $P$ of degree $n$ such that $P(r_{i})=v_{i}$ for $1\leq i\leq n-1$, where the $r_{i}$ are the critical points of $P$, and $P(0)=0$. To find a Tschirnhausen representative polynomial of $\dpol_{n}$ having the same property it suffices to take $P(z-\frac{a_{n-1}}{n})$, where $a_{n-1}$ is the coefficient of $z^{n-1}$ in $P$. By a result of J. Mycielski [@Myc69], the map $\nu$ is a finite ramified cover, of degree $\frac{n^{n-1}}{n}=n^{n-2}$.

The cases $n=2,3,4$
-------------------

The exact nature of the ramified cover $\dpol_{n} \rightarrow \mathcal{V}_{n}$ is complicated and interesting, especially in terms of describing the ramification using the signatures. In this section, we work out full details in the small dimensional cases, and for generic strata.

Let $n=2$. The spaces $\dpol_{2}$ and $\mathcal{V}_{2}$ are one-dimensional. The space $\mathcal{V}_{2}$ is $\mathbb{C}\setminus 0$ equipped with the stratification given by the four quadrants $A,B,C,D$ and the four semi-axes.
The only Tschirnhausen polynomial of degree 2 having a given critical value $v$ is $z^2 +v$, therefore the covering map $\nu$ is unramified of degree 1, an isomorphism. The four signatures corresponding to the strata of real dimension 2 and the four corresponding to the one-dimensional strata are illustrated in figure \[F:d=2\].

[Figure \[F:d=2\]: the signatures for $n=2$; the four generic ones are labelled $1$–$4$.]

Let $n=3$. In this case the covering map $\nu$ has degree 3: explicitly, if $P(z)=z^3+az+b$ has critical values $v_1$ and $v_2$, then so do the polynomials $P(\zeta z)$ and $P(\zeta^2 z)$, where $\zeta^3 =1$. The 10 open strata of real dimension 4 in $\mathcal{V}_3$ are given by: $$A\times A/S_{2},\quad B\times B/S_{2},\quad C\times C/S_{2},\quad D\times D/S_{2},$$ $$A\times B,\quad A\times C,\quad A\times D, \quad B\times C,\quad B\times D, \quad C\times D.$$ Ramification occurs only above the first four, in fact exactly when the two critical values are equal, i.e. $P(z)=z^3+b$. Thus above each of the first four cells there is only one stratum, corresponding to the four rotations of the leftmost signature (see figure \[F:diagrams3\]).

[Figure \[F:diagrams3\]: the signatures lying above the generic strata of $\mathcal{V}_3$.]
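The degree-3 statement can also be checked numerically. The following sketch (illustrative only; it uses the elementary relations $b=(v_1+v_2)/2$ and $(v_1-v_2)^2=-16a^3/27$, which follow by writing the critical points of $z^3+az+b$ as $\pm s$ with $s^2=-a/3$) recovers, for a generic pair of critical values $\{v_1,v_2\}$, the three Tschirnhausen cubics in the fibre of $\nu$.

```python
import numpy as np

# Illustrative numerical check of the degree-3 fibre of nu for n = 3.
# For P(z) = z^3 + a z + b:  b = (v1+v2)/2  and  (v1-v2)^2 = -16 a^3 / 27,
# so the three cube roots of a^3 give the three polynomials in the fibre.
def fibre(v1, v2):
    b = (v1 + v2) / 2
    a_cubed = -27 * (v1 - v2) ** 2 / 16
    root = a_cubed ** (1 / 3)
    return [(root * np.exp(2j * np.pi * k / 3), b) for k in range(3)]

def critical_values(a, b):
    crit_pts = np.roots([3, 0, a])                 # roots of P'(z) = 3 z^2 + a
    vals = [complex(np.round(r ** 3 + a * r + b, 6)) for r in crit_pts]
    return sorted(vals, key=lambda z: (z.real, z.imag))

v1, v2 = 1 + 2j, -0.5 + 1j
for a, b in fibre(v1, v2):
    print(critical_values(a, b))   # each of the three polynomials reproduces {v1, v2}
```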
In contrast, there are three distinct signatures above each of the remaining 6 strata. The six signatures in the middle of figure \[F:diagrams3\] form two orbits under the $\frac{2\pi}{3}$ rotation, which lie above the cells $A\times C$ and $B\times D$, whereas the twelve signatures on the right form four orbits lying over the strata $A\times B$, $B\times C$, $C\times D$, $D\times A$.

Let $n=4$. The degree of the covering map $\nu$ is 16. There are 20 generic strata in $\mathcal{V}_4$. Four generic strata correspond to taking the three critical values in three different quadrants. There is no ramification above these strata; each of them has 16 distinct strata in its preimage under $\nu$, corresponding to the four rotations each of the fifth, sixth, eighth and eleventh signatures in the figure below. Four more cells of $\mathcal{V}_4$ correspond to taking the three critical values in the same quadrant. Only one stratum of $\dpol_{4}$ lies above each of these cells, namely the first signature in the figure below, with ramification of order sixteen. The remaining twelve cells correspond to two critical values in one quadrant and the third in a different quadrant. When the quadrants are adjacent, only six strata lie above the corresponding regions of $\mathcal{V}_4$. For example, over the region $A,A,B$ lie the two rotations of the second signature in the figure below, each with ramification of order 2, and the four rotations of the tenth signature, each with ramification of order 3. The situation is analogous when the quadrants are opposed.
For example, over the region $A,A,C$ there are the two rotations of the third signature in the figure below, each with ramification of order 2, and the four rotations of the ninth signature, each with ramification of order 3. (These counts are tallied in the sketch following the list below.)

- The single class of size 4: [signature diagram]

- All 3 classes of size 8: [signature diagrams]
- All 7 classes of size 16: [signature diagrams]
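As a consistency check on the counts above (an illustrative sketch; it merely re-does the arithmetic of this subsection, under the reading that the listed classes together exhaust the generic signatures for $n=4$), the class sizes account for exactly the strata obtained by counting cell by cell in $\mathcal{V}_4$.

```python
# Consistency check for n = 4 (illustrative only).
# Counting strata of dpol_4 cell by cell in V_4:
#   4 cells with all three critical values in one quadrant -> 1 stratum each,
#   4 cells with the values in three different quadrants   -> 16 strata each,
#  12 cells with two values in one quadrant, one elsewhere ->  6 strata each.
by_cells = 4 * 1 + 4 * 16 + 12 * 6

# Counting via the classes in the list above:
#   1 class of size 4, 3 classes of size 8, 7 classes of size 16.
by_classes = 1 * 4 + 3 * 8 + 7 * 16

print(by_cells, by_classes)    # 140 140
assert by_cells == by_classes
```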
Combinatorial closure of a signature
------------------------------------

There is an incidence relation between strata, which leads to the notions of poset and chain.
A poset (partially ordered set) is a set $P$ together with a binary relation $\prec$ which is transitive ($x \prec y$ and $y \prec z$ imply $x \prec z$) and irreflexive; in particular, $x \prec y$ and $y \prec x$ cannot both hold. The elements $x$ and $y$ are comparable if $x \prec y$ or $y \prec x$ holds. A chain in a poset $P$ is a subset $C \subseteq P$ such that any two distinct elements of $C$ are comparable.

We write $\sigma \prec \tau$ when $\tau$ can be obtained from $\sigma$ by a sequence of contracting Whitehead moves; the signature $\tau$ is then incident to $\sigma$. We call the set $\overline{\sigma}=\{\tau : \sigma \prec \tau\}$ the combinatorial closure of $\sigma$, and we define $$A_{\overline{\sigma}}=\cup_{\sigma \prec \tau} A_{\tau}.$$

Let $S$ be the set of signatures. An upper bound in a subset of $(S,\prec)$ is a signature $\sigma$ such that there exists no other signature $\tau$ in this subset verifying $\sigma\prec \tau$.

\[L:123\] Let $\tau$ be a signature with a given intersection of two red (or blue) diagonals. There are exactly two ways to smooth the intersection, and they give two non-isotopic signatures $\sigma_1$ and $\sigma_2$ such that $\tau$ is incident to both $\sigma_1$ and $\sigma_2$.

The proof follows from Morse theory. As in the proof of lemma \[L:DeR\], let $x,y$ be coordinates in a neighbourhood $U$ of $p$ in which the identity $$f=f(p)-x^2+y^2$$ holds throughout $U$; the critical point $p$ has coordinates $x(p)=y(p)=0$, with $f(p)=c$. Choose $\epsilon>0$ sufficiently small so that the region $f^{-1}[c-\epsilon, c+\epsilon]$ is compact and contains no critical point other than $p$. Then, at the levels $c-\epsilon$ and $c+\epsilon$, we obtain two pairs of curves. They give two signatures which are non-isotopic.

Similarly, we have the following for contracting Whitehead moves.

\[L:countWhitehead\] Let $\tau_1$ and $\tau_2$ be obtained from a signature $\sigma$ by two different complete contracting Whitehead moves. Then $\tau_1$ and $\tau_2$ are different.

Let $\tau_1$ and $\tau_2$ be obtained from a signature $\sigma$ by two different contracting Whitehead moves. Two contracting Whitehead moves are different if they operate on different sets of edges. Let us consider $m_1$ (resp. $m_2$) edges of $\sigma$ lying in the boundary of a cell of ${\mathbb{C}}$, and suppose that their set of leaves is $\{i_1,\ldots,i_{2m_{1}}\}$ (resp. $\{j_1,\ldots,j_{2m_{2}}\}$). A contracting Whitehead move glues those edges at one point. This gives a (star-shaped) tree with leaves in the set $\{i_1,\ldots,i_{2m_{1}}\}$ (resp. $\{j_1,\ldots,j_{2m_{2}}\}$), and hence the signature $\tau_1$ (resp. $\tau_2$). Clearly $\tau_1$ cannot be isotopic to $\tau_2$ relative to the leaves, since $\{i_1,\ldots,i_{2m_{1}}\}$ is different from $\{j_1,\ldots,j_{2m_{2}}\}$.

\[L:tree\] Let $\tau$ be a non-generic signature. Consider in $\tau$ a vertex incident to $m>2$ red (or blue) curves. The signatures obtained from $\tau$ by one single smoothing Whitehead move are all distinct.

Locally, around the intersection, the graph resembles a star-shaped tree with one inner node and $2m$ leaves, which we can number $1,\ldots,2m$. After one partial Whitehead move the graph is still connected, but the set of leaves is split into two disjoint sets $U_1\sqcup U_2=\{1,\ldots,2m\}$. It is clear that for different splittings the graphs are not isotopic. After a complete smoothing Whitehead move, the graph is disconnected: there is a star-like tree and a tree with at least one edge.
There are $2m$ possible ways of creating such graphs if there are no symmetries in the graph, and $m$ possibilities if there are symmetries. These graphs are non-isotopic with respect to the asymptotic directions of the $2m$ leaves.

\[L:123\] Let $\tau$ be a signature of codimension $k$ with a given red (or blue) intersection of $m>2$ curves. Then there exist $Cat(m)$ distinct signatures, obtained from $\tau$ by complete smoothing Whitehead moves, which are smoothings of $\tau$ of codimension $k-(2m-3)$, where $Cat(m)$ is the $m$-th Catalan number.

Draw a $2m$-gon in the neighborhood of the critical point $p$, whose vertices are the endpoints of the $m$ curves. Since we only consider the combinatorics, we may assume that the $2m$-gon is regular. Ungluing those $m$ curves gives $m$ disjoint curves in the regular $2m$-gon, and the number of such configurations is the $m$-th Catalan number (see [@Sta01]).

Topological closure of a stratum
--------------------------------

\[L:ball\] Let $\tau$ be a non-generic signature and let $A_\tau$ be the corresponding stratum of $\dpol_{d}$. Let $P_0\in A_\tau$. Then for every generic signature $\sigma$ such that $\tau$ is incident to $\sigma$, every $2d$-dimensional open ball $B_{\epsilon}$ containing $P_0$ intersects the generic stratum $A_\sigma$.

Suppose that there exist a generic signature $\sigma$ and an open ball $B_{\epsilon}$ containing $P_0$ which does not intersect the generic stratum $A_\sigma$. Then there is no path from a point $y \in A_{\tau}$ to a point $x\in A_{\sigma}$, so $A_{\sigma}$ and $A_{\tau}$ are disjoint. In particular, $A_{\tau}$ is not contained in the closure of $A_{\sigma}$, and hence $\tau$ is not incident to $\sigma$.

Let $\sigma$ be a generic signature and let $\tau\ne\sigma$. Then $\tau$ is incident to $\sigma$ if and only if the following holds: for every pair of points $x,y\in \dpol_{d}$ with $x\in A_\sigma$ and $y\in A_\tau$, there exists a continuous path $\gamma:[0,1]\rightarrow \dpol_d$ such that $\gamma(0)=x$, $\gamma(1)=y$ and $\gamma(t)\in A_\sigma$ for all $t\in [0,1)$. Any other such path $\rho$ from a point $x'\in A_\sigma$ to a point $y'\in A_\tau$ is homotopic to $\gamma$.

The result follows from lemma \[L:ball\]. Indeed, if $\tau$ is incident to $\sigma$ then a ball around $y\in A_\tau$ necessarily intersects $A_\sigma$, and therefore there is a path from $y$ to a point $x'\in A_\sigma$. Composing this with a path from $x'$ to $x$ in $A_\sigma$ we obtain $\gamma$. Conversely, if there is a path $\gamma$ from $x\in A_\sigma$ to $y\in A_\tau$, then it is impossible to have an open ball containing $y$ that does not intersect $A_\sigma$.

We return to the original polynomials in order to define Whitehead moves in an analytic and then topological fashion.

\[L:ret\] A contracting Whitehead move which takes a signature $\sigma$ to an incident signature $\tau$ induces a retraction of the stratum $A_{\sigma}$ onto the boundary stratum $A_{\tau}$.

From the above discussion it follows that we can construct the retraction locally near $A_{\tau}$; we then extend it to the whole of $A_{\sigma}$ using the contractibility of this stratum.

\[D:sigmabar\] If $\sigma$ is a signature, the closure $\overline{A}_\sigma$ of the stratum $A_\sigma$ in $\dpol_n$ is given by $$\overline{A}_\sigma=\cup_{\tau\in \overline{\sigma}} A_\tau,$$ where the [*boundary*]{} of $\sigma$, denoted $\overline{\sigma}$, consists of all incident signatures $\tau$.

One direction is easy.
Indeed, if $x\in A_{\tau}$, where $\tau$ is incident to $\sigma$, then every $2d$-dimensional open set containing $x$ must intersect $A_{\sigma}$, so $\cup_{\tau\in \overline{\sigma}} A_\tau\subset \overline{A}_\sigma$. For the other direction, let $x \in \overline{A}_\sigma\setminus A_{\sigma}$ and let $\tau$ be the signature of $x$. We first note that the dimension of $\tau$ cannot be equal to the dimension of $\sigma$: if they were equal, $A_{\tau}$ would be an open stratum disjoint from $A_{\sigma}$. Therefore the dimension of $\tau$ is less than the dimension of $\sigma$. Let $U$ be any small open neighborhood of $x$, let $y\in U\cap A_\sigma$, and let $\gamma$ be a path from $y$ to $x$ such that $\gamma \setminus \{x\}\subset A_{\sigma}$. Then every point $z \in \gamma\setminus \{x\}$ has the same signature $\sigma$. Using theorem \[polyhedral\_complex\], any path from the interior to any point not in $A_{\sigma}$ must pass through the boundary of the polytope. Therefore any sequence of Whitehead moves and smoothings from $\sigma$ to $\tau$ must begin with Whitehead moves bringing $\sigma$ to a signature that is incident to $\sigma$. But $x$ is the first point on $\gamma$ where the signature changes, and therefore $\tau$ must be incident to $\sigma$.

Let $P^\sigma_r$ (resp. $P^\sigma_b$) be a red (resp. blue) forest in the signature $\sigma$; let $P^{\tau}_r$ (resp. $P^{\tau}_b$) be a red (resp. blue) forest in the signature $\tau$. The set of signatures incident to a generic signature $\sigma$ is equal to the set of signatures $\tau$ such that

\(i) $P^\sigma_r\subset P^{\tau}_r$ and $P^\sigma_b\subset P^{\tau}_b$ (with at least one of the containments being strict),

\(ii) the edges $(i,j)$ in $\tau$ corresponding to the pairs in $P^\sigma_r$ (resp. the edges $(k,l)$ corresponding to the pairs in $P^\sigma_b$) intersect only at isolated points (no shared segments).

Performing a Whitehead move on a signature can never eliminate an edge but can only add edges, which shows that if $\tau$ is incident to $\sigma$ then (i) holds. Furthermore, Whitehead moves cause the arcs of $\sigma$ to cross only at isolated points. Conversely, suppose that (i) and (ii) hold for $\tau$. Consider the red forest of $\tau$. We claim that the set $P^\sigma_r$ provides a recipe for smoothing the red forest of $\tau$ to obtain the red forest of $\sigma$ (and subsequently the blue forest using $P^\sigma_b$), as follows. Let us use the term “short edges” for edges joining two neighboring (even for red, odd for blue) leaves. We start with the short edges, joining pairs of the form $(i,i+2)$ in $P^\sigma_r$ (with labels taken $\mod 4n$). Because the edges cross only at isolated points, smoothing the edge $(i,i+2)$ by separating it off from the rest of the red forest of $\tau$ eliminates only edges from $i$ or $i+2$ to the other points; paths between all the other pairs of $P^{\tau}_r$ remain. We then erase the edges $(i,i+2)$ from the signature and treat the “new” short edges (those from $P^\sigma_r$ joining points $j$, $j+6$), separating them from the rest of the tree; we then erase those and continue in the same way. The final result reduces the red forest of $\tau$ entirely to a set of $d$ disjoint edges, all of which belong to $P^\sigma_r$, so this set is equal to $P^\sigma_r$. This proves that any signature $\tau$ satisfying (i) and (ii) can be smoothed to $\sigma$.
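The Catalan-number count in the smoothing lemma above is easy to check by brute force: the complete smoothings at an order-$m$ intersection correspond to non-crossing perfect matchings of the $2m$ endpoints on a small circle around the intersection point. The following sketch (illustrative only) enumerates these matchings and compares the count with the Catalan numbers.

```python
from math import comb

# Illustrative sketch: non-crossing perfect matchings of 2m points on a circle,
# corresponding to the complete smoothings of an order-m intersection; their
# number is the m-th Catalan number.
def noncrossing_matchings(points):
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    matchings = []
    for k in range(len(rest)):
        if k % 2 == 0:                         # both sides must have even size
            inside, outside = rest[:k], rest[k + 1:]
            for mi in noncrossing_matchings(inside):
                for mo in noncrossing_matchings(outside):
                    matchings.append([(first, rest[k])] + mi + mo)
    return matchings

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for m in range(1, 7):
    count = len(noncrossing_matchings(list(range(2 * m))))
    assert count == catalan(m)
    print(m, count)                            # 1, 2, 5, 14, 42, 132
```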
\[T:cell\] The set of all $n$-signatures determines a stratification of the configuration space $Conf_n({\mathbb{C}})$ in which a signature of dimension $k$ corresponds to a stratum of dimension $k$ in $Conf_n({\mathbb{C}})$.

A signature corresponds uniquely to a class of drawings, a class of drawings describes a region of polynomials (i.e. the set of polynomials having the same signature), and theorem \[T:Contract\] shows that these regions are contractible. By proposition \[D:sigmabar\], lemma \[L:ball\] and lemma \[L:ret\], we have a filtration verifying the definition of a stratified space.

Let $\sigma$ be a generic signature. Then $\overline{A}_{\sigma}$ is a Goresky–MacPherson stratified space.

We have the filtration $\overline{A}_{\tau_0}\subset \overline{A}_{\tau_{1}}\subset \dots \subset \overline{A}_{\tau_{n-1}} \subset\overline{A}_{\tau_{n}}=\overline{A}_{\sigma}$. Take a point $p$ in $\overline{A}_{\tau_{i}}\setminus \overline{A}_{\tau_{i-1}}$. Then, by lemma \[L:DeR\], $A_\sigma$ retracts onto $\overline{A}_{\tau_{i}}$. Therefore we have a topological cone structure $cone(L)$, where $L$ is a compact Hausdorff space. From the stratification it follows that $L$ is endowed with an $(n-i-1)$-dimensional topological stratification: $$L=L_{n-i-1}\supset \dots \supset L_{1}\supset L_{0} \supset L_{-1}=\emptyset.$$ By lemma \[L:smo\], a neighborhood of $p$ in $\overline{A}_{\tau_{i}}$ behaves as an $i$-dimensional Euclidean space. So there exist a distinguished neighborhood $N$ of $p$ in $\overline{A}_{\sigma}$ and a homeomorphism $$\phi:\Re^i\times cone^{\circ}(L)\to N$$ (where $cone^{\circ}(L_{j})$ denotes the open cone) which takes each $\Re^i\times cone^{\circ}(L_{j})$ homeomorphically to $N\cap \overline{A}_{\tau_{i+j+1}}$.

Multiple intersections of closures of strata
--------------------------------------------

This section investigates a combinatorial method to study multiple intersections between closures of strata indexed by generic signatures. It shows that a finite non-empty intersection of closures of generic strata has a least upper bound. This part is an independent investigation of the properties of the stratification and is not directly used for the construction of the Čech cover.

For simplicity, instead of signatures we will use diagrams, i.e. embedded forests in the complex plane whose leaves lie on the boundary of a disc, at the $4n$-th roots of unity. We label the leaves in the trigonometric sense. We focus essentially on the combinatorics, so from now on we may assume that all the discs have the same canonical radius and that their leaves coincide with the $4n$-th roots of unity. The core idea of the construction is to superimpose generic diagrams $\sigma_0,\ldots,\sigma_p$ so that the leaves (and their labels) coincide. Then we apply contracting Whitehead moves only to those diagonals of the generic signatures which are not identical in the superimposition.

### Admissible superimposition of diagrams

Let $\sigma_0,\dots, \sigma_p$ be generic signatures and let $\Theta$ denote their superimposition. This superimposition is not well-defined, since the diagonals of each signature are given only up to isotopy and can therefore be positioned differently with respect to each other; we will consider only superimpositions having the following properties:

1. all intersections are crossings (not tangencies) of at most two diagonals;
2. the superimposition $\Theta$ cuts the disk into polygonal regions; we require that no region is a bigon;

3. the number of crossings of a given diagonal with the other diagonals of $\Theta$ must be minimal with respect to isotopy (we take representatives of the isotopy classes of arcs).

We call such superimpositions [*admissible*]{}.

Let $\sigma_0,\dots, \sigma_p$ be generic signatures. If there exists no admissible superimposition $\Theta$ in which every diagonal has at most $p+1$ intersections with diagonals of the opposite color, then there is no signature $\tau$ incident to all the $\sigma_{i}$.

The key point is the following. If $\sigma_{0},\ldots,\sigma_{p}$ admit a signature $\tau$ incident to all of them, then $\tau$ has the following property: every segment of the tree $\tau$ (a segment is the part of an edge contained between two vertices, including leaves) belongs to at most one diagonal $(i,j)$ of each $\sigma_i$. Thus, in particular, each segment can be considered as belonging to at most $p+1$ diagonals, one from each $\sigma_i$. Hence, if a red diagonal of $\tau$ crosses $p+2$ or more blue diagonals in the superimposition, there is no single segment of $\tau$ which can belong to all of them, so the red diagonal would necessarily cross more than one blue segment of $\tau$, which is impossible.

We will say that the set of signatures $\sigma_0,\dots, \sigma_p$ is [*compatible*]{} if it admits an admissible superimposition $\Theta$ with the property that no diagonal crosses more than $p+1$ diagonals of the opposite color. Note that a red diagonal can never cross a blue diagonal in more than one point. Compatible sets of generic signatures may potentially have non-empty intersection; we will now show how to give a condition on $\Theta$ to see whether or not this is the case.

### Graph associated to an intersection of generic signatures

Let $\sigma_0,\dots, \sigma_p$ be a set of compatible generic signatures and let $\Theta$ be an admissible superimposition. Then $\Theta$ cuts the disk into polygonal regions. Color a region red (resp. blue) if all its edges are red (resp. blue); the regions bounded by edges of both colors are purple. Construct a graph from $\Theta$ as follows: place a vertex in each red or blue region (but not purple) whose number of sides is greater than three. If two vertices lie in blue (resp. red) polygons that meet at a point, join them with a blue (resp. red) edge (even if this edge crosses purple regions). If two vertices lie in blue (resp. red) polygons that intersect along an edge of the opposite color, connect them with a blue (resp. red) edge. If two vertices lying in the same red (resp. blue) polygon can be connected by a segment inside the polygon which crosses only one purple region, add this segment. Connect each vertex to every terminal vertex lying in the same red (resp. blue) region, and also to any terminal vertex which can be reached by staying within the original red (resp. blue) polygon while crossing through a purple region formed by two blue (resp. red) diagonals emerging from that terminal vertex. Finally, if any vertex of the graph has valency 2, we ignore this vertex and consider the two emerging edges as forming a single edge. We call this graph the graph associated to the superimposition.

The graph associated to $\Theta$ is independent of the actual choice of admissible superimposition $\Theta$.

Let $\Theta$ and $\Theta'$ be admissible superimpositions, and consider a given diagonal $D$.
By the admissibility conditions, the number of crossings of $D$ with diagonals of the other color is the same in $\Theta$ and $\Theta'$, and in fact the set of diagonals of the other color crossed by $D$ is identical in $\Theta$ and $\Theta'$. Therefore the only possible modification of $\Theta$ is to move $D$ across an intersection of two diagonals of the other color, and this does not change the associated graph.

### Compatible signatures

The canonical graph associated to a set of compatible signatures $\sigma_0,\dots,\sigma_p$ is the graph associated to any admissible superimposition $\Theta$.

\[Th:com\] Let $\sigma_0,\dots, \sigma_p$ be generic signatures. Then there exists a signature incident to all the $\sigma_i$ if and only if the set $\sigma_0,\dots, \sigma_p$ is compatible and the associated canonical graph is a signature.

Replacing a blue (or red) polygon by a star-shaped graph as in the construction above involves diagonals of the different $\sigma_i$ which must be identified if we want to construct a common incident signature. The contracting moves may be stronger than strictly necessary (i.e. the signature $\tau$ may not be the signature of maximal dimension in the intersection), but any signature in the intersection must either have the same connected components as $\tau$, i.e. be obtained from $\tau$ by applying only smoothing Whitehead moves which do not increase the number of connected components of $\tau$ (these smoothings are the [*partial smoothings*]{}), or lie in the closures of these. Thus, up to such smoothings, the moves constructing $\tau$ are necessary in order to identify the diagonals of the $\sigma_i$. The contracting moves in the construction of the graph associated to $\Theta$, restricted to just one of the signatures $\sigma_i$, have the effect of making a contracting Whitehead move on the blue (resp. red) curves of this signature. Thus, on each of the signatures, the graph construction reduces to a sequence of contracting Whitehead moves, and the $\sigma_i$ possess a common incident signature if and only if $\tau$ is such a signature.

In the appendix we show the classification of the intersections between polygons which are allowed in order to have a non-empty intersection between closures of strata. At the end we show how to construct the canonical graph from the superimposition of a red and a blue polygon.

Good Čech cover
===============

This section shows how to construct the Čech cover and includes a proof that multiple intersections are contractible. We introduce the notation $\underline{A_{\sigma}}$ for the union of all strata $A_{\tau}$ such that $\tau\prec \sigma$ and $codim(\tau)<codim(\sigma)$, i.e. $\underline{A_{\sigma}}=\cup_{\tau\prec\sigma} A_{\tau}$. The set of signatures $\tau$ verifying $\tau\prec\sigma$ and $codim(\tau)<codim(\sigma)$ is denoted by $\underline{\sigma}$.

Contractibility theorem
-----------------------

\[T:clostra\] Let $\sigma$ be a non-generic signature. Then $\underline{A_{\sigma}}$ is a contractible set.

The proof goes as follows. Using an induction on the codimension of the strata incident to $\sigma$ and the deformation retract lemma, we show that $A_{\sigma}$ is a deformation retract of $\underline{A_{\sigma}}$. Then we use theorem \[T:Contract\] on the contractibility of strata.

A stratum $A_{\tau}$ contained in $\underline{A_{\sigma}}$ satisfies $\tau \prec \sigma$.
In other words, there exists a unique contracting Whitehead move from $\tau$ to $\sigma$ (uniqueness follows from lemma \[L:countWhitehead\] and lemma \[L:tree\]). Suppose first that $codim(\sigma)=codim(\tau)+1$. Then, by the deformation retract lemma \[L:DeR\], $A_{\sigma}$ is a deformation retract of $A_{\tau}$. Now let $\tau'$ in $\underline{\sigma}$ be of codimension $k$ and suppose that $A_{\sigma}$ is a deformation retract of $A_{\tau'}$. Let us show that for $\tau''\prec \tau'\prec \sigma$ with $codim(A_{\tau''})=codim(A_{\tau'})-1$, $A_{\sigma}$ is a deformation retract of $A_{\tau''}$. By the deformation retract lemma \[L:DeR\], $A_{\tau'}$ is a deformation retract of $A_{\tau''}$; combined with the induction hypothesis, this shows that $A_{\sigma}$ is a deformation retract of $A_{\tau''}$. Finally, using the fact that each stratum is contractible (cf. theorem \[T:Contract\]), we conclude that $\underline{A_{\sigma}}$ is contractible.

This proof adapts easily to give the following result.

\[C:contra\] Let $\sigma$ be any (non-generic) signature and let $\tilde{\sigma}$ be a subset of $\underline{\sigma}$. Let $A_{\tilde{\sigma}}$ be the union of the $A_{\rho}$ with $\rho$ in $\tilde{\sigma}$. Then $A_{\tilde{\sigma}}$ is contractible.

Thickening of strata
--------------------

Let $(\dpol_n, \mathcal{S})$ be the stratified smooth topological space. A system of tubes is a family $T$ of triplets $(T_\sigma,\pi_\sigma,\rho_\sigma)_{\sigma\in S}$, where

1. $T_\sigma$ is an open neighbourhood of the stratum $A_{\sigma}$, called its tubular neighbourhood;

2. $\pi_\sigma$ is a retraction $T_\sigma \to A_{\sigma}$ (i.e. $\pi_\sigma$ is continuous and $\pi_\sigma(x) = x$ for all $x \in A_{\sigma}$);

3. $\rho_\sigma$ is a continuous function $T_\sigma \to \Re_{+}$, called the distance function of the stratum, such that $A_{\sigma} = \rho_\sigma^{-1}(0)$;

4. for any pair of indices $\tau\prec \sigma$, the restriction $(\pi_\tau,\rho_\tau): T_\tau \cap A_{\sigma} \to A_{\tau} \times \Re_+$ is a smooth submersion;

5. for all $x\in T_{\sigma} \cap T_\tau$ we have $\pi_{\sigma}(x)\in T_\tau$, $\pi_\tau(\pi_\sigma(x))=\pi_\tau(x)$ and $\rho_\tau(\pi_\sigma(x)) = \rho_\tau(x)$.

The Čech cover
--------------

We consider only the red (resp. blue) curves of a signature $\sigma$; this is denoted by $\sigma^{R}$ (resp. $\sigma^{B}$). We call each tree in $\sigma^{R}$ (resp. $\sigma^B$) a connected component. A tree is in bijection with its set of leaves, so a connected component is in bijection with a subset of $\{1,\ldots,4n\}$, corresponding to the labels of its leaves. Such a set has even cardinality. A complete Whitehead move partitions the set into two subsets of even cardinality.

\[L:con\] Consider two non-generic signatures $\sigma$ and $\tau$. Then $\underline{\sigma}\cap \underline{\tau}\ne \emptyset$ if and only if for each connected component with leaves in $I$ of $\sigma^{R}$ (resp. $\sigma^B$) there exists a connected component with leaves in $J$ of $\tau^{R}$ (resp. $\tau^B$) such that $I\setminus (I\cap J)$ and $J\setminus (I\cap J)$ have even cardinality.

Suppose that there exists one connected component of $\sigma^{R}$ which does not verify the property. Without loss of generality we suppose that a connected component of $\sigma^{R}$ is not contained in a connected component of $\tau^{R}$.
We suppose that this connected component of $\sigma^{R}$ is defined by the set $I=\{i_1,\ldots,i_{2p}\}$, whereas the investigated component of $\tau^{R}$ is defined by the set $J=\{i_2,\ldots,i_{2p+1}\}$ (the general case is easily obtained using $J=\{i_{2r},\ldots,i_{2k+1}\}$, for some positive integers $r,k$ with $k\geq p$). Then a sequence of complete Whitehead moves on both connected components induces partitions of $I$ and $J$ into subsets of even cardinality. The map between these partitions can never be the identity. The same procedure applies to the color $B$. For the converse, we proceed by the algorithm described in the proof of the following lemma.

\[L:un\] Let $\sigma$ and $\tau$ be two non-generic signatures such that $\underline{\sigma}\cap \underline{\tau}\ne \emptyset$. If a greatest lower bound exists in $\underline{\sigma}\cap \underline{\tau}$ then it is unique.

Consider the connected components of $\sigma^{R}$ (resp. $\sigma^{B}$) and $\tau^{R}$ (resp. $\tau^{B}$) as two partitions of the set of cardinality $2n$: $\sqcup_{i=1}^p A_i$ and $\sqcup_{j=1}^r B_j$. Then there exists a unique common refinement, given by $\cup_{j=1}^{r} (B_j \cap (\sqcup_{i=1}^p A_i))$. However, this does not yet determine a unique signature: to one connected component with leaves $\{i_1,\ldots,i_{2k}\}$ corresponds a set of non-isomorphic trees if $k>1$. Let us consider such a connected component $\{i_1,\ldots,i_{2k}\}$, obtained by smoothing Whitehead moves in $\sigma^{R}$ and $\tau^{R}$ by the above method. Take the maximum of the numbers of vertices (which are not leaves) in the two connected components. Modify the connected component having the minimal number of vertices by a partial smoothing Whitehead move. Repeat until the numbers of vertices in both connected components coincide, in such a way that the edge relations between leaves and inner vertices are identical in both connected components. The two trees are then isomorphic, and there exists a unique such graph. Repeat this for $\sigma^{B}$ and $\tau^{B}$. Therefore the greatest lower bound is unique.

Let $\sigma_i$ be an upper bound. Each $\underline{A_{\sigma_i}}$ is a lattice: in $\underline{A_{\sigma_i}}$ we have a poset, whose elements form a subset of the set of signatures and whose relation is the incidence relation $\prec$. There is a join $\vee$ and a meet $\wedge$ structure: the join $\vee$ is given by smoothing Whitehead moves and the meet $\wedge$ by contracting Whitehead moves on the elements of $\underline{A_{\sigma_i}}$. One easily verifies the commutativity, associativity and absorption laws.

We will use this construction to thicken all our strata. Concerning notation, a thickened stratum $A$ is denoted by $A^+$.

Let $\sigma_0,\ldots,\sigma_p$ be a set of non-generic signatures which are upper bounds. Then the open sets of the Čech cover are formed by the thickened sets $\underline{A_{\sigma_i}}^{+}$.

Indeed, to have a Čech cover it is necessary to have open sets whose multiple intersections are contractible. We have shown that $\underline{A_{\sigma}}$ is contractible. Applying the construction above, we thicken every stratum in $\underline{A_{\sigma}}$. Condition 4 of this construction implies that to cover our space we need the union of all the thickened strata lying in $\underline{A_{\sigma_i}}$, which we denote by $\underline{A_{\sigma_i}}^{+}$.

Multiple intersections are contractible.

For simplicity we consider the case of two intersecting sets, where $A=\underline{A_{\sigma}}^{+}$ and $B= \underline{A_{\sigma'}}^{+}$.
We have shown in lemma \[L:un\] that if the intersection is non-empty then there exists a unique greatest lower bound. Therefore, by the deformation retract lemma \[L:DeR\], all the signatures in $A\cap B$ retract onto this greatest lower bound. Theorem \[T:cell\] states that the stratum is contractible. The general case is obtained by induction. Therefore multiple intersections are contractible.

Superimposition of signatures
=============================

Let $\sigma_0, \dots, \sigma_p$ be compatible generic signatures and let $\Theta$ denote an admissible superimposition. In this subsection we digress briefly in order to give a visual description of the conditions on the superimposition $\Theta$ under which the associated graph is a signature. In fact, it is quite rare for signatures to intersect: almost always the canonical graph will not be a signature. Given a red polygon and a blue polygon of $\Theta$, they must either be disjoint or intersect in one of exactly four possible ways:

- the intersection is a three-sided polygon with two red (resp. blue) edges and one blue (resp. red) edge; [diagram]

- the intersection is a four-sided polygon with two red and two blue edges; [diagram]

- the intersection is two triangles joined at a point, formed by two crossing blue (resp. red) diagonals, cut transversally on either side of the intersection by two red (resp. blue) diagonals; note that this means that two polygons of the same color meet at a point and both intersect the polygon of the other color; [diagram]
- the intersection is a point at which two blue diagonals and two red diagonals all cross, in the cyclic order red, red, blue, blue, red, red, blue, blue; note that this means that in fact four polygons meet at a point, each being the same color as the opposing one. [diagram]

From such a disposition of diagonals we obtain the graph described above. The graph must be a forest with even valency at every non-terminal vertex.
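The last condition is straightforward to test mechanically. The following sketch (illustrative only; the encoding of the candidate graph as an adjacency dictionary is our own choice) checks whether a graph is a forest all of whose non-terminal vertices have even valency.

```python
# Illustrative sketch: test whether a graph is a forest whose non-terminal
# (internal) vertices all have even valency.  The graph is encoded as an
# adjacency dictionary {vertex: set of neighbours}.
def is_forest_with_even_internal_valency(adj):
    # valency condition: every vertex of degree > 1 must have even degree
    for nbrs in adj.values():
        if len(nbrs) > 1 and len(nbrs) % 2 != 0:
            return False
    # forest condition: no cycles, checked by a depth-first search
    seen = set()
    for root in adj:
        if root in seen:
            continue
        stack = [(root, None)]
        while stack:
            v, parent = stack.pop()
            if v in seen:
                return False        # reaching a vertex twice means a cycle
            seen.add(v)
            stack.extend((w, v) for w in adj[v] if w != parent)
    return True

# A star with one internal vertex of valency 4: admissible.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(is_forest_with_even_internal_valency(star))       # True
# A triangle contains a cycle, hence is not a forest.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_forest_with_even_internal_valency(triangle))   # False
```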
In the following we show how to construct a canonical graph from the superimposition of a red and a blue polygon, using the second superimposition above.

[Figure \[F:1\]: construction of the canonical graph from the superimposition of a red and a blue polygon, in four steps $(a)$–$(d)$.]
---
author:
- 'Abhineet Agarwal$^{1}$, R. Myrzakulov$^{2}$, S. K. J. Pacif$^{3}$, M. Shahalam$^{4}$'
title: 'Cosmic acceleration from coupling of known components of matter: Analysis and diagnostics'
---

Introduction {#sect1}
============

Late-time cosmic acceleration is an inevitable ingredient of our Universe, directly supported by cosmological observations [@HZTEAM; @SCP]. Other observations such as the cosmic microwave background (CMB), baryon acoustic oscillations (BAO), the Sloan Digital Sky Survey [@cmb1; @cmb2; @bao1; @bao2; @sdss1; @sdss2] and many others support this fact indirectly. Globular cluster observations reveal that the age of certain objects in the Universe is larger than the age of the Universe estimated in the standard model with normal matter; the only known resolution of this puzzle is provided by invoking cosmic acceleration at late times. Although there are many ways to explain the accelerating expansion of the Universe (e.g. by adding a source term in the matter part of the Einstein field equations, by modifying the geometry, or by invoking inhomogeneity), the inclusion of a source term with large negative pressure dubbed “*dark energy*” (DE) [@DE1; @DE2; @DE3; @DE4; @DE5; @DE6; @DE7; @DE8; @DE9; @DE10] is the one most widely accepted by theorists. However, the most promising candidate for dark energy (the cosmological constant $\Lambda$) is under scrutiny. A wide variety of dark energy candidates has been proposed in the past few years, such as the cosmological constant [@DE3; @carollCC; @padmaCC; @peeblesCC], slowly rolling scalar fields [@quint1; @quint2; @quint3; @quint4; @quint5], phantom fields [@phant1; @phant2; @phant3; @phant4; @phant5; @phant6], tachyon fields [@tachy1; @tachy2; @alam2017] and the Chaplygin gas [@CG1; @CG2], etc. (see [@DE1; @DEREV1; @DEREV2; @DEREV3] for a detailed list). Modifications of Einstein's theory of gravity not only account for the cosmic acceleration but also resolve many standard problems such as the singularity problem, the hierarchy problem, and quantization and unification with other theories of fundamental interactions. Massive gravity, Gauss-Bonnet gravity, $f(R)$, $f(T)$ and $f(R,T)$ gravities, Chern-Simons gravity and Galileon gravity are a few among the various alternative theories proposed in the past few years. Modified gravity can also provide a unified description of early-time inflation together with late-time cosmic acceleration and dark matter (DM). Traditionally, all these modifications invoke extra degrees of freedom non-minimally coupled to matter in the Einstein frame. Generally, it is believed that late-time acceleration requires the presence of dark energy or such extra degrees of freedom. Recently, Berezhiani et al. [@KHOURY2016] discussed a third possibility which requires neither any exotic matter nor large-scale modifications of gravity. They showed that an interaction between the normal matter components, namely the dark matter and the baryonic matter (BM), can also provide late-time acceleration in the Jordan frame. In the context of this coupling, the stability criterion disfavors the conformal coupling, while the maximally disformal coupling can give rise to late-time cosmic acceleration in the Jordan frame but no acceleration in the Einstein frame. Extending the work of Ref. [@KHOURY2016], Agarwal et al. [@ARSSA] have further investigated the cosmological dynamics of the model obtained by parameterizing the coupling function.
Also, they have shown that the model exhibits a sudden future singularity, which can be resolved by taking a more generalized parametrization of the coupling function. In this paper, we shall consider two forms of parametrization. The first parametrization shows the sudden future singularity, which can be pushed into the far future if we consider the second parametrization. We further investigate the models using the statefinder and $Om$ diagnostics. The paper is organized as follows. Section \[sect:2\] is devoted to the basic equations of the models. In section \[sec:obs\], we put observational constraints on the model parameters. The detailed analyses of the statefinder and $Om$ diagnostics are presented in sections \[sec:state\] and \[sec:om\], respectively. We conclude our results in section \[sec:conc\].

Field equations {#sect:2}
===============

The scenario of the interaction between dark matter and baryonic matter, described briefly in Refs. [@KHOURY2016] and [@ARSSA], in a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) background $$ds^{2}=-dt^{2}+a^{2}(t)\left( dx^{2}+dy^{2}+dz^{2}\right) \text{,} \label{1}$$ yields the field equations $$3H^{2}=8\pi G\left( \Lambda _{DM}^{4}\sqrt{\frac{X}{X_{eq}}}\left( \frac{a_{eq}}{a}\right) ^{3}-P+QR^{3}\tilde{\rho}_{b}\right) \text{,} \label{2}$$ and $$2\frac{\ddot{a}}{a}+H^{2}=-8\pi G(P+P_{b})\text{,} \label{3}$$ where $Q$ and $R$ are two arbitrary coupling functions. The Einstein frame metric couples to the Jordan frame metric such that $\sqrt{-\tilde{g}}=QR^{3}\sqrt{-g}$ ($\tilde{g}$ is the determinant of the Jordan frame metric $\tilde{g}_{\mu \nu }$, constructed from the Einstein frame metric $g_{\mu \nu }$) and $X=-g^{\mu \nu }\partial _{\mu }\Theta \partial _{\nu }\Theta $, $\Theta$ being the dark matter field. The quantities $P$ and $P_{b}$ are the pressures of DM and BM in the Einstein frame, respectively, which are related by $P_{b}\equiv QR^{3}\tilde{P}_{b}$. $\tilde{P}_{b}$ and $\tilde{\rho}_{b}$ are the pressure and density of BM in the Jordan frame and are related to the BM density in the Einstein frame by $\rho _{b}=QR^{3}\left( \tilde{\rho}_{b}\left( 1-2X\frac{Q_{,X}}{Q}\right) +6X\frac{R_{,X}}{R}\tilde{P}_{b}\right)$. For the detailed derivation of the field equations, see [@KHOURY2016] and [@ARSSA]. Here we note that all quantities with a tilde are in the Jordan frame and those without a tilde are in the Einstein frame. The Jordan frame scale factor, dubbed the *physical scale factor*, is related to the Einstein frame scale factor as $$\tilde{a}=Ra\text{.} \label{4}$$ Here, we consider the same maximally disformal coupling of BM and DM for which $Q=1$ throughout the evolution and $R=1$ in the early Universe, with $R$ growing sufficiently fast that the physical scale factor $\tilde{a}$ in the Jordan frame experiences acceleration. The conformal coupling is disfavored by the stability criterion [@KHOURY2016]. One needs to specify the coupling function $R(a)$ to proceed further or, equivalently, $a$ can be parametrized in terms of the physical scale factor $\tilde{a}$. The two parametrizations are

(1) Model 1: $a(\tilde{a})=\tilde{a}+\alpha \tilde{a}^{2}+\beta \tilde{a}^{3}$, where $\alpha$ and $\beta$ are two model parameters.

(2) Model 2: $a(\tilde{a})=\tilde{a}e^{\alpha \tilde{a}}$; in this case, only $\alpha$ is a model parameter.

By expanding the functional $\tilde{a}e^{\alpha \tilde{a}}$ in a Taylor series, the first parametrization $\tilde{a}+\alpha \tilde{a}^{2}+\beta \tilde{a}^{3}$ can be recovered, up to third order in $\tilde{a}$, by substituting $\beta=\alpha^2/2$.
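Since the rest of the analysis is built on these two parametrizations, a short symbolic check is useful. The following sketch (ours, not part of the original analysis; it assumes the `sympy` package) verifies the Taylor-series statement above and extracts the coupling function $R(\tilde{a})=\tilde{a}/a(\tilde{a})$ implied by Eq. (\[4\]).

```python
import sympy as sp

at, alpha, beta = sp.symbols('a_tilde alpha beta', positive=True)

# The two parametrizations of the Einstein-frame scale factor a(a_tilde)
a_model1 = at + alpha*at**2 + beta*at**3
a_model2 = at*sp.exp(alpha*at)

# Model 2 expanded to third order reproduces model 1 with beta = alpha^2/2
expansion = sp.series(a_model2, at, 0, 4).removeO()
print(sp.simplify(expansion - a_model1.subs(beta, alpha**2/2)))   # -> 0

# Coupling function R(a_tilde) from Eq. (4), a_tilde = R a
R1 = sp.simplify(at / a_model1)    # 1/(1 + alpha*a_tilde + beta*a_tilde**2)
R2 = sp.simplify(at / a_model2)    # exp(-alpha*a_tilde)
print(R1, R2)
```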
Agarwal et al. [@ARSSA] have studied various features of the model 1 and constrained the parameters $\alpha $ & $% \beta $ by employing the $\chi ^{2}$ analysis using $H(z)+SN+BAO$ datasets. Extending the analysis, we further study some more physical characteristics of the models 1 and 2 such as the statefinder and $Om$ diagnostics, and also put observational constraints on the parameter $\alpha $ of model 2. We also need to express the cosmological parameters in terms of redshifts in both the Einstein frame and Jordan frame which are defined as $$\tilde{a}=\frac{\tilde{a}_{0}}{1+\tilde{z}}\text{ , \ \ }a=\frac{a_{0}}{1+z}% \text{,} \label{5}$$For both the parametrizations, $\tilde{a}_{0}=1$ but $a_{0}=1+\alpha+\beta\neq 1$ (model 1) and $a_{0}=e^\alpha \neq 1$ (model 2). For model 1, the explicit expressions for the Hubble and deceleration parameters in Jordan frame are obtained as $$\tilde{H}(\tilde{a})=\frac{\dot{\tilde{a}}}{\tilde{a}}=\tilde{H}_{0}\frac{% (1+\alpha +\beta )^{\frac{1}{2}}(1+2\alpha +3\beta )}{\tilde{a}^{\frac{3}{2}}% \left[ 1+\alpha \tilde{a}+\beta \tilde{a}^{2}\right] ^{\frac{1}{2}}\left[ 1+2\alpha \tilde{a}+3\beta \tilde{a}^{2}\right] }\text{,} \label{eq:H1}$$ and$$\tilde{q}(\tilde{a})=-\frac{\tilde{a}\ddot{\tilde{a}}}{\dot{\tilde{a}}^{2}}=% \frac{1}{2}\left( \frac{1+2\alpha \tilde{a}+3\beta \tilde{a}^{2}}{1+\alpha \tilde{a}+\beta \tilde{a}^{2}}\right) +\left( \frac{2\tilde{a}(\alpha +3\beta \tilde{a})}{1+2\alpha \tilde{a}+3\beta \tilde{a}^{2}}\right) \text{.} \label{eq:q1}$$Using Eq. (\[5\]), above expressions can be written in terms of redshift $\tilde{z}$ as $$\tilde{H}(\tilde{z})=\tilde{H}_{0}\frac{(1+\alpha +\beta )^{\frac{1}{2}% }(1+2\alpha +3\beta )\left( 1+\tilde{z}\right) ^{\frac{9}{2}}}{\left[ \left( 1+\tilde{z}\right) ^{2}+\alpha \left( 1+\tilde{z}\right) +\beta \right] ^{% \frac{1}{2}}\left[ \left( 1+\tilde{z}\right) ^{2}+2\alpha \left( 1+\tilde{z}% \right) +3\beta \right] }\text{,} \label{a3}$$and$$\tilde{q}(\tilde{z})=\frac{\left[ \left( 1+\tilde{z}\right) ^{2}+2\alpha \left( 1+\tilde{z}\right) +3\beta \right] ^{2}+4\left[ \alpha \left( 1+% \tilde{z}\right) +3\beta \right] \left[ \left( 1+\tilde{z}\right) ^{2}+\alpha \left( 1+\tilde{z}\right) +\beta \right] }{2\left[ \left( 1+% \tilde{z}\right) ^{2}+\alpha \left( 1+\tilde{z}\right) +\beta \right] \left[ \left( 1+\tilde{z}\right) ^{2}+2\alpha \left( 1+\tilde{z}\right) +3\beta % \right] }\text{,} \label{a4}$$together with the effective equation of state (EOS) parameter given by $$\tilde{w}_{eff}(\tilde{z})=\frac{\alpha \left( 5+6\alpha +5\tilde{z}\right) (1+\tilde{z})^{2}+\beta (14+23\alpha +14\tilde{z})(1+\tilde{z})+18\beta ^{2}% }{3\{(1+\tilde{z})^{2}+\alpha (1+\tilde{z})+\beta \}\{(1+\tilde{z}% )^{2}+2\alpha (1+\tilde{z})+3\beta \}}\text{.} \label{a5}$$ Similarly, for model 2, we obtain the expressions for the Hubble and deceleration parameters in Jordan-frame as$$\tilde{H}(\tilde{a})=\frac{\tilde{H}_{0}(1+\alpha )e^{\frac{3}{2}\alpha }}{% \tilde{a}^{\frac{3}{2}}\left[ 1+\alpha \tilde{a}\right] ^{\frac{1}{2}}e^{% \frac{3}{2}\alpha \tilde{a}}}, \label{eq:H2}$$and$$\tilde{q}(a)=\frac{\left( 1+\alpha \tilde{a}\right) ^{2}+2\tilde{a}(2\alpha +\alpha ^{2}\tilde{a})}{2\left( 1+\alpha \tilde{a}\right) }\text{.} \label{eq:q2}$$with the help of Eq. 
(\[5\]), we obtain $$\tilde{H}(\tilde{z})=\frac{\tilde{H}_{0}(1+\alpha )e^{\frac{3}{2}\alpha }\left( 1+\tilde{z}\right) ^{\frac{5}{2}}}{\left[ \left( 1+\tilde{z}\right) +\alpha \right] e^{\frac{3}{2}\frac{\alpha }{(1+\tilde{z})}}}\text{,} \label{b3}$$ and $$\tilde{q}(\tilde{z})=\frac{1+\tilde{z}^{2}+6\alpha +3\alpha ^{2}+\left( 2+6\alpha \right) \tilde{z}}{2\left( 1+\tilde{z}\right) \left( 1+\tilde{z}+\alpha \right) }\text{.} \label{b4}$$ The effective equation of state is then given by $$\tilde{w}_{eff}(\tilde{z})=\frac{5\alpha (1+\tilde{z})+3\alpha ^{2}}{3(1+\tilde{z})\left[ (1+\tilde{z})+\alpha \right] }\text{ .} \label{b5}$$ In both models, an additional parameter $H_0$ enters the expressions for $H(z)$, see Eqs. (\[a3\]) and (\[b3\]). Here, however, we focus on the parameters of the underlying parametrizations. The first parametrization (model 1) consists of two model parameters (i.e. $\alpha$ and $\beta$) while the second parametrization (model 2) consists of a single model parameter (i.e. $\alpha$). We are now in a position to put observational constraints on the parameters of model 2 in the following section. Before proceeding to the next section, we also consider the DGP model [@dgp]: $$\label{eq:DGP} \frac{H(z)}{H_0} = \left[ \left(\frac{1-\omm}{2}\right)+\sqrt{\omm (1+z)^3+ \left(\frac{1-\omm}{2}\right)^2} \right]\,\,$$ where $H_0$ and $\omm$ are the present values of the Hubble parameter and the matter energy density parameter.

Observational constraints {#sec:obs}
=========================

We have already mentioned that model 1 consists of two parameters, namely $\alpha$ and $\beta$, which were constrained in Ref. [@ARSSA]. In our analysis, we shall use their best-fit values, given as $\alpha =-0.102681$ & $\beta =-0.078347$. In this section, we put constraints on the parameters of model 2 by employing the same procedure as in [@ARSSA]. One can use the total likelihood to constrain the parameters $\alpha$ and $H_0$ of model 2. The total likelihood function for a joint analysis can be defined as $$\mathcal{L}_{tot}(\alpha , H_0 )=e^{-\frac{\chi _{tot}^{2}(\alpha , H_0 )}{2}}\text{,~~~~ where }\chi _{\mathrm{tot}}^{2}=\chi _{\mathrm{Hub}}^{2}+\chi _{\mathrm{SN}}^{2}+\chi _{\mathrm{BAO}}^{2}\text{.} \label{o1}$$ Here, $\chi _{\mathrm{Hub}}^{2}$ denotes the chi-square for the Hubble dataset, $\chi _{\mathrm{SN}}^{2}$ that for the Type Ia supernovae, and $\chi _{\mathrm{BAO}}^{2}$ that for the BAO data. By minimizing $\chi _{\mathrm{tot}}^{2}$, we obtain the best-fit values of $\alpha$ and $H_0$. The likelihood contours are standard, i.e. the 1$\sigma$ and 2$\sigma$ confidence levels correspond to $\Delta\chi^{2}=2.3$ and $6.17$, respectively, in the 2D plane. First, we consider 28 data points of $H(z)$ used by Farooq and Ratra [@Farooq:2013hq] in the redshift range $0.07\leq z\leq 2.3$, and use $H_{0}=67.8\pm 0.9~Km/S/Mpc$ [@planck2015]. The $\chi ^{2}$, in this case, is defined as $$\chi _{\mathrm{Hub}}^{2}(\theta )=\sum_{i=1}^{29}\frac{\left[ h_{\mathrm{th}}(z_{i},\theta )-h_{\mathrm{obs}}(z_{i})\right] ^{2}}{\sigma _{h}(z_{i})^{2}}\,, \label{o2}$$ where $h=H/H_{0}$ represents the normalized Hubble parameter, $h_{\mathrm{obs}}$ and $h_{\mathrm{th}}$ are the observed and theoretical values of the normalized Hubble parameter, and $\sigma _{h}=\left( \frac{\sigma _{H}}{H}+\frac{\sigma _{H_{0}}}{H_{0}}\right) h$. The quantities $\sigma _{H}$ and $\sigma _{H_{0}}$ designate the errors associated with $H$ and ${H_{0}}$, respectively.
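As an illustration of how Eq. (\[o2\]) is used in practice, the sketch below (ours, not the authors' code) evaluates $\chi^2_{\mathrm{Hub}}$ for model 2 with the normalized Hubble rate of Eq. (\[b3\]) and minimizes it over $\alpha$; the data points here are invented placeholders standing in for the real compilation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h_model2(z, alpha):
    """Normalized Jordan-frame Hubble parameter H(z)/H0 of model 2, Eq. (b3)."""
    return ((1 + alpha) * np.exp(1.5 * alpha) * (1 + z)**2.5
            / (((1 + z) + alpha) * np.exp(1.5 * alpha / (1 + z))))

def chi2_hub(alpha, z, h_obs, sigma_h):
    """Hubble-data chi-square of Eq. (o2) for a trial value of alpha."""
    return np.sum((h_model2(z, alpha) - h_obs)**2 / sigma_h**2)

# Placeholder data (NOT the real compilation): normalized H(z) with 10% errors
z = np.array([0.1, 0.4, 0.9, 1.5, 2.3])
h_obs = np.array([1.05, 1.25, 1.65, 2.25, 3.10])
sigma_h = 0.1 * h_obs

fit = minimize_scalar(chi2_hub, bounds=(-0.9, 0.5), method='bounded',
                      args=(z, h_obs, sigma_h))
print("best-fit alpha from this toy dataset:", fit.x)
```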
Second, we use $580$ data points from the Union2.1 compilation [@Suzuki:2011hu]. The corresponding $\chi ^{2}$ is given as $$\chi _{\mathrm{SN}}^{2}(\mu _{0},\theta )=\sum_{i=1}^{580}\frac{\left[ \mu _{th}(z_{i},\mu _{0},\theta )-\mu _{obs}(z_{i})\right] ^{2}}{\sigma _{\mu }(z_{i})^{2}}\,, \label{o3}$$ where $\mu _{obs}$ and $\mu _{th}$ are the observed and theoretical distance moduli, $\sigma _{\mu }$ is the uncertainty in the distance modulus, and $\theta$ is an arbitrary parameter. The distance modulus $\mu (z)$ is an observed quantity and is related to the luminosity distance $D_{L}(z)=(1+z)\int_{0}^{z}\frac{H_{0}dz^{\prime }}{H(z^{\prime })}$ as $\mu (z)=m-M=5\log D_{L}(z)+\mu _{0}$, $m$ and $M$ being the apparent and absolute magnitudes of the supernovae, and $\mu _{0}=5\log \left( \frac{H_{0}^{-1}}{\mathrm{Mpc}}\right) +25$ is a nuisance parameter. Finally, we consider BAO data. The corresponding chi-square ($\chi _{\mathrm{BAO}}^{2}$) is defined by [@Giostri:2012ek]: $$\chi _{\mathrm{BAO}}^{2}=Y^{T}C^{-1}Y\,, \label{o4}$$ where $$Y=\left( \begin{array}{c} \frac{d_A(z_\star)}{D_V(0.106)} - 30.95 \\ \frac{d_A(z_\star)}{D_V(0.2)} - 17.55 \\ \frac{d_A(z_\star)}{D_V(0.35)} - 10.11 \\ \frac{d_A(z_\star)}{D_V(0.44)} - 8.44 \\ \frac{d_A(z_\star)}{D_V(0.6)} - 6.69 \\ \frac{d_A(z_\star)}{D_V(0.73)} - 5.45 \end{array} \right)\,,$$ and the inverse covariance matrix ($C^{-1}$) and the values $\frac{d_{A}(z_{\star })}{D_{V}(z_{BAO})}$ are taken from [@Blake:2011en; @Percival:2009xn; @Beutler:2011hx; @Jarosik:2010iu; @Eisenstein:2005su; @Giostri:2012ek]. Here $z_{\star }\approx 1091$ is the decoupling redshift, $d_{A}(z)$ is the co-moving angular-diameter distance and $D_{V}(z)=\left( d_{A}(z)^{2}z/H(z)\right) ^{1/3}$ is the dilation scale. For model 2, we use the combined dataset of $H(z)+SN+BAO$, and the corresponding 1$\sigma$ and 2$\sigma$ likelihood contours are shown in Fig. \[fig:cont\]. The best-fit values of the model parameters are obtained as $\alpha =-0.3147$ and $H_{0}=66.84~Km/S/Mpc$. [![This figure shows the 1$\protect\sigma $ (dark shaded) and 2$\protect\sigma $ (light shaded) likelihood contours in the $\protect\alpha -H_{0}$ plane. The figure corresponds to the joint datasets of $H(z)+SN+BAO$. A black dot represents the best-fit values of the model parameters, which are found to be $\protect\alpha =-0.3147$ and $H_{0}=66.84~Km/S/Mpc$.[]{data-label="fig:cont"}](ah.pdf "fig:"){width="2.3in" height="2.3in"}]{} The normalized Hubble parameter and the effective EOS ($w_{eff}$) are plotted for both models with their respective best-fit values of the parameters, and are shown in Fig. \[fig:hw\]. Both models exhibit phantom behavior in the future. Model 1 exhibits a sudden singularity in the near future, while for model 2 this singularity is delayed and pushed into the far future, as displayed in the right panel of Fig. \[fig:hw\]. Fig. \[fig:mu\] exhibits the error bar plots for models 1 and 2 with the $H(z)$ and $SN$ datasets, which shows that both models are consistent with the observations.
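For completeness, here is a schematic of how the contours of Fig. \[fig:cont\] follow from Eq. (\[o1\]): evaluate $\chi^2_{\rm tot}(\alpha,H_0)$ on a grid and mark the regions with $\Delta\chi^2 \le 2.3$ and $6.17$. In this sketch (ours) the three dataset chi-squares are replaced by a toy quadratic so that it runs stand-alone; in the real analysis they are the quantities defined above.

```python
import numpy as np

def chi2_tot(alpha, H0):
    # Placeholder for chi2_hub + chi2_sn + chi2_bao (toy quadratic, illustrative only)
    return ((alpha + 0.31) / 0.05)**2 + ((H0 - 66.8) / 1.2)**2

alphas = np.linspace(-0.6, 0.0, 121)
H0s = np.linspace(62.0, 72.0, 101)
A, H = np.meshgrid(alphas, H0s)
chi2 = np.vectorize(chi2_tot)(A, H)

chi2_min = chi2.min()
likelihood = np.exp(-(chi2 - chi2_min) / 2.0)   # Eq. (o1), normalized to 1 at the best fit
in_1sigma = (chi2 - chi2_min) <= 2.30           # Delta chi^2 thresholds quoted in the text
in_2sigma = (chi2 - chi2_min) <= 6.17

i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print("best fit: alpha = %.3f, H0 = %.2f" % (A[i, j], H[i, j]))
```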
[![The figure shows the evolution of the normalized Hubble parameter ($H(z)/H_0$) and the effective EOS ($w_{eff}$) versus redshift ($z$). The dotted, dashed and dot-dashed lines correspond to DGP, model 1 and model 2, respectively. We use the best-fit values of the model parameters. The horizontal line is the phantom divide line.[]{data-label="fig:hw"}](zh.pdf "fig:"){width="2.3in" height="2.3in"}]{} [![](zw.pdf "fig:"){width="2.3in" height="2.3in"}]{}

[![This figure exhibits the error bars of the $H(z)$ (left) and $SN$ (right) datasets. In both panels, the dashed and dot-dashed lines show the best-fit behavior for models 1 and 2, respectively.[]{data-label="fig:mu"}](Hz-error.pdf "fig:"){width="2.3in" height="2.3in"}]{} [![](sn-error.pdf "fig:"){width="2.3in" height="2.3in"}]{}

The above discussion shows that our models are consistent with the observations. In the following sections we shall employ different diagnostics for the underlying models.

[![The figure shows the time evolution of the statefinder pairs $\{r,s\}$ (left) and $\{r,q\}$ (right) for models 1 (dashed), 2 (dot-dashed) and DGP (dotted). In the left panel, the fixed point ($r=1$, $s=0$) corresponds to $\Lambda$CDM; all the models pass through it. In the right panel, all models diverge from the same point ($r=1$, $q=0.5$), which corresponds to SCDM. DGP converges to the point ($r=1$, $q=-1$) that represents de Sitter expansion (dS), whereas models 1 and 2 do not converge to dS due to their phantom nature. The dark dots on the curves denote the current values $\{r_{0},s_{0}\}$ (left) and $\{r_{0},q_{0}\}$ (right). In all models, we have taken the best-fit values of the model parameters.[]{data-label="fig:rs"}](sr.pdf "fig:"){width="2.3in" height="2.3in"}]{} [![](qr.pdf "fig:"){width="2.3in" height="2.3in"}]{}

Statefinder diagnostic {#sec:state}
======================

The past two decades have produced a plethora of theoretical dark energy models along with observational data of steadily improving quality, so one needs diagnostics that can differentiate these models and quantify their deviations from $\Lambda$CDM. Sahni et al. [@Sahni1] proposed such a diagnostic, widely known as the statefinder. The statefinder pairs $\{r,s\}$ and $\{r,q\}$ are geometrical quantities that are constructed directly from any space-time metric and can successfully differentiate various competing models of dark energy by using higher-order derivatives of the scale factor. In the literature, the $\{r,s\}$ and $\{r,q\}$ pairs are defined as [@Sahni1] $$q=-\frac{\ddot{a}}{aH^{2}}\text{, \ \ } r=\frac{\dddot{a}}{aH^{3}}\text{, \ \ }s=\frac{r-1}{3(q-\frac{1}{2})}\text{.} \label{d1}$$ The statefinder diagnostic is a useful tool in modern cosmology and is used to distinguish different dark energy models [@alam1; @alam2; @alam3]. In this approach, the trajectories of various dark energy models are plotted in the $r-s$ and $r-q$ planes and their behaviors are studied. In a spatially flat FLRW background, the statefinder pair $\{r,s\}$ equals $\{1,0\}$ for $\Lambda$CDM and $\{1,1\}$ for standard cold dark matter (SCDM).
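A quick numerical sanity check of these fixed points can be made directly from Eq. (\[d1\]). The sketch below (ours, not part of the paper) uses the equivalent redshift-space forms $q=-1+(1+z)E'/E$ and $r=q+2q^{2}+(1+z)\,dq/dz$, with $E=H/H_{0}$, and confirms that flat $\Lambda$CDM sits at $\{r,s\}=\{1,0\}$.

```python
import numpy as np

def E_lcdm(z, Om=0.3):
    """Normalized Hubble rate H/H0 for flat LCDM."""
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def statefinder(E, z, dz=1e-4):
    """Statefinder {q, r, s} from any E(z) = H(z)/H0 by finite differences."""
    q = lambda x: -1.0 + (1 + x) * (E(x + dz) - E(x - dz)) / (2 * dz) / E(x)
    qz = q(z)
    dq = (q(z + dz) - q(z - dz)) / (2 * dz)
    r = qz + 2.0 * qz**2 + (1 + z) * dq
    s = (r - 1.0) / (3.0 * (qz - 0.5))
    return qz, r, s

print(statefinder(E_lcdm, 0.0))   # approx (-0.55, 1.0, 0.0): the LCDM point {r,s} = {1,0}
```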
In the $r-s$ and $r-q$ planes, the departure of any dark energy model from these fixed points are analyzed. The pairs $\{r,s\}$ and $\{r,q\}$ for models 1 and 2 are calculated as $$r=\frac{% \begin{array}{c} 11+101\alpha \tilde{a}+(358\alpha ^{2}+158\beta )\tilde{a}^{2}+(488\alpha ^{3}+1237\alpha \beta )\tilde{a}^{3} \\ +(2440\alpha ^{2}\beta +224\alpha ^{4}+1156\beta ^{2})\tilde{a}% ^{4}+(1424\alpha ^{3}\beta +4087\alpha \beta ^{2})\tilde{a}^{5} \\ +(3346\alpha ^{2}\beta ^{2}+2178\beta ^{3})\tilde{a}^{6}+3375\alpha \beta ^{3}\tilde{a}^{7}+1233\beta ^{4}\tilde{a}^{8}% \end{array}% }{2(1+\alpha \tilde{a}+\beta \tilde{a}^{2})^{2}(1+2\alpha \tilde{a}+3\beta \tilde{a}^{2})^{2}} \label{d3}$$$$s=\frac{% \begin{array}{c} 9+89\alpha \tilde{a}+(332\alpha ^{2}+142\beta )\tilde{a}^{2}+(464\alpha ^{3}+1169\alpha \beta )\tilde{a}^{3} \\ +(216\alpha ^{4}+2348\alpha ^{2}\beta +1112\beta ^{2})\tilde{a}% ^{4}+(1384\alpha ^{3}\beta +3971\alpha \beta ^{2})\tilde{a}^{5} \\ +(3272\alpha ^{2}\beta ^{2}+2130\beta ^{3})\tilde{a}^{6}+3315\alpha \beta ^{3}\tilde{a}^{7}+1215\beta ^{4}\tilde{a}^{8}% \end{array}% }{% \begin{array}{c} 15\alpha \tilde{a}+(63\alpha ^{2}+42\beta )\tilde{a}^{2}+(84\alpha ^{3}+255\alpha \beta )\tilde{a}^{3} \\ +(36\alpha ^{4}+438\alpha ^{2}\beta +222\beta ^{2})\tilde{a}^{4}+(228\alpha ^{3}\beta +693\alpha \beta ^{2})\tilde{a}^{5} \\ +(507\alpha ^{2}\beta ^{2}+342\beta ^{3})\tilde{a}^{6}+477\alpha \beta ^{3}% \tilde{a}^{7}+162\beta ^{4}\tilde{a}^{8}% \end{array}% } \label{d4}$$ and $$r=\frac{3+(2+18\alpha )\tilde{a}+(4\alpha +41\alpha ^{2})\tilde{a}% ^{2}+(2\alpha ^{2}+33\alpha ^{3})\tilde{a}^{3}+9\alpha ^{4}\tilde{a}^{4}}{2% \tilde{a}+4\alpha \tilde{a}^{2}+2\alpha ^{2}\tilde{a}^{3}} \label{d5}$$$$s=\frac{-3-18\alpha \tilde{a}-41\alpha ^{2}\tilde{a}^{2}-33\alpha ^{3}\tilde{% a}^{3}-9\alpha ^{4}\tilde{a}^{4}}{-9+(9-30\alpha )\tilde{a}+(18\alpha -30\alpha ^{2})\tilde{a}^{2}+(9\alpha ^{2}-9\alpha ^{3})\tilde{a}^{3}} \label{d6}$$ The deceleration parameter $q$ is given by Eqs. (\[eq:q1\]) and (\[eq:q2\]), respectively. We plot the $r-s$ and $r-q$ diagrams for our models and compare these with the $\Lambda$CDM. Fig. \[fig:rs\] shows the time evolution of the statefinder pairs for different DE models. The left panel exhibits the evolution of $\{r,s\}$ while the right one for $\{r,q\}$. In both panels, the models 1 (dashed) and 2 (dot-dashed) are compared with DGP (dotted) and $\Lambda$CDM. In left panel, the fixed point ($r=1$, $s=0$) corresponds to $\Lambda$CDM, and all models passes through this fixed point. Moreover, one can see that the trajectory of DGP terminates there while the corresponding trajectories of models 1 and 2 evolve further showing that the phantom behavior in future. In right panel, the fixed point ($r=1$, $q=0.5$) represents SCDM. All the underlying models evolve from this point. The DGP converges to the second point ($r=1,q=-1$) that corresponds to de-Sitter expansion (dS) whereas models 1 and 2 do not converge to dS due to their phantom behavior. The dark dots on the curves show present values $\{r_{0},s_{0}\}$ (left) and $% \{r_{0},q_{0}\}$ (right) for the models under consideration. We chose the best-fit values of the model parameters for these plots. 
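The present-epoch markers in Fig. \[fig:rs\] can be reproduced from these closed-form expressions. Below is a minimal evaluation (ours, not the authors' code) of Eqs. (\[d5\])-(\[d6\]) for model 2 at $\tilde{a}_{0}=1$ with the best-fit $\alpha=-0.3147$ quoted above; it gives $r_{0}>1$ and $s_{0}<0$, consistent with the phantom-like behavior discussed here.

```python
def r_model2(at, alpha):
    """Statefinder r for model 2, Eq. (d5), as a function of the Jordan-frame scale factor."""
    num = (3 + (2 + 18*alpha)*at + (4*alpha + 41*alpha**2)*at**2
           + (2*alpha**2 + 33*alpha**3)*at**3 + 9*alpha**4*at**4)
    den = 2*at + 4*alpha*at**2 + 2*alpha**2*at**3
    return num / den

def s_model2(at, alpha):
    """Statefinder s for model 2, Eq. (d6)."""
    num = -3 - 18*alpha*at - 41*alpha**2*at**2 - 33*alpha**3*at**3 - 9*alpha**4*at**4
    den = (-9 + (9 - 30*alpha)*at + (18*alpha - 30*alpha**2)*at**2
           + (9*alpha**2 - 9*alpha**3)*at**3)
    return num / den

alpha_best = -0.3147
print(r_model2(1.0, alpha_best), s_model2(1.0, alpha_best))   # roughly r0 ~ 1.5, s0 ~ -0.23
```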
$Om$ diagnostic {#sec:om}
===============

We shall now diagnose our models with the $Om$ analysis, which is also a geometrical diagnostic that explicitly depends on redshift and the Hubble parameter, and is defined as [@Sahni2; @Z]: $$Om\left( z\right) =\frac{\left( \frac{H(z)}{H_{0}}\right)^{2}-1}{\left( 1+z\right) ^{3}-1} \label{eq:om}$$ [![This figure shows the evolution of $Om(z)$ versus redshift $z$ for different DE models such as $w=-1$ ($\Lambda$CDM), $-1.4$ (phantom), $-0.6$ (quintessence) and models 1 (dashed) & 2 (dot-dashed). The horizontal line represents $\Lambda$CDM and has zero curvature. The DE models with $w>-1$ (quintessence) have negative curvature whereas models with $w<-1$ (phantom) have positive curvature. Models 1 and 2 show positive curvature even though they have $w_{eff}>-1$ at the present epoch and $w_{eff}<-1$ (phantom phase) in the future. The vertical solid line represents the present era. The best-fit values are chosen to plot this figure.[]{data-label="fig:om"}](om.pdf "fig:"){width="2.5in" height="2.5in"}]{} The $Om$ diagnostic also differentiates various DE models from $\Lambda$CDM [@Amnras]. It is a simpler diagnostic when applied to observations, as it depends only on the first derivative of the scale factor. The Hubble parameter for a constant EOS is given by $$H^2(z) = H_0^2 \left[ \Omega_{0m}(1+z)^3 + (1-\Omega_{0m})(1+z)^{3(1+w)}\right] . \label{eq:Hconst}$$ The expression for $Om(z)$ corresponding to Eq. (\[eq:Hconst\]) is written as $$Om(z) = \Omega_{0m} + (1-\Omega_{0m})\frac{(1+z)^{3(1+w)}- 1}{(1+z)^3-1} \label{eq:omconst}$$ From Eq. (\[eq:omconst\]), one can notice that $Om(z) = \Omega_{0m}$ for $w=-1$ ($\Lambda$CDM), whereas $Om(z) > \Omega_{0m}$ for $w> -1$ (quintessence) and $Om(z) < \Omega_{0m}$ for $w < -1$ (phantom). The corresponding evolutions of $Om(z)$ for $w=-0.6$ (quintessence), $-1$ ($\Lambda$CDM) and $-1.4$ (phantom) are shown in Fig. \[fig:om\]. From Fig. \[fig:om\], one can clearly see that $Om(z)$ has negative, zero and positive curvature for quintessence, $\Lambda$CDM and phantom, respectively. In contrast, we show the evolution of $Om(z)$ for models 1 and 2 in Fig. \[fig:om\]. Both models exhibit positive curvature even though they have $w_{eff}>-1$ (quintessence) at the present epoch and $w_{eff}<-1$ (phantom phase) in the future, see Figs. \[fig:hw\] and \[fig:om\]. This is a notable result since, in the literature, quintessence generically does not have positive curvature [@Amnras].

Conclusion {#sec:conc}
==========

In this work, we have considered the scenario in which cosmic acceleration might arise due to a coupling between the known matter components present in the Universe [@KHOURY2016]. To this effect, we further extend the work of Ref. [@ARSSA]. The two models obtained by parameterizing the coupling function (or, correspondingly, the Einstein frame scale factor in terms of the physical scale factor) are analyzed using combined observational data. We used the joint data of $H(z)$, Type Ia supernovae and BAO, and constrained the model parameters of model 2 (see Fig. \[fig:cont\]). In this case, the best-fit values of $\alpha$ and $H_0$ are found to be $\alpha =-0.3147$ and $H_{0}=66.84~Km/S/Mpc$ (model 1 consists of two parameters that were constrained in Ref. [@ARSSA], with best-fit values $\alpha =-0.102681$ & $\beta =-0.078347$). We used the best-fit values of the model parameters to carry out the analysis. The time evolution of the Hubble parameter and the effective EOS is shown in Fig. \[fig:hw\].
From this figure we conclude that model 1 shows phantom behavior with a pressure singularity in the near future, while model 2, a generalized case of model 1, pushes the future sudden singularity to the infinite future. In Fig. \[fig:mu\], we have shown the error bars of the observational data together with the models under consideration. One can clearly observe that both models are compatible with the observations. In addition, the statefinder diagnostic has been performed for the underlying models. We obtained the expressions for the statefinder pairs and their behaviors have been displayed in the $r-s$ and $r-q$ planes, as shown in Fig. \[fig:rs\]. For comparison, we have also shown the DGP model in the same figure. In the $r-s$ plane, both models pass through the fixed point ($r=1, s=0$) and move away from $\Lambda$CDM, while the DGP trajectory terminates at this fixed point. This is due to the phantom behavior, which is a distinguishing characteristic of the underlying scenario. In the $q-r$ plane, it is clearly seen that all the models originate from the fixed point ($r=1,q=0.5$) that corresponds to SCDM. The DGP model converges to the second point ($r=1,q=-1$) that represents dS, while the models under consideration do not converge to the dS fixed point due to their phantom nature (see the right panel of Fig. \[fig:rs\]). The evolution of $Om(z)$ versus redshift $z$ for different DE models is shown in Fig. \[fig:om\]. We observed that $\Lambda$CDM, quintessence and phantom have zero, negative and positive curvature, respectively. Models 1 and 2 lie in the quintessence regime in the past, remain so up to the present epoch, and evolve to the phantom regime in the future. Both models have positive curvature even though they lie in the quintessence regime. This is an important result since, in the literature, it is not possible to have positive curvature for quintessence [@Amnras]. In our opinion, the scenario proposed in [@KHOURY2016] and investigated here is of great interest in view of GW170817. The modification caused by the interaction between known components of matter does not involve any extra degree of freedom and falls into the safe category in the light of recent observations of gravitational waves.

Acknowledgment {#acknowledgment .unnumbered}
==============

We are highly indebted to M. Sami for suggesting this problem, for constant supervision, and for providing all necessities to complete this work. SKJP wishes to thank the National Board of Higher Mathematics (NBHM), Department of Atomic Energy (DAE), Govt. of India for financial support through a post-doctoral research fellowship.

[99]{} A. G. Riess *et al.* \[Supernova Search Team\], Astron. J. **116**, 1009 (1998) \[astro-ph/9805201\]. S. Perlmutter *et al.* \[Supernova Cosmology Project Collaboration\], Astrophys. J. **517**, 565 (1999) \[astro-ph/9812133\]. A. H. Jaffe *et al.* \[Boomerang Collaboration\], Phys. Rev. Lett. **86**, 3475 (2001) \[astro-ph/0007333\]. D. N. Spergel *et al.* \[WMAP Collaboration\], Astrophys. J. Suppl. **170**, 377 (2007) \[astro-ph/0603449\]. J. R. Bond, G. Efstathiou and M. Tegmark, Mon. Not. Roy. Astron. Soc. **291**, L33 (1997) \[astro-ph/9702100\]. Y. Wang and P. Mukherjee, Astrophys. J. **650**, 1 (2006) \[astro-ph/0604051\]. U. Seljak *et al.* \[SDSS Collaboration\], Phys. Rev. D **71**, 103515 (2005) \[astro-ph/0407372\]. J. K. Adelman-McCarthy *et al.* \[SDSS Collaboration\], Astrophys. J. Suppl. **162**, 38 (2006) \[astro-ph/0507711\]. E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys.
D **15**, 1753 (2006) \[hep-th/0603057\]. M. Sami, New Adv. Phys. **10**, 77 (2016) \[arXiv:1401.7310 \[physics.pop-ph\]\]. V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D **9**, 373 (2000) \[astro-ph/9904398\]. J. Frieman, M. Turner and D. Huterer, Ann. Rev. Astron. Astrophys. **46**, 385 (2008) \[arXiv:0803.0982 \[astro-ph\]\]. R. R. Caldwell and M. Kamionkowski, Ann. Rev. Nucl. Part. Sci. **59**, 397 (2009) \[arXiv:0903.0866 \[astro-ph.CO\]\]. A. Silvestri and M. Trodden, Rept. Prog. Phys. **72**, 096901 (2009) \[arXiv:0904.0024 \[astro-ph.CO\]\]. M. Sami, Curr. Sci. **97**, 887 (2009) \[arXiv:0904.3445 \[hep-th\]\]. L. Perivolaropoulos, AIP Conf. Proc. **848**, 698 (2006) \[astro-ph/0601014\]. J. A. Frieman, AIP Conf. Proc. **1057**, 87 (2008) \[arXiv:0904.1832 \[astro-ph.CO\]\]. M. Sami, Lect. Notes Phys. **720**, 219 (2007). S. M. Carroll, Living Rev. Rel. **4**, 1 (2001) \[astro-ph/0004075\]. T. Padmanabhan, Phys. Rept. **380**, 235 (2003) \[hep-th/0212290\]. P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. **75**, 559 (2003) \[astro-ph/0207347\]. C. Wetterich, Nucl. Phys. B **302**, 668 (1988). B. Ratra and P. J. E. Peebles, Phys. Rev. D **37**, 3406 (1988). R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys. Rev. Lett. **80**, 1582 (1998) \[astro-ph/9708069\]. V. Sahni, M. Sami and T. Souradeep, Phys. Rev. D **65**, 023518 (2002) \[gr-qc/0105121\]. M. Sami and T. Padmanabhan, Phys. Rev. D **67**, 083509 (2003) \[hep-th/0212317\]. \[hep-th\]\]. L. Parker and A. Raval, Phys. Rev. D **60**, 063512 (1999) \[gr-qc/9905031\]. V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D **9**, 373 (2000) \[astro-ph/9904398\]. S. Nojiri and S. D. Odintsov, Phys. Lett. B **562**, 147 (2003) \[hep-th/0303117\]. P. Singh, M. Sami and N. Dadhich, Phys. Rev. D **68**, 023522 (2003) \[hep-th/0305110\]. M. Sami and A. Toporensky, Mod. Phys. Lett. A **19**, 1509 (2004) \[gr-qc/0312009\]. M. Sami, A. Toporensky, P. V. Tretjakov and S. Tsujikawa, Phys. Lett. B **619**, 193 (2005) \[hep-th/0504154\]. A. Sen, JHEP **0207**, 065 (2002) \[hep-th/0203265\]. T. Padmanabhan, Phys. Rev. D **66**, 021301 (2002) \[hep-th/0204150\]. M. Shahalam, S.D. Pathak, Shiyuan Li, R. Myrzakulov, Anzhong Wang, Eur. Phys. J. C 77 (2017) 686. A. Y. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B **511**, 265 (2001) \[gr-qc/0103004\]. V. Gorini, U. Moschella, A. Kamenshchik and V. Pasquier, AIP Conf. Proc. **751**, 108 (2005). K. Bamba, S. Capozziello, S. Nojiri and S. D. Odintsov, Astrophys. Space Sci. **342**, 155 (2012) \[arXiv:1205.3421 \[gr-qc\]\]. A. Ali, R. Gannouji and M. Sami, Phys. Rev. D **82**, 103015 (2010) \[arXiv:1008.1588 \[astro-ph.CO\]\]. J. Yoo and Y. Watanabe, Int. J. Mod. Phys. D **21**, 1230002 (2012) \[arXiv:1212.4726 \[astro-ph.CO\]\]. L. Berezhiani, J. Khoury and J. Wang, Phys. Rev. D **95**, no. 12, 123530 (2017) \[arXiv:1612.00453 \[hep-th\]\]. A. Agarwal, R. Myrzakulov, S. K. J. Pacif, M. Sami and A. Wang, arXiv:1709.02133 \[gr-qc\]. G. Dvali, G. Gabadadze and M. Porrati, 4D Gravity on a Brane in 5D Minkowski Space, Phys. Lett. B [**485**]{}, 208 (2000). O. Farooq and B. Ratra, Astrophys. J. **766**, L7 (2013) \[arXiv:1301.5243 \[astro-ph.CO\]\]. And the references their in. P. A. R. Ade *et al.* \[Planck Collaboration\], A & A, 571, A16 (2014). P. A. R. Ade *et al.* \[Planck Collaboration\], A & A, 594, A13 (2016). N. Suzuki, D. Rubin, C. Lidman, G. Aldering, R. Amanullah, K. Barbary, L. F. Barrientos and J. Botyanszki *et al.*, Astrophys. J. 
**746**, 85 (2012) \[arXiv:1105.3470 \[astro-ph.CO\]\]. R. Giostri, M. V. d. Santos, I. Waga, R. R. R. Reis, M. O. Calvao and B. L. Lago, JCAP **1203**, 027 (2012) \[arXiv:1203.3213 \[astro-ph.CO\]\]. C. Blake, E. Kazin, F. Beutler, T. Davis, D. Parkinson, S. Brough, M. Colless and C. Contreras *et al.*, Mon. Not. Roy. Astron. Soc. **418**, 1707 (2011) \[arXiv:1108.2635 \[astro-ph.CO\]\]. W. J. Percival *et al.* \[SDSS Collaboration\], Mon. Not. Roy. Astron. Soc. **401**, 2148 (2010) \[arXiv:0907.1660 \[astro-ph.CO\]\]. F. Beutler, C. Blake, M. Colless, D. H. Jones, L. Staveley-Smith, L. Campbell, Q. Parker and W. Saunders *et al.*, Mon. Not. Roy. Astron. Soc. **416**, 3017 (2011) \[arXiv:1106.3366 \[astro-ph.CO\]\]. N. Jarosik, C. L. Bennett, J. Dunkley, B. Gold, M. R. Greason, M. Halpern, R. S. Hill and G. Hinshaw *et al.*, Astrophys. J. Suppl. **192**, 14 (2011) \[arXiv:1001.4744 \[astro-ph.CO\]\]. D. J. Eisenstein *et al.* \[SDSS Collaboration\], Astrophys. J. **633**, 560 (2005) \[astro-ph/0501171\]. V. Sahni, T. D. Saini, A. A. Starobinsky and U. Alam, JETP Lett. **77**, 201 (2003); U. Alam, V. Sahni, T. D. Saini, and A. A. Starobinsky, Mon. Not. R. Astron. Soc. [**344**]{}, 1057 (2003). M. Sami [*et al*]{}., Cosmological dynamics of non-minimally coupled scalar field system and its late time cosmic relevance, Phys. Rev. D [**86**]{} (2012) 103532 \[arXiv:1207.6691\] \[ INSPIRE \]. R. Myrzakulov and M. Shahalam, Statefinder hierarchy of bimetric and galileon models for concordance cosmology, JCAP [**10**]{} (2013) 047 \[arXiv:1303.0194\] \[ INSPIRE \]. Sarita Rani [*et al*]{}., Constraints on cosmological parameters in power-law cosmology, JCAP [**03**]{} (2015) 031. V. Sahni, A. Shafieloo and A. A. Starobinsky, Phys. Rev. D **78**, 103502 (2008) \[arXiv:0807.3548 \[astro-ph\]\]. C. Zunckel and C. Clarkson, Phys. Rev. Lett. **101**, 181301 (2008) \[arXiv:0807.4304 \[astro-ph\]\]. M. Shahalam, Sasha Sami, Abhineet Agarwal, $Om$ diagnostic applied to scalar field models and slowing down of cosmic acceleration, Mon. Not. Roy. Astron. Soc. [**448**]{} (2015) 2948-2959 \[arXiv:1501.04047\] \[astro-ph.CO\]
---
abstract: 'The neutrino mixing parameters are thoroughly studied using the renormalization-group evolution of Dirac neutrinos with the recently proposed parametrization of the neutrino mixing angles referred to as ‘high-scale mixing relations’. The correlations among all neutrino mixing and $CP$ violating observables are investigated. The predictions for the neutrino mixing angle $\theta_{23}$ are precise, and could be easily tested by ongoing and future experiments. We observe that the high scale mixing unification hypothesis is incompatible with Dirac neutrinos due to updated experimental data.'
author:
- Gauhar Abbas
- Mehran Zahiri Abyaneh
- Rahul Srivastava
title: Precise predictions for Dirac neutrino mixing
---

Introduction
============

Neutrino mixing is one of the most fascinating and challenging discoveries. It is starkly different from quark mixing, which is small in the standard model (SM). There are a number of ways to explain these two very different phenomena. Quark-lepton unification, which is one of the main attractive features of grand unified theories (GUT) [@Pati:1974yy; @Georgi:1974sy; @Fritzsch:1974nn], could provide an explanation of the origin of neutrino and quark mixing, since quarks and leptons live in a joint representation of the symmetry group. Another interesting approach is to use flavor symmetries [@Altarelli:2010gt; @King:2013eh; @Holthausen:2013vba; @Araki:2013rkf; @Ishimori:2014jwa]. These symmetries could also naturally appear in GUT theories [@Lam:2014kga]. To explain the origin of neutrino and quark mixing, a new parametrization of the neutrino mixing angles in terms of quark mixing angles was recently proposed in Ref. [@Abbas:2015vba]. The various simplified limits of this parametrization are referred to as ‘high-scale mixing relations’ (HSMR). The parametrization is inspired by the high scale mixing unification (HSMU) hypothesis, which states that at certain high scales the neutrino mixing angles are identical to the quark mixing angles [@Mohapatra:2003tw; @Mohapatra:2005gs; @Mohapatra:2005pw; @Agarwalla:2006dj]. This hypothesis is studied in detail in Refs. [@Abbas:2014ala; @Abbas:2013uqh; @Srivastava:2015tza; @Srivastava:2016fhg; @Haba:2012ar]. The HSMR parametrization of the neutrino mixing angles assumes that the neutrino mixing angles are proportional to those of quarks due to some underlying theory, which could be quark-lepton unification, models based on flavor symmetries, or both. In fact, such models are also presented in Ref. [@Abbas:2015vba]. The scale where the HSMR parametrization could be realized is referred to as the unification scale. In its most general form, the HSMR parametrization can be written as follows: $$\label{hsmr} \theta_{12} = \alpha_1^{k_1} ~\theta_{12}^q, ~~ \theta_{13} = \alpha_2^{k_2}~ \theta_{13}^q, ~~\theta_{23} =\alpha_3^{k_3} \theta_{23}^q,$$ where $\theta_{ij}$ (with $i,j=1,2,3$) denotes the leptonic mixing angles and $\theta_{ij}^q$ are the quark mixing angles. The exponents $k_i$ with $i=(1,2,3)$ are real. Predictions of the HSMR parametrization could be a strong hint of quark-lepton unification, some flavor symmetry, or both. The HSMR parametrization is studied in the framework of the SM extended by the minimal supersymmetric standard model (MSSM). The starting point is to run the quark mixing angles from the low scale (mass of the $Z$ boson) to the supersymmetry (SUSY) breaking scale using the renormalization-group (RG) evolution of the SM.
The RG equations of the MSSM govern the evolution of quark mixing angles from the SUSY breaking scale to the unification scale. After obtaining quark mixing angles at the unification scale, the HSMR parametrization is used to run neutrino mixing parameters from the unification scale to the SUSY breaking scale via RG evolution of the MSSM. From the SUSY breaking scale to the low scale, the SM RG equations are used to evolve the neutrino mixing parameters. The free parameters controlling the top-down evolution of the neutrino mixing parameters are masses of the three light neutrinos, Dirac CP phase and parameters $\alpha_i$. Masses of neutrinos must be quasidegenerate and normal hierarchical. Furthermore, the large value of $\tan \beta$ is required[@Abbas:2015vba]. On the other hand, the nature of neutrinos is still unknown. They could be equally Dirac or Majorana in nature. Hence, from the phenomenological point of view, Dirac neutrinos are as important as Majorana neutrinos. There are many ongoing important experiments to test the nature of neutrinos[@Agostini:2013mzu; @Auger:2012ar; @Gando:2012zm; @Alessandria:2011rc]. However, for the Dirac mass of neutrinos, the Yukawa couplings for neutrinos seem to be unnaturally small. The elegant way to explain this fine-tuning is see-saw mechanism which assumes that neutrinos are Majorana in nature[@Minkowski; @GellMann:1980vs; @Yanagida:1979as; @Glashow:1979nm; @Mohapatra:1979ia]. The smallness of masses for Dirac neutrinos could be explained in many models using heavy degrees of freedom[@Abbas:2016qqc; @Ma:2014qra; @Mohapatra:1986bd; @ArkaniHamed:2000bq; @Borzumati:2000mc; @Kitano:2002px; @Abel:2004tt; @Murayama:2004me; @Smirnov:2004hs; @Mohapatra:2004vr]. There are also models based on extra dimensions which explain the smallness of Dirac neutrino mass by a small overlapping of zero-mode profiles along extra dimensions[@Hung:2004ac; @Ko:2005sh; @Antusch:2005kf]. Dirac neutrinos seem to be a natural choice in certain orbifold compactifications of the heterotic string where the standard see-saw mechanism is difficult to realize[@Giedt:2005vx]. Cosmological data do not prefer Majorana or Dirac neutrinos either. For instance, the baryon asymmetry of the Universe can also be explained for Dirac neutrinos in various theoretical models[@Dick:1999je; @Murayama:2002je; @Gu:2006dc; @Gu:2007mi; @Gu:2007mc; @Gu:2007ug; @Gu:2012fg]. Although the RG evolution of Majorana neutrinos is extensively studied in the literature[@Mohapatra:2003tw; @Mohapatra:2005gs; @Mohapatra:2005pw; @Agarwalla:2006dj; @Casas:2003kh; @Abbas:2014ala; @Abbas:2013uqh; @Srivastava:2015tza; @Casas:1999tg], less attention is being paid to the RG evolution of Dirac neutrinos. In fact, as far as we know, it was shown for the first time in Ref.[@Abbas:2013uqh] that RG evolution for Dirac neutrinos can explain the large neutrino mixing assuming the HSMU hypothesis. However, as we show later, these results are ruled out by new updated data[@Forero:2014bxa; @Capozzi:2013csa; @Gonzalez-Garcia:2014bfa] and due to an improved algorithm used in the package REAP[@private]. It is established that the HSMR parametrization can explain the observed pattern of the neutrino mixing assuming they are Majorana in nature[@Abbas:2015vba]. In this paper, we investigate the consequences of the HSMR parametrization using the RG evolution of Dirac neutrinos. This paper is organized in the following way: In Sec. \[sec1\], we present our results on the RG evolution of the neutrino mixing parameters. In Sec. 
\[sec2\] we present a model with naturally small Dirac neutrino masses, where the HSMR parametrization discussed in Eq.\[hsmr\] can be explicitly realized. We summarize our work in Sec. \[sec3\].

RG evolution of the neutrino mixing parameters for Dirac neutrinos {#sec1}
==================================================================

Now we present our results. The RG equations describing the evolution of the neutrino mixing parameters for Dirac neutrinos are derived in Ref. [@Lindner:2005as]. We have used the Mathematica-based package REAP for the computation of the RG evolution at two loops [@Antusch:2005gp]. The first step is to evolve the quark mixing angles, gauge couplings, and Yukawa couplings of quarks and charged leptons from the low scale to the SUSY breaking scale. From the SUSY breaking scale to the unification scale, the evolution is governed by the MSSM RG equations. The quark mixing angles at the unification scale after evolution are $\theta_{12}^q = 13.02^\circ$, $\theta_{13}^q=0.17^\circ$ and $\theta_{23}^q=2.03^\circ$. These quark mixing angles are used in the HSMR parametrization at the unification scale, and the neutrino mixing parameters are then evolved down to the SUSY breaking scale using the MSSM RG equations. After this, the evolution of the mixing parameters is governed by the SM RG equations. The value of $\tan \beta$ is chosen to be $55$. For simplification, we have assumed $k_1=k_2=k_3=1$ in the HSMR parametrization. The global status of the neutrino mixing parameters is given in Table \[tab1\].

  Quantity                                  Best fit   3$\sigma$ range
  ---------------------------------------- ---------- -----------------
  $\Delta m^2_{21}~(10^{-5}~{\rm eV}^2)$    $7.60$     7.11 – 8.18
  $\Delta m^2_{31}~(10^{-3}~{\rm eV}^2)$    $2.48$     2.30 – 2.65
  $\theta_{12}^{\circ}$                     $34.6$     31.8 – 37.8
  $\theta_{23}^{\circ}$                     $48.9$     38.8 – 53.3
  $\theta_{13}^{\circ}$                     $8.6$      7.9 – 9.3

  : The global fits for the neutrino mixing parameters [@Forero:2014bxa][]{data-label="tab1"}

Results for the SUSY breaking scale at 2 TeV
--------------------------------------------

In this subsection, we present our results for the SUSY breaking scale at 2 TeV, following the direct LHC searches [@Craig:2013cxa]. The unification scale where the HSMR parametrization could be realized is chosen to be the GUT scale ($2 \times 10^{16}$ GeV). The free parameters of the analysis are shown in Table \[tab2\].

  Quantity           Range at the unification scale
  ------------------ --------------------------------
  $\alpha_1$         $0.7 - 0.8$
  $\alpha_2$         $2.12 - 2.78$
  $\alpha_3$         $1.002 - 1.01$
  $m_1$ (eV)         $0.49227 - 0.49825$
  $m_2$ (eV)         $0.494 - 0.5$
  $m_3$ (eV)         $0.52898 - 0.53542$
  $\delta_{Dirac}$   $(-14^\circ, 14^\circ)$

  : The free parameters of the analysis chosen at the unification scale.[]{data-label="tab2"}

In Fig. \[fig1\], we show the correlation between the mixing angles $\theta_{13}$ and $\theta_{23}$. It is obvious that our prediction for $\theta_{23}$ is precise. The allowed range of $\theta_{13}$ is $7.94^\circ - 9.3^\circ$. The corresponding range of $\theta_{23}$ is $51.5^\circ - 52.64^\circ$. It is important to note that the predictions for $\theta_{13}$ include the best fit value. Another important prediction is that $\theta_{23}$ is nonmaximal and lies in the second octant. Being precise, this correlation is easily testable in future and ongoing experiments such as INO, T2K, NO$\nu$A, LBNE, Hyper-K, and PINGU [@Abe:2011ks; @Patterson:2012zs; @Adams:2013qkq; @Ge:2013ffa; @Kearns:2013lea; @Athar:2006yb].
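To make the boundary condition of the runs explicit, the sketch below (ours, not part of the original analysis) applies the HSMR relations of Eq. (\[hsmr\]) with $k_{1}=k_{2}=k_{3}=1$ to the quark angles quoted above and the $\alpha_{i}$ ranges of Table \[tab2\]. The resulting small unification-scale neutrino angles are then magnified to the observed values by the RG running described in the text, which requires the full REAP machinery and is not reproduced here.

```python
# Quark mixing angles at the unification scale (degrees), as quoted in the text
theta_q = {'12': 13.02, '13': 0.17, '23': 2.03}

# alpha_i ranges of Table 2 (exponents k_i = 1, as assumed in the text)
alpha_range = {'12': (0.7, 0.8), '13': (2.12, 2.78), '23': (1.002, 1.01)}

# HSMR relations: theta_ij = alpha_i * theta_ij^q at the unification scale
for ij, (lo, hi) in alpha_range.items():
    print("theta_%s at the unification scale: %.2f - %.2f deg"
          % (ij, lo * theta_q[ij], hi * theta_q[ij]))
```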
![ The variation of $\theta_{23}^\circ$ with respect to $\theta_{13}^\circ$.[]{data-label="fig1"}](fig1.eps){width="6.5cm" height="5cm"}

In Fig. \[fig2\], we show the variation of the “averaged electron neutrino mass” $m_\beta$ [@Drexlin:2013lha] with respect to $\Delta m_{31}^2$. The allowed range of $m_\beta$ is $0.4633-0.4690$ eV, which is precise. The upper bound on $m_\beta$ is $2$ eV from tritium beta decay [@Kraus:2004zw; @Aseev:2011dq]. The KATRIN experiment is expected to probe $m_\beta$ as low as $0.2$ eV at $90\%$ C.L. [@Drexlin:2013lha]. Hence, our prediction for $m_\beta$ is well within the reach of the KATRIN experiment. The allowed range for $\Delta m_{31}^2$ is $(2.30 - 2.37) \times 10^{-3} \textrm{eV}^2$, which is narrower than the $3\sigma$ range given by the global fit in Table \[tab1\]. It should be noted that the best fit value of $\Delta m_{31}^2$ given in Table \[tab1\] is excluded by our results.

![ The variation of $m_\beta$ with respect to $\Delta m_{31}^2$.[]{data-label="fig2"}](fig2.eps){width="6.5cm" height="5cm"}

![ The variation of $\delta_{Dirac}^\circ$ with respect to $\theta_{13}^\circ$.[]{data-label="fig3"}](fig3.eps){width="6.5cm" height="5cm"}

We show in Fig. \[fig3\] another important prediction of this work. This is the variation of the $CP$ violating Dirac phase $\delta_{Dirac}$ with respect to $\theta_{13}$. The Dirac phase $\delta_{Dirac}$ is not known from experiments. Hence, any prediction of this important observable is of great interest. Our prediction for $\delta_{Dirac}$ is $80.01^\circ ~\textrm{to} ~287.09^\circ$, excluding a substantial part of the allowed parameter space of this quantity. In Fig. \[fig4\], we show the behavior of the Jarlskog invariant $J_{CP}$ with respect to the Dirac phase $\delta_{Dirac}$. The allowed range for this observable is $-0.266~\textrm{to} ~0.266$. Thus, a large $CP$ violation is possible in our analysis.

![ The variation of $J_{CP}$ with respect to $\delta_{Dirac}^\circ$.[]{data-label="fig4"}](fig4.eps){width="6.5cm" height="5cm"}

The variation of the sum of the three neutrino masses, $\Sigma m_i$, with respect to $\Delta m_{31}^2$ is shown in Fig. \[fig5\]. The allowed range of $\Sigma m_i$ is $1.393-1.410$ eV, which is precise. We comment that our prediction for $\Sigma m_i$ is a little higher than the bound provided by cosmological and astrophysical observations, which is $0.72$ eV at $95\%$ C.L. [@Ade:2015xua]. However, the cosmological limit on $\Sigma m_i$ is highly model dependent. For example, as shown in Fig. 29 of Ref. [@Ade:2015xua], it could be as large as $1.6$ eV. Furthermore, Ref. [@Ade:2015xua] treats the neutrinos as degenerate, ignoring the observed mass splittings, while its baseline $\Lambda$CDM model assumes two massless neutrinos and one massive neutrino with $\Sigma m_i =0.06$ eV. Moreover, $\Lambda$CDM faces several challenges in explaining structures on galaxy scales [@Famaey:2013ty]. Hence, our predictions are best tested in laboratory-based experiments such as KATRIN [@Drexlin:2013lha].

![ The variation of $\Sigma m_i$ with respect to $\Delta m_{31}^2$.[]{data-label="fig5"}](fig5.eps){width="6.5cm" height="5cm"}

We do not obtain any constraints on the mixing angle $\theta_{12}$ and the mass square difference $\Delta m_{21}^2$. The whole $3\sigma$ ranges of the global fit are allowed for these quantities.

Variation of the SUSY breaking scale
------------------------------------

Now, we discuss the effect of the variation of the SUSY breaking scale on our predictions.
In this case, we change the SUSY breaking scale to 5 TeV. However, the unification scale is still at the GUT scale. Our results are summarized in Tables \[tab3\] and \[tab4\]. In Table \[tab3\], we provide our free parameters which are chosen at the GUT scale. Our predictions at the low scale are given in Table \[tab4\]. Quantity Range at the unification scale ------------------ -------------------------------- $\alpha_1$ $0.88 - 1.012$ $\alpha_2$ $2.72 - 2.85$ $\alpha_3$ $1.095$ $m_1$(eV) $0.46878 - 0.47380$ $m_2$ (eV) $0.47 - 0.475$ $m_3$ (eV) $0.50321 - 0.50857$ $\delta_{Dirac}$ $(-14^\circ,14^\circ)$ : Predictions of neutrino mixing parameters and other observables at the low scale for the SUSY breaking scale at 5 TeV.[]{data-label="tab4"} Quantity Range at the low scale ---------------------------------------- --------------------------------------------------------- $\theta_{12}$ $32.85^\circ - 37.74^\circ$ $\theta_{13}$ $7.94^\circ - 8.20^\circ$ $\theta_{23}$ $38.86^\circ -39.45^\circ$ $m_1$(eV) $0.44458 - 0.44932$ $\Delta m^2_{21}~(10^{-5}~{\rm eV}^2)$ $7.15 - 8.15$ $\Delta m^2_{31}~(10^{-3}~{\rm eV}^2)$ $2.30- 2.34 $ $m_\beta$ (eV) $0.4447 - 0.4468 $ $\Sigma m_i$ (eV) $1.337 - 1.351$ $\delta_{Dirac}$ $281.28^\circ - 355.49^\circ$ [and]{} $0 - 89.14^\circ$ $J_{CP}$ $-0.2511~ {\rm to} ~0.2511$ : Predictions of neutrino mixing parameters and other observables at the low scale for the SUSY breaking scale at 5 TeV.[]{data-label="tab4"} We observe that the mixing angle $\theta_{12}$ and mass square difference $\Delta m^2_{21}$ were unconstrained for the SUSY breaking scale at 2 TeV in the previous subsection. Now, we observe that these quantities are bounded with respect to the $3\sigma$ range given by the global fit. The mixing angle $\theta_{23}$, unlike the investigation for SUSY breaking scale 2 TeV, lies in the first octant and is non-maximal. Variation of the unification scale ---------------------------------- In this subsection, we investigate the variation of the unification scale. In Tables \[tab5\] and \[tab6\], we show our results when we choose the unification scale to be $10^{12}$ GeV which is well below the GUT scale. However, the SUSY breaking scale is kept to 2 TeV. We show in Table \[tab5\], the values of the free parameters chosen at the unification scale. In Table \[tab6\], we present our results. The first remarkable prediction is the sum of neutrino masses which is well below the cosmological bound. The Dirac $CP$ phase has a precise range. The mixing angle $\theta_{12}$ and mass square difference $\Delta m^2_{21}$ are now relatively constrained. The mixing angle $\theta_{23}$ lies in the first octant, and is nonmaximal. 
Quantity Range at the unification scale ------------------ -------------------------------- $\alpha_1$ $0.67 - 0.85$ $\alpha_2$ $19.9 - 20.92$ $\alpha_3$ $7.41 - 7.42$ $m_1$(eV) $0.19815 - 0.20311$ $m_2$ (eV) $0.2 - 0.205 $ $m_3$ (eV) $0.21100 - 0.21628 $ $\delta_{Dirac}$ $(-10^\circ, 18^\circ)$ : Predictions of neutrino mixing parameters and other observables for the unification scale of $10^{12}$ GeV and the SUSY breaking scale at 2 TeV.[]{data-label="tab6"} Quantity Range at the low scale ---------------------------------------- ----------------------------------------------------- $\theta_{12}$ $32.35^\circ - 37.34^\circ$ $\theta_{13}$ $7.94^\circ - 8.45^\circ$ $\theta_{23}$ $38.83^\circ -39.18^\circ$ $m_1$(eV) $0.18321 - 0.18801$ $\Delta m^2_{21}~(10^{-5}~{\rm eV}^2)$ $7.77 - 8.17$ $\Delta m^2_{31}~(10^{-3}~{\rm eV}^2)$ $2.30- 2.42 $ $m_\beta$ (eV) $0.1834 - 0.1880 $ $\Sigma m_i$ (eV) $0.556 - 0.570$ $\delta_{Dirac}$ $182.66^\circ - 203.43^\circ$ [and]{} $0-120^\circ$ $J_{CP}$ $-0.1020~ {\rm to} ~0.2336$ : Predictions of neutrino mixing parameters and other observables for the unification scale of $10^{12}$ GeV and the SUSY breaking scale at 2 TeV.[]{data-label="tab6"} We conclude that there is no parameter space beyond the GUT scale for Dirac neutrinos so that we could recover the experimental data at the low scale using the RG evolution. This is a strong prediction and could be useful in construction of models (particularly GUT models) where Dirac neutrinos are the natural choice[@Ma:2014qra; @Mohapatra:1986bd; @ArkaniHamed:2000bq; @Borzumati:2000mc; @Kitano:2002px; @Abel:2004tt; @Murayama:2004me; @Smirnov:2004hs; @Mohapatra:2004vr]. Model for the HSMR parametrization {#sec2} =================================== We have investigated the HSMR parametrization for Dirac neutrinos in a model independent way. However, for the sake of completeness, in this section we discuss theoretical implementation of the HSMR parametrization in a specific model for Dirac neutrinos. Our model is based on a model presented in Ref. [@Haba:2012ar; @Haba:2011pm] which provides Dirac neutrinos with naturally small masses. This model is a type of neutrinophilic SUSY extension of the SM which can easily be embedded in a class of $SU(5)$ models. To obtain HSMR parametrization in the model given in Ref. [@Haba:2011pm], we impose a $Z_3$ discrete symmetry on this model. Under the $Z_3$ symmetry the first generation of both left- and right-handed quarks and leptons transforms as $1$, while the second generation transforms as $\omega$ and the third generation transforms as $\omega^2$, where $\omega$ denotes cube root of unity with $\omega^3 = 1$. All other fields transform trivially as $1$ under the $Z_3$ symmetry. The $Z_3$ symmetry ensures that the mass matrices for both up and down quarks as well as for charged leptons and neutrinos are all simultaneously diagonal. This in turn implies that the $V_{\rm{CKM}}$ as well as $V_{\rm{PMNS}}$ are both unity and there is no generation mixing in either quark or lepton sectors. To allow for the mixing, we break $Z_3$ in a way as done in Ref. [@Ma:2002yp]. Such corrections can arise from the soft SUSY breaking sector[@Babu:2002dz; @Babu:1998tm; @Gabbiani:1996hi]. For this purpose, we allow symmetry breaking terms of the form $|y''_i| <<|y'_i| <<|y_i| $ where $|y_i|$ are the terms invariant under $Z_3$ symmetry, and $|y'_i|, |y''_i|$ are the symmetry breaking terms transforming as $\omega, \omega^2$ under the $Z_3$ symmetry. 
This symmetry breaking pattern is well established and is known to explain the CKM structure of the quark sector[@Ma:2002yp]. Here, we have imposed this pattern on quarks as well as leptons simultaneously. Including these symmetry breaking terms, the mass matrices for quarks and leptons become $$\begin{aligned} M_{u,d,l} = \left( \begin{array}{ccc} y_1 v & y'_2 v & y''_3 v \\ y''_1 v & y_2 v & y'_3 v \\ y'_1 v & y''_2 v & y_3 v \\ \end{array} \right)~, \qquad M_{\nu} = \left( \begin{array}{ccc} y_1 u & y'_2 u & y''_3 u \\ y''_1 u & y_2 u & y'_3 u \\ y'_1 u & y''_2 u & y_3 u \\ \end{array} \right)~, \label{brok-mass-mat}\end{aligned}$$ where $v$ stands for the vacuum expectation value (vev) of the usual $H_u, H_d$ doublet scalars of MSSM and $u$ is the vev of the neutrinophilic scalar $H_\nu$ as discussed in Ref. [@Haba:2011pm]. Also, for the sake of brevity we have dropped the sub- and superscripts on the various terms. The mass matrix in (\[brok-mass-mat\]) is exactly same as the mass matrix obtained in Ref. [@Ma:2002yp] and can be diagonalized in the same way as done in Ref.[@Ma:2002yp]. The mass matrices of (\[brok-mass-mat\]) lead to a “Wolfenstein-like structure” for both CKM and Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrices, thus leading to the HSMR parametrization given in Eq.\[hsmr\]. Since this model is a modification of model given in Ref.[@Haba:2011pm] which can be embedded in a class of $SU(5)$ GUT models, therefore, it can also be easily embedded in the $SU(5)$ GUT model in a quite similar way as done in Ref.[@Haba:2011pm]. Summary {#sec3} ======= Neutrino mixing is remarkably different from small quark mixing. The aim of the present work is to provide an insight into a common origin of neutrino as well as quark mixing for Dirac neutrinos. Furthermore, we show that smallness of neutrino masses can be explained through the RG evolution of Dirac neutrinos. The HSMR parametrization of neutrino mixing angles is one among many other theoretical frameworks constructed for this purpose. The origin of this parametrization lies in the underlying concept of the quark-lepton unification or flavor symmetries or both. Hence, the confirmation of predictions provided by the HSMR parametrization would be a strong hint of the quark-lepton unification or a grand symmetry operating at the unification scale. As far as our knowledge is concerned, it was shown for the first time in Ref.[@Abbas:2013uqh] that the RG evolution can also explain the large neutrino mixing for Dirac neutrinos. However, as we have shown in this work, these results are no longer valid due to updated experimental data[@Forero:2014bxa; @Capozzi:2013csa; @Gonzalez-Garcia:2014bfa] and the improved algorithm used in the package REAP[@private]. In the present work, we have investigated the RG evolution of Dirac neutrinos in the framework of the HSMR parametrization. To our knowledge, this is the first thorough study on the RG behavior of Dirac neutrinos. The main achievement is that the RG evolution of Dirac neutrinos could explain the large neutrino mixing including the observation of a small and nonzero value of the mixing angle $\theta_{13}$. We obtain strong correlations among different experimental observables. 
Our predictions for the mixing angles $\theta_{13}$, $\theta_{23}$, averaged electron neutrino mass $m_\beta$, Dirac $CP$ phase $\delta_{Dirac}$ and the sum of three neutrino masses, $\Sigma m_i$ are precise and easily testable at ongoing and future experiments like INO, T2K, NO$\nu$A, LBNE, Hyper-K, PINGU and KATRIN [@Abe:2011ks; @Patterson:2012zs; @Adams:2013qkq; @Ge:2013ffa; @Kearns:2013lea; @Athar:2006yb; @Drexlin:2013lha]. The mixing angle $\theta_{23}$ is nonmaximal and lies in the second octant for the SUSY breaking scale 2 TeV and unification scale at the GUT scale. For the variation of the SUSY breaking scale and the unification scale, the mixing angle $\theta_{23}$ is nonmaximal and lies in the first octant. The predictions for the mass square difference $\Delta m_{31}^2$ are also well constrained and testable in experiments. Furthermore, the Dirac $CP$ phase is found to be lying in precise ranges in our analysis. The unification scale beyond the GUT scale is ruled out in our investigation. This fact could be useful for the GUT theories having Dirac neutrinos[@Ma:2014qra; @Mohapatra:1986bd; @ArkaniHamed:2000bq; @Borzumati:2000mc; @Kitano:2002px; @Abel:2004tt; @Murayama:2004me; @Smirnov:2004hs; @Mohapatra:2004vr]. We remark that we have investigated the RG evolution of neutrino mixing parameters at two loops. This is a crucial input since the RG evolution at one loop is insufficient to provide the required enhancement of the mixing angles which in turn, cannot yield the results obtained in this work. One of the main consequences of our investigation is that the HSMU hypothesis is not compatible with Dirac neutrinos due to updated experimental data[@Forero:2014bxa; @Capozzi:2013csa; @Gonzalez-Garcia:2014bfa] and a better algorithm used in the package REAP[@private]. The HSMU hypothesis is a particular realization of the HSMR parametrization when we choose $\alpha_1= \alpha_2 =\alpha_3=1$ for $k_1= k_2 =k_3=1$. As can be observed from Tables \[tab2\], \[tab3\] and \[tab5\] the allowed range for $\alpha_i$ excludes the $\alpha_1= \alpha_2 =\alpha_3=1$ case. This result is rigorous and robust in the sense that changing the SUSY breaking scale and the unification scale does not change this conclusion. Hence, the HSMR parametrization is one of the preferable frameworks to study the RG evolution of Dirac neutrinos now. The work of G. A. and M. Z. A. is supported by the Spanish Government and ERDF funds from the EU Commission \[Grants No. FPA2011-23778, FPA2014-53631-C2-1-P and No. CSD2007-00042 (Consolider Project CPAN)\]. RS is funded by the Spanish grants FPA2014-58183-P, Multidark CSD2009-00064, SEV-2014-0398 (MINECO) and PROMETEOII/2014/084 (Generalitat Valenciana). [99]{} J. C. Pati and A. Salam, Phys. Rev. D [**10**]{} (1974) 275 \[Erratum-ibid. D [**11**]{} (1975) 703\]. H. Georgi and S. L. Glashow, Phys. Rev. Lett.  [**32**]{} (1974) 438. H. Fritzsch and P. Minkowski, Annals Phys.  [**93**]{} (1975) 193. G. Altarelli and F. Feruglio, Rev. Mod. Phys.  [**82**]{} (2010) 2701 \[arXiv:1002.0211 \[hep-ph\]\]. S. F. King and C. Luhn, Rept. Prog. Phys.  [**76**]{} (2013) 056201 \[arXiv:1301.1340 \[hep-ph\]\]. M. Holthausen and K. S. Lim, Phys. Rev. D [**88**]{} (2013) 033018 \[arXiv:1306.4356 \[hep-ph\]\]. T. Araki, H. Ishida, H. Ishimori, T. Kobayashi and A. Ogasahara, Phys. Rev. D [**88**]{} (2013) 096002 \[arXiv:1309.4217 \[hep-ph\]\]. H. Ishimori and S. F. King, Phys. Lett. B [**735**]{} (2014) 33 \[arXiv:1403.4395 \[hep-ph\]\]. C. S. Lam, Phys. Rev. 
D [**89**]{} (2014) 9, 095017 \[arXiv:1403.7835 \[hep-ph\]\]. G. Abbas, M. Z. Abyaneh, A. Biswas, S. Gupta, M. Patra, G. Rajasekaran and R. Srivastava, Int. J. Mod. Phys. A [**31**]{}, no. 17, 1650095 (2016) doi:10.1142/S0217751X16500950 \[arXiv:1506.02603 \[hep-ph\]\]. R. N. Mohapatra, M. K. Parida and G. Rajasekaran, Phys. Rev. D [**69**]{} (2004) 053007 \[hep-ph/0301234\]. R. N. Mohapatra, M. K. Parida and G. Rajasekaran, Phys. Rev. D [**71**]{} (2005) 057301 \[hep-ph/0501275\]. R. N. Mohapatra, M. K. Parida and G. Rajasekaran, Phys. Rev. D [**72**]{} (2005) 013002 \[hep-ph/0504236\]. S. K. Agarwalla, M. K. Parida, R. N. Mohapatra and G. Rajasekaran, Phys. Rev. D [**75**]{} (2007) 033007 \[hep-ph/0611225\]. G. Abbas, S. Gupta, G. Rajasekaran and R. Srivastava, Phys. Rev. D [**89**]{} (2014) 9, 093009 \[arXiv:1401.3399 \[hep-ph\]\]. G. Abbas, S. Gupta, G. Rajasekaran and R. Srivastava, Phys. Rev. D [**91**]{}, no. 11, 111301 (2015) doi:10.1103/PhysRevD.91.111301 \[arXiv:1312.7384 \[hep-ph\]\]. R. Srivastava, Pramana [**86**]{}, no. 2, 425 (2016) \[arXiv:1503.07964 \[hep-ph\]\]. R. Srivastava, Springer Proc. Phys.  [**174**]{}, 369 (2016). N. Haba and R. Takahashi, Europhys. Lett.  [**100**]{}, 31001 (2012) doi:10.1209/0295-5075/100/31001 \[arXiv:1206.2793 \[hep-ph\]\]. F. Alessandria, E. Andreotti, R. Ardito, C. Arnaboldi, F. T. Avignone, III, M. Balata, I. Bandac and T. I. Banks [*et al.*]{}, arXiv:1109.0494. M. Auger [*et al.*]{} \[EXO Collaboration\], Phys. Rev. Lett.  [**109**]{}, 032505 (2012), arXiv:1205.5608. A. Gando [*et al.*]{} \[KamLAND-Zen Collaboration\], Phys. Rev. Lett.  [**110**]{}, no. 6, 062502 (2013), arXiv:1211.3863. M. Agostini [*et al.*]{} \[GERDA Collaboration\], Phys. Rev. Lett.  [**111**]{}, 122503 (2013), arXiv:1307.4720. P. Minkowski, Phys. Lett. B [**67**]{}, 421 (1977). M. Gell-Mann, P. Ramond and R. Slansky, Conf. Proc. C [**790927**]{}, 315 (1979), arXiv:1306.4669. T. Yanagida, Conf. Proc. C [**7902131**]{}, 95 (1979). S. L. Glashow, NATO Adv. Study Inst. Ser. B Phys.  [**59**]{}, 687 (1980). R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett.  [**44**]{}, 912 (1980). G. Abbas, arXiv:1609.02899 \[hep-ph\]. E. Ma and R. Srivastava, Phys. Lett. B [**741**]{}, 217 (2015) \[arXiv:1411.5042 \[hep-ph\]\]. R. N. Mohapatra and J. W. F. Valle, Phys. Rev. D [**34**]{}, 1642 (1986). N. Arkani-Hamed, L. J. Hall, H. Murayama, D. Tucker-Smith and N. Weiner, Phys. Rev. D [**64**]{}, 115011 (2001), hep-ph/0006312. F. Borzumati and Y. Nomura, Phys. Rev. D [**64**]{}, 053005 (2001), hep-ph/0007018. R. Kitano, Phys. Lett. B [**539**]{}, 102 (2002), hep-ph/0204164. S. Abel, A. Dedes and K. Tamvakis, Phys. Rev. D [**71**]{}, 033003 (2005), hep-ph/0402287. H. Murayama, Nucl. Phys. Proc. Suppl.  [**137**]{}, 206 (2004), hep-ph/0410140. A. Y. .Smirnov, hep-ph/0411194. R. N. Mohapatra, S. Antusch, K. S. Babu, G. Barenboim, M. -C. Chen, S. Davidson, A. de Gouvea and P. de Holanda [*et al.*]{}, hep-ph/0412099. P. Q. Hung, Nucl. Phys. B [**720**]{}, 89 (2005) doi:10.1016/j.nuclphysb.2005.05.023 \[hep-ph/0412262\]. P. Ko, T. Kobayashi and J. h. Park, Phys. Rev. D [**71**]{}, 095010 (2005) doi:10.1103/PhysRevD.71.095010 \[hep-ph/0503029\]. S. Antusch, O. J. Eyton-Williams and S. F. King, JHEP [**0508**]{}, 103 (2005) doi:10.1088/1126-6708/2005/08/103 \[hep-ph/0505140\]. J. Giedt, G. L. Kane, P. Langacker and B. D. Nelson, Phys. Rev. D [**71**]{}, 115013 (2005) doi:10.1103/PhysRevD.71.115013 \[hep-th/0502032\]. K. Dick, M. Lindner, M. Ratz and D. Wright, Phys. Rev. Lett.  
[**84**]{}, 4039 (2000), hep-ph/9907562. H. Murayama and A. Pierce, Phys. Rev. Lett.  [**89**]{}, 271601 (2002), hep-ph/0206177. P. H. Gu and H. J. He, JCAP [**0612**]{}, 010 (2006) doi:10.1088/1475-7516/2006/12/010 \[hep-ph/0610275\]. P. H. Gu, H. J. He and U. Sarkar, JCAP [**0711**]{}, 016 (2007) doi:10.1088/1475-7516/2007/11/016 \[arXiv:0705.3736 \[hep-ph\]\]. P. H. Gu, H. J. He and U. Sarkar, Phys. Lett. B [**659**]{}, 634 (2008) doi:10.1016/j.physletb.2007.11.061 \[arXiv:0709.1019 \[hep-ph\]\]. P. H. Gu and U. Sarkar, Phys. Rev. D [**77**]{}, 105031 (2008) doi:10.1103/PhysRevD.77.105031 \[arXiv:0712.2933 \[hep-ph\]\]. P. H. Gu, Nucl. Phys. B [**872**]{}, 38 (2013) doi:10.1016/j.nuclphysb.2013.03.014 \[arXiv:1209.4579 \[hep-ph\]\]. J. A. Casas, J. R. Espinosa and I. Navarro, JHEP [**0309**]{}, 048 (2003) \[hep-ph/0306243\]. J. A. Casas, J. R. Espinosa, A. Ibarra and I. Navarro, Nucl. Phys. B [**573**]{}, 652 (2000) doi:10.1016/S0550-3213(99)00781-6 \[hep-ph/9910420\]. D. V. Forero, M. Tortola and J. W. F. Valle, Phys. Rev. D [**90**]{}, no. 9, 093006 (2014) doi:10.1103/PhysRevD.90.093006 \[arXiv:1405.7540 \[hep-ph\]\]. F. Capozzi, G. L. Fogli, E. Lisi, A. Marrone, D. Montanino and A. Palazzo, Phys. Rev. D [**89**]{} (2014) 9, 093018 \[arXiv:1312.2878 \[hep-ph\]\]. M. C. Gonzalez-Garcia, M. Maltoni and T. Schwetz, JHEP [**1411**]{} (2014) 052 \[arXiv:1409.5439 \[hep-ph\]\]. Private communication with Michael A. Schmidt. M. Lindner, M. Ratz and M. A. Schmidt, JHEP [**0509**]{}, 081 (2005), hep-ph/0506280. S. Antusch, J. Kersten, M. Lindner, M. Ratz and M. A. Schmidt, JHEP [**0503**]{}, 024 (2005), hep-ph/0501272. N. Craig, arXiv:1309.0528 \[hep-ph\]. M. S. Athar [*et al.*]{} \[INO Collaboration\], INO-2006-01. K. Abe [*et al.*]{} \[T2K Collaboration\], Nucl. Instrum. Meth. A [**659**]{}, 106 (2011), arXiv:1106.1238. R. B. Patterson \[NOvA Collaboration\], Nucl. Phys. Proc. Suppl.  [**235-236**]{}, 151 (2013), arXiv:1209.0716. C. Adams [*et al.*]{} \[LBNE Collaboration\], arXiv:1307.7335. E. Kearns [*et al.*]{} \[Hyper-Kamiokande Working Group Collaboration\], arXiv:1309.0184. S. -F. Ge and K. Hagiwara, arXiv:1312.0457. G. Drexlin, V. Hannen, S. Mertens and C. Weinheimer, Adv. High Energy Phys.  [**2013**]{}, 293986 (2013), arXiv:1307.0101. C. Kraus, B. Bornschein, L. Bornschein, J. Bonn, B. Flatt, A. Kovalik, B. Ostrick and E. W. Otten [*et al.*]{}, Eur. Phys. J. C [**40**]{} (2005) 447 \[hep-ex/0412056\]. V. N. Aseev [*et al.*]{} \[Troitsk Collaboration\], Phys. Rev. D [**84**]{} (2011) 112003 \[arXiv:1108.5034 \[hep-ex\]\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1502.01589 \[astro-ph.CO\]. B. Famaey and S. McGaugh, J. Phys. Conf. Ser.  [**437**]{}, 012001 (2013) doi:10.1088/1742-6596/437/1/012001 \[arXiv:1301.0623 \[astro-ph.CO\]\]. N. Haba, Europhys. Lett.  [**96**]{}, 21001 (2011) doi:10.1209/0295-5075/96/21001 \[arXiv:1107.4823 \[hep-ph\]\]. E. Ma, Mod. Phys. Lett. A [**17**]{}, 627 (2002) doi:10.1142/S0217732302006722 \[hep-ph/0203238\]. K. S. Babu, E. Ma and J. W. F. Valle, Phys. Lett. B [**552**]{}, 207 (2003) doi:10.1016/S0370-2693(02)03153-2 \[hep-ph/0206292\]. K. S. Babu, B. Dutta and R. N. Mohapatra, Phys. Rev. D [**60**]{}, 095004 (1999) doi:10.1103/PhysRevD.60.095004 \[hep-ph/9812421\]. F. Gabbiani, E. Gabrielli, A. Masiero and L. Silvestrini, Nucl. Phys. B [**477**]{}, 321 (1996) doi:10.1016/0550-3213(96)00390-2 \[hep-ph/9604387\].
--- abstract: 'Gravitational memory is an important prediction of classical General Relativity, which is intimately related to Bondi-Metzner-Sachs symmetries at null infinity and the so-called soft graviton theorem first shown by Weinberg. For a given transient astronomical event, the angular distributions of energy and angular momentum flux uniquely determine the displacement and spin memory effect in the sky. We investigate the possibility of using the binary black hole merger events detected by Advanced LIGO/Virgo to test the relation between source energy emissions and gravitational memory measured on Earth, as predicted by General Relativity. We find that while it is difficult for Advanced LIGO/Virgo, one year of observation with a third-generation detector network will easily rule out the hypothesis of an isotropic memory distribution. In addition, we have constructed a phenomenological model for memory waveforms of binary neutron star mergers, and use it to address the detectability of memory from these events in the third-generation detector era. We find that measuring gravitational memory from neutron star mergers is a possible way to distinguish between different neutron star equations of state.' author: - Huan Yang - Denis Martynov bibliography: - 'master.bib' title: Testing gravitational memory generation with compact binary mergers ---

[**Introduction**]{}.  With the recent detection of binary neutron star (BNS) mergers using both gravitational wave (GW) and electromagnetic telescopes [@PhysRevLett.119.161101; @2041-8205-848-2-L12; @2041-8205-848-2-L13], we are quickly entering the era of multi-messenger astronomy with GWs. Future GW observations will be able to provide unprecedented means to uncover physical information about the most compact, exotic objects (such as black holes and neutron stars) in our universe. Moreover, future detections will open an independent window to study cosmology [@schutz1986determining; @ligo2017gravitational], and will be used to test various predictions of General Relativity [@yunes2016theoretical; @berti2018extreme; @berti2018extreme2], such as the gravitational memory effect [@zel1974radiation; @smarr1977gravitational; @bontz1979spectrum; @christodoulou1991nonlinear]. Gravitational memory itself is an observable phenomenon of the spacetime, and conceptually it can be classified into ordinary memory originating from matter motions and GW memory [^1] that arises from nonlinearities in the Einstein equation. The GW memory has a very intimate relation to soft-graviton charges at null infinity [@he2015bms], which may lead to quantum gravity partners responsible for solving the Black Hole Information Paradox [@hawking2016soft]. The latter possibility still contains significant uncertainty that requires further theoretical development [@mirbabayi2016dressed], and it is unclear whether the memory effect is one of the few macroscopic, astrophysical observables that could be traced back to a quantum gravity origin (another example is “echoes from the black hole horizon" [@cardoso2016gravitational]). Studying such classical observables is interesting because observational signatures of quantum gravity are normally expected at the Planck scale.
The detectability of the displacement memory effect using ground, spaced-based detectors and pulsar-timing arrays has been discussed extensively in the literature [@thorne1992gravitational; @lasky2016detecting; @mcneill2017gravitational; @favata2009nonlinear; @favata2010gravitational; @van2010gravitational; @pollney2010gravitational]. In addition, understanding and verifying the relation between memory effect and associated energy/angular momentum emissions from the source is equally important, which displays striking similarities to Weinberg’s soft-graviton theorem [@strominger2017lectures]. Such relation has been written in various forms in different context. In this work we adopt the form suitable to describe the nonlinear memory generated by GW energy flux [@thorne1992gravitational]: $$\begin{aligned} \label{eqmem} {h_{jk}^{\rm TT (mem)}}(T_d) =\frac{4}{d} \int^{T_d}_{-\infty} d t'\, \left [ \int \frac{d E^{\rm GW}}{dt' d \Omega'} \frac{n'_j n'_k}{1-{\bf n}' \cdot {\bf N}} d \Omega'\right ]^{\rm TT}\,,\end{aligned}$$ where $T_d$ is the time of detection, ${h_{jk}^{\rm TT (mem)}}$ is the memory part of the metric in transverse-traceless gauge, $\frac{d E^{\rm GW}}{dt' d \Omega'}$ is the GW energy flux, ${\bf n}'$ is its unit radial vector and ${\bf N}$ is the unit vector connecting the source and the observer (with distance $d$). We propose to use binary black hole merger events to test the validity of Eq. \[eqmem\]. For any single event, a network of detectors is able to approximately determine its sky location and the intrinsic source parameters such as black hole masses, spins, and the orbital inclination, by applying parameter estimation algorithms. The displacement memory effect, being much weaker than the oscillatory part of the GW signal, can be also extracted using the matched-filter method. By computing GW energy with source parameters within the range determined by parameter estimation, we can obtain the value of the right-hand side of Eq. \[eqmem\] and compare it with measured displacement memory. Multiple events are need to accumulate statistical significance for such a test [@yang2017black; @yang2017gravitational]. As an astrophysical application for gravitational memory, we also examine the memory generated by BNS mergers with a simple, semi-analytical memory waveform model. This memory waveform has a part that is sensitive to the star equation of state (EOS) and post-merger GW emissions. Therefore we are able to study the possibility of using memory detection to distinguish different NS EOS in the era of third-generation detectors. [**Memory distribution**]{}. For binary black hole mergers at cosmological distances, the memory contribution can be well approximated by ($h^{\rm mem}_{\times}=0$ for circular orbit and standard choice of polarization basis) [@favata2009nonlinear; @bieri2017gravitational] [^2]: $$\begin{aligned} \label{eqa} h^{\rm (mem)}_{+} =\frac{\eta M_z}{384 \pi d} \sin^2\iota (17+\cos^2\iota) h^{\rm mem}(T_d)\,,\end{aligned}$$ where $M=m_1+m_2$ is the total mass of the binary, $z$ is the redshift, $M_z=M (1+z)$ is the redshifted total mass, $\eta = m_1 m_2/M^2$ is the symmetric mass ratio, $\iota$ is the inclination angle of the orbit. The posterior distribution of these source parameters can be reconstructed by performing Markov-Chain Monte-Carlo parameter estimation procedure for each event. $h^{\rm mem}$ can be well modelled by the [minimal-waveform model]{} discussed in [@favata2009nonlinear]. The angular dependence shown in Eq.  
encodes critical information about memory generation described by Eq. . It is maximized for edge-on binaries, which is different from the dominant oscillatory signals with $h_+ \propto (1+\cos^2\iota),\, h_\times \propto \cos \iota$ dependence. In this work, we test the consistency of Eq.  with future GW detections as a way to test the memory generation formula Eq. . In particular, we test the $\iota$-angle dependence [^3] and formulate this problem in a Bayesian model selection framework. ![image](snrmem.pdf){width="0.43\linewidth"} ![image](snreff.pdf){width="0.43\linewidth"} [**Model test**]{}.  We consider two following hypothesis, with $\mathcal{H}_1$ resembling Eq.  and $\mathcal{H}_2$ describing an isotropic memory distribution in the source frame: $$\begin{aligned} \label{eqh1h2} \mathcal{H}_1: h^{\rm (mem)}_{+} & =\frac{\eta M_z}{384 \pi d} \sin^2\iota (17+\cos^2\iota) h^{\rm mem}(T_d)\, \equiv h_{m1}\,, \nonumber \\ \mathcal{H}_2: h^{\rm (mem)}_{+} & =\frac{\eta M_z}{96 \pi d} \sqrt{\frac{3086}{315}}h^{\rm mem}(T_d)\, \equiv h_{m2}\,, \end{aligned}$$ where the numerical coefficient of $h^{\rm (mem)}_{+}$ in $\mathcal{H}_2$ is chosen such that the (source) sky-averaged ${\rm SNR}^2$ (signal-to-noise ratio) is the same for these two hypothesis. For each detected binary black hole merger event, the source parameters are described by $$\begin{aligned} \label{eq:par-ground} \theta^a = (\ln \mathcal{M}_z, \ln \eta, \chi, t_c, \phi_c, \ln d, \alpha,\delta,\psi, \iota)\,,\end{aligned}$$ where $\mathcal{M}_z \equiv M_z \eta^{3/5}$ is the redshifted chirp mass, $\chi \equiv (m_1 \chi_1+m_2 \chi_2)/M$ is the effective spin parameter [@Ajith:2009bn] with $\chi_A$ representing the dimensionless spin of the $A$th body, $t_c$ and $\phi_c$ are the coalescence time and phase, $\alpha$, $\delta$ and $\psi$ are the right ascension, declination and polarization angle in the Earth fixed frame. Given a data stream $y$, to perform the hypothesis test, we evaluate the Bayes factor $$\begin{aligned} \mathcal{B}_{12} =\frac{P(\mathcal{H}_1 | y)}{P(\mathcal{H}_2 | y)}\,.\end{aligned}$$ In addition, the evidence $P(\mathcal{H}_i | y)$ is $$\begin{aligned} \label{eqevidence} P(\mathcal{H}_i | y) = \int d \theta^a P(\theta^a | \mathcal{H}_i) P(y | \theta^a \mathcal{H}_i)\,,\end{aligned}$$ where the prior $P(\theta^a | \mathcal{H}_i) $ is the prior distribution of $\theta^a$ which is set to be flat, and the likelihood function is given by $$\begin{aligned} \log P(y | \theta^a \mathcal{H}_i) &\propto -2\int df \frac{| y-h_{\rm IMR}- h_{mi} |^2}{S_n(f)} \,\nonumber \\ & \equiv -\frac{||y-h_{\rm IMR}- h_{mi}||^2}{2}\,, \end{aligned}$$ with the inspiral-merger-ringdown waveform being $h_{\rm IMR}$ and the single-side detector noise spectrum $S_n$. Both $h_{\rm IMR}$ and $h_{mi}$ (cf. Eq. \[eqh1h2\]) are functions of $\{\theta^a\}$. According to the derivation in the Supplementary Material, after performing the integration in Eq. , the log of this Bayes factor can be approximated by $$\begin{aligned} \label{eqlogb12exp} \log \mathcal{B}_{12} = & -\frac{1}{2} || y-h_{\rm IMR}(\hat{\theta})- \epsilon h_{m1}(\hat{\theta}) ||^2 \nonumber \\ &+\frac{1}{2} || y-h_{\rm IMR}(\hat{\theta})-\epsilon h_{m2}(\hat{\theta}) ||^2\,. \end{aligned}$$ Here $\{ \hat{\theta}^a\}$ are the Maximum Likelihood Estimator for $\{ \theta^a\}$ using the IMR waveform template (PhenomB  [@Ajith:2009bn] is adopted in this work). 
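As a concrete illustration of Eq. \[eqlogb12exp\], the sketch below evaluates the two residual norms with the noise-weighted inner product defined above, discretized on a uniform single-sided frequency grid of spacing `df`. The arrays `y`, `h_imr`, `h_m1` and `h_m2` are assumed to hold the frequency-domain data, the best-fit IMR waveform and the two memory templates evaluated at the maximum-likelihood parameters $\hat{\theta}^a$; this is a schematic of the statistic only, not of the full analysis pipeline.

```python
import numpy as np

def inner(a, b, psd, df):
    """Noise-weighted inner product 4 Re sum(a * conj(b) / S_n) * df on a uniform grid."""
    return 4.0 * np.real(np.sum(a * np.conj(b) / psd)) * df

def log_bayes_12(y, h_imr, h_m1, h_m2, psd, df):
    """log B_12 of Eq. [eqlogb12exp]: half the difference of squared residual norms."""
    r1 = y - h_imr - h_m1          # residual under hypothesis 1
    r2 = y - h_imr - h_m2          # residual under hypothesis 2
    return 0.5 * (inner(r2, r2, psd, df) - inner(r1, r1, psd, df))
```

A positive value of the returned statistic favors $\mathcal{H}_1$ over $\mathcal{H}_2$ for that event, and contributions from independent events simply add.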
Similar to the discussion in [@Meidam:2014jpa; @yang2017black; @yang2017gravitational], we denote the distribution of $\log \mathcal{B}_{12}$ in Eq. \[eqlogb12exp\] as the [*foreground*]{} or [*background*]{} distribution, assuming hypothesis 1 or 2 is true, respectively. Given a detected event, these [*foreground*]{} and [*background*]{} distributions can be used to obtain the detection efficiency $P_{\rm d}$ and the false alarm rate $P_{\rm f}$ [@Meidam:2014jpa; @yang2017black; @yang2017gravitational]. Given an underlying set of source parameters $\theta_0=\{ \theta^a_0\}$, the false alarm rate can be obtained if the detection efficiency is known. In this work we follow the convention in [@abbott2017search] and choose $P_{\rm d} =50\%$. For multiple events with data streams $\{ y^{(i)}\}$, the combined Bayes factor is $$\begin{aligned} \mathcal{B}_{12} =\prod_i \frac{P(\mathcal{H}_1 | y^{(i)})}{P(\mathcal{H}_2 | y^{(i)})}\,,\end{aligned}$$ and the above discussion generalizes trivially because these events are independent. It turns out that, if we define ${\rm SNR}^{50\%}_{\rm eff}$ such that $$\begin{aligned} P^{50\%}_{\rm f} =\frac{1}{\sqrt{2\pi}} \int^\infty_{\rm SNR^{50\%}_{eff}} e^{-x^2/2} \,dx\,,\end{aligned}$$ this effective SNR is given by $$\begin{aligned} {\rm SNR^{50\%}_{eff}} = \frac{\sum_i || h^{(i)}_{m2}(\theta_0)- h^{(i)}_{m1}(\theta_0) ||_i^2}{\sigma}\,,\end{aligned}$$ with $$\begin{aligned} \label{eqexp} \sigma^2 & = \sum_i \left \{|| h^{(i)}_{m2}(\theta_0)- h^{(i)}_{m1}(\theta_0) ||_i^2+A^{(i)}_a ({\Gamma^{(i)}_{ab}}^{-1}) A^{(i)}_b \right \} \,,\nonumber \\ \Gamma^{(i)}_{ab} & = \langle \partial_{\theta^a} h^{(i)}_{\rm IMR} | \partial_{\theta^b} h^{(i)}_{\rm IMR} \rangle_i \,,\nonumber \\ A^{(i)}_a & = \langle \partial_{\theta^a} h^{(i)}_{m1} | h^{(i)}_{m1}(\theta_0)-h^{(i)}_{m2}(\theta_0) \rangle_i \,,\end{aligned}$$ and the inner product is defined as $$\begin{aligned} \langle \psi | \chi \rangle_i \equiv 2\int df \frac{\psi(f) \chi^*(f)+h.c.}{S_{n_i}(f)}\,. \end{aligned}$$ The source parameter uncertainties enter into this hypothesis test result through the $A \Gamma^{-1} A$-type terms in Eq. \[eqexp\]. Because of the simplified treatment adopted in this analysis to save computational cost for simulated data, they are obtained essentially by the Fisher-Information method ($\Gamma$ is the Fisher-Information matrix). In principle, the whole procedure can also be performed using a Markov-Chain Monte-Carlo method, where the posterior probability distribution of each parameter can be computed more accurately.

[**Monte-Carlo source sampling**]{}. In order to investigate the distinguishability between different hypotheses over a given observation period, we randomly sample merging binary black holes (BBHs) using a uniform rate in comoving volume of $55\,{\rm Gpc}^{-3} {\rm yr}^{-1}$, consistent with [@abbott2016binary]. The primary mass $m_1$ of the binary is sampled assuming a probability distribution $p(m_1) \propto m_1^{-2.35}$, and the secondary mass is uniformly sampled between $5 M_\odot$ and $m_1$. We also impose an upper mass cut-off of $M<80 M_\odot$ [@woosley2015deaths]. The effective spin $\chi_i$ is sampled uniformly within $|\chi_i| <1$. The right ascension, declination, and inclination angles are randomly sampled assuming uniform distributions on the Earth’s and the source’s sky. We perform 100 Monte-Carlo realizations, each of which contains all BBH mergers within the $z<0.5$ range (more distant binary merger events are too faint for memory detections) for a given observation period.
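A minimal sketch of this population sampling is given below. The cosmological parameters, the $75\,M_\odot$ upper edge used for the primary-mass power law, and the $(1+z)$ time-dilation factor in the event rate are illustrative assumptions of the sketch rather than values quoted in the text; extrinsic angles (sky position and polarization) can be drawn analogously.

```python
import numpy as np

H0, OM, C = 70.0, 0.3, 299792.458          # km/s/Mpc, matter density, km/s (illustrative cosmology)
RATE, Z_MAX, T_OBS = 55e-9, 0.5, 5.0       # mergers / Mpc^3 / yr, redshift cut, years

def comoving_grid(z_max, n=500):
    z = np.linspace(1e-4, z_max, n)
    Ez = np.sqrt(OM * (1 + z)**3 + 1 - OM)
    dc = (C / H0) * np.cumsum(1.0 / Ez) * (z[1] - z[0])    # comoving distance [Mpc]
    dVdz = 4 * np.pi * dc**2 * (C / H0) / Ez               # comoving volume element [Mpc^3]
    return z, dVdz

def sample_population(rng=np.random.default_rng(0)):
    z, dVdz = comoving_grid(Z_MAX)
    w = dVdz / (1 + z)                       # include source-frame time dilation
    n_exp = RATE * T_OBS * np.trapz(w, z)    # expected number of mergers in T_OBS
    n = rng.poisson(n_exp)

    zs = rng.choice(z, size=n, p=w / w.sum())
    # primary mass: p(m1) ~ m1^-2.35 on [5, 75] Msun via inverse-CDF sampling
    u, a, lo, hi = rng.uniform(size=n), 1.35, 5.0, 75.0
    m1 = (lo**(-a) + u * (hi**(-a) - lo**(-a)))**(-1.0 / a)
    m2 = rng.uniform(5.0, m1)                  # secondary uniform in [5, m1]
    keep = (m1 + m2) < 80.0                    # total-mass cut-off
    chi = rng.uniform(-1.0, 1.0, size=n)       # effective spin
    cos_iota = rng.uniform(-1.0, 1.0, size=n)  # isotropic inclination
    return dict(z=zs[keep], m1=m1[keep], m2=m2[keep],
                chi=chi[keep], cos_iota=cos_iota[keep])
```

Each call returns one Monte-Carlo realization of the merging population, to which the hypothesis test described above is then applied event by event.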
The results of the Monte-Carlo (MC) simulation are shown in Fig. \[fig:config\]. We assume a detector network with Advanced LIGO (both Livingston and Hanford sites) and Advanced Virgo, with all detectors reaching design sensitivity. After five-year observation time, we collect all events with expected memory SNR above $0.1$ for each MC realization, and compute the corresponding ${\rm SNR}^{50\%}_{\rm eff}$ as defined in Eq. . With a five-year observation, the median of this astrophysical distribution locates at $\sim 0.65 \sigma$ level, which is insufficient to claim a detection. Therefore under the current best estimate of merger rate and with the assumed binary BH mass distributions, during the operation period of Advanced LIGO-Virgo, it is unlikely to distinguish the (source) sky distribution of the memory term as depicted by Eq. , and an isotropic memory distribution. In comparison, we apply the Voyager (or Cosmic Explorer, CE) sensitivity to both LIGO detectors, and the Einstein Telescope (ET) sensitivity to the Virgo detector, and plot the corresponding SNR in Fig. \[fig:config\]. These 3rd-generation detector networks are fully capable of distinguishing the hypotheses. Such a hypothesis test framework can also be applied to test against other memory distribution as well - one needs to replace the second line of Eq.  by the target hypothesis. As an illustration, we also include the distribution of combined SNR: ${\rm SNR}_{\rm mem} = \sqrt{\sum_i ({\rm SNR}_{\rm mem}^{(i)})^2}$ [^4]. This can be achieved by adding the memory terms from different events coherently, as explained in [@lasky2016detecting]. Its magnitude roughly reflects the strength of combined memory signal over noise and the fact that its detection is likely after five years’ observation, which agrees with [@lasky2016detecting]. [**Recovering the angular dependence**]{}. With a set of detections, it is also instructive to reconstruct the posterior angular dependence of memory, which can be compared with its theoretical prediction. Without loss of generality, we parametrize the memory waveform as $$\begin{aligned} \label{eqp} h^{\rm (mem)}_{+} & =\frac{17 \eta M_z}{384 \pi d} h^{\rm mem}(T_d) f(\{a_n, b_n\},\iota)\,, \nonumber \\ f(\{a_n, b_n\},\iota)& =\sum^N_{n=0} (a_n \sin n \iota + b_n \cos n \iota)\,, \end{aligned}$$ where $N$ is the truncation wave number and $ h^{\rm mem}(T_d)$ is normalized to give the same Post-Newtonian waveform in the early inspiral stage. Given a set of observed events $y_j$, one can obtain the posterior distribution of $a_i, b_i$ using Bayes Theorem ($a_0 =0$): $$\begin{aligned} P(\{ a_i, b_i\} | \{ y_j\}) = \frac{P ( \{ y_j\} |\{ a_i, b_i\} ) P(\{ a_i, b_i\})}{P(\{ y_j\})}\,, \end{aligned}$$ where the detailed expression for the likelihood function $P ( \{ y_j\} |\{ a_i, b_i\} )$ is explained in the Supplementary Material. In Fig. \[fig:ang\], we simulate observed events (with ${\rm SNR}_{\rm m} \ge1$) in one year assuming CE-ET sensitivity. For simplicity, we assume that the memory distribution respects parity symmetry, such that all the $a_i$’s are zero. The cutoff $N$ is set to be $4$. Based on the posterior distribution of the angular distribution parameter $ b_i$, we compute the reconstructed uncertainty of $f_{\iota}$ at $1\sigma$ level, as depicted by the shaded area in Fig. \[fig:ang\]. ![The $1\sigma$ uncertainty of angular dependence $f(\iota)$ reconstructed from a set of simulated events, as indicated by the shaded region. 
The SNR and $\iota$ of simulated events are presented by the dots in the plot.[]{data-label="fig:ang"}](angular.pdf){width="0.93\linewidth"} [**Binary neutron stars**]{}.  In addition to binary black holes, merging BNSs also generate a gravitational memory. However, as neutron star masses are smaller than the typical BH mass in binaries, and that the merger frequency is outside of the most sensitive band of current detectors, directly detecting gravitational memory from BNS mergers is difficult for second-generation detectors. Since the BNS waveform (especially the post-merger part) depends sensitively on the EOS, it is natural to expect that the detection of memory can be used to distinguish between various EOS. To achieve this goal, we have formulated a [*minimal-waveform*]{} model for BNS mergers similar to the construction for BBHs (see Supplementary Material). Such a model employs the fitting formula for post-merger waveforms developed in [@bose2017neutron] to compute $d E^{\rm GW}/dt$ (c.f. Eq. \[eqmem\]) in the post-merger stage, and a leading-PN description for the energy flux in the inspiral stage. For illustration purpose, we also consider four sample EOS studied in [@bose2017neutron]: GNH3, H4, ALF2, Sly. Assuming a $1.325M_\odot+1.325 M_\odot$ BNS system at distance $50{\rm Mpc}$ away from earth and following the maximally emitting direction, the SNRs for detecting these memory waveforms with Advanced LIGO are all around $0.1$, which are insufficient to study the EOS of neutron stars. On the other hand, if we assume Cosmic Explorer (CE) sensitivity, the corresponding SNRs will be 10.1, 9.6, 8.9, and 10.4 respectively. For third-generation GW detectors such as CE, the inspiral waveform of BNS can be used to determine source parameters (such as $\iota$) to very high accuracies. For a $1.325M_\odot+1.325 M_\odot$ BNS system at distance $50{\rm Mpc}$ [^5], Fisher analysis suggests that the measurement uncertainty of $\iota$ is of order $10^{-2}$. An accurate determination of source parameters breaks the degeneracy of amplitude between different BNS memory waveforms. We shall compute $$\begin{aligned} {\rm SNR}_{\Delta ab} = \sqrt{4 \int^\infty_0 df\,\frac{|\tilde{h}^{\rm mem}_{\rm MWM,a}-\tilde{h}^{\rm mem}_{\rm MWM,b}|^2}{S_{\rm n,CE}}}\,,\end{aligned}$$ as a measure for distinguishability between arbitrary EOS a and b. EOS GNH3 H4 ALF2 Sly ------ ------ ----- ------ ----- GNH3 0 1.3 5.2 3.8 H4 0 3.9 2.7 ALF2 0 2.3 : ${\rm SNR}_\Delta$ for various EOS. \[table:bhnsdsnr\] According to the discussion in [@lindblom2008model], if ${\rm SNR}_{\rm \Delta} \le 1$, we shall say that the two waveforms are indistinguishable. The values listed in Table \[table:bhnsdsnr\] indicate that measuring gravitational memory is a possible way to extract information about neutron star EOS. One unique advantage of this approach is that it is insensitive to phase difference between post-merger modes, as the beating term between modes generally contribute $k$Hz modulation of $d E^{\rm GW}/dt$ or $h^{\rm mem}$, which is outside the most sensitive band of third-generation detectors [^6]. Such mode phases still contain much more significant theoretical uncertainties than mode frequencies in current numerical simulations. [**Memory for ejecta**]{}. The electromagnetic observation of GW170817 provides strong evidence for multi-component ejecta [@hallinan2017radio; @smartt2017kilonova], which could originate from collisions of stars, wind from post-collapse disk [@siegel2017three], etc. 
Because of their transient nature, the GWs generated by the ejecta(s) are likely non-oscillatory, and are mainly composed of ordinary gravitational memory [@braginsky1987gravitational]: $$\begin{aligned} \label{eqlmem} h_{jk}^{\rm TT (mem)} = \Delta \sum^N_{A=1} \frac{4 M_A}{d \sqrt{1-v_A^2}} \left [ \frac{v^j_A v^k_A}{1-{\bf v_A} \cdot {\bf N}}\right ]^{\rm TT}\,.\end{aligned}$$ We shall phenomenologically write the ejecta waveform as $h_+ = h_0 (1+e^{-t/\tau})^{-1}$, with the corresponding frequency-domain waveform being $h_0\, i \pi \tau /\sinh(2\pi^2 f \tau)$. Here $\tau$ characterizes the duration of the ejection process, and $h_0$ is the asymptotic magnitude of the linear memory. Depending on the angular distribution of the ejecta material, $h_0$ along the maximally emitting direction can be estimated as $h_0 \sim \Delta M v^2/d$, where $\Delta M$ is the ejecta mass and $v$ is the characteristic speed. Assuming CE sensitivity, the SNR of such ejecta waveforms plateaus for $\tau \le 1$ ms, and drops quickly for larger $\tau$. The plateau value roughly scales as [^7] $$\begin{aligned} {\rm SNR}_{\rm ej} \sim 1.2 \left ( \frac{\Delta M}{0.03 M_\odot} \right ) \left ( \frac{v}{0.3 c} \right )^2 \left ( \frac{d}{50 {\rm Mpc}} \right )^{-1}\,.\end{aligned}$$ In this case, a detection of ejecta waveforms is only plausible with information stacked from multiple events, and/or using detectors that achieve better low-frequency sensitivity [@yu2017prospects]. Out of curiosity, one can apply a similar analysis to the jet of a short gamma-ray burst. The SNR roughly scales as $\sim 0.25 (\Delta E_{\rm jet}/10^{51} {\rm erg}) (50 {\rm Mpc}/d)$, which is even smaller.

[**Conclusion**]{}.  We have discussed two aspects of measuring gravitational memory in merging compact binary systems. For BBHs, the measurement is ideal for testing the memory-generation mechanism, as a way to connect the soft-graviton theorem and the symmetry charges of spacetime to astrophysical observables. For BNSs, it can be used to distinguish between different NS EOS, complementary to tidal Love number measurements in the inspiral waveform and (possibly) spectroscopic measurements of the post-merger signal. We have shown that both tasks may be achieved with third-generation detectors. Because of the $1/f$-type scaling of memory waveforms, improving the low-frequency sensitivity of detectors is crucial for achieving better memory SNR. This will be particularly useful for gravitationally probing the ejecta(s) produced in BNS mergers. Another interesting direction will be further exploring the detectability and application of memory in space-based missions, such as LISA or DECIGO.

[**Acknowledgments.**]{} We would like to thank Haixing Miao and Lydia Bieri for fruitful discussions. We thank Yuri Levin for reading over the manuscript and making many useful comments. H.Y. is supported in part by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. D.M. acknowledges the support of the NSF and the Kavli Foundation.
Details of the hypothesis test ============================== We are testing two hypotheses: $$\begin{aligned} \mathcal{H}_1: y & =\delta h_{\rm IMR}+n+ \epsilon \frac{\eta M_z}{384 \pi d} \sin^2\iota (17+\cos^2\iota) h^{\rm mem}(T_d)\,,\nonumber \\ & = \delta h_{\rm IMR}+n + \epsilon h_{m1} \nonumber \\ \mathcal{H}_2: y & =\delta h_{\rm IMR}+n+\frac{\eta M_z}{96 \pi d} \sqrt{\frac{3086}{315}}h^{\rm mem}(T_d)\, \nonumber \\ & = \delta h_{\rm IMR}+n + \epsilon h_{m2} \,, \end{aligned}$$ where $\epsilon$ is a book-keeping parameter to track the power of the memory terms (as they are generally smaller than the oscillatory part), $n$ is the detector noise, $\delta h_{\rm IMR}$ is the residual part due to imperfect subtraction of the oscillatory part of the inspiral-merger-ringdown (IMR) waveform. Notice that the overlap between the memory waveform and the IMR is very small. For example, for GW150914-like events, the overlap is $$\begin{aligned} \mathcal{F}(h^{\rm mem}, h_{\rm IMR}) =\frac{\langle h^{\rm mem} | h_{\rm IMR} \rangle}{\sqrt{\langle h^{\rm mem} | h^{\rm mem}\rangle} \sqrt{\langle h_{\rm IMR} | h_{\rm IMR}\rangle}} \approx 0.7\%\,, \end{aligned}$$ where the inner product is $$\begin{aligned} \langle \psi | \chi \rangle \equiv 2\int df \frac{\psi(f) \chi^*(f)+h.c.}{S_n(f)}\,, \end{aligned}$$ where $S_n(f)$ is the single-side detector spectrum. Similarly we can check $\mathcal{F}(h^{\rm mem}, \partial_{\theta^a} h_{\rm IMR})$ are of similar order. As a result, we approximate the memory waveform to be orthogonal to the oscillatory part of the waveform. Consider $$\begin{aligned} P(\mathcal{H}_i | y) = \int d \theta^a P(\theta^a | \mathcal{H}_i) P(y | \theta^a \mathcal{H}_i)\,,\end{aligned}$$ where the prior $P(\theta^a | \mathcal{H}_i)$ is taken to be flat. On the other hand, the likelihood function $P(y | \theta^a \mathcal{H}_i)$ is given by $$\begin{aligned} P(y | \theta^a \mathcal{H}_i) \propto e^{-1/2 \langle y-h_{\rm IMR}- \epsilon h_{mi} | y-h_{\rm IMR}- \epsilon h_{mi} \rangle}\,, \end{aligned}$$ such that $$\begin{aligned} \label{eqlikeli} P(\mathcal{H}_i | y) = \int d \theta^a e^{-1/2 || y-h_{\rm IMR}- \epsilon h_{mi} ||^2 }\,.\end{aligned}$$ Here both $h_{\rm IMR}$ and $h_{mi}$ are functions of $\theta^a$. We further choose the $\hat{\theta}^a$ such that $$\begin{aligned} \left . \langle y | \partial_{\theta^a} h_{\rm IMR} \rangle \right |_{\hat{\theta}^a} = \left . \langle h_{\rm IMR} | \partial_{\theta^a} h_{\rm IMR} \rangle \right |_{\hat{\theta}^a}\,.\end{aligned}$$ In other words, $\hat{\theta}^a$ are the Maximum Likelihood Estimators of $\theta^a$ using the matched filter $h_{\rm IMR}$. We can further expand the exponent of Eq.  to be $$\begin{aligned} &y-h_{\rm IMR}- \epsilon h_{mi} \nonumber \\ &\approx y-h_{\rm IMR}(\hat{\theta}^a)- \epsilon h_{mi}(\hat{\theta}^a) -\partial_{\theta^a} h_{\rm IMR} \delta \theta^a-\epsilon \partial_{\theta^a} h_{mi} \delta \theta^a\,, \end{aligned}$$ where $\delta \theta^a =\theta^a-\hat{\theta}^a\,$. By applying the orthogonality condition between the IMR waveform and memory waveform and removing terms at $\mathcal{O}(\epsilon^2)$ order, after the Gaussian integration in Eq.  we find that $$\begin{aligned} \label{eqlikeli2} P(\mathcal{H}_i | y) \propto e^{-1/2 || y-h_{\rm IMR}(\hat{\theta})- \epsilon h_{mi}(\hat{\theta}) ||^2 } \frac{1}{\sqrt {{\rm det}(\Gamma_{ab})}}\,.\end{aligned}$$ with $$\begin{aligned} \Gamma_{ab} = \langle \partial_{\theta^a} h_{\rm IMR} | \partial_{\theta^b} h_{\rm IMR} \rangle\,. 
\end{aligned}$$ As a result, the log Bayes factor is given by $$\begin{aligned} \log \mathcal{B}_{12} = & -\frac{1}{2} || y-h_{\rm IMR}(\hat{\theta})- \epsilon h_{m1}(\hat{\theta}) ||^2 \nonumber \\ &+\frac{1}{2} || y-h_{\rm IMR}(\hat{\theta})-\epsilon h_{m2}(\hat{\theta}) ||^2\,. \end{aligned}$$ With underlying source parameters $\theta^a_0$ and assuming hypothesis $2$ is true, we can evaluate the [*background distribution*]{} of the log Bayes factor [@Meidam:2014jpa; @yang2017black; @yang2017gravitational]. Let us denote $$\begin{aligned} s & =y -h_{\rm IMR}(\hat{\theta})- \epsilon h_{m2}(\hat{\theta})=n +[h_{\rm IMR}(\theta_0)-h_{\rm IMR}(\hat{\theta})] \nonumber \\ &+\epsilon [h_{\rm m2}(\theta_0)-h_{\rm m2}(\hat{\theta})] = n + \delta h_{\rm IMR} +\epsilon \delta h_{m2}\,.\end{aligned}$$ The log Bayes factor becomes $$\begin{aligned} \label{eqlogb1} \log \mathcal{B}_{12} = & -\frac{1}{2} \epsilon^2 || h_{m2}(\hat{\theta})- h_{m1}(\hat{\theta}) ||^2 \nonumber \\ & + \epsilon \langle s | h_{m1}(\hat{\theta})-h_{m2}(\hat{\theta}) \rangle\, \nonumber \\ & = -\frac{1}{2} \epsilon^2 || h_{m2}(\hat{\theta})- h_{m1}(\hat{\theta}) ||^2 \nonumber \\ & + \epsilon \langle n+\epsilon \delta h_{m2} | h_{m1}(\hat{\theta})-h_{m2}(\hat{\theta}) \rangle\, \nonumber \\ & = -\frac{1}{2} \epsilon^2 || h_{m2}(\theta_0)- h_{m1}(\theta_0) ||^2 \nonumber \\ & + \epsilon \langle n+\epsilon \delta h_{m1} | h_{m1}(\theta_0)-h_{m2}(\theta_0) \rangle\,\nonumber \\ & \approx -\frac{1}{2} \epsilon^2 || h_{m2}(\theta_0)- h_{m1}(\theta_0) ||^2 \nonumber \\ & + \epsilon \langle n | h_{m1}(\theta_0)-h_{m2}(\theta_0) \rangle \nonumber \\ &+\epsilon^2 \delta \theta^a_0 \langle \partial_{\theta^a} h_{m1} | h_{m1}(\theta_0)-h_{m2}(\theta_0) \rangle\,. \end{aligned}$$ Notice that if we normalize the magnitude of $\langle n | h_{\rm IMR} \rangle/||h_{\rm IMR}||$ or $\langle n | h_{mi} \rangle/||h_{mi}||$ as $\sim 1$ , we have $\delta \theta^a_0 = \theta^a_0 -\hat{\theta}^a \sim 1/{\rm SNR}_{\rm IMR}$ and $|| h_{mi} || \sim {\rm SNR}_{\rm mem}$. That’s why we have dropped terms like $\delta \theta^a_0 \delta \theta^b_0 \langle \partial_a h_{mi} | \partial_b h_{mi} \rangle \sim (\rm SNR_{mem}/SNR_{IMR})^2$. Let us denote the distribution of the last three lines of Eq.  as $P_1$, the false alarm probability (rate) of a given detection is $$\begin{aligned} P_{\rm f} = \int^\infty_{\log \mathcal{B}_{12}} P_1(X) dX \equiv R_1(\log \mathcal{B}_{12})\,.\end{aligned}$$ On the other hand, assuming hypothesis $1$ is true, the log Bayes factor becomes $$\begin{aligned} \label{eqlogb2} \log \mathcal{B}_{12} & \approx \frac{1}{2} \epsilon^2 || h_{m2}(\theta_0^a)- h_{m1}(\theta_0^a) ||^2 \nonumber \\ & + \epsilon \langle n | h_{m1}(\theta_0^a)-h_{m2}(\theta_0^a) \rangle \nonumber \\ &+\epsilon^2\delta \theta^a_0 \langle \partial_{\theta^a} h_{m2} | h_{m1}(\theta_0^a)-h_{m2}(\theta_0^a) \rangle\,. \end{aligned}$$ Let us denote the distribution of the last three lines of Eq.  as $P_2$, the detection efficiency (probability) is $$\begin{aligned} P_{\rm d} = \int^\infty_{\log \mathcal{B}_{12}} P_d(X) dX \equiv R_2(\log \mathcal{B}_{12})\,.\end{aligned}$$ For a given detection efficiency (say $50\%$), we can obtain the false alarm probability $P^{50\%}_{\rm f}$ based on the underlying source parameter $\theta^a_0$. 
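A schematic numerical illustration of this step is given below: the foreground and background values of $\log \mathcal{B}_{12}$ are drawn from the (approximately Gaussian) distributions implied by Eqs. \[eqlogb1\] and \[eqlogb2\] with the $\delta \theta^a_0$ correction terms neglected, the threshold is fixed by the $P_{\rm d}=50\%$ requirement, and the false alarm probability is read off as the fraction of background samples above the threshold. The sample size and the example value of $||h_{m2}-h_{m1}||^2$ are arbitrary choices for the sketch.

```python
import numpy as np

def false_alarm_at_half_detection(delta_h_norm_sq, n_samples=200_000,
                                  rng=np.random.default_rng(1)):
    """Estimate P_f at P_d = 50% for a single event.

    delta_h_norm_sq: ||h_m2 - h_m1||^2 evaluated with the noise-weighted inner
    product; neglecting the delta-theta terms, the <n|.> contribution makes both
    log-B distributions Gaussian with standard deviation ||h_m2 - h_m1||.
    """
    d = np.sqrt(delta_h_norm_sq)
    foreground = +0.5 * delta_h_norm_sq + d * rng.standard_normal(n_samples)
    background = -0.5 * delta_h_norm_sq + d * rng.standard_normal(n_samples)
    threshold = np.median(foreground)        # P_d = 50% by construction
    return np.mean(background > threshold)   # false alarm probability

# Example: ||h_m2 - h_m1||^2 = 4 gives P_f close to 1 - Phi(2), i.e. SNR_eff ~ 2
print(false_alarm_at_half_detection(4.0))
```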
Such a false alarm rate can be mapped to an effective SNR of a standard Gaussian distribution: $$\begin{aligned} P^{50\%}_{\rm f} =\frac{1}{\sqrt{2\pi}} \int^\infty_{\rm SNR^{50\%}_{eff}} e^{-x^2/2} \,dx\,.\end{aligned}$$ According to the set up of this problem, one can show that $\rm SNR^{50\%}_{eff}$ is $$\begin{aligned} {\rm SNR^{50\%}_{eff}} = \frac{|| h_{m2}(\theta_0)- h_{m1}(\theta_0) ||^2}{\sigma}\,,\end{aligned}$$ with $$\begin{aligned} \sigma^2 & = || h_{m2}(\theta_0)- h_{m1}(\theta_0) ||^2+A_a (\Gamma_{ab}^{-1}) A_b \nonumber \\ A_a & = \langle \partial_{\theta^a} h_{m1} | h_{m1}(\theta_0)-h_{m2}(\theta_0) \rangle\,.\end{aligned}$$ Angular dependence recovery =========================== With a set of observations $y_i$, we first cross product each data stream with the memory waveform $h^{\rm m}(\Theta,T_d) \equiv \frac{17 \eta M_z}{384 \pi d} h^{\rm mem}(T_d)$, with $\Theta$ being a generalization of $\theta^a$ which includes individual spins. $$\begin{aligned} s_i \equiv \langle y_i | h^{\rm m,i}(\Theta_i) \rangle f(\{a_n, b_n\},\iota) \rangle\,.\end{aligned}$$ The likelihood function $\mathcal{L}(s_i | \{ a_n, b_n\}, \Theta_i)$ is $$\begin{aligned} \label{eqlike} & \mathcal{L}(s_i | \{ a_n, b_n \}, \Theta_i) \propto \nonumber \\ &{\rm exp} \left [ -\frac{ (s_i - ||h^{\rm m}||^2 f(\{a_n, b_n\},\iota) )^2}{2 ||h^{\rm m}||^2}\right ]\,. \end{aligned}$$ For the events we are considering here, the SNR of the oscillatory part of the waveform is roughly $50-100$ times larger than the SNR of the memory waveform. Third-generation detectors are generally required for performing the angular dependence recovery of memory. As a result, $\Theta_i$ can be assumed to be accurately determined (with posterior distribution $\pi$) from the oscillatory part of $y_i$, such that $$\begin{aligned} & \int d \Theta_i \mathcal{L}(s_i | \{ a_n, b_n \}, \Theta_i) \pi(\Theta_i) \nonumber \\ & \approx \mathcal{L}(s_i | \{ a_n, b_n \}, \hat{\Theta}_i)\,,\end{aligned}$$ where $\hat{\Theta}_i$ are the Maximum Likelihood Estimators for $\Theta_i$. According to Bayes’ Theorem, the posterior distribution of $\{ a,b\}$ is $$\begin{aligned} P(\{ a_n, b_n\} | \{ y_j\}) \propto \prod_i \mathcal{L}(s_i | \{ a_n, b_n \}, \hat{\Theta}_i)\,.\end{aligned}$$ Based on the function form of Eq. \[eqlike\], the distribution of $\{ a_n, b_n\}$ is still Gaussian, with variance matrix given by $$\begin{aligned} & V^{-1}_{a_n, b_l} = \sum_i || h^{\rm m,i}||^2 \cos n \iota_i \sin l \iota_i\,,\nonumber \\ & V^{-1}_{a_n, a_l} = \sum_i || h^{\rm m,i}||^2 \cos n \iota_i \cos l \iota_i\,, \nonumber \\ & V^{-1}_{b_n, b_l} = \sum_i || h^{\rm m,i}||^2 \sin n \iota_i \sin l \iota_i\,. \end{aligned}$$ Memory waveform for binary neutron star mergers =============================================== EOS $f_1$ ($k$Hz) $\tau_1$ (ms) $f_2$ ($k$Hz) $\tau_2$ (ms) $\gamma_2$ (${\rm Hz}^2$) $\xi_2$ (${\rm Hz}^3$) $\alpha$ $r_m$ (km) A (km) ------ --------------- --------------- --------------- --------------- --------------------------- ------------------------ ---------- ------------ -------- GNH3 1.7 2 2.45 23.45 342 5e4 0.35 28.2 0.726 H4 1.75 5 2.47 20.45 -1077 4.5e3 0.3 27.5 0.692 ALF2 2.05 15 2.64 10.37 -863 2.5e4 0.5 26 0.519 Sly 2.3 1 3.22 13.59 -617 5.5e4 0.5 24.7 0.554 : Parameters for various EOS [@bose2017neutron] \[table:bhnssnr\] We shall construct an analytical memory waveform model for binary neutron star mergers similar to the approach adopted in [@favata2009nonlinear] for binary black holes. 
Following the [*minimal-waveform model*]{}, the memory waveform can be computed using the radiative moment $$\begin{aligned} h^{\rm mem}(T_d) = \frac{1}{\eta M} \int^{T_d}_{-\infty} |I^{(3)}_{22}(t)|^2 \, dt\,.\end{aligned}$$ We match the leading-order inspiral moment to the moment of the post-merger hypermassive neutron star. The $q$th derivative of the inspiral moment is given by $$\begin{aligned} \label{eqi22} I^{\rm insp(q)}_{22} =2\sqrt{\frac{2 \pi}{5}} \eta M r^2 (-2 i \omega)^q e^{-2 i \phi}\,,\end{aligned}$$ where $\phi$ is the 0PN orbital phase, $\omega=\dot{\phi}=(M/r^3)^{1/2}$, $r=r_m (1-T/\tau_{rr})^{1/4}$ is the orbital separation, $T=t-t_m$ is the time since the matching time $t_m$, $\tau_{rr}=(5/256)(M/\eta)(r_m/M)^4$, and $r_m$ is the orbital separation at the matching time. On the other hand, because $h^{TT}_{ij} \sim \ddot{I}_{ij}/d$ and the post-merger waveform can be approximately parametrized as [@bose2017neutron] $$\begin{aligned} h_{\rm post}(t) \propto & \,\alpha e^{-t/\tau_1}[\sin 2\pi f_1 t +\sin 2 \pi (f_1-f_{1\epsilon}) t \nonumber \\ & + \sin 2 \pi (f_1+f_{1\epsilon} )t ] \nonumber \\ &+e^{-t/\tau_2} \sin (2\pi f_2 t+2 \pi \gamma_2 t^2+2 \pi \xi_2 t^3+\pi \beta_2)\,,\end{aligned}$$ with the waveform parameters given in Table \[table:bhnssnr\] for the various EOS considered here, we find that $$\begin{aligned} I^{\rm post(2)}_{22} = & -i A \alpha e^{-T/\tau_1}[e^{2\pi i f_1 T} +e^{ 2 \pi i (f_1-f_{1\epsilon} ) T} \nonumber \\ &+ e^{ 2 \pi i (f_1+f_{1\epsilon} )T} ] \nonumber \\ &- i A e^{-T/\tau_2} e^{ 2\pi i f_2 T+2 \pi i \gamma_2 T^2+2 \pi i \xi_2 T^3+i \pi \beta_2}\,,\end{aligned}$$ with $A$ determined by fitting to the numerical post-merger waveform. On the timescales of interest ($\tau_1$ or $\tau_2$), we have $f_1, f_2 \gg \gamma_2 \tau$ and $\xi_2 \tau^2$. Therefore we shall simplify $I^{\rm post(2)}_{22}$ to be $$\begin{aligned} I^{\rm post(2)}_{22} \approx & -i A \alpha e^{-T/\tau_1}[e^{2\pi i f_1 T} +e^{ 2 \pi i (f_1-f_{1\epsilon} )T} + e^{ 2 \pi i (f_1+f_{1\epsilon} )T} ] \nonumber \\ &- i A e^{-T/\tau_2} e^{ 2\pi i f_2 T+i \pi \beta_2}\,\nonumber \\ & =\sum^4_{i=1} A_i e^{2 \pi i f_i T-T/\tau_i}\,.\end{aligned}$$

![Gravitational memory waveforms for a $1.325 M_\odot +1.325 M_\odot$ binary neutron star system at $50$Mpc, assuming different EOS and along the maximally emitting direction.[]{data-label="fig:ej"}](mwaveform.png){width="0.93\linewidth"}

![Post-merger waveforms for $1.325 M_\odot+1.325 M_\odot$ binary neutron star system with four EOS considered in this work (GNH3, H4, ALF2, Sly). The distance is assumed to be $50$Mpc.[]{data-label="fig:wave"}](pwaveform.pdf){width="0.93\linewidth"}

The second derivatives of the inspiral and post-merger radiative moments are matched at $t_m$, which physically corresponds to continuity of $h$. $r_m$ can be estimated as twice the stellar radius. An alternative way to fix $r_m$ is to use the oscillation amplitude of $h_+$ right before merger [@favata2009gravitational]: $$\begin{aligned} h_+ -i h_\times \approx &\frac{1}{8 d}\sqrt{\frac{5}{2 \pi}} \left [ (1+\cos \iota)^2 e^{2 i \Phi} I^{(2)}_{22} \right . \nonumber \\ & \left .+(1-\cos\iota)^2 e^{-2 i\Phi} I^{(2)}_{2-2}\right ]\,,\end{aligned}$$ with $\Phi$ being the direction of the observer in the source frame. Combining with Eq. , we find that the amplitude of $h_+$ along the maximally emitting direction is $$\begin{aligned} h_{+m} = \frac{4 \eta M r^2 \omega^2}{d}\,,\end{aligned}$$ which can be used to determine $r_m$.
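For orientation, the following schematic Python snippet (ours, not part of the reference analysis) integrates $|I^{(3)}_{22}|^2$ for the simplified post-merger moment above, showing how the post-merger piece of the memory accumulates and saturates on the $\tau_1$, $\tau_2$ timescales. The frequencies, damping times and $\alpha$ loosely follow the GNH3 row of Table \[table:bhnssnr\], but the sideband splitting `f1_eps`, the complex amplitudes and the overall normalization are placeholders, and the inspiral contribution is omitted, so the result should be read as a shape rather than a calibrated strain.

```python
# Schematic growth of the post-merger memory:
#   h_mem(T) ~ (1/(eta*M)) * \int_0^T |d I22_post / dt|^2 dt   (normalization not tracked)
import numpy as np

f1, tau1 = 1.7e3, 2e-3        # Hz, s  (GNH3-like values from the table)
f2, tau2 = 2.45e3, 23.45e-3
f1_eps   = 50.0               # sideband splitting in Hz; assumed for illustration
alpha    = 0.35

freqs = np.array([f1, f1 - f1_eps, f1 + f1_eps, f2])
taus  = np.array([tau1, tau1, tau1, tau2])
amps  = -1j * np.array([alpha, alpha, alpha, 1.0])   # relative weights only

T  = np.linspace(0.0, 0.1, 200_000)                  # 100 ms after the matching time
I2 = sum(A * np.exp(2j * np.pi * f * T - T / tau)
         for A, f, tau in zip(amps, freqs, taus))    # simplified I^{post(2)}_{22}

I3 = np.gradient(I2, T)                              # numerical third derivative of I22
h_mem = np.cumsum(np.abs(I3) ** 2) * (T[1] - T[0])   # cumulative memory (arb. units)

print("memory saturates at (arbitrary units):", h_mem[-1])
```

Because the damped sinusoids decay within a few tens of milliseconds, `h_mem` rises quickly and then flattens, mirroring the step-like memory build-up expected from the post-merger signal. We now return to fixing the remaining post-merger amplitude $A$.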
Similarly, it is straightforward to obtain that $$\begin{aligned} A_{22,+m}(T=0) = \frac{1}{2 d} \sqrt{\frac{5}{2 \pi}} A\,,\end{aligned}$$ where $A_{22}$ is the amplitude of the $22$ mode. This can be used to determine $A$. The memory waveform in the time domain is ($\sigma_i \equiv i 2 \pi f_i +\tau^{-1}_i$) $$\begin{aligned} h^{\rm mem}_{\rm MWM}(T_d) \approx & \frac{8 \pi M}{ r(T_d)} \Theta(-T_d) +\Theta(T_d) \left \{ \frac{8 \pi M}{ r_m} +\frac{1}{ \eta M} \right . \nonumber \\ & \left. \sum^4_{i,j=1} \frac{\sigma_i \sigma^*_j A_i A^*_j}{\sigma_i+\sigma^*_j} [1-e^{-(\sigma_i+\sigma^*_j)T_d}]\right \}\,,\end{aligned}$$ where $\Theta(T_d)$ is the Heaviside function. The corresponding frequency-domain waveform is $$\begin{aligned} \tilde{h}^{\rm mem}_{\rm MWM}(f) = & \frac{i}{2 \pi f} \left \{ \frac{8 \pi M}{r_m} [1-2 \pi i f \tau_{rr} U(1,7/4, 2\pi i f \tau_{rr})] \right . \nonumber \\ & \left. -\frac{1}{\eta M} \sum^4_{i,j=1} \frac{\sigma_i \sigma^*_j A_i A^*_j}{2 \pi i f -(\sigma_i +\sigma^*_j)} \right \}\,,\end{aligned}$$ where $U$ is the confluent hypergeometric function of the second kind. The high-frequency poles above $1$kHz are unimportant for the analysis assuming ET or CE, because the low-frequency sensitivity of these detectors is superior to their high-frequency sensitivity.

SNR of the ejecta waveform
==========================

Following the discussion in the main text, we assume the memory waveform model to be $$\begin{aligned} \tilde{h}^{\rm mem}_{\rm MWM,m} = \frac{\Delta M v^2}{d} \frac{i \pi \tau }{\sinh(2\pi^2 f \tau)}\,.\end{aligned}$$ We show the corresponding SNR as a function of $\tau$ in Fig. \[fig:ej\]. We find that for $\tau \le 1$ ms the SNR is roughly constant at $\sim 1.2$, while for larger $\tau$ values the SNR decreases dramatically.

![SNR for a binary neutron star system at $50$Mpc, assuming an ejecta mass $\Delta M=0.03 M_\odot$ and a characteristic ejecta speed of $0.3c$.[]{data-label="fig:ej"}](snrej.pdf){width="0.93\linewidth"}

[^1]: Sometimes it is also called Christodoulou memory.

[^2]: This angular dependence assumes dominant (2,2) mode emission of GWs. For binary mergers with precessing spins, the effect from other mode emissions may also be included.

[^3]: In principle we could also test the dependence of memory amplitude versus other source parameters, such as the factor before $\sin^2\iota$ in Eq. . In the Bayesian model selection framework, such dependence can be compared to a null hypothesis, where the amplitude is zero, in which case it becomes a memory detection problem. We refer interested readers to [@lasky2016detecting] for related discussions.

[^4]: In order to coherently stack different data sets to boost the SNR of the stacked memory term, one needs to measure the high-order modes of the inspiral waveform to determine the signs of the memory terms in advance [@lasky2016detecting]. For hypothesis tests discussed in this work, such measurement is not required.

[^5]: Here we assume CE sensitivity for the Hanford, Livingston and Virgo detectors.

[^6]: Unless it is a high-frequency detector targeting the kHz band, such as the one discussed in [@miao2017towards].

[^7]: We assume that the lower cut-off frequency for computing SNR is $5$Hz.
--- abstract: 'We relate the relative nerve ${\mathrm{N}}_f({{\mathcal{D}}})$ of a diagram of simplicial sets $f \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}$ with the Grothendieck construction ${{\mathsf{Gr}}}F$ of a simplicial functor $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$ in the case where $f = {\mathrm{N}}F$. We further show that any strict monoidal simplicial category ${{\mathcal{C}}}$ gives rise to a functor ${{\mathcal{C}}}^\bullet \colon \Delta^{\mathrm{op}}\to {{\mathsf{sCat}}}$, and that the relative nerve of ${\mathrm{N}}{{\mathcal{C}}}^\bullet$ is the operadic nerve ${\mathrm{N}}^\otimes({{\mathcal{C}}})$. Finally, we show that all the above constructions commute with appropriately defined opposite functors.' author: - Jonathan Beardsley and Liang Ze Wong bibliography: - 'references.bib' title: | The Operadic Nerve, Relative Nerve,\ and the Grothendieck Construction --- Introduction ============ Given a simplicial colored operad ${{\mathcal{O}}}$, [@ha]\*[2.1.1]{} introduces the *operadic nerve* ${\mathrm{N}}^\otimes({{\mathcal{O}}})$ to be the nerve of a certain simplicial category ${{\mathcal{O}}}^\otimes$. This has a canonical fibration ${\mathrm{N}}^\otimes({{\mathcal{O}}}) \to {\mathrm{N}}({{\mathcal{F}}}in_*)$ to the nerve of the category of finite pointed sets which describes the $\infty$-operad associated to ${{\mathcal{O}}}$. A special case of the above arises when one attempts to produce the *underlying monoidal $\infty$-category* of a simplicial monoidal category ${{\mathcal{C}}}$. Following the constructions of [@dag2]\*[1.6]{} and [@ha]\*[4.1.7.17]{}, one first forms a simplicial category ${{\mathcal{C}}}^\otimes$ from a monoidal simplicial category ${{\mathcal{C}}}$, then takes its nerve to get ${\mathrm{N}}^\otimes({{\mathcal{C}}}) := {\mathrm{N}}({{\mathcal{C}}}^\otimes)$. We call this the *operadic nerve of ${{\mathcal{C}}}$*, where the monoidal structure of ${{\mathcal{C}}}$ will always be clear from context. To be more precise, we should call this construction the operadic nerve of the underlying non-symmetric simplicial colored operad, or simplicial multicategory, of ${{\mathcal{C}}}$, but for ease of reading we do not. The above construction ensures that there is a canonical *coCartesian* fibration ${\mathrm{N}}^\otimes({{\mathcal{C}}}) \to {\mathrm{N}}(\Delta^{\mathrm{op}})$, which imbues ${\mathrm{N}}({{\mathcal{C}}})$ with the structure of a *monoidal $\infty$-category* in the sense of [@dag2]\*[1.1.2]{}. Given that [@dag2] exists only in preprint form, we also refer the reader to [@gephaugenriched]\*[§3.1]{} for a published (and more general than we will need) account of the operadic nerve of a simplicial multicategory. Our paper is motivated by the following: if ${{\mathcal{C}}}$ is a monoidal fibrant simplicial category, then so is its opposite ${{\mathcal{C}}}^{\mathrm{op}}$. We thus get a monoidal $\infty$-category ${\mathrm{N}}^\otimes({{\mathcal{C}}}^{\mathrm{op}})$. However, we could also have started with ${\mathrm{N}}^\otimes({{\mathcal{C}}})$ and arrived at another monoidal $\infty$-category ${\mathrm{N}}^\otimes({{\mathcal{C}}})_{\mathrm{op}}$ by taking ‘fiberwise opposites’. We show that ${\mathrm{N}}^\otimes({{\mathcal{C}}}^{\mathrm{op}})$ and ${\mathrm{N}}^\otimes({{\mathcal{C}}})_{\mathrm{op}}$ are equivalent in the $\infty$-category of monoidal $\infty$-categories i.e. that *taking the operadic nerve of a simplicial monoidal category commutes with taking opposites* (Theorem \[thm:opcommute\]). 
This follows from a more general statement about the relationship between the simplicial nerve functor, the enriched Grothendieck construction of [@beardswong], and taking opposites (Theorem \[thm:F-op-commute\]). In the process of proving the above, we also give a simplified description of the somewhat complicated *relative nerve* of [@htt] (Theorem \[thm:gr-rel-nerve\]) that we hope will be useful to others. One corollary of our Theorem \[thm:opcommute\] is the fact that *coalgebras* in the monoidal quasicategory $N^\otimes({{\mathcal{C}}})$ can be identified with the nerve of the simplicial category of *strict* coalgebras in ${{\mathcal{C}}}$ itself, and that this relationship lifts to categories of comodules over coalgebras as well (this corollary and its implications are left to future work). There is well developed machinery in [@ha] for passing algebras and their modules from simplicial categories to their underlying quasicategories, but this machinery fails to work for coalgebras and comodules. As such, it is our hope that the work contained herein may lead, in the long run, to a better understanding of *derived coalgebra*. Outline ------- We begin in a more general context: in §\[sec:rel-nerve-gr\], we review the relative nerve ${\mathrm{N}}_f({{\mathcal{D}}})$ of a functor $f \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}$ and the Grothendieck construction ${{\mathsf{Gr}}}F$ of a functor $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$. We show that when $F$ takes values in locally Kan simplicial categories, so that the composite $f \colon {{\mathcal{D}}}\xrightarrow{F} {{\mathsf{sCat}}}\xrightarrow{{\mathrm{N}}} {{\mathsf{sSet}}}$ takes values in quasicategories, we have an isomorphism associated to a commutative diagram: $$\begin{aligned} {\mathrm{N}}({{\mathsf{Gr}}}F) \cong {\mathrm{N}}_f({{\mathcal{D}}}), \end{aligned} \quad \quad \quad \quad \begin{aligned} \begin{tikzcd} {{\mathsf{sCat}}}^{{\mathcal{D}}}\ar[r, "{\mathrm{N}}\circ -"] \ar[d, "{{\mathsf{Gr}}}"'] & {{\mathsf{sSet}}}^{{\mathcal{D}}}\ar[d, "{\mathrm{N}}_{(-)}({{\mathcal{D}}})"] \\ {{\mathsf{opFib}}}_{/{{\mathcal{D}}}}\ar[r, "{\mathrm{N}}"] & {{\mathsf{coCart}}}_{/{\mathrm{N}}({{\mathcal{D}}})}. \end{tikzcd} \end{aligned}$$ The relative nerve is itself equivalent to the $\infty$-categorical Grothendieck construction ${{\mathsf{Gr}}}_\infty \colon ({{\mathsf{Cat}}}_\infty)^{{\mathrm{N}}({{\mathcal{D}}})} \to {{\mathsf{coCart}}}_{/{\mathrm{N}}({{\mathcal{D}}})}$, yielding an equivalence of coCartesian fibrations $${\mathrm{N}}({{\mathsf{Gr}}}F) \simeq {{\mathsf{Gr}}}_\infty({\mathrm{N}}(f)).$$ In §\[sec:monoid-struct\], we show that a strict monoidal simplicial category ${{\mathcal{C}}}$ gives rise to a functor ${{\mathcal{C}}}^\bullet \colon \Delta^{\mathrm{op}}\to {{\mathsf{sCat}}}$ whose value at $[n]$ is ${{\mathcal{C}}}^{n}$. 
We show that ${{\mathsf{Gr}}}\, {{\mathcal{C}}}^\bullet \cong {{\mathcal{C}}}^\otimes$, and thus that the operadic nerve ${\mathrm{N}}^\otimes({{\mathcal{C}}}) := {\mathrm{N}}({{\mathcal{C}}}^\otimes)$ factors as: $$\begin{tikzcd} {{\mathsf{Mon(sCat)}}}\ar[r, "(-)^\bullet"] \ar[rr, bend right = 15, "(-)^\otimes"'] & {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}} \ar[r, "{{\mathsf{Gr}}}"] & {{\mathsf{opFib}}}_{/\Delta^{\mathrm{op}}} \ar[r, "{\mathrm{N}}"] & {{\mathsf{coCart}}}_{/{\mathrm{N}}(\Delta^{\mathrm{op}})} \end{tikzcd}$$ In §\[sec:op-func\], we show that the above constructions interact well with taking opposites, in that the following diagram ‘commutes:’ $$\begin{tikzcd}[row sep = large] {{\mathsf{Mon(sCat)}}}\ar[r, "(-)^\bullet"] \ar[d, "{\mathrm{op}}" description] & {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}} \ar[r, "{{\mathsf{Gr}}}"] \ar[d, "{\mathrm{op}}" description] & {{\mathsf{opFib}}}_{/\Delta^{\mathrm{op}}} \ar[r, "{\mathrm{N}}"] & {{\mathsf{coCart}}}_{/{\mathrm{N}}(\Delta^{\mathrm{op}})} \ar[d, "{\mathrm{op}}" description] \\ {{\mathsf{Mon(sCat)}}}\ar[r, "(-)^\bullet"] & {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}} \ar[r, "{{\mathsf{Gr}}}"] & {{\mathsf{opFib}}}_{/\Delta^{\mathrm{op}}} \ar[r, "{\mathrm{N}}"] & {{\mathsf{coCart}}}_{/{\mathrm{N}}(\Delta^{\mathrm{op}})} \end{tikzcd}$$ We write ‘commutes’ because we only check it *on objects*, and only *up to equivalence* in the quasicategory ${{\mathsf{coCart}}}_{/{\mathrm{N}}(\Delta^{\mathrm{op}})}$. We conclude that ${\mathrm{N}}^\otimes({{\mathcal{C}}}^{\mathrm{op}})$ and the fiberwise opposite ${\mathrm{N}}^\otimes({{\mathcal{C}}})_{\mathrm{op}}$ are equivalent in the $\infty$-category of monoidal $\infty$-categories. Notation ======== In large part, our notation follows that of Lurie’s seminal works in higher category theory [@ha; @htt]. However, here we point out certain notational conventions that may not be immediately obvious to the reader. Some of these conventions may be non-standard, but we adhere to them for the sake of precision. 1. We will mostly avoid using the term “$\infty$-category” in any situation where a more precise term (e.g. quasicategory or simplicially enriched category) is applicable. We make one exception when we discuss the “$\infty$-categorical” Grothendieck construction of [@htt]. 2. A special class of simplicially enriched categories are those in which all mapping objects are not just simplicial sets, but Kan complexes. We will refer to a simplicially enriched category with this property as “locally Kan.” 3. We will often use the term “simplicial category” to refer to a simplicially enriched category. There is no chance for confusion here because at no point do we consider simplicial object in the category of categories. The relative nerve and the Grothendieck construction {#sec:rel-nerve-gr} ==================================================== The $\infty$-categorical Grothendieck construction is the equivalence $${{\mathsf{Gr}}}_\infty \colon ({{\mathsf{Cat}}}_\infty)^S \xrightarrow{\quad \simeq \quad} {{\mathsf{coCart}}}_{/S}$$ induced by the unstraightening functor ${{\textsf{Un}}}^+_S \colon ({{\mathsf{sSet}}}^+)^{{\mathfrak{C}}[S]} \to ({{\mathsf{sSet}}}^+)_{/S}$ of [@htt]\*[3.2.1.6]{}. Here, ${{\mathsf{Cat}}}_\infty$ is the quasicategory of small quasicategories, and ${{\mathsf{coCart}}}_{/S}$ is the quasicategory of coCartesian fibrations over $S \in {{\mathsf{sSet}}}$, and these are defined as nerves of certain simplicial categories. (See \[sec:models\] and \[sec:st-un\], or [@htt]\*[Ch. 
3]{} for details.) In general, it is not easy to describe ${{\mathsf{Gr}}}_\infty \varphi$ for an arbitrary morphism $\varphi \colon S \to {{\mathsf{Cat}}}_\infty$. However, when $S$ is the nerve of a small category ${{\mathcal{D}}}$, and $\varphi$ is the nerve of a functor $f \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}$ such that each $fd$ is a quasicategory, the *relative nerve* ${\mathrm{N}}_f({{\mathcal{D}}})$ of [@htt]\*[3.2.5.2]{} yields a coCartesian fibration equivalent to ${{\mathsf{Gr}}}_\infty {\mathrm{N}}(f)$. If $f$ further factors as ${{\mathcal{D}}}\xrightarrow{F} {{\mathsf{sCat}}}\xrightarrow{{\mathrm{N}}} {{\mathsf{sSet}}}$, where each $Fd$ is a locally Kan simplicial category, we may instead form the simplicially-enriched Grothendieck construction ${{\mathsf{Gr}}}F$ and take its nerve. The purpose of this section is to show that we have an isomorphism of coCartesian fibrations $${\mathrm{N}}({{\mathsf{Gr}}}F) \cong {\mathrm{N}}_f({{\mathcal{D}}}),$$ thus yielding an alternative description of ${{\mathsf{Gr}}}_\infty {\mathrm{N}}(f)$. The relative nerve ${\mathrm{N}}_f({{\mathcal{D}}})$ ---------------------------------------------------- \[def:relnerve\] Let ${{\mathcal{D}}}$ be a category, and $f \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}$ a functor. The [**nerve of ${{\mathcal{D}}}$ relative to $f$**]{} is the simplicial set ${\mathrm{N}}_f({{\mathcal{D}}})$ whose $n$-simplices are sets consisting of: 1. a functor $d \colon [n] \to {{\mathcal{D}}}$; write $d_i$ for $d(i)$ and $d_{ij} \colon d_i \to d_j$ for the image of the unique map $i \leq j$ in $[n]$, 2. for every nonempty subposet $J \subseteq [n]$ with maximal element $j$, a map $s^J \colon \Delta^J \to fd_j$, 3. such that for nonempty subsets $I \subseteq J \subseteq [n]$ with respective maximal elements $i \leq j$, the following diagram commutes: $$\label{eq:rel-nerve} \begin{tikzcd} \Delta^I \ar[r, "s^I"] \ar[d, hookrightarrow] & fd_i \ar[d, "f d_{ij}"] \\ \Delta^J \ar[r, "s^J"] & fd_j \end{tikzcd}$$ For any $f$, there is a canonical map $p \colon {\mathrm{N}}_f({{\mathcal{D}}}) \to {\mathrm{N}}({{\mathcal{D}}})$ down to the ordinary nerve of ${{\mathcal{D}}}$, induced by the unique map to the terminal object $\Delta^0 \in {{\mathsf{sSet}}}$ [@htt]\*[3.2.5.4]{}. When $f$ takes values in quasicategories, this canonical map is a coCartesian fibration *classified* (Definition \[def:classified\]) by ${\mathrm{N}}(f)$: \[prop:rel-nerve-infty-gr\] Let $f \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}$ be a functor such that each $fd$ is a quasicategory. Then: 1. $p \colon {\mathrm{N}}_f({{\mathcal{D}}}) \to {\mathrm{N}}({{\mathcal{D}}})$ is a coCartesian fibration of simplicial sets, and 2. $p$ is classified by the functor ${\mathrm{N}}(f) \colon {\mathrm{N}}({{\mathcal{D}}}) \to {{\mathsf{Cat}}}_\infty$, i.e. there is an equivalence of coCartesian fibrations $${\mathrm{N}}_f({{\mathcal{D}}}) \simeq {{\mathsf{Gr}}}_\infty {\mathrm{N}}(f).$$ Note that the version of Proposition \[prop:rel-nerve-infty-gr\] in [@htt] is somewhat ambiguously stated. In particular, it is claimed that, given a functor $f\colon{{\mathcal{D}}}\to {{\mathsf{sSet}}}$, the fibration ${\mathrm{N}}_{f}({{\mathcal{D}}})$ is the one *associated* to the functor ${\mathrm{N}}(f)\colon N({{\mathcal{D}}})\to {{\mathsf{Cat}}}_\infty$. 
However, a close reading of the proof given in [@htt] makes it clear that, for a functor $f\colon{{\mathcal{D}}}\to {{\mathsf{sSet}}}$ with associated $f^\natural\colon{{\mathcal{D}}}\to {{\mathsf{sSet}}}^+$, there is an equivalence ${\mathrm{N}}_f({{\mathcal{D}}})^\natural\simeq {\mathrm{N}}_{f^\natural}^+({{\mathcal{D}}})\simeq {{\textsf{Un}}}_\phi^+f^\natural$. Here, ${\mathrm{N}}^+_{f^\natural}$ indicates the *marked* analog of the relative nerve described in Definition \[def:relnerve\]. Application of the (large) simplicial nerve functor recovers the form of the proposition given above. The Grothendieck construction ${{\mathsf{Gr}}}F$ ------------------------------------------------ Suppose instead that we have a functor $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$. We may then take the nerve relative to the composite $f \colon {{\mathcal{D}}}\xrightarrow{F} {{\mathsf{sCat}}}\xrightarrow{{\mathrm{N}}} {{\mathsf{sSet}}}$ to get a coCartesian fibration ${\mathrm{N}}_{f}({{\mathcal{D}}}) \to {\mathrm{N}}({{\mathcal{D}}})$. We now describe a second way to obtain a coCartesian fibration over ${\mathrm{N}}({{\mathcal{D}}})$ from such an $F$. Let ${{\mathcal{D}}}$ be a small category, and let $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$ be a functor. The [**Grothendieck construction of $F$**]{} is the simplicial category ${{\mathsf{Gr}}}F$ with objects and morphisms: $$\begin{aligned} {{\mathsf{Ob}}}({{\mathsf{Gr}}}F) &:= \coprod_{\;\;\, d \in {{\mathcal{D}}}\;\;\,} {{\mathsf{Ob}}}(Fd) \times \{d\}, \\ {{\mathsf{Gr}}}F\big( (x,c), (y,d) \big) &:= \coprod_{\varphi \colon c \to d} Fd(F\varphi\; x, y) \times \{\varphi\} . \end{aligned}$$ An arrow $(x,c) \to (y,d)$ (i.e. a $0$-simplex in ${{\mathsf{Gr}}}F( (x,c), (y,d))$) is a pair $\left( F\varphi\;x \xrightarrow{\sigma} y, c \xrightarrow{\varphi} d \right)$, while the composite $(x,c) \xrightarrow{(\sigma, \varphi)} (y,d) \xrightarrow{(\tau, \psi)} (z,e)$ is $$\bigg( F(\psi \varphi)\, x = F\psi\, F\varphi\, x{\xrightarrow{F\psi\, \sigma}} F\psi\, y {\xrightarrow{\tau}} z \;,\;\; c {\xrightarrow{\varphi}} d {\xrightarrow{\psi}} e \bigg).$$ There is a simplicial functor $P \colon {{\mathsf{Gr}}}F \to {{\mathcal{D}}},\; (x,c) \mapsto c,$ induced by the unique maps $Fd(F\varphi\; x, y) \to \Delta^0$. Here, ${{\mathcal{D}}}$ is treated as a *discrete* simplicial category with hom-objects $${{\mathcal{D}}}(c,d) = \coprod_{\varphi \colon c \to d} \Delta^0 \times \{\varphi\}.$$ Let $P \colon {{\mathcal{E}}}\to {{\mathcal{D}}}$ be a simplicial functor. A map $\chi \colon e \to e'$ in ${{\mathcal{E}}}$ is [**$P$-coCartesian**]{} if $$\label{eq:opfib-pullback} \begin{tikzcd}[column sep = large] {{\mathcal{E}}}(e',x) \ar[r, "-\circ \chi"] \ar[d, "P_{e'x}"'] & {{\mathcal{E}}}(e,x) \ar[d, "P_{ex}"] \\ {{\mathcal{D}}}(Pe', Px) \ar[r, "-\circ P\chi"] & {{\mathcal{D}}}(Pe, Px) \end{tikzcd}$$ is a (ordinary) pullback in ${{\mathsf{sSet}}}$ for every $x \in {{\mathcal{E}}}$. A simplicial functor $P \colon {{\mathcal{E}}}\to {{\mathcal{D}}}$ is a [**simplicial opfibration**]{} if for every $e \in {{\mathcal{E}}}, d \in {{\mathcal{D}}}$ and $\varphi \colon Pe \to d$, there exists a $P$-coCartesian lift of $\varphi$ with domain $e$. \[prop:bw411\] The functor ${{\mathsf{Gr}}}F \to {{\mathcal{D}}}$ is a simplicial opfibration. \[prop:opfibtococart\] Let ${{\mathcal{D}}}$ be a category (i.e. a discrete simplicial category), and ${{\mathcal{E}}}$ be a locally Kan simplicial category. 
If $P \colon {{\mathcal{E}}}\to {{\mathcal{D}}}$ is a simplicial opfibration, then ${\mathrm{N}}(P) \colon {\mathrm{N}}({{\mathcal{E}}}) \to {\mathrm{N}}({{\mathcal{D}}})$ is a coCartesian fibration. It suffices to show that any $P$-coCartesian arrow in ${{\mathcal{E}}}$ gives rise to a ${\mathrm{N}}(P)$-coCartesian arrow in ${\mathrm{N}}({{\mathcal{E}}})$. If $\chi \colon e \to e'$ is $P$-coCartesian, then (\[eq:opfib-pullback\]) is an ordinary pullback in ${{\mathsf{sSet}}}$ for all $x \in {{\mathcal{E}}}$. Since ${{\mathcal{D}}}(Pe, Px)$ is discrete and ${{\mathcal{E}}}(e,x)$ is fibrant, $P_{ex}$ is a fibration[^1]; since ${{\mathcal{D}}}(Pe', Px)$ is also fibrant, this ordinary pullback is in fact a *homotopy* pullback [@htt]\*[A.2.4.4]{}. Thus, by [@htt]\*[2.4.1.10]{}, $\chi$ gives rise to a ${\mathrm{N}}(P)$-coCartesian arrow in ${\mathrm{N}}({{\mathcal{E}}})$. The discreteness of ${{\mathcal{D}}}$ and fibrancy of ${{\mathcal{E}}}$ are critical here. An arbitrary ${{\mathsf{sSet}}}$-enriched opfibration $P \colon {{\mathcal{E}}}\to {{\mathcal{D}}}$ is unlikely to give rise to a coCartesian fibration ${\mathrm{N}}(P) \colon {\mathrm{N}}({{\mathcal{E}}}) \to {\mathrm{N}}({{\mathcal{D}}})$. Essentially, we require the ordinary pullback in (\[eq:opfib-pullback\]) to be a homotopy pullback. Let ${{\mathcal{D}}}$ be a small category and $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$ be such that each $Fd$ is locally Kan. Then ${\mathrm{N}}({{\mathsf{Gr}}}F) \to {\mathrm{N}}({{\mathcal{D}}})$ is a coCartesian fibration. Comparing ${\mathrm{N}}({{\mathsf{Gr}}}F)$ and ${\mathrm{N}}_f({{\mathcal{D}}})$ -------------------------------------------------------------------------------- \[thm:gr-rel-nerve\] Let $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$ be a functor, and $f = {\mathrm{N}}F$. Then there is an isomorphism of coCartesian fibrations $${\mathrm{N}}({{\mathsf{Gr}}}F) \cong {\mathrm{N}}_f({{\mathcal{D}}}).$$ We will only explicitly describe the $n$-simplices of ${\mathrm{N}}({{\mathsf{Gr}}}F)$ and ${\mathrm{N}}_f({{\mathcal{D}}})$ and show that they are isomorphic. From the description, it should be clear that we do indeed have an isomorphism of simplicial sets that is compatible with their projections down to ${\mathrm{N}}({{\mathcal{D}}})$, hence an isomorphism of coCartesian fibrations (by [@riehl2017fibrations]\*[5.1.7]{}, for example). #### Description of ${\mathrm{N}}({{\mathsf{Gr}}}F)_n$. An $n$-simplex of ${\mathrm{N}}({{\mathsf{Gr}}}F)$ is a simplicial functor $S \colon {\mathfrak{C}}[\Delta^n] \to {{\mathsf{Gr}}}F$. By Lemma \[lem:simplicial-functor\], this is the data of: - for each $i \in [n]$, an object $S_i = (x_i, d_i) \in {{\mathsf{Gr}}}F$, (so $d_i \in {{\mathcal{D}}}, x_i \in Fd_i$) - for each $r$-dimensional bead shape ${{\langle I_0 | \dots | I_r \rangle}}$ of $\{i_0 < \dots < i_m\} \subseteq [n]$ where $m\geq 1$, an $r$-simplex $$S_{{\langle I_0 | \dots | I_r \rangle}} \in {{\mathsf{Gr}}}F (S_{i_0}, S_{i_m}) = \coprod_{\varphi \in {{\mathcal{D}}}(d_{i_0}, d_{i_m})} Fd_{i_m}(F\varphi\; x_{i_0}, x_{i_m})$$ whose boundary is compatible with lower-dimensional data. #### Description of ${\mathrm{N}}_f({{\mathcal{D}}})_n$. 
An $n$-simplex of ${\mathrm{N}}_f({{\mathcal{D}}})$ consists of a functor $d\colon [n] \to {{\mathcal{D}}}$, picking out objects and arrows $d_i \xrightarrow{d_{ij}} d_j$ for all $0 \leq i \leq j \leq n$ such that $d_{ii}$ are identities and $$d_{jk}d_{ij} = d_{ik}, \quad i \leq j \leq k,$$ and a family of maps $s^J \colon \Delta^J \to fd_j$ for every $J \subseteq [n]$ with maximal element $j$, satisfying (\[eq:rel-nerve\]). Since $f = {\mathrm{N}}F$, such maps $s^J \colon \Delta^J \to {\mathrm{N}}Fd_j$ correspond, under the ${\mathfrak{C}}\dashv {\mathrm{N}}$ adjunction, to maps $S^J \colon {\mathfrak{C}}[\Delta^J] \to Fd_j$ satisfying: $$\label{eq:rel-nerve-transpose} \begin{tikzcd} {\mathfrak{C}}[\Delta^I] \ar[r, "S^I"] \ar[d, hookrightarrow] & Fd_i \ar[d, "F d_{ij}"] \\ {\mathfrak{C}}[\Delta^J] \ar[r, "S^J"] & Fd_j \end{tikzcd}$$ By Lemma \[lem:simplicial-functor\], each $S^J$ is the data of: - for each $i \in J$, an object $S^J_i \in Fd_j$ - for each $r$-dimensional bead shape ${{\langle I_0 | \dots | I_r \rangle}}$ of $\{i_0 < \dots < i_m\} \subseteq J$ where $m \geq 1$, an $r$-simplex $$S^J_{{\langle I_0|\dots | I_r \rangle}} \in Fd_j(S^J_{i_0}, S^J_{i_m})$$ whose boundary is compatible with lower-dimensional data. The condition (\[eq:rel-nerve-transpose\]) is equivalent to $$\begin{aligned} \label{eq:rel-nerve-explicit} Fd_{ij}\, S^I_k &= S^J_k, &\text{and} & & Fd_{ij}\, S^I_{{\langle I_0|\dots|I_r \rangle}} &= S^J_{{\langle I_0|\dots | I_r \rangle}}. \end{aligned}$$ for any $k \in I$ and bead shape ${{\langle I_0|\dots | I_r \rangle}}$ of $I \subseteq J$. #### From ${\mathrm{N}}({{\mathsf{Gr}}}F)_n$ to ${\mathrm{N}}_f({{\mathcal{D}}})_n$. Given $S \colon {\mathfrak{C}}[\Delta^n] \to {{\mathsf{Gr}}}F$, we first produce a functor $d \colon [n] \to {{\mathcal{D}}}$. For any $\{i < j\} \subseteq [n]$, we have a $0$-simplex $$S_{{\langle ij \rangle}} = (Fd_{ij} x_i \xrightarrow{x_{ij}} x_j , d_i \xrightarrow{d_{ij}} d_j) \in {{\mathsf{Gr}}}F \big((x_i, d_i), (x_j, d_j)\big)_0,$$ and for any $\{i < j < k\} \subseteq [n]$, we have a $1$-simplex $S_{{\langle ik|j \rangle}}$ from $S_{{\langle ik \rangle}}$ to $$S_{{\langle jk \rangle}}S_{{\langle ij \rangle}} = (Fd_{jk} Fd_{ij} x_i \xrightarrow{Fd_{jk} x_{ij}} Fd_{jk} x_j \xrightarrow{x_{jk}} x_k\;,\;\; d_i \xrightarrow{d_{ij}} d_j \xrightarrow{d_{jk}} d_k ).$$ But such a $1$-simplex includes the data of a $1$-simplex from $d_{ik}$ to $d_{jk}d_{ij}$ in the *discrete* simplicial set ${{\mathcal{D}}}(d_i, d_k)$. Thus $d_{ik}$ must be *equal* to $d_{jk} d_{ij}$, so the data of $\{ d_i \xrightarrow{d_{ij}} d_j \}_{i \leq j}$, where $d_{ii}$ is the identity, assembles into a functor $d \colon [n] \to {{\mathcal{D}}}$ as desired. Note that since $F$ is a functor, we also have $$Fd_{jk}\; Fd_{ij} = F(d_{jk}d_{ij}) = Fd_{ik}.$$ Next, for each non-empty subset $J \subseteq [n]$ with maximal element $j$, we need a simplicial functor $S^J \colon {\mathfrak{C}}[\Delta^J] \to Fd_j$. For each $i \in J$, set $$S^J_i := Fd_{ij}\, x_i \in Fd_j.$$ For each $r$-dimensional bead shape ${{\langle I_0|\dots|I_r \rangle}}$ of $\{i_0 < \dots < i_m\} \subseteq J$ with $m \geq 1$, we first note that $S_{{\langle I_0|\dots |I_r \rangle}}$ lies in the $d_{i_0 i_m}$ component $$Fd_{i_m}(Fd_{i_0 i_m}\, x_{i_0}, x_{i_m}) \subset {{\mathsf{Gr}}}F(S_{i_0}, S_{i_m})$$ because its sub-simplices (for instance $S_{{\langle i_0 i_m \rangle}}$) do too.
Define $$S^J_{{\langle I_0 | \dots | I_r \rangle}} := Fd_{i_m j}\, S_{{\langle I_0|\dots |I_r \rangle}}.$$ We verify that this lives in the correct simplicial set $$\begin{aligned} Fd_j(Fd_{i_m j}\; Fd_{i_0 i_m}\; x_{i_0}, Fd_{i_m j}\; x_{i_m}) &= Fd_j(Fd_{i_0 j} x_{i_0}, Fd_{i_m j} x_{i_m}) \\ &= Fd_j(S^J_{i_0}, S^J_{i_m}). \end{aligned}$$ The boundary of each $S^J_{{\langle I_0 | \dots | I_r \rangle}}$ is compatible with lower-dimensional data because the boundary of each $S_{{\langle I_0 | \dots | I_r \rangle}}$ is as well. We thus get a simplicial functor $S^J \colon {\mathfrak{C}}[\Delta^J] \to Fd_j$, and by construction, the functoriality of $F$ and $d$ implies that (\[eq:rel-nerve-explicit\]) holds. #### From ${\mathrm{N}}_f({{\mathcal{D}}})_n$ to ${\mathrm{N}}({{\mathsf{Gr}}}F)_n$. Conversely, suppose we have $d \colon [n] \to {{\mathcal{D}}}$ and $S^J \colon {\mathfrak{C}}[\Delta^J] \to Fd_j$ for every non-empty $J \subseteq [n]$ with maximal element $j$, satisfying (\[eq:rel-nerve-explicit\]). For each $i \in [n]$, let $S_i := (S^{\{i\}}_i, d_i)$, and for each $r$-dimensional bead shape ${{\langle I_0 | \dots | I_r \rangle}}$ of $I = \{i_0,\dots,i_m\} \subseteq [n]$ where $m \geq 1$, let $$S_{{\langle I_0| \dots |I_r \rangle}} := S^I_{{\langle I_0| \dots |I_r \rangle}}.$$ Then $S_{{\langle I_0| \dots | I_r \rangle}}$ is an $r$-simplex in $$Fd_{i_m}(S^I_{i_0}, S^I_{i_m}) = Fd_{i_m}(Fd_{i_0 i_m}\; S^{\{i_0\}}_{i_0}, S^{\{i_m\}}_{i_m}) \subset {{\mathsf{Gr}}}F(S_{i_0}, S_{i_m})$$ as desired, where we have used (\[eq:rel-nerve-explicit\]) in the first equality, and this data yields a simplicial functor $S \colon {\mathfrak{C}}[\Delta^n] \to {{\mathsf{Gr}}}F$. #### Mutual inverses. Finally, it is easy to see that the constructions described above are mutual inverses. For instance, we have $$\begin{aligned} S_{{\langle I_0 | \dots | I_r \rangle}} &= Fd_{ii}\, S_{{\langle I_0 | \dots | I_r \rangle}}, \\ S^J_{{\langle I_0 | \dots | I_r \rangle}} &= Fd_{ij}\, S^I_{{\langle I_0 | \dots | I_r \rangle}}. \end{aligned}$$ Thus ${\mathrm{N}}({{\mathsf{Gr}}}F)_n \cong {\mathrm{N}}_f({{\mathcal{D}}})_n$. In light of Proposition \[prop:rel-nerve-infty-gr\], we obtain: \[cor:gr-infty-gr\] Let $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$ be a functor such that each $Fd$ is locally Kan, and $f = {\mathrm{N}}F$. Then there is an equivalence of coCartesian fibrations $${\mathrm{N}}({{\mathsf{Gr}}}F) \simeq {{\mathsf{Gr}}}_\infty {\mathrm{N}}(f).$$ Operadic nerves of monoidal simplicial categories {#sec:monoid-struct} ================================================= Given a monoidal simplicial category ${{\mathcal{C}}}$, [@dag2]\*[1.6]{} describes the formation of a simplicial category ${{\mathcal{C}}}^\otimes$ equipped with an opfibration over $\Delta^{\mathrm{op}}$. The nerve of this opfibration is a coCartesian fibration ${\mathrm{N}}({{\mathcal{C}}}^\otimes) \to {\mathrm{N}}(\Delta^{\mathrm{op}})$ which has the structure of a monoidal quasicategory in the sense of [@dag2]\*[1.1.2]{}. Since this construction is exactly the operadic nerve of [@ha]\*[2.1.1]{} applied to the underlying simplicial operad of ${{\mathcal{C}}}$, we call ${\mathrm{N}}^\otimes({{\mathcal{C}}}) := {\mathrm{N}}({{\mathcal{C}}}^\otimes)$ the *operadic nerve of a monoidal simplicial category ${{\mathcal{C}}}$*. In this section, we apply the results of the previous section to further describe the process of obtaining ${\mathrm{N}}^\otimes({{\mathcal{C}}})$ from a *strict* monoidal ${{\mathcal{C}}}$.
We show that the opfibration ${{\mathcal{C}}}^\otimes \to \Delta^{\mathrm{op}}$ is the Grothendieck construction ${{\mathsf{Gr}}}\, {{\mathcal{C}}}^\bullet$ of a functor ${{\mathcal{C}}}^\bullet \colon \Delta^{\mathrm{op}}\to {{\mathsf{sCat}}}$, and hence conclude that the operadic nerve ${\mathrm{N}}^\otimes({{\mathcal{C}}})$ is the nerve of $\Delta^{\mathrm{op}}$ relative to $\Delta^{\mathrm{op}}\xrightarrow{{{\mathcal{C}}}^\bullet} {{\mathsf{sCat}}}\xrightarrow{{\mathrm{N}}} {{\mathsf{sSet}}}.$ Although the operadic nerve may be defined for any monoidal simplicial category ${{\mathcal{C}}}$, we restrict the discussion in this section to *strict* monoidal categories because the results of the previous section require strict functors ${{\mathcal{D}}}\to {{\mathsf{sCat}}}$ and ${{\mathcal{D}}}\to {{\mathsf{sSet}}}$ rather than pseudofunctors. ${{\mathcal{C}}}^\otimes$ and ${{\mathcal{C}}}^\bullet$ from a strict monoidal ${{\mathcal{C}}}$ ------------------------------------------------------------------------------------------------ We start by describing the opfibration ${{\mathcal{C}}}^\otimes \to \Delta^{\mathrm{op}}$ and the functor ${{\mathcal{C}}}^\bullet \colon \Delta^{\mathrm{op}}\to {{\mathsf{sCat}}}$ associated to a strict monoidal simplicial category ${{\mathcal{C}}}$. A [**strict monoidal simplicial category**]{} ${{\mathcal{C}}}$ is a monoid in $({{\mathsf{sCat}}}, \times, *)$. Let $\otimes \colon {{\mathcal{C}}}\times {{\mathcal{C}}}\to {{\mathcal{C}}}$ denote the monoidal product of ${{\mathcal{C}}}$ and ${\mathbf{1}}\colon * \to {{\mathcal{C}}}$ denote the monoidal unit, which we identify with an object ${\mathbf{1}}\in {{\mathcal{C}}}$. Let ${{\mathsf{Mon(sCat)}}}$ denote the category of strict monoidal simplicial categories, which is equivalently the category of monoids in ${{\mathsf{sCat}}}$. A strict monoidal simplicial category is thus a simplicial category with a strict monoidal structure that is *weakly compatible* in the sense of [@dag2]\*[1.6.1]{}. The strictness of the monoidal structure implies that we have equalities (rather than isomorphisms): $$\begin{aligned} (x\otimes y) \otimes z &= x \otimes (y \otimes z), & {\mathbf{1}}\otimes x &= x = x \otimes {\mathbf{1}}. \end{aligned}$$ Let $({{\mathcal{C}}},\otimes, {\mathbf{1}})$ be a strict monoidal simplicial category. Then we define a new category ${{\mathcal{C}}}^\otimes$ as follows: 1. An object of ${{\mathcal{C}}}^\otimes$ is a finite, possibly empty, sequence of objects of ${{\mathcal{C}}}$, denoted $[x_1,\ldots,x_n].$ 2. The simplicial set of morphisms from $[x_1,\ldots,x_n]$ to $[y_1,\ldots,y_m]$ in ${{\mathcal{C}}}^\otimes$ is defined to be $$\coprod_{f \in \Delta\left([m],[n]\right)}\; \prod_{1\leq i\leq m} {{\mathcal{C}}}\big(x_{f(i-1)+1}\otimes x_{f(i-1)+2} \otimes \cdots\otimes x_{f(i)}\;,\;\; y_i\big)$$ where $x_{f(i-1)+1}\otimes \cdots\otimes x_{f(i)}$ is taken to be ${\mathbf{1}}$ if $f(i-1) = f(i)$. A morphism will be denoted $[f; f_1,\dots, f_m]$, where $$x_{f(i-1)+1}\otimes \cdots\otimes x_{f(i)} {\xrightarrow{\quad f_i \quad}} y_i.$$ 3. Composition in ${{\mathcal{C}}}^{\otimes}$ is determined by composition in $\Delta$ and ${{\mathcal{C}}}$: $$\begin{aligned} [g; g_1, \dots g_\ell] \circ [f; f_1,\dots, f_m] &= [f\circ g\;; \;\; h_1, \dots, h_\ell], \\ \text{where} \quad h_i &= g_i \circ (f_{g(i-1)+1} \otimes \dots \otimes f_{g(i)}). \end{aligned}$$ This is associative and unital due to the associativity and unit constraints of $\otimes$. 
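To make the definition concrete, here is a small example of our own unwinding (not taken from [@dag2]): for the sequences $[x_1,x_2]$ and $[y_1]$ there is one summand of the morphism space for each of the six order-preserving maps $f \colon [1] \to [2]$, $$ {{\mathcal{C}}}^{\otimes}\big([x_1,x_2],[y_1]\big) \;\cong\; {{\mathcal{C}}}(x_1\otimes x_2,\, y_1) \;\amalg\; {{\mathcal{C}}}(x_1,\, y_1) \;\amalg\; {{\mathcal{C}}}(x_2,\, y_1) \;\amalg\; \coprod_{3} {{\mathcal{C}}}({\mathbf{1}},\, y_1), $$ where the first three summands correspond to $(f(0),f(1)) = (0,2)$, $(0,1)$ and $(1,2)$, and the last coproduct has one copy of ${{\mathcal{C}}}({\mathbf{1}},y_1)$ for each of the three maps with $f(0)=f(1)$. Composing a morphism in the $(0,2)$ summand with a morphism $[y_1] \to [z_1]$ lying over the identity of $[1]$ simply post-composes the underlying map $x_1\otimes x_2 \to y_1$ with $y_1 \to z_1$ in ${{\mathcal{C}}}$, as the composition formula above prescribes.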
Though we don’t make it explicit here, ${{\mathcal{C}}}^\otimes$ is the category of operators (in the sense of [@maythom] and [@gephaugenriched]\*[2.2.1]{}) of the underlying simplicial multicategory (cf. [@gephaugenriched]\*[3.1.6]{}) of ${{\mathcal{C}}}$. There is a forgetful functor $P \colon {{\mathcal{C}}}^\otimes \to \Delta^{\mathrm{op}}$ sending $[x_1,\dots, x_n]$ to $[n]$ which is an (unenriched) opfibration of categories [@dag2]\*[1.1(M1)]{}. The proof of that statement can easily be modified to show: \[prop:Cotimesopfib\] The functor $P \colon {{\mathcal{C}}}^\otimes\to \Delta^{\mathrm{op}}$ is a simplicial opfibration. Replace all hom-sets by hom-*simplicial*-sets in [@dag2]\*[1.1(M1)]{}. In fact, we may choose $P$-coCartesian lifts so that $P$ is a *split* simplicial opfibration[^2]: given $[x_1, \dots, x_n] \in {{\mathcal{C}}}^\otimes$ and a map $f \colon [m] \to [n]$, let $$\label{eq:yi} y_i = x_{f(i-1)+1} \otimes \dots \otimes x_{f(i)}$$ for all $1 \leq i \leq m$. Then $[f; 1_{y_1}, \dots, 1_{y_m}]$ is a $P$-coCartesian lift of $f$. By the enriched Grothendieck correspondence [@beardswong]\*[Theorem 5.6]{}, the split simplicial opfibration $P\colon {{\mathcal{C}}}^\otimes \to \Delta^{\mathrm{op}}$ with this choice of coCartesian lifts arises from a functor ${{\mathcal{C}}}^\bullet \colon \Delta^{\mathrm{op}}\to {{\mathsf{sCat}}}$ which we now describe. \[def:C-faces\] Let ${{\mathcal{C}}}$ be a strict monoidal simplicial category with monoidal product, unit and terminal morphisms the simplicial functors $\mu_{{{\mathcal{C}}}}\colon {{\mathcal{C}}}\times{{\mathcal{C}}}\to {{\mathcal{C}}}$, $\eta\colon {\mathbf{1}}\to {{\mathcal{C}}}$ and $\varepsilon\colon {{\mathcal{C}}}\to {\mathbf{1}}$ respectively. Then for each $0\leq i\leq n$ we define the functor ${{\mathcal{C}}}^{\delta_i}\colon {{\mathcal{C}}}^n\to {{\mathcal{C}}}^{n-1}$ to be: 1. the application of $\mu_{{{\mathcal{C}}}}$ to the $i^{th}$ and $i+1^{st}$ coordinates of ${{\mathcal{C}}}^n$, and the identity in all other coordinates, in the case that $0<i<n$; 2. the application of $\varepsilon$ to the first coordinate (if $i=0$) or to the $n^{th}$ coordinate (if $i=n$), and the identity in all other coordinates. In the other direction, for each $0\leq i\leq n-1$, we define a functor ${{\mathcal{C}}}^{\sigma_i}\colon {{\mathcal{C}}}^{n-1}\to{{\mathcal{C}}}^{n}$ to be the isomorphism ${{\mathcal{C}}}^{n-1}\cong{{\mathcal{C}}}^{i}\times {\mathbf{1}}\times {{\mathcal{C}}}^{n-1-i}$ followed by the application of the unit $\eta$ to the ${\mathbf{1}}$ factor, ${{\mathcal{C}}}^{i}\times{\mathbf{1}}\times{{\mathcal{C}}}^{n-1-i}\to {{\mathcal{C}}}^{n}$. \[def:C-bullet\] Let ${{\mathcal{C}}}$ be a strict monoidal simplicial category. Then define the functor ${{\mathcal{C}}}^{\bullet}\colon \Delta^{op}\to {{\mathsf{sCat}}}$ to be the one that takes $[n]$ to ${{\mathcal{C}}}^n$, the face maps $\delta_i\colon [n-1]\to[n]$ to the functors ${{\mathcal{C}}}^{\delta_i}:{{\mathcal{C}}}^n\to {{\mathcal{C}}}^{n-1}$ and the degeneracy maps $\sigma_i\colon [n]\to [n-1]$ to the functors ${{\mathcal{C}}}^{\sigma_i}\colon {{\mathcal{C}}}^{n-1}\to {{\mathcal{C}}}^{n}$, where ${{\mathcal{C}}}^{\delta_i}$ and ${{\mathcal{C}}}^{\sigma_i}$ are as in Definition \[def:C-faces\]. The fact that ${{\mathcal{C}}}$ is a strict monoid in ${{\mathsf{sCat}}}$ implies that the functors ${{\mathcal{C}}}^{\delta_i}$ and ${{\mathcal{C}}}^{\sigma_i}$ satisfy the simplicial identities. This is not difficult to check but is tedious, so we will not include a proof of it. \[rem:Cf\] More generally, let $f \colon [m] \to [n]$ be a morphism in $\Delta$.
Then by decomposing $f$ into a finite composition of face and degeneracy maps, we have that ${{\mathcal{C}}}^f \colon {{\mathcal{C}}}^n \to {{\mathcal{C}}}^m$ is the functor that sends $(x_1,\dots, x_n)$ to $(y_1, \dots, y_m)$ where $y_i$ is given by (\[eq:yi\]), and (when restricted to zero simplices) sends $(\varphi_1, \dots, \varphi_n)$ to $(\psi_1, \dots, \psi_m)$ where $$\psi_i = \varphi_{f(i-1)+1} \otimes \dots \otimes \varphi_{f(i)}.$$ \[lem:C-otimes-bullet\] For a strict monoidal simplicial category ${{\mathcal{C}}}$, there is an isomorphism of simplicial categories $${{\mathcal{C}}}^\otimes \cong {{\mathsf{Gr}}}\, {{\mathcal{C}}}^\bullet.$$ This follows directly from the definitions of ${{\mathcal{C}}}^\otimes$, ${{\mathcal{C}}}^\bullet$ and ${{\mathsf{Gr}}}$. Explicitly, first notice that there is a bijection on objects $F\colon {{\mathsf{Ob}}}({{\mathcal{C}}}^\otimes)\to {{\mathsf{Ob}}}({{\mathsf{Gr}}}{{\mathcal{C}}}^\bullet)$ given by $$F([x_1,\ldots,x_n])=((x_1,\ldots,x_n),[n])\in\!\!\! \coprod_{\;\;\, [m] \in \Delta^{op} \;\;\,} {{\mathsf{Ob}}}({{\mathcal{C}}}^m)\times\{[m]\}.$$ The space of morphisms from $((x_1,\ldots,x_m),[m])$ to $((y_1,\ldots,y_n),[n])$ in ${{\mathsf{Gr}}}({{\mathcal{C}}}^\bullet)$ is, by definition, the coproduct $$\coprod_{\varphi \colon [n] \to [m]} {{\mathcal{C}}}^n({{\mathcal{C}}}^\varphi(x_1,\ldots,x_m),(y_1,\ldots, y_n))\times\{\varphi\},$$ which is clearly isomorphic to $$\coprod_{\varphi \colon [n] \to [m]} {{\mathcal{C}}}^n({{\mathcal{C}}}^\varphi(x_1,\ldots,x_m),(y_1,\ldots, y_n)).$$ By using Definition \[def:C-bullet\], Remark \[rem:Cf\] and the fact that the mapping spaces of a product of categories are the product of mapping spaces, it is easy to see that this last expression is equal to $$\coprod_{\varphi\colon [n]\to[m]}\; \prod_{1\leq i\leq n} {{\mathcal{C}}}\big(x_{\varphi(i-1)+1}\otimes x_{\varphi(i-1)+2} \otimes \cdots\otimes x_{\varphi(i)}\;,\;\; y_i\big)\,.$$ In fact, the results of this subsection hold more generally for monoidal ${{\mathcal{V}}}$-categories, where ${{\mathcal{V}}}$ satisfies the hypotheses of [@beardswong], but we will not need this level of generality. The operadic nerve ${\mathrm{N}}^\otimes$ ----------------------------------------- We now suppose that ${{\mathcal{C}}}$ is a strict monoidal *fibrant* (i.e. locally Kan) simplicial category. Then ${{\mathcal{C}}}^\otimes$ is a fibrant simplicial category as well, so the simplicial nerves of ${{\mathcal{C}}}$ and ${{\mathcal{C}}}^\otimes$ are both quasicategories. Let $({{\mathcal{C}}}, \otimes)$ be a strict monoidal fibrant simplicial category. The [**operadic nerve of ${{\mathcal{C}}}$ with respect to $\otimes$**]{} is the quasicategory $${\mathrm{N}}^\otimes({{\mathcal{C}}}) := {\mathrm{N}}({{\mathcal{C}}}^\otimes).$$ Combining Propositions \[prop:opfibtococart\] and \[prop:Cotimesopfib\] with $p := {\mathrm{N}}(P)$, we obtain: There is a coCartesian fibration $p \colon {\mathrm{N}}^\otimes({{\mathcal{C}}}) \to {\mathrm{N}}(\Delta^{\mathrm{op}})$. In fact, $p$ defines a monoidal structure on ${\mathrm{N}}({{\mathcal{C}}})$ in the following sense: A [**monoidal quasicategory**]{} is a coCartesian fibration of simplicial sets $p:X\to N(\Delta^{\mathrm{op}})$ such that for each $n \geq 0$, the functors $X_{[n]}\to X_{\{i,i+1\}}$ induced by $\{i, i+1\} \hookrightarrow [n]$ determine an equivalence of quasicategories $$X_{[n]}{\xrightarrow{\quad \simeq \quad}} X_{\{0,1\}}\times\cdots\times X_{\{n-1,n\}} \cong (X_{[1]})^n,$$ where $X_{[n]}$ denotes the fiber of $p$ over $[n]$.
In this case, we say that $p$ defines a [**monoidal structure on $X_{[1]}$**]{}. If ${{\mathcal{C}}}$ is a strict monoidal fibrant simplicial category then $p \colon {\mathrm{N}}^\otimes({{\mathcal{C}}}) \to {\mathrm{N}}(\Delta^{\mathrm{op}})$ defines a monoidal structure on the quasicategory ${\mathrm{N}}({{\mathcal{C}}})\cong ({\mathrm{N}}^\otimes({{\mathcal{C}}}))_{[1]}$. The [**quasicategory of monoidal quasicategories**]{} is the full subquasicategory ${{\mathsf{MonCat}}}_\infty \subset {{\mathsf{coCart}}}_{/{\mathrm{N}}(\Delta^{\mathrm{op}})}$ containing the monoidal quasicategories. Let ${{\mathcal{C}}}$ be a strict monoidal fibrant simplicial category. The [**vertex associated to ${{\mathcal{C}}}$**]{} in ${{\mathsf{MonCat}}}_\infty$ or ${{\mathsf{coCart}}}_{/{\mathrm{N}}(\Delta^{\mathrm{op}})}$ is the vertex corresponding to $p \colon {\mathrm{N}}^\otimes({{\mathcal{C}}}) \to {\mathrm{N}}(\Delta^{\mathrm{op}})$. By Definition \[def:cocartqcat\], the vertex associated to ${{\mathcal{C}}}$ is equivalently the vertex corresponding to ${\mathrm{N}}^\otimes({{\mathcal{C}}})^\natural \to {\mathrm{N}}(\Delta^{\mathrm{op}})^\sharp$ in ${\mathrm{N}}\big(({{\mathsf{sSet}}}^+)_{/S}\big)^\circ$. Note that, by [@htt]\*[3.1.4.1]{}, the assignment $(X \to S) \mapsto (X^\natural \to S^\sharp)$ is injective up to isomorphism. Finally, we tie together the results of this and the previous sections. \[cor:NC-GrC\] Let ${{\mathcal{C}}}$ be a strict monoidal fibrant simplicial category, and let $\xi$ be the composite $\Delta^{\mathrm{op}}{\xrightarrow{{{\mathcal{C}}}^\bullet}} {{\mathsf{sCat}}}{\xrightarrow{{\mathrm{N}}}} {{\mathsf{sSet}}}$. Then we have the following string of isomorphisms and equivalences: $$\label{eq:op-rel-nerve} {\mathrm{N}}^\otimes({{\mathcal{C}}}) \cong {\mathrm{N}}({{\mathsf{Gr}}}\, {{\mathcal{C}}}^\bullet) \cong {\mathrm{N}}_{\xi}(\Delta^{\mathrm{op}}) \simeq {{\mathsf{Gr}}}_\infty{\mathrm{N}}(\xi).$$ The preceding Corollary and the $\infty$-categorical Grothendieck correspondence (\[cor:gr-corr-infty\]) suggest that we may equivalently define a monoidal quasicategory to be $\xi \in ({{\mathsf{Cat}}}_\infty)^{{\mathrm{N}}(\Delta^{\mathrm{op}})}$ such that the maps $$\xi([n]) {\xrightarrow{ \xi\left(\{i, i+1\} \hookrightarrow [n]\right) }} \xi(\{i,i+1\})$$ induce an equivalence $$\xi({[n]}) {\xrightarrow{\quad \simeq \quad}} \xi({\{0,1\}}) \times\cdots\times \xi({\{n-1,n\}}) \cong \xi({[1]})^n.$$ We have worked entirely on the level of *objects* as we are only interested in understanding the operadic nerve of one monoidal simplicial category at a time. 
However, we believe it should be possible to show that these constructions and equivalences are *functorial*, so that the following diagram is an actual commuting diagram of functors between appropriately defined categories or quasicategories: $$\begin{tikzcd} {{\mathsf{Mon(sCat)}}}\ar[r, "(-)^\bullet"] \ar[rr, bend right = 15, "(-)^\otimes"' description, near end] \ar[rrr, bend right = 20, "{\mathrm{N}}^\otimes"' description, near end] & {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}} \ar[r, "{{\mathsf{Gr}}}"] & {{\mathsf{opFib}}}_{/\Delta^{\mathrm{op}}} \ar[r, "{\mathrm{N}}"] & {{\mathsf{coCart}}}_{/{\mathrm{N}}(\Delta^{\mathrm{op}})} \end{tikzcd}$$ For an ordinary category ${{\mathcal{D}}}$, we also believe that there is a model structure on ${{\mathsf{sCat}}}_{/{{\mathcal{D}}}}$ whose fibrant objects are simplicial opfibrations (or the analog for a suitable version of *marked* simplicial categories), along with a Quillen adjunction between ${{\mathsf{sCat}}}_{/D}$ and $({{\mathsf{sSet}}}^+)_{/{\mathrm{N}}({{\mathcal{D}}})}$ whose restriction to fibrant objects picks out the maps arising as nerves of simplicial opfibrations. Opposite functors {#sec:op-func} ================= Finally, we turn to the question which motivated this paper: how does the operadic nerve interact with taking opposites? Recall that there is an involution on the category of small categories ${\mathrm{op}}\colon{{\mathsf{Cat}}}\to{{\mathsf{Cat}}}$ which takes a category to its opposite. There are higher categorical generalizations of this functor to the category of simplicial sets and the category of simplicially enriched categories, which we review in turn. Opposites of (monoidal) simplicial categories --------------------------------------------- Given a simplicial category ${{\mathcal{C}}}\in {{\mathsf{sCat}}}$, let ${{\mathcal{C}}}^{\mathrm{op}}$ denote the category with the same objects as ${{\mathcal{C}}}$, and morphisms $${{\mathcal{C}}}^{\mathrm{op}}(x,y) := {{\mathcal{C}}}(y,x).$$ Let ${\mathrm{op}}_s\colon {{\mathsf{sCat}}}\to {{\mathsf{sCat}}}$ be the functor sending ${{\mathcal{C}}}$ to ${\mathrm{op}}_s({{\mathcal{C}}}) := {{\mathcal{C}}}^{\mathrm{op}}$, and sending a simplicial functor $F$ to the simplicial functor $F^{\mathrm{op}}$ given by $F^{\mathrm{op}}x:= Fx$ and $F^{\mathrm{op}}_{x,y} := F_{y,x}$. We note a few immediate properties of opposites. \[lem:op-s-selfadj\] The functor ${\mathrm{op}}_s$ is self-adjoint. Let ${{\mathcal{C}}}$ be a simplicial category. If ${{\mathcal{C}}}$ is fibrant, then so is ${{\mathcal{C}}}^{\mathrm{op}}$. Let ${{\mathcal{C}}}$ be a strict monoidal simplicial category. Then ${{\mathcal{C}}}^{\mathrm{op}}$ is canonically a strict monoidal simplicial category as well. Given $x,y \in {{\mathcal{C}}}^{\mathrm{op}}$, define their tensor product to be the same object as their tensor in ${{\mathcal{C}}}$. One can check that this extends to a monoidal structure on ${{\mathcal{C}}}^{\mathrm{op}}$. Alternatively, since ${\mathrm{op}}_s$ is self-adjoint, it preserves limits and colimits of simplicial categories. In particular, it preserves the Cartesian product, and is therefore a monoidal functor from $({{\mathsf{sCat}}}, \times)$ to itself. It thus preserves monoids in ${{\mathsf{sCat}}}$. Since the same object represents the tensor product of $x$ and $y$ in ${{\mathcal{C}}}$ or ${{\mathcal{C}}}^{\mathrm{op}}$, we will use the same symbol $\otimes$ to denote the tensor product in either category. 
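Spelling this out at the level of hom-objects (a routine unwinding which we record for convenience), the monoidal product of ${{\mathcal{C}}}^{\mathrm{op}}$ is just that of ${{\mathcal{C}}}$ applied to the reversed hom-objects: $$ {{\mathcal{C}}}^{\mathrm{op}}(x,y)\times{{\mathcal{C}}}^{\mathrm{op}}(x',y') \;=\; {{\mathcal{C}}}(y,x)\times{{\mathcal{C}}}(y',x') \;\xrightarrow{\;\otimes\;}\; {{\mathcal{C}}}(y\otimes y',\, x\otimes x') \;=\; {{\mathcal{C}}}^{\mathrm{op}}(x\otimes x',\, y\otimes y'), $$ and strict associativity and unitality of this product are inherited directly from those of ${{\mathcal{C}}}$.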
The functor ${\mathrm{op}}_s \colon {{\mathsf{sCat}}}\to {{\mathsf{sCat}}}$ induces functors $(-)^{\mathrm{op}}\colon {{\mathsf{Mon(sCat)}}}\to {{\mathsf{Mon(sCat)}}}$ and $(-)^{\mathrm{op}}\colon {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}} \to {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}}$, where the latter is composition with ${\mathrm{op}}_s$. We wish to show that these functors commute with the construction ${{\mathcal{C}}}\mapsto {{\mathcal{C}}}^\bullet$ of Definition \[def:C-bullet\]. \[lem:C-bullet-op\] Let ${{\mathcal{C}}}$ be a strict monoidal simplicial category. Then $$({{\mathcal{C}}}^\bullet)^{\mathrm{op}}= ({{\mathcal{C}}}^{\mathrm{op}})^\bullet,$$ i.e. the following diagram commutes on objects. $$\begin{tikzcd}[row sep = large] {{\mathsf{Mon(sCat)}}}\ar[r, "(-)^\bullet"] \ar[d, "{\mathrm{op}}" description] & {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}} \ar[d, "{\mathrm{op}}" description] \\ {{\mathsf{Mon(sCat)}}}\ar[r, "(-)^\bullet"] & {{\mathsf{sCat}}}^{\Delta^{\mathrm{op}}} \end{tikzcd}$$ The objects of both $({{\mathcal{C}}}^n)^{\mathrm{op}}$ and $({{\mathcal{C}}}^{\mathrm{op}})^n$ are $n$-tuples $(x_1, \dots, x_n)$ where $x_i \in {{\mathcal{C}}}$, while the simplicial set of morphisms from $(x_1, \dots, x_n)$ to $(y_1,\dots, y_n)$ are both ${{\mathcal{C}}}(y_1,x_1) \times \dots \times {{\mathcal{C}}}(y_n, x_n)$, so $({{\mathcal{C}}}^n)^{\mathrm{op}}= ({{\mathcal{C}}}^{\mathrm{op}})^n$. Therefore $({{\mathcal{C}}}^{op})^\bullet$ and $({{\mathcal{C}}}^\bullet)^{op}$ agree on objects $[n]\in\Delta^{op}$. Now consider the face and degeneracy morphisms of $\Delta$ under the two functors $({{\mathcal{C}}}^\bullet)^{op}\colon \Delta^{op}\to{{\mathsf{sCat}}}$ and $({{\mathcal{C}}}^{op})^\bullet\colon \Delta^{op}\to{{\mathsf{sCat}}}$. In the first case, they are taken to, respectively, an application of the opposite monoidal structure to the $i^{th}$ and $i+1^{st}$ coordinates $({{\mathcal{C}}}^{\delta_i})^{op}\colon ({{\mathcal{C}}}^n)^{op}\to({{\mathcal{C}}}^{n-1})^{op}$ and an application of the “opposite” unit in the $i^{th}$ coordinate $({{\mathcal{C}}}^{\sigma_i})^{op}\colon ({{\mathcal{C}}}^{n-1})^{op}\to ({{\mathcal{C}}}^{n})^{op}$. Because the monoidal structure of ${{\mathcal{C}}}^{op}$ is by definition the opposite of the monoidal structure of ${{\mathcal{C}}}$, and both the identity maps and the unit maps are self-dual under $op$ (and the fact that $op$ is self-adjoint so preserves products up to equality), it is clear that these are equal to $({{\mathcal{C}}}^{op})^{\delta_i}$ and $({{\mathcal{C}}}^{op})^{\sigma_i}$ respectively. The diagram above is an actual commuting diagram of functors, but we will not show this here, since we have not fully described the functorial nature of $(-)^{\mathrm{op}}$. We note also that opposites commute with the simplicially enriched Grothendieck construction, but we will not need this result in the rest of the paper. \[def:sfib-op\] Let $P \colon {{\mathcal{E}}}\to {{\mathcal{D}}}$ be a simplicial opfibration. The [**fiberwise opposite**]{} of $P$ is the simplicial opfibration $P_{\mathrm{op}}\colon {{\mathcal{E}}}_{\mathrm{op}}\to {{\mathcal{D}}}$ given by $${{\mathsf{Gr}}}\circ {\mathrm{op}}_s \circ {{\mathsf{Gr}}}^{-1}(P).$$ Note that we have deliberately avoided writing $P^{\mathrm{op}}$ and ${{\mathcal{E}}}^{\mathrm{op}}$, since these mean the direct application of ${\mathrm{op}}_s$ to $P$ and ${{\mathcal{E}}}$, which is not what we want. Let ${{\mathcal{C}}}$ be a strict monoidal simplicial category. 
Then $$({{\mathcal{C}}}^\otimes)_{\mathrm{op}}\cong ({{\mathcal{C}}}^{\mathrm{op}})^\otimes.$$ Apply ${{\mathsf{Gr}}}$ to Lemma \[lem:C-bullet-op\], and note that ${{\mathsf{Gr}}}\, ({{\mathcal{C}}}^\bullet)^{\mathrm{op}}\cong ({{\mathcal{C}}}^\otimes)_{\mathrm{op}}$. Opposites of $\infty$-categories -------------------------------- We now turn to opposites of simplicial sets and quasicategories, and relate these to opposites of simplicial categories. In this and the next subsection, we will make frequent use of the notation and results of \[sec:models\] and \[sec:st-un\], so the reader is encouraged to review them before proceeding. To avoid unnecessary complexity in our exposition and proofs, we will freely use the fact that $\Delta$, the simplex category, is equivalent to the category ${\mathsf{floSet}}$ of finite, linearly ordered sets and order-preserving functions between them. In fact, $\Delta$ is a *skeleton* of ${\mathsf{floSet}}$, so the equivalence is given by the inclusion $\Delta\hookrightarrow{\mathsf{floSet}}$. Define the functor ${\mathsf{rev}}\colon\Delta\to \Delta$ to be the functor that takes a finite linearly ordered set to the same set with the reverse ordering. Then given $X\in {{\mathsf{sSet}}}= \mathsf{Fun}(\Delta^{\mathrm{op}},{\mathsf{Set}})$, we define ${\mathrm{op}}_\Delta X$ to be the simplicial set $X\circ {\mathsf{rev}}^{\mathrm{op}}$. This defines a functor ${\mathrm{op}}_\Delta\colon {{\mathsf{sSet}}}\to {{\mathsf{sSet}}}$. We will often write $X^{\mathrm{op}}$ instead of ${\mathrm{op}}_\Delta X$. Define the functor ${\mathrm{op}}_\Delta^+\colon {{\mathsf{sSet}}}^+\to {{\mathsf{sSet}}}^+$ to be the functor that takes a marked simplicial set $(X,W)$ to $({\mathrm{op}}_\Delta X,W)$, where we use the fact that there is a bijection between the 1-simplices of ${\mathrm{op}}_\Delta X$ and those of $X$. \[lem:opselfadj\] The functors ${\mathrm{op}}_\Delta$ and ${\mathrm{op}}_\Delta^+$ are self-adjoint. If $X$ is a quasicategory, then so is $X^{\mathrm{op}}$. The functors ${\mathrm{op}}_s, {\mathrm{op}}_\Delta$ and ${\mathrm{op}}_\Delta^+$ are related in the following manner: \[lem:opcommute\] The following diagram commutes: $$\begin{tikzcd} {{\mathsf{sCat}}}\ar[r, "{\mathrm{N}}"] \ar[d, "{\mathrm{op}}_s"'] & {{\mathsf{sSet}}}\ar[r, "(-)^\natural"] \ar[d, "{\mathrm{op}}_\Delta"] & {{\mathsf{sSet}}}^+ \ar[d, "{\mathrm{op}}_\Delta^+"] \\ {{\mathsf{sCat}}}\ar[r, "{\mathrm{N}}"'] & {{\mathsf{sSet}}}\ar[r, "(-)^\natural"'] & {{\mathsf{sSet}}}^+ \end{tikzcd}$$ The right-hand square of the above diagram obviously commutes, so it only remains to show that ${\mathrm{N}}\circ {\mathrm{op}}_s\cong {\mathrm{op}}_\Delta\circ {\mathrm{N}}$. Recall that the nerve of a simplicial category ${{\mathcal{C}}}$ is the simplicial set determined by the formula $${{\mathsf{Hom}}}_{{{\mathsf{sSet}}}}(\Delta^n,{\mathrm{N}}{{\mathcal{C}}}) = {{\mathsf{Hom}}}_{{{\mathsf{sCat}}}}({\mathfrak{C}}[\Delta^n],{{\mathcal{C}}})$$ where ${\mathfrak{C}}[\Delta^n]$ is the value of the functor ${\mathfrak{C}}\colon \Delta\to {{\mathsf{sCat}}}$ defined in [@htt]\*[1.1.5.1, 1.1.5.3]{} at the finite linearly ordered set $\{0<1<\cdots<n\}$. Moreover, by extending along the Yoneda embedding $\Delta\to {{\mathsf{sSet}}}$, we obtain (cf. the discussion following Example 1.1.5.8 of [@htt]) a colimit-preserving functor ${\mathfrak{C}}\colon{{\mathsf{sSet}}}\to {{\mathsf{sCat}}}$ which is left adjoint to ${\mathrm{N}}$. This justifies using the notation ${\mathfrak{C}}[\Delta^n]$ for the application of ${\mathfrak{C}}$ to $\{0<1<\cdots<n\}$. It is not hard to check from the definitions that, for any finite linearly ordered set $I$, the simplicial categories ${\mathfrak{C}}[I]^{\mathrm{op}}$ and ${\mathfrak{C}}[I^{\mathrm{op}}]$ are equal and that this identification is natural with respect to the morphisms of $\Delta$.
So by using this fact, the fact that ${\mathfrak{C}}\dashv{\mathrm{N}}$, and liberally applying the self-adjointness of ${\mathrm{op}}_s, {\mathrm{op}}_\Delta$ and ${\mathrm{op}}_\Delta^+$ (Lemmas \[lem:op-s-selfadj\] and \[lem:opselfadj\]), we have the following sequence of isomorphisms: $$\begin{aligned} {{\mathsf{Hom}}}_{{{\mathsf{sSet}}}}(\Delta^n,{\mathrm{N}}({{\mathcal{C}}})^{\mathrm{op}})&\cong {{\mathsf{Hom}}}_{{{\mathsf{sSet}}}}((\Delta^n)^{\mathrm{op}},{\mathrm{N}}({{\mathcal{C}}}))\\ &\cong {{\mathsf{Hom}}}_{{{\mathsf{sCat}}}}({\mathfrak{C}}[(\Delta^n)^{\mathrm{op}}],{{\mathcal{C}}})\\ &\cong {{\mathsf{Hom}}}_{{{\mathsf{sCat}}}}({\mathfrak{C}}[\Delta^{n}]^{\mathrm{op}},{{\mathcal{C}}})\\ &\cong {{\mathsf{Hom}}}_{{{\mathsf{sCat}}}}({\mathfrak{C}}[\Delta^n],{{\mathcal{C}}}^{\mathrm{op}})\\ &\cong {{\mathsf{Hom}}}_{{{\mathsf{sSet}}}}(\Delta^n,{\mathrm{N}}({{\mathcal{C}}}^{\mathrm{op}})). \end{aligned}$$ All of our constructions are natural with respect to the morphisms of $\Delta$, so we have the result. \[cor:F-f-op\] Let $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$ be a functor such that each $Fd$ is fibrant, and let $f = {\mathrm{N}}F$. Then $$f^{\mathrm{op}}= ({\mathrm{N}}F)^{\mathrm{op}}\cong {\mathrm{N}}(F^{\mathrm{op}}).$$ Let $f \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}$ be a functor such that each $fd$ is a quasicategory. Then $$(f^{\mathrm{op}})^\natural = (f^\natural)^{\mathrm{op}}.$$ The preceding Corollary is about functors ${{\mathcal{D}}}\to {{\mathsf{sSet}}}$ taking values in quasicategories. Taking the nerve of such a functor, we obtain a *vertex* in the quasicategory $({{\mathsf{Cat}}}_\infty)^{{\mathrm{N}}({{\mathcal{D}}})}$. From now on, we restrict ourselves to the quasicategories ${{\mathsf{Cat}}}_\infty, ({{\mathsf{Cat}}}_\infty)^{{\mathrm{N}}({{\mathcal{D}}})}$ and ${{\mathsf{coCart}}}_{/{\mathrm{N}}({{\mathcal{D}}})}$, so that all future statements are about *vertices* in these quasicategories. By [@barwickschommerpries Theorem 7.2], there is a unique-up-to-homotopy non-identity involution of the quasicategory ${{\mathsf{Cat}}}_\infty$, as it is a theory of $(\infty,1)$-categories. Thus, this involution, which we denote ${\mathrm{op}}_{\infty}$, must be equivalent to the nerve of ${\mathrm{op}}_\Delta^+$. So we have the following lemma: Let ${\mathrm{op}}_\infty \colon {{\mathsf{Cat}}}_\infty \to {{\mathsf{Cat}}}_\infty$ denote the above involution on ${{\mathsf{Cat}}}_\infty$. Then ${\mathrm{op}}_\infty \simeq {\mathrm{N}}({\mathrm{op}}_\Delta^+)$. \[cor:markedop\] Let $f \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}$ be a functor such that each $fd$ is a quasicategory, and continue to write $f$ for $f^\natural \colon {{\mathcal{D}}}\to {{\mathsf{sSet}}}^+$. In the quasicategory $({{\mathsf{Cat}}}_\infty)^{{\mathcal{D}}}$, we have an equivalence $${\mathrm{N}}(f^{\mathrm{op}}) \simeq {\mathrm{N}}(f)^{\mathrm{op}},$$ where $f^{\mathrm{op}}= {\mathrm{op}}_\Delta^+ \circ f$ and ${\mathrm{N}}(f)^{\mathrm{op}}= {\mathrm{op}}_\infty \circ {\mathrm{N}}(f)$. By the functoriality of the (large) simplicial nerve functor and the previous Lemma, we have ${\mathrm{N}}(f^{\mathrm{op}}) \simeq {\mathrm{N}}(op_\Delta^+)\circ {\mathrm{N}}(f) \simeq {\mathrm{op}}_\infty \circ {\mathrm{N}}(f)$. 
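Before moving on, it may help to record the standard explicit description of ${\mathrm{op}}_\Delta$; this is a direct unwinding of the definition given earlier in this subsection, included only for the reader's convenience. For a simplicial set $X$, the simplicial set $X^{\mathrm{op}}$ has the same simplices as $X$, i.e. $(X^{\mathrm{op}})_n = X_n$, and since ${\mathsf{rev}}(\delta_i) = \delta_{n-i}$ and ${\mathsf{rev}}(\sigma_i) = \sigma_{n-i}$, its face and degeneracy maps are obtained by reversing indices: $$d_i^{X^{\mathrm{op}}} = d_{n-i}^{X} \colon X_n \to X_{n-1}, \qquad s_i^{X^{\mathrm{op}}} = s_{n-i}^{X} \colon X_n \to X_{n+1}, \qquad 0 \le i \le n.$$ In particular, for the nerve of an ordinary category this recovers the usual opposite category: an $n$-simplex $x_0 \to x_1 \to \dots \to x_n$ is read backwards. 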
Opposites of fibrations and monoidal quasicategories ---------------------------------------------------- We now define fiberwise opposites of a coCartesian fibration, in a manner similar to Definition \[def:sfib-op\], keeping in mind that we need to work within the quasicategory ${{\mathsf{coCart}}}_{/S}$. \[def:cocartfib-op\] Let $p \colon X \to S$ be a coCartesian fibration of quasicategories, treated as a vertex of ${{\mathsf{coCart}}}_{/S}$. The [**fiberwise opposite**]{} of $p$ is the coCartesian fibration corresponding to the vertex $${{\mathsf{Gr}}}_\infty \circ {\mathrm{op}}_\infty \circ {{\mathsf{Gr}}}_\infty^{-1} (p) \in {{\mathsf{coCart}}}_{/S}.$$ Denote this coCartesian fibration by $p_{\mathrm{op}}\colon X_{\mathrm{op}}\to S$. (Again, we do not write $p^{\mathrm{op}}$ or $X^{\mathrm{op}}$, since these refer to the direct application of ${\mathrm{op}}_\Delta^{+}$). \[thm:F-op-commute\] Let $F \colon {{\mathcal{D}}}\to {{\mathsf{sCat}}}$ be a functor such that each $Fd$ is fibrant. In the quasicategory ${{\mathsf{coCart}}}_{/{\mathrm{N}}({{\mathcal{D}}})}$, there is an equivalence of vertices $${\mathrm{N}}{{\mathsf{Gr}}}(F^{\mathrm{op}}) \simeq {\mathrm{N}}{{\mathsf{Gr}}}\, (F)_{\mathrm{op}},$$ i.e. the following diagram commutes on objects, and up to equivalence in ${{\mathsf{coCart}}}_{/{\mathrm{N}}({{\mathcal{D}}})}$. $$\begin{tikzcd}[row sep = large] {{\mathsf{sCat}}}^{{{\mathcal{D}}}} \ar[r, "{{\mathsf{Gr}}}"] \ar[d, "{\mathrm{op}}" description] & {{\mathsf{opFib}}}_{/{{\mathcal{D}}}} \ar[r, "{\mathrm{N}}"] & {{\mathsf{coCart}}}_{/{\mathrm{N}}({{\mathcal{D}}})} \ar[d, "{\mathrm{op}}" description] \\ {{\mathsf{sCat}}}^{{{\mathcal{D}}}} \ar[r, "{{\mathsf{Gr}}}"] & {{\mathsf{opFib}}}_{/{{\mathcal{D}}}} \ar[r, "{\mathrm{N}}"] & {{\mathsf{coCart}}}_{/{\mathrm{N}}({{\mathcal{D}}})} \end{tikzcd}$$ We have a string of equivalences: $$\begin{aligned} {\mathrm{N}}{{\mathsf{Gr}}}\, (F)_{\mathrm{op}}&= { {{\mathsf{Gr}}}_\infty \circ {\mathrm{op}}_\infty \circ {{\mathsf{Gr}}}_\infty^{-1} }({\mathrm{N}}{{\mathsf{Gr}}}\, (F)) & & \text{(Definition \ref{def:cocartfib-op})} \\ &\simeq {{\mathsf{Gr}}}_\infty \circ {\mathrm{op}}_\infty \circ {{\mathsf{Gr}}}_\infty^{-1} {{\mathsf{Gr}}}_\infty {\mathrm{N}}(f) & & \text{(Corollary \ref{cor:gr-infty-gr})}\\ &\simeq {{\mathsf{Gr}}}_\infty \circ {\mathrm{op}}_\infty \circ {\mathrm{N}}(f) & &\text{(Definition \ref{def:grinfinity})} \\ &\simeq {{\mathsf{Gr}}}_\infty {\mathrm{N}}( {f^{\mathrm{op}}}) & & \text{(Corollary \ref{cor:markedop})} \\ &\simeq {\mathrm{N}}{{\mathsf{Gr}}}(F^{\mathrm{op}}) & & \text{(Corollary \ref{cor:gr-infty-gr})} \end{aligned}$$ where $f = {\mathrm{N}}F$ and $f^{\mathrm{op}}\cong {\mathrm{N}}(F^{\mathrm{op}})$ by Corollary \[cor:F-f-op\]. The reader following the above proof closely should be aware of the fact that we implicitly use Proposition \[prop:rel-nerve-infty-gr\] [@htt]\*[3.2.5.21]{} several times. Finally, we turn our attention back to monoidal quasicategories and monoidal simplicial categories. Let $p \colon X \to {\mathrm{N}}(\Delta^{\mathrm{op}})$ define a monoidal structure on $X_{[1]}$. Then $p_{\mathrm{op}}\colon X_{\mathrm{op}}\to {\mathrm{N}}(\Delta^{\mathrm{op}})$ defines a monoidal structure on $(X_{[1]})^{\mathrm{op}}$. It is easy to check that the coCartesian fibration $p_{\mathrm{op}}$ is a monoidal quasicategory, and that $(X_{\mathrm{op}})_{[1]} \simeq (X_{[1]})^{\mathrm{op}}$. 
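To justify the last two assertions, we sketch the routine computation for convenience; it uses only that ${{\mathsf{Gr}}}_\infty$ and ${{\mathsf{Gr}}}_\infty^{-1}$ identify the fibers up to equivalence and that opposites commute with finite products: $$(X_{\mathrm{op}})_{[n]} \simeq \bigl(X_{[n]}\bigr)^{\mathrm{op}} \simeq \bigl((X_{[1]})^{n}\bigr)^{\mathrm{op}} \cong \bigl((X_{[1]})^{\mathrm{op}}\bigr)^{n} \simeq \bigl((X_{\mathrm{op}})_{[1]}\bigr)^{n},$$ where the second equivalence is the Segal condition for $p$. Hence $p_{\mathrm{op}}$ again satisfies the Segal condition, and therefore defines a monoidal structure on $(X_{\mathrm{op}})_{[1]} \simeq (X_{[1]})^{\mathrm{op}}$. 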
\[thm:opcommute\] Let ${{\mathcal{C}}}$ be a strict monoidal fibrant simplicial category and equip ${{\mathcal{C}}}^{\mathrm{op}}$ with its canonical monoidal structure. Then ${\mathrm{N}}^\otimes({{\mathcal{C}}}^{\mathrm{op}})$ and ${\mathrm{N}}^\otimes ({{\mathcal{C}}})_{\mathrm{op}}$ define equivalent monoidal structures on ${\mathrm{N}}({{\mathcal{C}}}^{\mathrm{op}}) \simeq {\mathrm{N}}({{\mathcal{C}}})^{\mathrm{op}}$. Combine Lemma \[lem:C-bullet-op\] with Theorem \[thm:F-op-commute\], taking $F = {{\mathcal{C}}}^\bullet$. Appendices ========== Models for $\infty$-categories, and their nerves {#sec:models} ------------------------------------------------ In this paper, we pass between simplicially enriched categories, ${{\mathsf{sCat}}}$, and simplicial sets, ${{\mathsf{sSet}}}$. We also often invoke *marked* simplicial sets ${{\mathsf{sSet}}}^+$. In this section, we describe how these categories, equipped with suitable model structures, serve as models for a category of $\infty$-categories, and how they are related. We recall the definitions of the three categories above with certain model category structures: 1. Let ${{\mathsf{sCat}}}$ denote the category of simplicially enriched categories in the sense of [@kellyenriched], with the *Bergner* model structure described in [@bergmodelcat]. In particular, the fibrant objects are the categories enriched in Kan complexes and the weak equivalences are the so-called Dwyer-Kan (or DK) equivalences of simplicial categories. 2. Let ${{\mathsf{sSet}}}$ denote the category of simplicial sets with the *Joyal* model structure as described in [@joyalapps] and [@htt]. The fibrant objects are the quasicategories, and the weak equivalences are the categorical equivalences of simplicial sets. 3. Let ${{\mathsf{sSet}}}^+$ denote the category of *marked simplicial sets*. Its objects are pairs $(S,W)$ where $S$ is a simplicial set and $W$ is a subset of $S[1]$, the collection of 1-simplices of $S$. The model structure on ${{\mathsf{sSet}}}^+$ is given by [@htt]\*[3.1.3.7]{}. By [@htt]\*[3.1.4.1]{}, the fibrant objects are the pairs $(S,W)$ for which $S$ is a quasicategory and $W$ is the set of 1-simplices of $S$ that become isomorphisms after passing to the homotopy category (i.e. the equivalences of $S$). The weak equivalences, by [@htt]\*[3.1.3.5]{}, are precisely the morphisms whose underlying maps of simplicial sets are categorical equivalences. 4. Let ${{\mathsf{RelCat}}}$ denote the category of *relative categories*, whose objects are pairs $({\mathsf{C}},{\mathsf{W}})$, where ${\mathsf{C}}$ is a category and ${\mathsf{W}}$ is a subcategory of ${\mathsf{C}}$ that contains all the objects of ${\mathsf{C}}$. In [@barwickkanrelcats], it is shown that ${{\mathsf{RelCat}}}$ admits a model structure, but we will not need it here. We only point out that any model category ${\mathsf{C}}$ has an underlying relative category in which ${\mathsf{W}}$ is the subcategory containing every object of ${\mathsf{C}}$ with only the weak equivalences as morphisms. Given a model category ${\mathsf{C}}$, we will denote by ${\mathsf{C}}^\circ$ the full subcategory spanned by bifibrant (i.e fibrant and cofibrant) objects. We also introduce several functors which are useful in comparing the above categories as models of $\infty$-categories: 1. Let ${\mathrm{N}}\colon{{\mathsf{sCat}}}\to {{\mathsf{sSet}}}$ be the *simplicial nerve* functor (first defined by Cordier) of [@htt]\*[1.1.5.5]{}. 
Crucially, if ${{\mathcal{C}}}$ is a fibrant simplicial category, then ${\mathrm{N}}{{\mathcal{C}}}$ is a quasicategory. This nerve has a left adjoint ${\mathfrak{C}}$. $$\begin{tikzcd}[column sep = large] {{\mathsf{sSet}}}\ar[r, bend left = 20, "{\mathfrak{C}}", ""{name = L}] & {{\mathsf{sCat}}}\ar[l, bend left = 20, "{\mathrm{N}}", ""{name = R}] \ar[from = L, to = R, symbol = \dashv] \end{tikzcd}$$ 2. Let $L^H\colon{{\mathsf{RelCat}}}\to {{\mathsf{sCat}}}$ denote the *hammock localization* functor, defined in [@dwyerkancalculating]. 3. Let $(-)^\natural\colon{{\mathsf{sSet}}}^\circ\to {{\mathsf{sSet}}}^+$ denote the functor, defined in [@htt]\*[3.1.1.9[^3]]{}, that takes a quasicategory $C$ to the pair $(C,W)$ where $W$ is the collection of weak equivalences[^4] in $C$. 4. Let $(-)^{\sharp} \colon{{\mathsf{sSet}}}\to {{\mathsf{sSet}}}^+$ denote the functor, defined in [@htt]\*[3.1.0.2]{} that takes a simplicial set $S$ to the pair $(S,S[1])$, in which every edge of $S$ has been marked. 5. Let ${\mathrm{u.q.}}\colon {{\mathsf{RelCat}}}\to {{\mathsf{sSet}}}$ denote the *underlying quasicategory* functor of [@mazelgeeadjunctions], given by the composition $${{\mathsf{RelCat}}}\xrightarrow{\;L^H\;} {{\mathsf{sCat}}}\xrightarrow{\;{\mathbb{R}}\;} {{\mathsf{sCat}}}\xrightarrow{\;{\mathrm{N}}\;} {{\mathsf{sSet}}}$$ where ${\mathbb{R}}\colon {{\mathsf{sCat}}}\to {{\mathsf{sCat}}}$ is the fibrant replacement functor of simplicial categories defined in [@mazelgeeadjunctions]\*[§1.2]{}. Note that, because of the fibrant replacement, ${\mathrm{u.q.}}({\mathsf{C}},{\mathsf{W}})$ is indeed a quasicategory for any relative category $({\mathsf{C}},{\mathsf{W}})$. We can now give a definition of *the* quasicategory of $\infty$-categories: \[def:cat-infty\] Since the fibrant-cofibrant objects in ${{\mathsf{sSet}}}^+$ correspond to quasicategories, we let the [**quasicategory of quasicategories**]{}, or of $\infty$-categories, be: $${{\mathsf{Cat}}}_\infty := {\mathrm{N}}({{\mathsf{sSet}}}^+)^\circ,$$ where we write ${\mathrm{N}}({{\mathsf{sSet}}}^+)^\circ$ instead of the more cumbersome ${\mathrm{N}}\big( ({{\mathsf{sSet}}}^+)^\circ \big)$. \[rem:nerve-notation\] Going forward, we will often write ${\mathrm{N}}(-)^\circ$ instead of ${\mathrm{N}}\big((-)^\circ \big)$ to indicate the simplicial nerve applied to the bifibrant subcategory of a simplicial model category. The underlying quasicategories of the model categories ${{\mathsf{sCat}}}$, ${{\mathsf{sSet}}}$ and ${{\mathsf{sSet}}}^+$are all equivalent to ${{\mathsf{Cat}}}_\infty$. First note that [@hinichdwyerkan]\*[Proposition 1.5.1]{} implies that a Quillen equivalence of model categories induces an equivalence of underlying quasicategories. There are Quillen equivalences ${{\mathsf{sCat}}}\leftrightarrows{{\mathsf{sSet}}}$ [@bergner]\*[Theorem 7.8]{} and ${{\mathsf{sSet}}}\leftrightarrows{{\mathsf{sSet}}}^+$ [@htt]\*[3.1.5.1 (A0)]{}. As a result, there are equivalences of quasicategories ${\mathrm{u.q.}}({{\mathsf{sSet}}},{\mathcal{WE}})\to {\mathrm{u.q.}}({{\mathsf{sSet}}}^+,{\mathsf{WE}})$, where ${\mathsf{WE}}$ denotes the collection of weak equivalences between marked simplicial sets, and ${\mathrm{u.q.}}({{\mathsf{sCat}}},{\mathsf{DK}})\to {\mathrm{u.q.}}({{\mathsf{sSet}}},{\mathsf{WE}})$, where ${\mathsf{DK}}$ denotes the collection of Dwyer-Kan equivalences. 
It then follows, by [@htt]\*[3.1.3.5]{}, that there are equivalences of marked simplicial sets ${\mathrm{u.q.}}({{\mathsf{sSet}}},{\mathsf{WE}})^\natural\leftarrow {\mathrm{u.q.}}({{\mathsf{sSet}}}^+,{\mathsf{WE}})^\natural$ and ${\mathrm{u.q.}}({{\mathsf{sCat}}},{\mathsf{DK}})^\natural\to {\mathrm{u.q.}}({{\mathsf{sSet}}},{\mathsf{WE}})^\natural$. Now by [@hinichdwyerkan]\*[Proposition 1.4.3]{} and its corollary, we have a (Dwyer-Kan) equivalence of simplicial categories $({{\mathsf{sSet}}}^+)^\circ\to L^H({{\mathsf{sSet}}}^+,{\mathsf{WE}})$. By definition of fibrant replacement, we also have equivalences $({{\mathsf{sSet}}}^+)^\circ\to {\mathbb{R}}({{\mathsf{sSet}}}^+)^\circ$. Since the latter morphism is between fibrant objects, and the right Quillen adjoint ${\mathrm{N}}$ preserves equivalences between fibrant objects (by Ken Brown’s Lemma), we have an equivalence of simplicial sets ${\mathrm{N}}({{\mathsf{sSet}}}^+)^\circ\to {\mathrm{u.q.}}({{\mathsf{sSet}}}^+,{\mathsf{WE}})$. Thus another application of [@htt]\*[3.1.3.5]{} gives an equivalence of marked simplicial sets $({\mathrm{N}}({{\mathsf{sSet}}}^+)^\circ)^\natural\to {\mathrm{u.q.}}({{\mathsf{sSet}}}^+,{\mathsf{WE}})^\natural$. So we have equivalences of marked simplicial sets: $$({\mathrm{N}}({{\mathsf{sSet}}}^+)^\circ)^\natural\to {\mathrm{u.q.}}({{\mathsf{sSet}}}^+,{\mathsf{WE}})^\natural\to {\mathrm{u.q.}}({{\mathsf{sSet}}},{\mathsf{WE}})^\natural \to {\mathrm{u.q.}}({{\mathsf{sCat}}},{\mathsf{DK}})^\natural$$ These imply the result after applying the (large) nerve to the (large) quasicategory of marked simplicial sets. Straightening, unstraightening and ${{\mathsf{Gr}}}_\infty$ {#sec:st-un} ----------------------------------------------------------- This section is a summary of results from [@htt]\*[3.2 and 3.3]{} regarding straightening and unstraightening. Let $S$ be a simplicial set, ${{\mathcal{D}}}$ a simplicial category, and $\phi \colon {\mathfrak{C}}[S] {\xrightarrow{\; \simeq \;}} {{\mathcal{D}}}$ an equivalence of simplicial categories. Then there is a Quillen equivalence $$\begin{tikzcd}[column sep = large] {}_{\phantom{S}} ({{\mathsf{sSet}}}^+)^{{\mathcal{D}}}\ar[r, bend left = 20, "{{\textsf{Un}}}_\phi^+", ""{name = L}, start anchor = north east, end anchor = north west] & ({{\mathsf{sSet}}}^+)_{/S} \ar[l, bend left = 20, "{{\textsf{St}}}_\phi^+", ""{name = R}, start anchor = south west, end anchor = south east] \ar[from = L, to = R, symbol = \vdash] \end{tikzcd}$$ where $({{\mathsf{sSet}}}^+)_{/S}$ is the category of marked simplicial sets over $S$ with the coCartesian model structure, and $({{\mathsf{sSet}}}^+)^{{\mathcal{D}}}$ is the category of ${{\mathcal{D}}}$ shaped diagrams in marked simplicial sets with the projective model structure. 
Both $({{\mathsf{sSet}}}^+)_{/S}$ and $({{\mathsf{sSet}}}^+)^{{\mathcal{D}}}$ are simplicial model categories, and ${{\textsf{Un}}}_\phi^+$ is a simplicial functor[^5] which induces an equivalence of simplicial categories $$({{\textsf{Un}}}_\phi^+)^\circ\colon \big(({{\mathsf{sSet}}}^+)^{{\mathcal{D}}}\big)^\circ\xrightarrow{\;\simeq \;} \big(({{\mathsf{sSet}}}^+)_{/S}\big)^\circ.$$ \[cor:st-un-quasi\] Taking the nerve of this equivalence, there is an equivalence of quasicategories[^6] $${\mathrm{N}}({{\textsf{Un}}}_\phi^+)^\circ \colon {\mathrm{N}}\big(({{\mathsf{sSet}}}^+)^{{\mathcal{D}}}\big)^\circ \xrightarrow{\; \simeq \;} {\mathrm{N}}\big(({{\mathsf{sSet}}}^+)_{/S}\big)^\circ.$$ Note that, for [@htt]\*[A.3.1.12]{} to apply above, it is essential that all of the objects of $({{\mathsf{sSet}}}^+)_{/S}$ are cofibrant. This follows from [@htt]\*[3.1.3.7]{} when we set $S=\Delta^0$ and the recollection that every object of ${{\mathsf{sSet}}}$ is cofibrant in Joyal model structure. By [@htt]\*[3.1.1.11[^7]]{}, the vertices of ${\mathrm{N}}\big(({{\mathsf{sSet}}}^+)_{/S}\big)^\circ$ are precisely maps of marked simplicial sets of the form $X^\natural \to S^\sharp$ where $X \to S$ is a coCartesian fibration. We may thus *identify* $X \to S$ with $X^\natural \to S^\sharp$ and treat the vertices of ${\mathrm{N}}\big(({{\mathsf{sSet}}}^+)_{/S}\big)^\circ$ as coCartesian fibrations over $S$. This motivates and justifies the following notation: \[def:cocartqcat\] The [**quasicategory of coCartesian fibrations over $S$**]{} is $${{\mathsf{coCart}}}_{/S} := {\mathrm{N}}\big(({{\mathsf{sSet}}}^+)_{/S}\big)^\circ.$$ \[cor:gr-corr-infty\] There is an equivalence of quasicategories $$({{\mathsf{Cat}}}_\infty)^S \simeq {{\mathsf{coCart}}}_{/S}.$$ By Corollary \[cor:st-un-quasi\] with ${{\mathcal{D}}}= {\mathfrak{C}}[S]$ and $\phi$ the identity, it suffices to show that we have an equivalence of quasicategories $${\mathrm{N}}\big(({{\mathsf{sSet}}}^+)^{{\mathfrak{C}}[S]} \big)^\circ \simeq ({{\mathsf{Cat}}}_\infty)^S.$$ But this is precisely [@htt]\*[4.2.4.4]{}, which states that $${\mathrm{N}}\big(({{\mathsf{sSet}}}^+)^{{\mathfrak{C}}[S]} \big)^\circ \simeq \big({\mathrm{N}}({{\mathsf{sSet}}}^+)^\circ \big)^S,$$ together with Definition \[def:cat-infty\]. \[def:grinfinity\] Let ${{\mathsf{Gr}}}_\infty$ denote the above equivalence of quasicategories, $$\begin{tikzcd} ({{\mathsf{Cat}}}_\infty)^S \ar[rr, bend left = 15, "{{\mathsf{Gr}}}_\infty"] & \simeq & {{\mathsf{coCart}}}_{/S} \ar[ll, bend left = 15, "{{\mathsf{Gr}}}_\infty^{-1}"] \end{tikzcd}$$ and let ${{\mathsf{Gr}}}_\infty^{-1}$ denote its weak inverse (i.e. there are natural equivalences of functors ${\textsf{Id}}_{{{\mathsf{coCart}}}_{/S}}\simeq{{\mathsf{Gr}}}_\infty\circ {{\mathsf{Gr}}}_\infty^{-1}$ and ${\textsf{Id}}_{({{\mathsf{Cat}}}_\infty)^S}\simeq {{\mathsf{Gr}}}_\infty^{-1}\circ {{\mathsf{Gr}}}_\infty$). The existence of a weak inverse ${{\mathsf{Gr}}}_\infty^{-1}$ is a result of the “fundamental theorem of quasicategory theory” [@rezkstuff]\*[§30]{}. By [@htt]\*[5.2.2.8]{}, one can check that ${{\mathsf{Gr}}}_\infty$ and ${{\mathsf{Gr}}}_\infty^{-1}$ are adjoints in the sense of [@htt]\*[5.2.2.1]{}, but we will not need that here. Note that ${{\mathsf{Gr}}}_\infty^{-1}$ is *not* the nerve of $({{\textsf{St}}}_\phi^+)^\circ$ (the latter is not even a simplicial functor). 
See [@riehl2017comprehension]\*[6.1.13, 6.1.22]{} for a description of ${{\mathsf{Gr}}}_\infty^{-1}$ *on objects*, and [@riehl2017comprehension]\*[6.1.19]{} for an alternative description of ${{\mathsf{Gr}}}_\infty$. \[def:classified\] For $p \colon X \to S$ a coCartesian fibration, a map $f \colon S \to {{\mathsf{Cat}}}_\infty$ [**classifies $p$**]{} if there is an equivalence of coCartesian fibrations $X \simeq {{\mathsf{Gr}}}_\infty f$. Functors out of ${\mathfrak{C}}[\Delta^n]$ {#sec:func-cc} ------------------------------------------ We review the characterization of simplicial functors out of ${\mathfrak{C}}[\Delta^n]$ that will be used in the proof of Theorem \[thm:gr-rel-nerve\]. All material here is from [@riehl2017comprehension], with some slight modifications in notation and terminology. Throughout, $[n]$ denotes the poset $\{0 < 1 < \dots < n\}$. \[def:bead\] Let $I = \{i_0 < i_1 < \dots < i_m \}$ be a subset of $[n]$ containing at least $2$ elements (i.e. $m \geq 1$). An [**$r$-dimensional bead shape**]{} of $I$, denoted $\langle I_0 | I_1 | \dots | I_r \rangle$, is a partition of $I$ into non-empty subsets $I_0,\dots, I_r$ such that $I_0 = \{i_0, i_m\}$. \[eg:bead\] A $2$-dimensional bead shape of $I = \{0,1,2,3,5,6\}$: $$\begin{aligned} I_0 &= \{0,6\} , & I_1 &= \{3\} , & I_2 &= \{1,2,5\}. \end{aligned}$$ We write $S_{\langle I_0 | I_1 | I_2 \rangle}$ to mean the same thing as $S_{\langle 06|3|125\rangle}$. \[lem:simplicial-functor\] A simplicial functor $S \colon {\mathfrak{C}}[\Delta^n] \to {{\mathcal{K}}}$ is precisely the data of: - For each $i \in [n]$, an object $S_i \in {{\mathcal{K}}}$ - For each subset $I = \{i_0 < \dots < i_m\} \subseteq [n]$ where $m \geq 1$, and each $r$-dimensional bead shape ${{\langle I_0 | \dots | I_r \rangle}}$ of $I$, an $r$-simplex $S_{\langle I_0 | \dots | I_r \rangle}$ in ${{\mathcal{K}}}(S_{i_0}, S_{i_m})$ whose boundary is compatible with lower-dimensional data. The main benefit of this description is that *no further coherence conditions* need to be checked. Instead of describing what it means for the boundary to compatible with lower-dimensional data, which can be found in [@riehl2017comprehension], we illustrate this with an example. But first, we introduce the abbreviation $$S_{{\langle i_0i_1 \dots i_m \rangle}} := S_{{\langle i_{m-1}i_m \rangle}}S_{{\langle i_{m-2}i_{m-1} \rangle}}\dots S_{{\langle i_1 i_2 \rangle}} S_{{\langle i_0 i_1 \rangle}}.$$ The bead shape in Example \[eg:bead\] is $2$-dimensional, so $S_{{\langle I_0|I_1|I_2 \rangle}} = S_{\langle 06 | 3 | 125 \rangle}$ should be a $2$-simplex in ${{\mathcal{K}}}(S_0, S_6)$. The boundary of this $2$-simplex is compatible with lower-dimensional data in the sense that it is given by the following: - The first vertex is always $S_{{\langle I_0 \rangle}}$, which in this case is $S_{{\langle 06 \rangle}} \in {{\mathcal{K}}}(S_0, S_6)_0$. - The last vertex is always $S_{{\langle I \rangle}}$, which in this case is $S_{{\langle 012356 \rangle}}$. Between the first and last vertex, we have $$S_{{\langle 06 \rangle}} \xrightarrow{\quad S_{{\langle 06|1235 \rangle}} \quad} S_{{\langle 012356 \rangle}} \quad \quad \in {{\mathcal{K}}}(S_0,S_6)_1,$$ representing the insertion of $I_1 \cup I_2 \cup \dots \cup I_r$ into $I_0$. This is always the starting edge of $S_{{\langle I_0|\dots|I_r \rangle}}$. - The remaining vertices and edges are generated by first inserting $I_1$ into $I_0$, then $I_2$ into $I_0 \cup I_1$ and so on, up to inserting $I_r$ into $I\setminus I_r$. 
- In our case, we first insert $I_1 = \{3\}$ into $I_0$. This yields the vertex $S_{{\langle I_0 \cup I_1 \rangle}} = S_{{\langle 036 \rangle}} = S_{{\langle 36 \rangle}}S_{{\langle 03 \rangle}}$ and the edge $$S_{{\langle 06 \rangle}} \xrightarrow{\quad S_{{\langle 06|3 \rangle}} \quad} S_{{\langle 036 \rangle}} \quad \quad \in {{\mathcal{K}}}(S_0,S_6)_1.$$ - Next, we insert $I_2 = \{1,2,5\}$ into $I_0 \cup I_1$. Since this gives all of $I$ and we already have $S_{{\langle I \rangle}}$, we do not need to add any more vertices. We only add the edge $$S_{{\langle 036 \rangle}} \xrightarrow{\quad S_{{\langle 36|5 \rangle}} S_{{\langle 03|12 \rangle}} \quad} S_{{\langle 012356 \rangle}}\quad \quad \in {{\mathcal{K}}}(S_0,S_6)_1,$$ where $S_{{\langle 36|5 \rangle}} \in {{\mathcal{K}}}(S_3,S_6)_1$ and $S_{{\langle 03|12 \rangle}} \in {{\mathcal{K}}}(S_0, S_3)_1$. Note that $5$, lying between $3$ and $6$, goes into $S_{{\langle 36 \rangle}}$, as indicated by $S_{{\langle 36|5 \rangle}}$; similarly, $1$ and $2$ go into $S_{{\langle 03 \rangle}}$, as indicated by $S_{{\langle 03|12 \rangle}}$. We denote this composite $$S_{{\langle 036|125 \rangle}} := S_{{\langle 36|5 \rangle}} S_{{\langle 03|12 \rangle}}.$$ - We can then choose $S_{{\langle 06|3|125 \rangle}}$ to be *any* $2$-simplex in ${{\mathcal{K}}}(S_0,S_6)$ fitting into the following: $$\begin{tikzcd}[row sep = huge] S_{{\langle 06 \rangle}} \ar[rr, "S_{{\langle 06|1235 \rangle}}", ""{name=U, below}] \ar[dr, "S_{{\langle 06|3 \rangle}}"'] & & S_{{\langle 012356 \rangle}} \\ & S_{{\langle 036 \rangle}} \ar[ur, "S_{{\langle 036|125 \rangle}}"'] \ar[from = U, Rightarrow, shorten >= 1em, "S_{{\langle 06|3|125 \rangle}}" near start]& \end{tikzcd}$$ The rule that $I_0$ must have exactly $2$ elements in Definition \[def:bead\] allows us to distinguish bead shapes from abbreviations. For instance, $S_{{\langle 06|3 \rangle}}$ arises from a bead shape, while $S_{{\langle 036|125 \rangle}}$ is an abbreviation. Note that we *should not* abbreviate the composite $S_{{\langle 036|125 \rangle}} S_{{\langle 06|3 \rangle}}$ as $S_{{\langle 06|1235 \rangle}}$, since the latter implies that we insert $\{1,2,3,5\}$ all at once into $\{0,6\}$. Indeed, the point of $S_{{\langle 06|3|125 \rangle}}$ is to relate $S_{{\langle 036|125 \rangle}} S_{{\langle 06|3 \rangle}}$ and $S_{{\langle 06|1235 \rangle}}$. We only abbreviate $S_{{\langle j_0 \dots j_\ell |\dots \rangle}} S_{{\langle i_0 \dots i_k|\dots \rangle}}$ as $S_{{\langle i_0 \dots i_k j_1 \dots j_\ell | \dots \rangle}}$ if $i_k = j_0$. The upshot is that *there is an entirely unambiguous process* of converting an abbreviation into a composite of bead shapes, and *not all composites* of bead shapes may be abbreviated. See [@riehl2017comprehension]\*[4.2.4]{} for details. [^1]: Any map into a coproduct of simplicial sets induces a coproduct decomposition on its domain (by taking fibers over each component of the codomain). Since all horns $\Lambda^n_k$ are connected, any commuting square from a horn inclusion to $P_{ex}$ necessarily factors through one of the components of ${{\mathcal{E}}}(e,x)$, and may thus be lifted because ${{\mathcal{E}}}(e,x)$ is fibrant. [^2]: This essentially means that ${{\mathcal{C}}}^\bullet$ is a functor rather than a pseudofunctor. Note that if ${{\mathcal{C}}}$ is not strictly monoidal, then $x_{f(i-1)+1} \otimes \dots \otimes x_{f(i)}$ is not well-defined: a choice of parentheses needs to be made. 
Although the various choices are isomorphic, they are not identical, and this obstructs our ability to obtain a split opfibration. [^3]: This refers to the published version listed in our references. The same definition appears at 3.1.1.8 in the April 2017 version on Lurie’s website. [^4]: We are using the fact that the unique map $p \colon C \to \Delta^0$ is a Cartesian fibration iff $C$ is a quasicategory, and the $p$-Cartesian edges are precisely the weak equivalences. [^5]: But ${{\textsf{St}}}_\phi^+$ is not always a simplicial functor. [^6]: We use the notational convention in Remark \[rem:nerve-notation\]. [^7]: This is 3.1.1.10 in the April 2017 version on Lurie’s website.
--- abstract: 'We prove the existence of a minimal action nodal solution for the quadratic Choquard equation $$-\Delta u + u = \bigl(I_\alpha \ast {\lvert u \rvert}^2\bigr)u \quad\text{in \({{\mathbb R}}^N\)},$$ where $I_\alpha$ is the Riesz potential of order $\alpha\in(0,N)$. The solution is constructed as the limit of minimal action nodal solutions for the nonlinear Choquard equations $$-\Delta u + u = \bigl(I_\alpha \ast {\lvert u \rvert}^p\bigr)|u|^{p-2}u \quad\text{in \({{\mathbb R}}^N\)}$$ when $p{\searrow}2$. The existence of minimal action nodal solutions for $p>2$ can be proved using a variational minimax procedure over Nehari nodal set. No minimal action nodal solutions exist when $p<2$.' address: - | Università di Pisa\ Dipartimento di Matematica\ Largo B. Pontecorvo 5\ 56100 Pisa\ Italy - | Swansea University\ Department of Mathematics\ Singleton Park\ Swansea\ SA2 8PP\ Wales, United Kingdom - | Université Catholique de Louvain\ Institut de Recherche en Mathématique et Physique\ Chemin du Cyclotron 2 bte L7.01.01\ 1348 Louvain-la-Neuve\ Belgium author: - Marco Ghimenti - Vitaly Moroz - Jean Van Schaftingen title: | Least action nodal solutions\ for the quadratic Choquard equation --- Introduction ============ We study least action nodal solutions of the quadratic Choquard equation $$\label{eqChoquard} \tag{$\mathcal{C}_2$} -\Delta u + u = \bigl(I_\alpha \ast {\lvert u \rvert}^2\bigr) u \quad\text{in \({{\mathbb R}}^N\)},$$ for $N \in {{\mathbb N}}$ and $\alpha \in (0, N)$. Here $I_\alpha : {{\mathbb R}}^N \to {{\mathbb R}}$ is the Riesz potential defined for each $x \in {{\mathbb R}}^N \setminus \{0\}$ by $$\begin{aligned} I_\alpha (x) &= \frac{A_\alpha}{{\lvert x \rvert}^{N - \alpha}}, & &\text{where } &A_\alpha = \frac{\Gamma(\tfrac{N-\alpha}{2})} {\Gamma(\tfrac{\alpha}{2})\pi^{N/2}2^{\alpha} }.\end{aligned}$$ For $N = 3$ and $\alpha = 2$ equation is the *Choquard–Pekar equation* which goes back to the 1954’s work by S.I.Pekar on quantum theory of a polaron at rest and to 1976’s model of P.Choquard of an electron trapped in its own hole, in an approximation to Hartree-Fock theory of one-component plasma [@Lieb1977]. In the 1990’s the same equation reemerged as a model of self-gravitating matter and is known in that context as the *Schrödinger–Newton equation*. Equation is also studied as a nonrelativistic model of boson stars . Mathematically, the existence and some qualitative properties of solutions of Choquard equation have been studied by variational methods in the early 1980s, see for earlier work on the problem. Recently, nonlinear Choquard equation $$\label{eqChoquard-p} \tag{$\mathcal{C}_p$} -\Delta u + u = \bigl(I_\alpha \ast {\lvert u \rvert}^p\bigr) {\lvert u \rvert}^{p - 2} u \quad\text{in \({{\mathbb R}}^N\)},$$ with a parameter $p>1$ attracted interest of mathematicians, see and further references therein; see also for modifications of involving fractional Laplacian. Most of the recent works on Choquard type equations so far were dedicated to the study of positive solutions. Nodal solutions were studied in . 
Equation $(\mathcal{C}_p)$ is the Euler equation of the Choquard action functional $\mathcal{A}_p$, which is defined for each function $u$ in the Sobolev space $H^1 ({{\mathbb R}}^N)$ by $$\mathcal{A}_p (u) = \frac{1}{2} \int_{{{\mathbb R}}^N} {\lvert \nabla u \rvert}^2 + {\lvert u \rvert}^2 - \frac{1}{2 p} \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u \rvert}^p\bigr) {\lvert u \rvert}^p.$$ By the Hardy–Littlewood–Sobolev inequality, if $s \in (1, \frac{N}{\alpha})$ then for every $v \in L^s ({{\mathbb R}}^N)$, $I_\alpha \ast v\in L^\frac{N s}{N - \alpha s} ({{\mathbb R}}^N)$ and $$\label{eqHLS} \int_{{{\mathbb R}}^N} {\lvert I_\alpha \ast v \rvert}^\frac{N s}{N - \alpha s} \le C \Bigl(\int_{{{\mathbb R}}^N} {\lvert v \rvert}^s \Bigr)^\frac{N}{N - \alpha s},$$ (see for example [@LiebLoss2001]\*[theorem 4.3]{}). In view of the classical Sobolev embedding, the action functional $\mathcal{A}_p$ is well-defined and continuously differentiable if and only if $$\frac{N - 2}{N + \alpha} \le \frac{1}{p} \le \frac{N}{N + \alpha}.$$ By testing equation $(\mathcal{C}_p)$ against $u$, the natural *Nehari constraint* ${\langle \mathcal{A}_p' (u), u \rangle} = 0$ appears. Then positive solutions of $(\mathcal{C}_p)$ can be obtained by studying the infimum $$c_{0, p} = \inf\, \bigl\{\mathcal{A}_p (u) {\;:\;}u \in \mathcal{N}_{0, p}\bigr\}$$ over the *Nehari manifold* $$\mathcal{N}_{0, p} = \bigl\{u \in H^1 ({{\mathbb R}}^N) \setminus \{0\} {\;:\;}{\langle \mathcal{A}_p' (u), u \rangle} = 0\}.$$ It turns out that the infimum $c_{0, p}$ is achieved when $$\frac{N - 2}{N + \alpha} < \frac{1}{p} < \frac{N}{N + \alpha},$$ and these assumptions are optimal . Besides positive solutions minimising $c_{0, p}$, which are known as [*groundstates*]{} or [*least action solutions*]{}, additional solutions can be constructed by several variational constructions. In particular, one can consider *least action nodal solutions*, the sign-changing counterpart of least action solutions. One way to search for such solutions is to consider the infimum $$c_{\mathrm{nod}, p} = \inf\, \bigl\{\mathcal{A}_p (u) {\;:\;}u \in \mathcal{N}_{\mathrm{nod}, p}\bigr\}$$ over the *Nehari nodal set* $$\begin{gathered} \mathcal{N}_{\mathrm{nod}, p} =\bigl\{ u \in H^1 ({{\mathbb R}}^N) {\;:\;}u^+ \ne 0 \ne u^-,\,\\ {\langle \mathcal{A}_p'(u), u^+ \rangle} = 0 \text{ and } {\langle \mathcal{A}_p'(u), u^- \rangle} = 0\bigr\},\end{gathered}$$ where $u = u^+ - u^-$. Such a construction has been performed for local elliptic problems on bounded domains of ${{\mathbb R}}^N$, see , whereas the approach fails for the nonlinear Schrödinger equation $$\label{e-NLS} -\Delta u+u={\lvert u \rvert}^{2p - 2} u \quad\text{in \({{\mathbb R}}^N\)},$$ which has no least action nodal solutions . Moreover, the least action energy on the Nehari nodal set is not approximated by nodal solutions of , see [@Weth2006]\*[theorem 1.5]{}. Surprisingly, it has been proved that unlike its local counterpart , the nonlocal Choquard equation admits least action nodal solutions when $$\frac{N - 2}{N + \alpha} < \frac{1}{p} < \frac{1}{2}$$ [@GhimentiVanSchaftingen]\*[theorem 2]{}; while the infimum $c_{\mathrm{nod}, p}$ is not achieved when $$\frac{1}{2} < \frac{1}{p} < \frac{N}{N + \alpha},$$ because $c_{\mathrm{nod}, p} = c_{0, p}$ [@GhimentiVanSchaftingen]\*[theorem 3]{}. The borderline quadratic case $p = 2$ was not covered by either existence or non-existence proofs in [@GhimentiVanSchaftingen], because of the possible degeneracy of the minimax reformulation of the problem that was introduced (see [@GhimentiVanSchaftingen]\*[eq. 
(3.3)]{}) and of difficulties in controlling the norms of the positive and negative parts of Palais–Smale sequences. The goal of the present work is to study the existence of least action nodal solutions for the Choquard equation in the physically most relevant quadratic case $p = 2$. Because the minimax procedure for capturing $c_{\mathrm{nod}, p}$ introduced in [@GhimentiVanSchaftingen] apparently fails for $p=2$, a different approach is needed. Instead of directly minimizing $c_{\mathrm{nod}, 2}$, our strategy will be to employ Choquard equations with $p>2$ as a regularisation family for the quadratic Choquard equation and to pass to the limit when $p{\searrow}2$. Our main result is the following. \[theoremMain\] If $N \in {{\mathbb N}}$ and $\alpha \in ((N - 4)^+, N)$, then there exists a weak solution $u \in H^1 ({{\mathbb R}}^N)$ of the Choquard equation $(\mathcal{C}_2)$ such that $u^+ \ne 0 \ne u^-$ and $\mathcal{A}_2 (u) = c_{\mathrm{nod},2}$. The constructed nodal solution $u$ is regular, that is, $u\in L^1({{\mathbb R}}^N)\cap C^2({{\mathbb R}}^N)$, see [@MorozVanSchaftingen13]\*[proposition 4.1]{}. The conditions of the theorem are optimal, as for $\alpha\not\in ((N - 4)^+, N)$ no sufficiently regular solutions exist in $H^1 ({{\mathbb R}}^N)$ by the Pohožaev identity [@MorozVanSchaftingen13]\*[theorem 2]{}. In order to prove theorem \[theoremMain\], we will approximate a least action nodal solution of the quadratic Choquard equation $(\mathcal{C}_2)$ by renormalised least action nodal solutions of $(\mathcal{C}_p)$ with $p{\searrow}2$. To do this, in section \[section2\] we first establish continuity of the energy level $c_{0, p}$ with respect to $p$. Then in section \[section3\] we prove theorem \[theoremMain\] by showing that as $p {\searrow}2$, positive and negative parts of the renormalised least action nodal solutions of $(\mathcal{C}_p)$ do not vanish and do not diverge apart from each other. Continuity of the critical levels {#section2} ================================= In the course of the proof of theorem \[theoremMain\], we will need the following strict inequality on critical levels. \[propositionStrictInequality\] If $\frac{N - 2}{N + \alpha} < \frac{1}{p} < \frac{N}{N + \alpha}$, then $$c_{\mathrm{nod}, p} < 2 c_{0, p}.$$ This follows directly from [@GhimentiVanSchaftingen]\*[propositions 2.4 and 3.7]{}. The construction in the latter reference is done by taking a translated positive and a negative copy of the groundstate of $(\mathcal{C}_p)$ and by carefully estimating the balance between the truncation and the effect of the Riesz potential interaction. When $p < 2$, it is known that $c_{\mathrm{nod}, p} = c_{0, p}$ [@GhimentiVanSchaftingen]\*[theorem 3]{} and proposition \[propositionStrictInequality\] is no longer of interest. Because we shall approximate the quadratic case $p = 2$ by $p > 2$, it will be useful to have some information about the continuity of $c_{0, p}$. \[continuityGroundstate\] The function $ p \in (\frac{N + \alpha}{N}, \frac{N + \alpha}{(N - 2)_+}) \mapsto c_{0, p} \in {{\mathbb R}}$ is continuous. 
It can be observed that $$c_{0, p} = \inf \Biggl\{ \Bigl(\frac{1}{2} - \frac{1}{2 p}\Bigr) \Biggl(\frac{\Bigl(\displaystyle \int_{{{\mathbb R}}^N} {\lvert \nabla u \rvert}^2 + {\lvert u \rvert}^2\Bigr)^\frac{p}{p - 1}}{\Bigl(\displaystyle \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u \rvert}^p\bigr) {\lvert u \rvert}^p\Bigr)^\frac{1}{p - 1}} \Biggr) {\;:\;}u \in H^1 ({{\mathbb R}}^N) \setminus \{0\} \Biggr\}.$$ Since for every $u \in H^1 ({{\mathbb R}}^N)$, the function $$p \in (\tfrac{N + \alpha}{N}, \tfrac{N + \alpha}{(N - 2)_+}) \mapsto \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u \rvert}^p\bigr) {\lvert u \rvert}^p$$ is continuous, the function $p \mapsto c_{0, p}$ is then upper semicontinuous as an infimum of continuous functions. We now consider the more delicate question of the lower semicontinuity. There exists a family of functions $u_p \in H^1 ({{\mathbb R}}^N)$ such that holds and $\mathcal{A}_p (u_p) = c_{0, p}$. By the upper semicontinuity, it follows that the function $p \in (\frac{N + \alpha}{N}, \frac{N + \alpha}{(N - 2)_+}) \mapsto u_p\in H^1 ({{\mathbb R}}^N)$ is locally bounded. In view of the equation and of the Hardy–Littlewood–Sobolev inequality, we have $$\int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 = \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p \rvert}^p\bigr) {\lvert u_p \rvert}^p \le {\refstepcounter{cte} C_{\thecte}}\Bigl(\int_{{{\mathbb R}}^N} {\lvert u_p \rvert}^\frac{2 N p}{N + \alpha}\Bigr)^\frac{N + \alpha}{N},$$ where constant $C_1$ could be chosen uniformly bounded when $p$ remains in a compact subset of $(\frac{N + \alpha}{N}, \frac{N + \alpha}{(N - 2)_+})$. This implies that $$\begin{gathered} \int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2\\ \le {\refstepcounter{cte} C_{\thecte}}\Bigl(\int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 \Bigr)^\frac{N + \alpha}{N} \Bigl( \sup_{a \in {{\mathbb R}}^N} \int_{B_1 (a)} {\lvert u_p \rvert}^\frac{2 N p}{N + \alpha} \Bigr)^{(N + \alpha)\bigl(\frac{1}{N} - \frac{N + \alpha}{p} \bigr)},\end{gathered}$$ where ${C_{\thecte}}$ can be also chosen uniformly bounded when $p$ is in a compact subset of $(\frac{N + \alpha}{N}, \frac{N + \alpha}{(N - 2)_+})$. Up to a translation in ${{\mathbb R}}^N$, we can thus assume that the function $$p \in (\tfrac{N + \alpha}{N}, \tfrac{N + \alpha}{(N - 2)_+}) \mapsto \int_{B_1} {\lvert u \rvert}^\frac{2 N p}{N + \alpha}$$ is locally bounded away from $0$. We assume now $(p_n)_{n \in {{\mathbb N}}}$ to be a sequence in the interval $(\tfrac{N + \alpha}{N}, \tfrac{N + \alpha}{(N - 2)_+})$ that converges to $p_* \in (\tfrac{N + \alpha}{N}, \tfrac{N + \alpha}{(N - 2)_+})$. Since the sequence $(u_{p_n})_{n \in {{\mathbb N}}}$ is bounded in the space $H^1 ({{\mathbb R}}^N)$, there exists a sequence $(n_k)_{k \in {{\mathbb N}}}$ diverging to infinity and $u_* \in H^1 ({{\mathbb R}}^N)$ such that the subsequence $(u_{p_{n_k}})_{k \in {{\mathbb N}}}$ converges weakly in $H^1 ({{\mathbb R}}^N)$ to $u_*$. 
Moreover, we have by the Rellich–Kondrachov compactness theorem $$\int_{B_1} {\lvert u_* \rvert}^\frac{2 N p_*}{N + \alpha} = \lim_{k \to \infty} \int_{B_1} {\lvert u_{p_{n_k}} \rvert}^\frac{2 N p_{n_k}}{N + \alpha} > 0.$$ Thus $u_* \ne 0$ and $u_*$ satisfies $$-\Delta u_* + u_* = \bigl(I_\alpha \ast {\lvert u_* \rvert}^{p_*}\bigr) {\lvert u_* \rvert}^{p_* - 2} u_*.$$ We have thus $$\begin{split} c_{0, p_*} \le \mathcal{A}_{p_*} (u_*) &= \Bigl(\frac{1}{2} - \frac{1}{2 p_*}\Bigr) \int_{{{\mathbb R}}^N} {\lvert \nabla u_* \rvert}^2 + {\lvert u_* \rvert}^2\\ &\le \liminf_{k \to \infty} \Bigl(\frac{1}{2} - \frac{1}{2 p_{n_k}}\Bigr) \int_{{{\mathbb R}}^N} {\lvert \nabla u_{p_{n_k}} \rvert}^2 + {\lvert u_{p_{n_k}} \rvert}^2\\ &= \liminf_{k \to \infty} \mathcal{A}_{p_{n_k}} (u_{p_{n_k}}) = \liminf_{k \to \infty} c_{0, p_{n_k}}. \end{split}$$ Since the sequence $(p_n)_{n \in {{\mathbb N}}}$ is arbitrary, this proves the lower semicontinuity. Proof of the main theorem {#section3} ========================= \[Proof of theorem \[theoremMain\]\] We shall successively construct a family of solutions of $(\mathcal{C}_p)$, prove that neither the positive nor the negative part of this family goes to $0$, and show that the negative part and the positive part cannot diverge from each other as $p \to 2$. The theorem will then follow from a classical weak convergence and local compactness argument. \[claimConstructionSequence\] There exists a family $(u_p)_{p \in (2, \frac{N + \alpha}{N - 2})}$ in $H^1 ({{\mathbb R}}^N)$ such that for each $p \in (2, \frac{N + \alpha}{N - 2})$, the function $u_p$ changes sign and $$-\Delta u_p + u_p = \bigl(I_\alpha \ast {\lvert u_p \rvert}^p\bigr) {\lvert u_p \rvert}^{p - 2} u_p.$$ Moreover $$\limsup_{p \to 2} \mathcal{A}_{p} (u_p) \le c_{\mathrm{nod},2}$$ and $$\limsup_{p \to 2} \int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 \le 4\, c_{\mathrm{nod},2}.$$ The claim implies the uniform boundedness in $H^1 ({{\mathbb R}}^N)$ of the solutions $u_p$ since $c_{\mathrm{nod}, 2} \le c_{\mathrm{odd}, 2} < 2 c_{0,2} < \infty$ [@GhimentiVanSchaftingen]. The existence of a function $u_p \in H^1 ({{\mathbb R}}^N)$ that changes sign and satisfies the Choquard equation $(\mathcal{C}_p)$ has been proved in [@GhimentiVanSchaftingen]\*[theorem 2]{}. Moreover, $$\Bigl(\frac{1}{2} - \frac{1}{2p}\Bigr) \int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 = \mathcal{A}_p (u_p) = c_{\mathrm{nod}, p}.$$ It remains to obtain some upper asymptotics on $c_{\mathrm{nod}, p}$ as $p {\searrow}2$. 
Let $w \in \mathcal{N}_{2, \mathrm{nod}}$ and define $w_p = t_{+, p}^{{1}/{p}} w^+ - t_{-, p}^{{1}/{p}} w^-$, where $(t_{+, p}, t_{-, p}) \in [0, \infty)^2$ is the unique maximizer of the concave function $$\begin{gathered} (t_+, t_-) \in [0, \infty)^2 \mapsto E_p (t_+, t_-) = \mathcal{A}_p (t_+^\frac{1}{p} w^+ - t_-^\frac{1}{p} w^-)\\ = \frac{t_+^\frac{2}{p}}{2} \int_{{{\mathbb R}}^N} {\lvert \nabla w^+ \rvert}^2+{\lvert w^+ \rvert}^2 + \frac{t_-^\frac{2}{p}}{2} \int_{{{\mathbb R}}^N} {\lvert \nabla w^- \rvert}^2+{\lvert w^- \rvert}^2\\ - \frac{1}{2 p} \int_{{{\mathbb R}}^N} {\bigl\lvert I_{\alpha/2} \ast \bigl(t_+ {\lvert w^+ \rvert}^p + t_- {\lvert w^- \rvert}^p) \bigr\rvert}^2.\end{gathered}$$ Since $(t_{+, p}, t_{-, p}) \in (0, \infty)$, we have $w_p \in \mathcal{N}_{\mathrm{nod}, p}$ and $$c_{\mathrm{nod}, p} \le \mathcal{A}_p (w_p).$$ Since $E_p (t_+, t_-) \to -\infty$ as $(t_+, t_-) \to \infty$ uniformly in $p$ in bounded sets and since $E_p \to E_2$ as $p \to 2$ uniformly over compact subsets of $[0, \infty)^2$, we have $t_{\pm, p} \to 1$ as $p \to 2$. Therefore $$\lim_{p \to 2} \mathcal{A}_p (w_p) = \mathcal{A}_2 (w).$$ Since the function $w \in \mathcal{N}_{\mathrm{nod}, 2}$ is arbitrary, we deduce that $$\limsup_{p \to 2} c_{\mathrm{nod}, p} \le c_{\mathrm{nod}, 2},$$ from which the upper bounds of the claim follow. \[claimNonzero\] $$\liminf_{p \to 2} \int_{{{\mathbb R}}^N} {\lvert \nabla u_p^\pm \rvert}^2 + {\lvert u_p^\pm \rvert}^2 = \liminf_{p \to 2} \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p \rvert}^p\bigr) {\lvert u_p^\pm \rvert}^p > 0.$$ We first compute by the Hardy–Littlewood–Sobolev inequality $$\begin{split} \int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 = \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p \rvert}^p\bigr) {\lvert u_p \rvert}^p &\le {\refstepcounter{cte} C_{\thecte}}\Bigl(\int_{{{\mathbb R}}^N} {\lvert u_p \rvert}^{\frac{2 N p}{N + \alpha}} \Bigr)^{1 + \frac{\alpha}{N}}\\ &\le {\refstepcounter{cte} C_{\thecte}}\Bigl(\int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 \Bigr)^p, \end{split}$$ where the constant ${C_{\thecte}}$ can be taken independently of $p \in (2, \frac{N + \alpha}{N - 2})$ once $p$ remains bounded. 
It follows then that $$\liminf_{p \to 2} {C_{\thecte}}\Bigl(\int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2\Bigr)^{p - 1} \ge 1,$$ and thus, $$\liminf_{p \to 2} \int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 > 0.$$ We assume now that there is a sequence $(p_n)_{n \in {{\mathbb N}}}$ such that $$\label{eqPositivePartContradictionAssumption} \lim_{n \to \infty} \int_{{{\mathbb R}}^N} {\lvert \nabla u_{p_n}^- \rvert}^2 + {\lvert u_{p_n}^- \rvert}^2 = 0.$$ For $p \in (2, \frac{N + \alpha}{(N - 2)_+})$ we define the renormalised negative part $$v_{p} = \frac{u_{p}^-}{{\lVert u_{p}^- \rVert}_{H^1 ({{\mathbb R}}^N)}}.$$ We first observe that $$\label{eqRenormalizedInteractionLowerBound} \int_{{{\mathbb R}}^N} \bigl(I_{\alpha} \ast {\lvert u_{p} \rvert}^{p}\bigr) {\lvert v_p \rvert}^p = 1.$$ By [@GhimentiVanSchaftingen]\*[lemma 3.6]{}, for every $\beta \in \bigl(\alpha, N\bigr)$ there exist $C_5,C_6 > 0$ such that $$\begin{gathered} \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p \rvert}^p\bigr) {\lvert v_p \rvert}^p \le {\refstepcounter{cte} C_{\thecte}}\Bigl(\int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 \int_{{{\mathbb R}}^N} {\lvert \nabla v_p \rvert}^2 + {\lvert v_p \rvert}^2 \Bigr)^\frac{1}{2}\\ \shoveright{\times \Bigl(\sup_{a \in {{\mathbb R}}^N} \int_{B_R (a)} {\lvert u_p \rvert}^\frac{2 N p}{N + \alpha}\int_{B_R (a)} {\lvert v_p \rvert}^\frac{2 N p}{N + \alpha}\Bigr)^{\frac{N + \alpha}{2 N}(1 - \frac{1}{p})}}\\ + \frac{{\refstepcounter{cte} C_{\thecte}}}{R^{\beta - \alpha}} \Bigl(\int_{{{\mathbb R}}^N} {\lvert \nabla u_p \rvert}^2 + {\lvert u_p \rvert}^2 \int_{{{\mathbb R}}^N} {\lvert \nabla v_p \rvert}^2 + {\lvert v_p \rvert}^2 \Bigr)^\frac{p}{2}\end{gathered}$$ The constants come from the Hardy–Littlewood–Sobolev inequality and from the Sobolev inequality; they can thus be taken to be uniform as $p \to 2$. Since $u_p$ and $v_p$ remain bounded in $H^1 ({{\mathbb R}}^N)$ as $p \to 2$, we have $$\begin{gathered} \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p \rvert}^p\bigr) {\lvert v_p \rvert}^p\\ \le {\refstepcounter{cte} C_{\thecte}}\Bigl(\sup_{a \in {{\mathbb R}}^N} \int_{B_R (a)} {\lvert u_p \rvert}^\frac{2 N p}{N + \alpha}\int_{B_R (a)} {\lvert v_p \rvert}^\frac{2 N p}{N + \alpha}\Bigr)^{\frac{N + \alpha}{2 N}(1 - \frac{1}{p})} + \frac{{\refstepcounter{cte} C_{\thecte}}}{R^{\beta - \alpha}}.\end{gathered}$$ In view of , there exists $R > 0$ such that $$\liminf_{p \to 2} \sup_{a \in {{\mathbb R}}^N} \Bigl(\int_{B_R (a)} {\lvert u_p \rvert}^\frac{2 N p}{N + \alpha}\int_{B_R (a)} {\lvert v_p \rvert}^\frac{2 N p}{N + \alpha} \Bigr) > 0.$$ In particular, there exists a sequence of vectors $(a_n)_{n \in {{\mathbb N}}}$ in ${{\mathbb R}}^N$ and a sequence of real numbers $(p_n)_{n \in {{\mathbb N}}}$ in $(2, \frac{N + \alpha}{N - 2}) $ converging to $2$ such that $$\label{ineqLiminfuv} \liminf_{n \to \infty} \Bigl(\int_{B_R (a_n)} {\lvert u_{p_n} \rvert}^\frac{2 N {p_n}}{N + \alpha}\int_{B_R (a_n)} {\lvert v_{p_n} \rvert}^\frac{2 N {p_n}}{N + \alpha} \Bigr) > 0.$$ There exists thus a subsequence $(n_k)_{k \in {{\mathbb N}}}$ such that the sequences of functions $(u_{p_{n_k}} (\cdot - a_{n_k}))_{k \in {{\mathbb N}}}$ and $(v_{p_{n_k}} (\cdot - a_{n_k}))_{k \in {{\mathbb N}}}$ both converge weakly in the space $H^1 ({{\mathbb R}}^N)$ to some functions $u$ and $v\in H^1({{\mathbb R}}^N)$. By our contradiction assumption and the Rellich–Kondrachov compactness theorem, we have $u \ge 0$. 
By the classical Rellich–Kondrachov compactness theorem, it follows from that $$\begin{aligned} \label{eqUPositivity} \int_{B_R} {\lvert u \rvert}^\frac{2 N p}{N + \alpha}& > 0& &\text{ and }& \int_{B_R} {\lvert v \rvert}^\frac{2 N p}{N + \alpha} & > 0.\end{aligned}$$ We also observe that by definition of $v_p$, $$\{x \in {{\mathbb R}}^N {\;:\;}v_p (x) < 0\} \subseteq \{ x \in {{\mathbb R}}^N {\;:\;}u_p (x) \le 0\},$$ so that by the Rellich–Kondrachov theorem, we have $$\label{eqUNegativity} \{ x \in {{\mathbb R}}^N {\;:\;}v (x) < 0\} \subseteq \{ x \in {{\mathbb R}}^N {\;:\;}u (x) \le 0\}.$$ Since by the classical Rellich–Kondrachov compactness theorem, the sequence $({\lvert u_{p_{n_k}} (\cdot - a_{n_k}) \rvert}^{p_n})_{k \in {{\mathbb N}}}$ converges locally in measure to ${\lvert u \rvert}^2$ and is bounded in $L^{2 N/(N + \alpha)} ({{\mathbb R}}^N)$, it converges weakly to ${\lvert u \rvert}^2$ in the space $L^{2N/(N + \alpha)} ({{\mathbb R}}^N)$ . In view of the Hardy–Littlewood–Sobolev inequality and the continuity of bounded linear operators for the weak topology, the sequence $(I_\alpha \ast {\lvert u_{p_{n_k}} (\cdot - a_{n_k}) \rvert}^{p_{n_k}})_{k \in {{\mathbb N}}}$ converges weakly to $I_\alpha \ast {\lvert u \rvert}^2$ in $L^{2N/(N - \alpha)} ({{\mathbb R}}^N)$. Since $(({\lvert u_{p_{n_k}} \rvert}^{p_{n_k} - 2} u_{p_{n_k}})(\cdot - a_{n_k}))_{k \in {{\mathbb N}}}$ converges to $u$ in $L^2_{\mathrm{loc}} ({{\mathbb R}}^N)$, we conclude that $$\bigl(I_\alpha \ast {\lvert u_{p_{n_k}} (\cdot - a_{n_k}) \rvert}^{p_{n_k}}\bigr)\bigl({\lvert u_{p_{n_k}} \rvert}^{p_ {n_k} - 2} u_{p_{n_k}}\bigr)\,(\cdot - a_{n_k}) \to \bigl(I_\alpha \ast {\lvert u \rvert}^2\bigr)u$$ in $L^{2N/(2 N - \alpha)} ({{\mathbb R}}^N)$, as $k \to \infty$. By construction of the function $u_p$, we deduce from that the function $u \in H^1 ({{\mathbb R}}^N)$ is a weak solution of the problem $$-\Delta u + u = \bigl(I_{\alpha} \ast {\lvert u \rvert}^2\bigr) u.$$ By the classical bootstrap method for subcritical semilinear elliptic problems applied to the Choquard equation (see for example ), $u$ is smooth. Since $u \ge 0$, by the strong maximum principle we have either $u=0$ or $u > 0$, in contradiction with and . The claim is thus proved by contradiction. 
There exists $R > 0$ such that $$\limsup_{p \to 2} \sup_{a \in {{\mathbb R}}^N} \int_{B_R (a)} {\lvert u_p^+ \rvert}^\frac{2 N p}{N + \alpha} \int_{B_R (a)} {\lvert u_p^- \rvert}^\frac{2 N p}{N + \alpha} > 0.$$ We assume by contradiction that for every $R > 0$, $$\label{eqRepulsionContradiction} \lim_{p \to 2} \sup_{a \in {{\mathbb R}}^N} \int_{B_R (a)} {\lvert u_p^+ \rvert}^\frac{2 N p}{N + \alpha} \int_{B_R (a)} {\lvert u_p^- \rvert}^\frac{2 N p}{N + \alpha} = 0.$$ In view of [@GhimentiVanSchaftingen]\*[lemma 3.6]{} and since the families $(u_p^+)$ and $(u_p^-)$ are both bounded in $H^1 ({{\mathbb R}}^N)$, we have, as in the proof of claim \[claimNonzero\], for every $\beta \in (\alpha, N)$ and $R > 0$, $$\begin{gathered} \int_{{{\mathbb R}}^N} (I_\alpha \ast {\lvert u_p^+ \rvert}^p) {\lvert u_p^- \rvert}^p\\ \le {\refstepcounter{cte} C_{\thecte}}\Bigl(\sup_{a \in {{\mathbb R}}^N} \int_{B_R (a)} {\lvert u_p^+ \rvert}^\frac{2 N p}{N + \alpha}\int_{B_R (a)} {\lvert u_p^- \rvert}^\frac{2 N p}{N + \alpha}\Bigr)^{\frac{N + \alpha}{2 N}(1 - \frac{1}{p})} + \frac{{\refstepcounter{cte} C_{\thecte}}}{R^{\beta - \alpha}}.\end{gathered}$$ By our assumption , we thus have $$\label{eqNoInteraction} \lim_{p \to 2} \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p^+ \rvert}^p\bigr) {\lvert u_p^- \rvert}^p = 0.$$ We now define the pair $(t_{p, +}, t_{p, -}) \in (0, \infty)^2$ by the condition that $t_{p, \pm} u_p^\pm \in \mathcal{N}_{0, p}$, or equivalently, $$t_{p, \pm}^{2 p - 2} = \frac{\displaystyle \int_{{{\mathbb R}}^N}{\lvert \nabla u_p^\pm \rvert}^2 + {\lvert u_p^\pm \rvert}^2}{ \displaystyle \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p^\pm \rvert}^p\bigr) {\lvert u_p^\pm \rvert}^p} =\frac{\displaystyle \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p \rvert}^p\bigr) {\lvert u_p^\pm \rvert}^p}{ \displaystyle \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p^\pm \rvert}^p\bigr) {\lvert u_p^\pm \rvert}^p} =1 + o (1)$$ as $p \to 2$, in view of claim \[claimNonzero\] and of . Since the family $u_p$ remains bounded in $H^1 ({{\mathbb R}}^N)$, we have $$\lim_{p \to 2} \mathcal{A}_p (t_{p, +} u_p^+ + t_{p, -} u_p^-) - \mathcal{A}_p (u_p) = 0.$$ In view of the identity $$\begin{gathered} \mathcal{A}_p (t_{p, +} u_p^+ + t_{p, -} u_p^-)\\ = \mathcal{A}_p (t_{p, +} u_p^+) + \mathcal{A}_p(t_{p, -} u_p^-) -\frac{t_{p, +}^p t_{p, -}^p}{p} \int_{{{\mathbb R}}^N} \bigl(I_\alpha \ast {\lvert u_p^+ \rvert}^p\bigr) {\lvert u_p^- \rvert}^p\end{gathered}$$ and by , we conclude that $$\liminf_{p \to 2} \mathcal{A}_p (u_p) \ge 2 \liminf_{p \to 2} c_{0, p}.$$ By claim \[claimConstructionSequence\] and by proposition \[continuityGroundstate\], this implies that $$c_{\mathrm{nod}, 2} \ge 2 c_{0, 2},$$ in contradiction with proposition \[propositionStrictInequality\]. We are now in a position to conclude the proof. Up to a translation, there exist $R>0$ and a sequence $(p_n)_{n \in {{\mathbb N}}}$ in $(2, \frac{N + \alpha}{N - 2})$ such that $p_n {\searrow}2$ as $n \to \infty$, $$\liminf_{n \to \infty} \int_{B_R} {\lvert u_{p_n}^\pm \rvert}^\frac{2 N p_n}{N + \alpha} > 0$$ and the sequence $(u_{p_n})_{n \in {{\mathbb N}}}$ converges weakly in $H^1 ({{\mathbb R}}^N)$ to some function $u \in H^1 ({{\mathbb R}}^N)$. As in the proof of claim \[claimNonzero\], by the weak convergence and by the classical Rellich–Kondrachov compactness theorem, we have $\mathcal{A}'_2 (u) = 0$ and $u^\pm \ne 0$, whence $u \in \mathcal{N}_{2, \mathrm{nod}}$. 
We also have, by the weak lower semicontinuity of the norm, $$\begin{split} \liminf_{n \to \infty} \mathcal{A}_{p_n} (u_{p_n}) &= \liminf_{n \to \infty} \Bigl(\frac{1}{2} - \frac{1}{2 p_n} \Bigr)\int_{{{\mathbb R}}^N} {\lvert \nabla u_{p_n} \rvert}^2 + {\lvert u_{p_n} \rvert}^2\\ &\ge \frac{1}{4} \int_{{{\mathbb R}}^N} {\lvert \nabla u \rvert}^2 + {\lvert u \rvert}^2 =\mathcal{A}_2 (u). \end{split}$$ Since $u \in \mathcal{N}_{2, \mathrm{nod}}$, we have $\mathcal{A}_2 (u) \ge c_{\mathrm{nod}, 2}$, while claim \[claimConstructionSequence\] gives $\limsup_{n \to \infty} \mathcal{A}_{p_n} (u_{p_n}) \le c_{\mathrm{nod},2}$; we conclude that $\mathcal{A}_2 (u) = c_{\mathrm{nod}, 2}$. In claim \[claimNonzero\], the study of the renormalised negative part to prevent vanishing is reminiscent of the idea of taking the renormalised approximate solution to bypass the Ambrosetti–Rabinowitz superlinearity condition .
--- abstract: 'Most of the world’s poorest people come from rural areas and depend on their local ecosystems for food production. Recent research has highlighted the importance of self-reinforcing dynamics between low soil quality and persistent poverty but little is known on how they affect poverty alleviation. We investigate how the intertwined dynamics of household assets, nutrients (especially phosphorus), water and soil quality influence food production and determine the conditions for escape from poverty for the rural poor. We have developed a suite of dynamic, multidimensional poverty trap models of households that combine economic aspects of growth with ecological dynamics of soil quality, water and nutrient flows to analyze the effectiveness of common poverty alleviation strategies such as intensification through agrochemical inputs, diversification of energy sources and conservation tillage. Our results show that (i) agrochemical inputs can reinforce poverty by degrading soil quality, (ii) diversification of household energy sources can create possibilities for effective application of other strategies, and (iii) sequencing of interventions can improve effectiveness of conservation tillage. Our model-based approach demonstrates the interdependence of economic and ecological dynamics which preclude blanket solution for poverty alleviation. Stylized models as developed here can be used for testing effectiveness of different strategies given biophysical and economic settings in the target region.' author: - 'Sonja Radosavljevic$^{1,*}$, L. Jamila Haider$^1$, Steven J. Lade$^1$, Maja Schlüter$^1$' bibliography: - 'ref.bib' title: 'Effective alleviation of rural poverty depends on the interplay between productivity, nutrients, water and soil quality' --- [ ***Keywords—*** [**Keywords:**]{} poverty trap, dynamical system, multistability, agroecosystem, phosphorus, soil quality ]{} Introduction ============ How to alleviate global poverty and eradicate hunger in places with low agricultural productivity are among humanity’s greatest challenges. The concept of poverty traps as situations characterized by persistent, undesirable and reinforcing dynamics [@Haider] is increasingly being used to understand the relationship between persistent poverty and environmental sustainability [@Barrett15; @BarrettCo; @Lade]. How poverty and environmental degradation are conceptualized and represented in models can inform development interventions and thereby influence the effectiveness of those interventions [@Lade]. Previous poverty trap models have focused on environmental quality or pollution [@BarroSala; @Smulders; @Xepapadeas], neglecting social-ecological interactions; have illustrated how positive feedback between wealth and technology can increase inequality and result in poverty traps through resource degradation [@Mirza]; have investigated relations between human health and poverty [@ng]; have used one-dimensional models that can lead to simplified conclusions and inappropriate policy outcomes [@Kraay]; have been static models that cannot capture dynamic phenomena such as traps and feedbacks [@Barrett15]; or have been highly abstracted [@Lade]. Biophysical complexity is not often considered in poverty trap models and relations between agricultural interventions and social-ecological poverty trap dynamics remain unexplored. 
Partially because of this, development efforts tend to focus on blanket solutions, such as the ‘big push’ of promoting external asset inputs, while neglecting a multitude of other factors affecting poverty. [@Lade] highlighted the importance of linking economic, natural and human factors in explaining poverty traps and concluded that the usefulness of interventions depends on context, particularly the relationship between poverty and environmental degradation. We build on this study as a conceptual framework to address knowledge gaps regarding the interplay between poverty and the biophysical environment in three ways: (1) we explore how biophysical complexity of the household-farm social-ecological system influences the dynamics of poverty traps in agroecosystems, (2) we assess the impact of development interventions on the dynamics of the system, and (3) we test the effectiveness of interventions (Figure \[Figure1\]). To this end we have developed a series of dynamical systems models that we use to test diverse sequences of interventions for alleviating poverty. We describe biophysical complexity through factors that affect crop growth and limit food production [@Drechsel; @Rockstrom2000], such as nutrients (especially phosphorus), water and soil quality. First, phosphorus is thought to have crossed a threshold of overuse at the global scale, leading to environmental consequences such as eutrophication [@Rockstrom09], acidification [@Guo] and the introduction of environmentally persistent chemicals or harmful elements into the soil [@Carvalho; @Pizzol; @Roberts; @Schnug]. However, at a local level many of the world’s poorest areas (e.g. Sub-Saharan Africa) suffer from a lack of soil nutrients, of which phosphorus is one of the main limiting factors for food production [@Nz; @Verde]. Research indicates that global demand for phosphorus will rise over the remainder of the 21st century. At the same time, the supply of high-quality and accessible phosphate rock is likely to peak within the next few decades, leading to increases in prices and decreases in affordability, mostly for low-income countries [@Cordell]. Phosphorus application therefore presents a ‘double-edged sword’: in some cases it is necessary to overcome extreme levels of poverty and soil nutrient deficiency, i.e. to break a poverty trap [@Lade], but in other cases over-application of fertilizers can have severe negative environmental consequences. A second critical factor for crop growth is water. Rainfed agriculture plays a dominant role in food production, particularly in some of the poorest areas of the world, such as sub-Saharan Africa. Yield gaps are large and often caused by rainfall variability in occurrence and amount rather than by a total lack of water [@Rockstrom2000]. Because of this, investing in rainwater harvesting, water management and conservation practices, such as conservation tillage, is an important strategy for increasing food security and improving livelihoods. In small-scale semi-arid rainfed farming, these practices have proved useful for mitigating droughts and dry spells [@Rockstrom2003] and for allowing diversification and the cultivation of high-value crops, which can be an important poverty alleviation strategy [@Burney]. A third critical factor for crop growth is soil quality. It reflects complex interactions between the soil’s physical, chemical and biological properties, including environmental quality and the soil’s contributions to health, food production and food quality.
Including it in models brings an additional level of realism and might help explain human-environment relations [@Altieri; @Bunemann; @Parr; @Verhulst; @Thrupp]. Agricultural interventions are a common strategy for poverty alleviation in developing countries. The interventions we consider here are largely carried out by actors external to the local community, such as non-governmental organisations (NGOs) or government programmes. For example, in the quest for an ‘African Green Revolution’, interventions to increase crop yields have been driven by major cross-continental initiatives (Alliance for a Green Revolution in Africa), Millennium Villages Programmes (third-party funded), donors (the U.S. government’s Feed the Future program), and national governments, with NGOs implementing programmes at a local scale (Scoones and Thompson, 2011). In our models we focus on the implementation level of agricultural interventions. Commonly used interventions include inputs of fertilizers or improved seeds in the form of agricultural intensification schemes, conservation tillage, and the use of manure as a fertilizer while diversifying household energy sources. An intervention may influence one or more of the factors (assets, phosphorus, water or soil quality), thus ultimately influencing the dynamics of the whole agroecosystem. Since there are several factors at play, poverty alleviation might require more than one intervention to be effective. ![We investigate how phosphorus, soil quality and water interact with crop production and household assets. We treat a model of this household-farm social-ecological system (middle section) with different combinations of interventions (left section) and observe the resulting poverty trap dynamics (right section). Some interventions involve households investing assets to improve phosphorus, soil quality and/or water levels (dashed line). In the model, soil quality can self-regenerate to a limited extent but phosphorus and water are reliant on continual replenishment.[]{data-label="Figure1"}](Doc2.pdf){width="\linewidth"} The aim of this paper is to develop a series of models that represent the interlinked dynamics of assets, phosphorus, water and soil quality and allow us to investigate their effects on the low-productivity poverty trap of many sub-Saharan communities [@Barrett06; @Barrett08; @Tittonell]. Furthermore, we use the models to assess the effectiveness of different development interventions for various household-farm initial conditions. We begin by constructing a dynamical system model of an agroecosystem prior to any agricultural intervention and continue by developing three models representing changes in the dynamics of the agroecosystem due to agricultural interventions. Model assumptions are based on empirical evidence from the literature on nutrients, soil quality, water and economic aspects of poverty in arid areas as well as expert interviews (Table \[Table1\]). We first analyse the baseline model without interventions and then sequentially assess the effectiveness of different alleviation strategies and their combinations (see Table 2 for a summary of the results and insights). We conclude by discussing our results and insights in relation to other theoretical and empirical work, and their importance for development practice and future research. The poverty trap models ======================= We use systems of nonlinear ordinary differential equations to set up a series of multidimensional dynamical systems models of poverty traps.
We begin by setting up a model which describes a household-farm system prior to any intervention and continue by presenting models incorporating different agricultural interventions. Table 1 contains our main assumptions about important factors for food production and the relationships between them, derived from an extensive literature review and expert interviews. We use empirical evidence about poverty and agricultural production in arid regions, particularly Sub-Saharan Africa, to extend a one-dimensional theoretical poverty trap model towards a multi-dimensional and more realistic model.

**Model assumptions and literature:**

- **The baseline model:** Rainfed agriculture [@Akhtar; @Rockstrom2000; @Rockstrom2003; @Rockstrom2010]. Manure used for household energy [@Int; @Mekonnen; @Niguisse]. Agrochemicals (artificial fertilizers) are not used [@Druilhe]. Water, phosphorus and assets are necessary for crop production [@Kataria].
- **Scenario 1: Input of agrochemicals.** 1a: Endogenous strategy (agrochemicals purchased with savings). Agrochemicals increase the phosphorus level in the soil, but may have a negative effect on soil quality [@Guo; @Loreau; @Pizzol; @Roberts; @Schnug]. Soil quality can regenerate [@Bunemann; @Smulders; @Xepapadeas]. Improved water conditions are enabled by rainwater harvesting [@Enfors08; @Enfors13; @Yosef]. 1b: Exogenous strategy (purchasing through external support or a loan). Strong negative effect of agrochemicals on soil quality [@Geiger; @Pizzol; @Roberts; @Savci; @Schnug].
- **Scenario 2: Diversification of household energy sources.** Different household energy sources in SSA; diverse energy sources allow manure to be used as fertiliser instead of fuel [@Int]. Manure improves soil quality and nutrient levels [@Bationo; @DeAngelis; @Kihanda; @McConville; @Pretty; @Probert; @Wanjekeche]. Improved water conditions are enabled by rainwater harvesting [@Enfors13; @Yosef].
- **Scenario 3: Conservation tillage.** 3a: Conservation tillage with phosphorus as the limiting factor and no additional nutrient input [@DeAngelis; @Ito; @McConville; @Pretty; @Verhulst]. 3b: Conservation tillage with phosphorus as the limiting factor and artificial fertilizer/manure application [@DeAngelis; @Ito; @McConville; @Pretty; @Verhulst; @Wanjekeche]. 3c: Conservation tillage with water as the limiting factor [@Asmamw].

These assumptions enable us to construct causal loop diagrams (Figures 2-4) and to choose state variables and functional forms for our dynamical systems. The key assumptions are:

1. Phosphorus content of soils. Agricultural production removes phosphorus from crop-producing soils, which, if not balanced by agroecological methods [@Altieri] or the application of organic or artificial fertilizers, limits crop growth and leads to lower yields [@Drechsel].
2. Water content of soils. Although rainfed agriculture is a widespread practice, it cannot always provide optimal water conditions, especially under the conditions of climate change.
3. Soil quality. Soil quality is a more complex variable than the nutrient content of soils or its capacity to produce crops alone. Accordingly, we model its dynamics separately from that of phosphorus and acknowledge that it might be self-regenerating.
4. Assets. Assets such as agrochemicals, improved seeds, and tools used for agriculture support agricultural production and can be a limiting factor for people living below the poverty line [@Druilhe; @Kataria].
We extend standard neoclassical dynamics of assets in which profit can be consumed or saved for investment in future production. Specifically, we implement a ‘savings trap’, in which households have a lower savings rate at low asset levels, leading to a trap in which they are unable to accumulate enough assets to escape poverty [@Kraay]. While each of these variables has its own dynamics, they also interact in complex ways (Figure 2A). Understanding the resulting dynamics is important for designing effective poverty alleviation strategies. We use the household scale because we seek to investigate the consequences of household-level decision making and because agricultural interventions often focus on smallholder farms [@Nz; @Probert; @Rockstrom2000; @Verde]. We analyse the dynamics of the system by studying its attractors and basins of attraction. An attractor is a state (or set of states) to which the system tends over time starting from an initial state. It is defined by the values of the state variables, e.g. assets, phosphorus, water or soil quality. A basin of attraction is the set of all states of the system which tend over time towards the same attractor. The baseline model ------------------ The purpose of the baseline model is to describe the dynamics of a typical low-income farming household in sub-Saharan Africa. Due to a lack of assets and external inputs, artificial fertilizers are not used. Manure is used as a household energy source and rain is the only source of water. The key factors for food production are assets, phosphorus, water and soil quality, and here we briefly explain their roles as state variables in the models. The neoclassical economic theory of growth defines production output $y$ as a function $f(k)$, where $k$ is capital. In the economic literature, it is common to consider different forms of physical capital, such as infrastructure or machinery, but here we include per capita assets $k_a$, phosphorus $k_p$, water $k_w$ and soil quality $k_q$. Like previous works on poverty traps [@Lade; @Kraay], we use a Solow model [@BarroSala] to model asset dynamics, $$\begin{aligned} \label{solow} \frac{dk_a}{dt} &= s(k_a)f(k_a,k_p,k_w,k_q)-(\delta_a+r)k_a,\end{aligned}$$ where $s(k_a)$ is a nonlinear savings rate [@Kraay] and $f(k_a,k_p,k_w,k_q)$ is a production function with assets, phosphorus, water and soil quality as necessary variables. In other words, the value of the production function $f$ is zero given zero assets, phosphorus, water or soil quality, making crop production impossible in those cases. We assume that the function $f$ is the Cobb-Douglas production function of the form $$\begin{aligned} \label{cobb} f(k_a,k_p,k_w,k_q)=Ak_a^{\alpha_a}k_p^{\alpha_p}k_w^{\alpha_w}k_q^{\alpha_q}, \quad A>0, \quad \alpha_a+\alpha_p+\alpha_w+\alpha_q \le 1,\end{aligned}$$ or some of its simpler variants $$\begin{aligned} \label{cobb1} f(k_a,k_p,k_w)=Ak_a^{\alpha_a}k_p^{\alpha_p}k_w^{\alpha_w}, \quad A>0, \quad \alpha_a+\alpha_p+\alpha_w \le 1,\end{aligned}$$ or $$\begin{aligned} \label{cobb2} f(k_a,k_p,k_q)=Ak_a^{\alpha_a}k_p^{\alpha_p}k_q^{\alpha_q}, \quad A>0, \quad \alpha_a+\alpha_p+\alpha_q \le 1,\end{aligned}$$ where $A$ is a constant productivity term. The parameters $\delta_a$ and $r$ in equation (\[solow\]) denote the asset depreciation rate and the population growth rate, respectively. Both of them affect the asset growth rate negatively. For more details on the derivation of equation (\[solow\]) we refer readers to Appendix A.
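To make the role of the production function concrete, the following is a minimal Python sketch (our own illustration, not part of the authors' Mathematica code) of the Cobb-Douglas form in equation (\[cobb\]). The numerical parameter values are illustrative assumptions only; the properties that matter are that the exponents sum to at most one and that output vanishes whenever any input is zero, which is what makes each type of capital 'necessary'.

```python
import numpy as np

def cobb_douglas(k_a, k_p, k_w, k_q, A=10.0,
                 alpha_a=0.3, alpha_p=0.3, alpha_w=0.3, alpha_q=0.1):
    """Cobb-Douglas production f = A * k_a^aa * k_p^ap * k_w^aw * k_q^aq."""
    assert alpha_a + alpha_p + alpha_w + alpha_q <= 1.0  # decreasing returns to scale
    return A * k_a**alpha_a * k_p**alpha_p * k_w**alpha_w * k_q**alpha_q

print(cobb_douglas(2.0, 1.0, 1.5, 5.0))  # positive output when all inputs are present
print(cobb_douglas(2.0, 0.0, 1.5, 5.0))  # zero output: without phosphorus there is no crop
```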
An s-shaped function for the savings rate $s(k_a)$ allows the formation of savings traps [@Kraay]. We assume that it has the following form $$\begin{aligned} \label{savings} s(k_a)=\frac{s_1}{1+e^{-s_2k_a+s_3}}, \quad s_1,s_2>0,s_3\ge0.\end{aligned}$$ Phosphorus cycling in an agroecosystem begins with phosphorus in the soil. From there, it is taken up by plants and transported through the food web to consumers on or off the farm. By-products of food production, manure or human waste are usually not recycled for use as fertilizers (Table \[Table1\]). Even if there is no agricultural production, poor people may rely on collecting biomass to secure their livelihood. In addition, short intensive rainfalls, which occur more frequently, contribute to phosphorus loss by washing away top soil layers. Thus, the amount of phosphorus in the soil needed for crop growth is constantly declining, which we describe by the following equation: $$\begin{aligned} \label{p0} \frac{dk_p}{dt}= -\delta_pk_p, \quad \delta_p>0, \end{aligned}$$ where $\delta_p$ is the phosphorus loss rate. Phosphorus loss can depend on asset levels (increasing with assets to reflect consequences of intensified production), in which case the phosphorus loss rate can be written as $\delta_p(k_a)=\delta_p(1+\frac{d_1k_a}{d_2+k_a})$, where $d_1\ge 0$ and $d_2>0$. Having this more complicated phosphorus loss rate does not affect the qualitative behavior of the model, since the term in the brackets is always positive and the only solution of $\frac{dk_p}{dt}=0$ is $k_p=0$. Because of this we formulate our models using the asset-independent loss rate as in equation (\[p0\]). We assume that rain is the only water supply. A portion of rain water is used by plants for their growth, while the rest is lost due to evaporation, leaking or sinking into lower soil layers inaccessible to plants. Therefore, the water dynamics satisfies the following equation: $$\begin{aligned} \label{w0} \frac{dk_w}{dt} = r_w-\delta_wk_w, \quad r_w\ge 0, \, \delta_w>0, \end{aligned}$$ where $r_w$ is the amount of water gained by rainfall and $\delta_w$ is the water loss rate. Soil quality refers to the soil’s properties that enable food production, such as soil structure and the amount of pollutants or microorganisms, but excludes the soil’s nutrient content, since that is modeled through phosphorus and water. The purpose of having this variable is to introduce the biochemical and biophysical complexity of soil into the models and to enable modelling of the various influences human actions may have. Since soil quality can be related to populations of organisms that live in or on the soil or contribute to soil organic matter when they decompose, we assume that soil quality can regenerate (Table \[Table1\]) following logistic growth: $$\begin{aligned} \label{q0} \frac{dk_q}{dt} = r_qk_q\left(1-\frac{k_q}{Q}\right), \quad r_q\ge 0, \, Q>0,\end{aligned}$$ where $r_q$ is the soil quality recovery rate and $Q$ its carrying capacity. If soil quality represents the soil’s capacity to absorb pollution, then its values vary between zero and some positive upper bound $Q$, and the ecological processes that give this ability can be modeled using the logistic model.
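As a quick sanity check on the signs and saturation behaviour of the component equations above, here is a minimal Python sketch (our own illustration, not the authors' code). The parameter values are the illustrative ones used in the figure captions below (e.g. $s_1=0.1$, $s_2=10$, $s_3=20$, $\delta_p=1$, $r_w=1.5$, $\delta_w=0.5$, $r_q=1$, $Q=10$); nothing in the sketch depends on these exact numbers.

```python
import numpy as np

def savings_rate(k_a, s1=0.1, s2=10.0, s3=20.0):
    """S-shaped savings rate s(k_a) = s1 / (1 + exp(-s2*k_a + s3)), eq. (savings)."""
    return s1 / (1.0 + np.exp(-s2 * k_a + s3))

def dkp_dt(k_p, delta_p=1.0):
    """Pre-intervention phosphorus balance, eq. (p0): pure loss."""
    return -delta_p * k_p

def dkw_dt(k_w, r_w=1.5, delta_w=0.5):
    """Rainfed water balance, eq. (w0): constant input minus proportional loss."""
    return r_w - delta_w * k_w

def dkq_dt(k_q, r_q=1.0, Q=10.0):
    """Logistic soil-quality regeneration, eq. (q0)."""
    return r_q * k_q * (1.0 - k_q / Q)

# At low asset levels the savings rate is essentially zero (the savings trap),
# water relaxes towards its equilibrium r_w/delta_w, and phosphorus only decays.
print(savings_rate(0.5), savings_rate(5.0))   # ~0 versus ~s1
print(dkw_dt(3.0))                            # 0 at the water equilibrium k_w = 3
print(dkp_dt(2.0), dkq_dt(5.0))               # phosphorus loss; soil-quality regrowth
```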
The baseline scenario is represented by the causal loop diagram in Figure \[Figure2\] and the corresponding dynamical system $$\label{baseline1} \begin{aligned} \frac{dk_a}{dt} &= s(k_a)f(k_a,k_p,k_w)-(\delta_a+r)k_a, \\ \frac{dk_p}{dt} &= -\delta_pk_p, \\ \frac{dk_w}{dt} &= r_w-\delta_wk_w, \end{aligned}$$ where we used equations (\[solow\]), (\[p0\]) and (\[w0\]) to describe the dynamics of each variable and the function $f$ is given by (\[cobb1\]). ![Causal loop diagram (A) and state space plot (B) for system (\[baseline1\]) before interventions. Abbreviations denote: M manure, HES household energy source, CG crop growth, $k_a$ assets, $k_p$ phosphorus, $k_w$ water. The dashed line in panel (A) indicates that an insignificant amount of manure, if any, is used as a fertilizer. The blue disc in panel (B) represents the unique attractor of this system, whose basin of attraction is the whole state space. The parameters are $s_1=0.1, s_2=10, s_3=20, A=10, \alpha_a=0.3, \alpha_p=0.3, \alpha_w=0.3, \delta_a=1, \delta_p=1, r_w=1.5, \delta_w=0.5$. []{data-label="Figure2"}](NullH){width="0.8\linewidth"} In what follows, we will present three scenarios which describe common agricultural interventions. Application of agrochemicals, including artificial fertilisers, preserves the openness of the agroecosystem [@DeAngelis]. Two other interventions, conservation tillage and household energy diversification, lead to a more closed agroecosystem in which energy and matter are recycled internally. These interventions are sometimes accompanied by water-preserving techniques, and we include them in our models to show the effects of different water regimes. Scenario 1: Input of agrochemicals ---------------------------------- Using improved seeds is usually accompanied by the application of combinations of agrochemicals, such as fertilizers, herbicides and pesticides. Apart from the intended effect of increasing phosphorus levels in the soil, side effects such as soil acidification and loss of biodiversity have been observed (Table \[Table1\]). In order to describe these dual effects of agrochemicals and study the corresponding dynamics, we use assets, phosphorus and soil quality as state variables for the system. We assume that the household invests part of its assets in agrochemicals. We also assume that the household invests a part of its income into water management and, because of this, water is not a limiting factor for crop growth. The causal loop diagram is given in Figure \[Figure3\]A and the mathematical formulation of the model, obtained by modifying model (\[baseline1\]), reads as follows: $$\label{af} \begin{aligned} \frac{dk_a}{dt} &= s(k_a)f(k_a,k_p,k_q)-(\delta_a+r)k_a, \\ \frac{dk_p}{dt} &= I_p(k_a)-\delta_pk_p, \\ \frac{dk_q}{dt} &= r_qk_q\left(1-\frac{k_q}{Q}\right)-I_q(k_a)k_q, \end{aligned}$$ where $f$ is defined by (\[cobb2\]), $I_p(k_a)$ is the increase in phosphorus due to artificial fertilizer and $I_q(k_a)$ is the negative effect of agrochemicals on soil quality. The positive contribution of fertilizer to the soil’s phosphorus content is limited, and the same is true for the negative effect of agrochemicals on soil quality. We assume that these functions have the form $$I_p(k_a)=\frac{c_1k_a^2}{c_2+k_a^2} \quad\mbox{and}\quad I_q(k_a)=\frac{c_3k_a}{c_4+k_a}, \quad c_1,c_2,c_3,c_4>0.$$ The coupled system (\[af\]) incorporates the positive feedback between assets and phosphorus and the negative feedback between assets and soil quality.
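The saturating shape of these intervention terms is easy to check numerically. Below is a minimal Python sketch (our own illustration, using the Figure 3B parameter values $c_1=1$, $c_2=20$, $c_3=1$, $c_4=4$ purely as an example): $I_p$ saturates at $c_1$ and $I_q$ at $c_3$, so beyond some asset level additional agrochemical spending adds little phosphorus while continuing to suppress soil-quality regeneration.

```python
def I_p(k_a, c1=1.0, c2=20.0):
    """Phosphorus input from purchased agrochemicals; saturates at c1."""
    return c1 * k_a**2 / (c2 + k_a**2)

def I_q(k_a, c3=1.0, c4=4.0):
    """Per-unit soil-quality degradation rate from agrochemicals; saturates at c3."""
    return c3 * k_a / (c4 + k_a)

for k_a in (0.5, 2.0, 10.0, 50.0):
    print(f"k_a={k_a:5.1f}  I_p={I_p(k_a):.3f}  I_q={I_q(k_a):.3f}")
```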
The parameters $c_1$ and $c_2$ define the positive contribution of fertilizer to the soil’s phosphorus content, and the parameters $c_3$ and $c_4$ define the strength of the negative effect of agrochemicals on soil quality. Depending on the parameter values, the system can have a different number of attractors. ![Causal loop diagram and state space with attractors and basins of attraction for agrochemical use (A-C) and diversification of household energy sources (D-F). Abbreviations denote: M manure, HES household energy source, AF combination of artificial fertilizer, improved seeds and chemicals, CG crop growth, WM water management, $k_a$ assets, $k_p$ phosphorus, $k_w$ water and $k_q$ soil quality. The blue and green lines in (A) represent endogenous water management and agrochemicals input. The green line in (D) is for endogenous household energy sources. The blue and purple discs represent attractors and colored volumes are corresponding basins of attraction. (B) Good water conditions and mild negative effect of agrochemicals on soil quality; system (\[af\]) with $s_1=0.25, s_2=2.5, s_3=20, A=10, \alpha_a=0.4, \alpha_p=0.3, \alpha_q=0.2, \delta_a=0.7, c_1=1, c_2=20, c_3=1, c_4=4, \delta_p=0.2, r_q=1, Q=10$. (C) Good water conditions and strong negative effect of agrochemicals on soil quality; $c_3=4$. (E) Sufficient amount of nutrient-rich manure and good water conditions; system (\[energy\]) with $s_1=0.1, s_2=1, s_3=0, A=10, \alpha_a=0.3, \alpha_p=0.3, \alpha_q=0.2, \delta_a=0.5, c_1=1, c_2=5, c_3=1, c_4=1.8, \delta_p=0.2, r_w=1, c_5=1, c_6=40, \delta_w=1$. (F) Insufficient amount of or nutrient-poor manure and good water conditions; $c_1=0.5$.[]{data-label="Figure3"}](AFendo){width="\linewidth"} ![](AFStrong){width="\linewidth"} ![](ManureGood.jpg){width="\linewidth"} ![](ManureNoHarvest.jpg){width="\linewidth"} Scenario 2: Diversification of household energy sources ------------------------------------------------------- According to the assumptions in Table \[Table1\], manure is a valuable fertilizer, but most of it is used as a household energy source. We model a situation in which a household invests some of its assets in new energy sources and more fuel-efficient technologies and uses manure as a fertilizer. We also assume that farmers invest part of their assets in rainwater harvesting technologies, which improves water conditions.
This leads us to the causal loop diagram in Figure 3D and the following dynamical system: $$\label{energy} \begin{aligned} \frac{dk_a}{dt} &= s(k_a)f(k_a,k_p,k_w)-(\delta_a+r)k_a, \\ \frac{dk_p}{dt} &= I_p(k_a,k_p)-\delta_pk_p, \\ \frac{dk_w}{dt} &= r_w+I_w(k_a)k_w-\delta_wk_w, \end{aligned}$$ where $f$ is given by (\[cobb1\]) and the functions $I_p(k_a,k_p)$ and $I_w(k_a)$ have the form $$I_p(k_a,k_p) = \frac{c_1k_a^2}{c_2+k_a^2}\cdot\frac{c_3k_p}{c_4+k_p} \quad\mbox{and}\quad I_w(k_a)=\frac{c_5k_a^2}{c_6+k_a^2}, \quad c_i>0, i=\overline{1,6}.$$ The first factor in the function $I_p(k_a,k_p)$ is related to the amount of manure that can be gained by energy source diversification. We choose an s-shaped function of assets since the available manure is limited. The second factor in the function $I_p(k_a,k_p)$ is related to manure quality, measured by the amount of phosphorus the manure contains; this content depends on the environment (low in a degraded environment, high in a good one). Water gains are modelled using the function $I_w(k_a)$. Scenario 3: Conservation tillage -------------------------------- Conservation tillage is a method which includes a range of tillage practices aimed at increasing water infiltration and nutrient conservation and decreasing water and nutrient loss through evaporation, leaching and erosion [@Busari]. Since conservation tillage does not provide additional nutrient or water input, we model it using the baseline model (\[baseline1\]) and Figure \[Figure2\] with reduced phosphorus and water loss rates. Depending on its effectiveness, conservation tillage can reduce or even eliminate phosphorus or water loss. The outcome of the intervention in the first case is still depletion of phosphorus, but at a slower pace than in the baseline model. If tillage eliminates phosphorus loss, $\frac{dk_p}{dt}=0$, the corresponding dynamical system is then $$\label{ct} \begin{aligned} \frac{dk_a}{dt} &= s(k_a)f(k_a,k_w)-(\delta_a+r)k_a, \\ \frac{dk_w}{dt} &= r_w-\delta_wk_w, \end{aligned}$$ where $f(k_a,k_w)=Ak_a^{\alpha_a}k_w^{\alpha_w}$ and the productivity term $A$ incorporates the effect of the constant phosphorus level on crop growth. ![Causal loop diagram (A) and state space plot for conservation tillage which eliminates phosphorus loss (B) for high phosphorus content and good water conditions; system (\[ct\]) with $s_1=0.1, s_2=10, s_3=20, A=6, \alpha_a=0.4, \alpha_w=0.4, \delta_a=1, r_w=1, \delta_w=0.2$.[]{data-label="Figure4"}](ClosedHigh){width="0.8\linewidth"} Results ======= Multi-dimensional poverty trap model of household agriculture (baseline model) ------------------------------------------------------------------------------ The baseline model (system (\[baseline1\]); Figure 2) represents agroecological dynamics at the household-farm scale in sub-Saharan Africa prior to any agricultural intervention. The soil gradually loses phosphorus, reducing crop growth and income and pushing the household to a poor state characterized by a low asset level and phosphorus depletion (blue disc in Figure 2B). Regardless of the initial levels of water, nutrients and assets, the household-farm system always reaches the low well-being attractor due to losses of phosphorus and lack of replenishment. Short-term external asset inputs provide only a change in the initial conditions, but leave the attractor unchanged, and because of this they are unable to alleviate persistent poverty.
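This insensitivity to initial conditions can be reproduced with a few lines of numerical integration. The sketch below (our own Python illustration, not the authors' Mathematica code) integrates system (\[baseline1\]) with the Figure 2 parameter values; the population growth rate $r$ is not listed in that caption, so we set $r=0$ as a simplifying assumption. A 'poor' and an 'asset-rich' household are started from different initial asset levels, and both end up at the same poor attractor with $k_a \approx 0$, $k_p \approx 0$ and $k_w \approx r_w/\delta_w$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Figure 2 parameter values; r = 0 is our simplifying assumption.
s1, s2, s3 = 0.1, 10.0, 20.0
A, aa, ap, aw = 10.0, 0.3, 0.3, 0.3
delta_a, delta_p, r_w, delta_w, r = 1.0, 1.0, 1.5, 0.5, 0.0

def baseline(t, y):
    k_a, k_p, k_w = y
    ka_pos, kp_pos, kw_pos = np.maximum(y, 0.0)   # guard powers against tiny negative overshoots
    s = s1 / (1.0 + np.exp(-s2 * k_a + s3))
    f = A * ka_pos**aa * kp_pos**ap * kw_pos**aw
    return [s * f - (delta_a + r) * k_a,  # assets: savings-financed investment minus depreciation
            -delta_p * k_p,               # phosphorus: pure loss, eq. (p0)
            r_w - delta_w * k_w]          # water: rainfall minus loss, eq. (w0)

for k_a0 in (0.5, 5.0):                   # poor versus asset-rich initial condition
    sol = solve_ivp(baseline, (0.0, 25.0), [k_a0, 2.0, 3.0], rtol=1e-8)
    k_a, k_p, k_w = sol.y[:, -1]
    print(f"k_a(0) = {k_a0}:  k_a -> {k_a:.3f}, k_p -> {k_p:.3f}, k_w -> {k_w:.3f}")
# Both trajectories approach the same poor attractor, so a one-off asset transfer
# only shifts the starting point and does not change the long-run outcome.
```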
This result suggests that short-term external poverty alleviation interventions are not sufficient in this situation; some structural change in household-farm dynamics is required to enable the existence of a non-poor state. Disrupting the negative phosphorus balance is necessary, and it can be achieved by external phosphorus inputs or by preventing its losses. In what follows we analyse Scenarios 1, 2 and 3 and assess their poverty alleviation potential. Agrochemical inputs can reinforce poverty and degradation of soil quality ------------------------------------------------------------------------- Using improved seeds and agrochemicals is a common practice in agricultural intensification, but there is evidence of both success [@Carvalho; @Weight] and failure [@Fischer]. In order to visualize the outcome of agrochemical applications, we assume constant water levels and present the case where production is not water-limited (system (\[af\]); Figure 3A-C). The case in which moderate agrochemical application [@Snapp] leads to mildly negative effects on soil quality (Table \[Table1\]) is represented in Figure 3B. In addition to the poor attractor (blue disc), an alternative high well-being attractor exists in our model (purple disc). At high asset levels farmers have the resources to purchase agrochemicals, which increases farm productivity, although at the cost of soil quality. At low asset levels, farmers do not have the resources to increase productivity by purchasing agrochemicals and therefore the low asset level attractor remains in the model (blue disc). To escape the low asset level poverty trap, agrochemical application would therefore need to occur in conjunction with an external asset input. Evidence is accumulating that agrochemical application can be severely harmful for soil quality (Table \[Table1\]). This case is represented in Figure 3C, where the strong negative effect of agrochemicals is obtained by increasing $c_3$. If the harmful effects of agrochemicals on productivity via degradation of soil quality outweigh improvements in productivity via improved soil nutrient levels, poverty is the likely outcome. Our results show how the dual consequences, both positive and negative, of agrochemical inputs can in fact reinforce as well as alleviate poverty, depending on which effect is stronger in a specific situation. Application of agrochemicals can also fail since they need to be recurrently applied, which makes farmers dependent on government support [@Gerber] or forces farmers to prioritise investment in artificial fertilizers, though we do not model these mechanisms here. Prioritising diversification of energy sources can establish the conditions for effective application of other strategies ------------------------------------------------------------------------------------------------------------------------- Application of manure to soils can improve crop yields by increasing the nutrient content of the soil and improving soil quality [@Bationo; @Kihanda; @Probert]. However, most manure in Sub-Saharan Africa is used as a household energy source [@Mekonnen; @Niguisse], and little is left to be used as a fertilizer. Diversifying household energy sources using gas, charcoal or electricity allows manure to be used for soil fertilization [@Int], and investment in rainwater harvesting technologies improves water levels [@Yosef].
In practice, such investments in alternative energy and rainwater technologies may also require capacity building and behavioural change as well as access to an affordable supply of the alternative technology. Here we provide insights gained from model (\[energy\]). The case in which there is a sufficient quantity of nutrient-rich manure is shown in Figure 3E. Investment in energy diversification and rainwater harvesting can introduce an alternative attractor at higher phosphorus levels (purple disc). An attractor remains at low phosphorus levels (blue disc), where households remain trapped in poverty because they cannot afford sufficient spending to increase nutrient or water levels or because the soil is too degraded. To transition out of the poor and degraded attractor to the new attractor would require an initial input of external assets and an increase in phosphorus levels in the soil prior to manure application. If insufficient quantities of manure are available, or its nutrient content is low, only the initial poor and degraded attractor remains (Figure 3F). In such cases, manure for fertilization is inadequate for obtaining higher yields and escaping poverty, as was observed, for example, in [@Wanjekeche]. A similar result is obtained if the total rainfall is too low. In these cases, water management cannot improve water conditions, leading to low production, and even large inputs of external assets will not allow households to escape their poverty trap. Our results show that energy diversification and rainwater harvesting can be prerequisites for effective application of other poverty alleviation strategies. In a highly degraded environment, it may be necessary to also follow diversification of energy sources with a combination of manure and artificial fertilisers [@Ito] to raise the phosphorus content to the level needed for production. Sequencing of interventions matters for effectiveness of conservation tillage ----------------------------------------------------------------------------- Conservation tillage is a popular intervention method in Sub-Saharan Africa [@Enfors11; @FR]. It includes a range of tillage practices aimed at increasing water infiltration and nutrient conservation and decreasing water evaporation, nutrient leaching and soil erosion [@Busari]. Conservation tillage does not provide any external inflows of nutrients or water, but it reduces their loss. We used systems (\[baseline1\]) and (\[ct\]) to study how the effectiveness of conservation tillage, water conditions, phosphorus and asset levels affect households in poverty traps. If conservation tillage reduces, but does not eliminate, nutrient leaching, the system dynamics have the same properties and long-term behavior as the baseline model (system (\[baseline1\]); Figure 2). Regardless of the initial conditions, a household will end up in poverty, though likely at a slower pace than in the baseline case. In this case, conservation tillage will need to be paired with continual application of manure or artificial fertiliser, as for example in [@Ito] (which in our model leads to the same two-attractor configuration as in Figure 3B). Low water levels can also limit crop growth and the effectiveness of conservation tillage for any nutrient level. In this case, improving water levels through rainwater harvesting technologies such as small-scale water catchments [@Enfors13] would prepare the conditions for conservation tillage to be useful.
If tillage eliminates (or almost eliminates) nutrient leaching, phosphorus levels will be conserved over time. System (\[ct\]) has two attractors for sufficiently high phosphorus and water levels (Figure \[Figure4\]). Higher levels of phosphorus that enable productivity can introduce an alternative attractor with a higher asset level. Because the low asset level attractor remains after conservation tillage, additional external asset inputs may be required along with or after conservation tillage to allow the household to escape the poverty trap. If the phosphorus (or water) level is low, the subsequent low levels of agricultural production keep the household in poverty for any levels of assets and water (or phosphorus), and only one attractor exists (Figure 5 in Supplementary Information). The dynamic nature of our model shows how the sequence of interventions can critically affect whether conservation tillage can allow a household to escape a poverty trap. A sequence of interventions starting with nutrient application, followed by conservation tillage accompanied by a one-off external asset input, may be most effective, especially when initial nutrient levels are low. If conservation tillage does not eliminate nutrient leaching, farmers may need to invest in energy diversification or artificial fertilisers to allow continued application of nutrients. Case study example ------------------ Our models are not designed to represent a particular real-world case study. They aim to capture key dynamics and contextual factors found in the context of rural poverty in a stylized way. The simplicity of our models allows testing and assessing the consequences of a combination of factors assumed to be present in a specific case before designing an intervention or building an empirical model. Their main purpose is thus to support a process of thinking through complex interactions that are difficult or impossible to assess in an empirical study. We demonstrate the value-added of using dynamical systems modelling as a thinking tool to support development interventions in agricultural contexts through a case study. In North-Eastern Tanzania, @Enfors13 conducted a study on how water management technology would influence agro-ecosystem dynamics. The study outlines alternative development trajectories based on specific social-ecological feedbacks and the role of small-scale water systems in breaking trap dynamics. Our modelling approach could help an implementing body (an NGO, for example) that aims to introduce conservation tillage (as a water-saving intervention) to compare possible outcomes depending on the households’ initial conditions and the local biophysical and economic context. For example, conservation tillage can preserve nutrients and water, but it will only be effective if there are enough nutrients and water in the soil as a starting point (Scenario 3, Figure 4, Figure 5 in Appendix A). Conservation tillage should be complemented with water management to increase the level of water in case of severe drought, demonstrating that the sequencing of interventions matters. This corresponds with findings in @Enfors13, where it was observed that conservation tillage increases productivity significantly more during good rainfall seasons than in dry periods. Another conclusion coming from the models is that, because of the severe nutrient limitations existing in the case-study catchment, as in much of sub-Saharan Africa, interventions focusing on water technology will only be effective with simultaneous nutrient inputs.
Thus, modeling results may highlight potential benefits or shortcomings even before an intervention or empirical experiment takes place and help in their design. Discussion ========== Alleviating poverty in rural agricultural settings is particularly challenging because of the interdependence between economic well-being, agricultural practices and the state of the biophysical environment. Interventions that address only single aspects of one or the other and neglect the remaining dimensions are likely to lead to unintended or ineffective outcomes. We show how poverty and soil dynamics are deeply interlinked and jointly determine the ability to meet food security goals in rural areas. An intervention targeting economic well-being through improved agricultural productivity using artificial fertilizers will fail in an environment where soil quality is compromised. At the same time, interventions to improve soil quality, e.g. through conservation tillage, will be unsuccessful if initial soil quality and economic well-being are too low. The complex and dynamic nature of the interactions means that a blanket solution for persistent poverty does not exist, and a sequence of interventions, rather than only one intervention, may be necessary for escape from the trap (Table 2 in SI). Models such as the ones presented here can be useful tools to test implications of dynamic interactions between the different dimensions and to identify which sequences may be appropriate in different contexts. Our work advances understanding of the complex dynamics of rural poverty by combining the neoclassical economic theory of growth, ecological theories of nutrient cycling and empirical knowledge of interventions and development strategies. In situations with persistent poverty, simply improving agricultural practices is not enough. Instead, a careful assessment is needed of the current state of the social-ecological system, including the socio-economic conditions of households, the biophysical conditions of the agroecosystems such as soil quality, nutrient and water availability, and existing agricultural practices. Based on an understanding of a given context, combinations of interventions can be devised. These will most likely have to include methods to improve economic and biophysical conditions as well as initiating changes in farmers’ habits and agricultural practices. Our analysis gives three main insights for development practice (Table 2). First, agrochemical inputs can sometimes reinforce poverty by degrading soil quality. Because of this, monitoring soil quality and moderate use of agrochemicals are potentially good practices. Second, prioritising diversification of energy sources can establish the conditions for effective application of other strategies. This is, however, possible only if people change their habit of using manure as a fuel source. Third, the sequencing of interventions matters for conservation tillage to be effective because it preserves existing nutrients and water but does not contribute additional ones. In cases where there are not enough nutrients or water, conservation tillage should be combined with nutrient or water inputs and eventually followed by asset inputs. The theoretical models presented here serve as thinking tools to unravel the complex dynamics and context-dependence of poverty traps in rural areas. We have built them on a synthesis of insights from empirical research. Future work should directly test these models and implications with data-based empirical models.
Furthermore, our models focus on the importance of biophysical dynamics for escaping poverty traps at the household scale. Since nitrogen is often a limiting factor for crop growth, studying its dynamics is an important research question. Its concentration in the soil can be increased by intercropping with nitrogen-fixing plants. Our models can easily be extended to represent nitrogen dynamics, but this was beyond the scope of this paper and we leave it for future research. Poverty trap dynamics are, however, influenced by many factors at and across scales [@Haider]. Future research may thus include cross-scale effects caused by e.g. population structure, migration, or the relationship between urbanization and poverty [@Chen; @DeBrauw; @Hunter]. Another important aspect that we only touch upon is the need to consider human behavior [@Beckage] and culture [@Lade]. In summary, nutrients, water, soil quality and household assets are critical factors for agricultural productivity, and their interactions can lead to reinforcing or breaking poverty traps. Dynamical systems modelling, which we used here, enables the testing of assumptions across various contexts to examine the implications of different agricultural interventions for poverty alleviation. As our models demonstrate, effective poverty alleviation is often best achieved by a planned sequence of interventions, rather than just one strategy. [**Code availability:**]{} The Mathematica code used to generate the state space plots with attractors and basins of attraction in this article is available upon request from the corresponding author. The algorithms for plotting two- and three-dimensional basins of attraction were originally developed by [@Lade]. [**Acknowledgements:**]{} We are grateful to our colleagues Million Belay and Linus Dagerskog for providing comments and empirical background for the paper. [**Funding:**]{} The research leading to these results received funding from the Sida-funded GRAID program at the Stockholm Resilience Centre, the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 682472 — MUSES), and the Swedish Research Council Formas (project grant 2014-589). [1111]{} Haider, L. J., Boonstra, W. J., Peterson, G. D., & Schlüter, M. (2018). Traps and sustainable development in rural areas: a review. World Development, 101, 311-321. Barrett, C.B., Bevis, L.E. The self-reinforcing feedback between low soil fertility and chronic poverty. Nature Geoscience. 2015 Dec 1;8(12):907-12. Barrett, C. B., Constas, M. A. (2014). Toward a theory of resilience for international development applications. Proceedings of the National Academy of Sciences, 111(40), 14625-14630. Lade, S., Haider, L.J., Engström, T., Schluter, M. Resilience offers escape from trapped thinking on poverty alleviation. Science Advances 03 May 2017: Vol. 3, no. 5, e1603043 DOI: 10.1126/sciadv.1603043 Barro, R., Sala-i-Martin, X. (2004). Economic Growth. 2nd Edt. Cambridge, Massachusetts: MIT Press Smulders, S. (2000). Economic growth and environmental quality. Principles of environmental and resource economics, Edward Elgar, Cheltenham, 602-664 Xepapadeas, A. (2005). Economic growth and the environment. Handbook of environmental economics, 3, 1219-1271 Mirza, M. U., Richter, A., van Nes, E. H., Scheffer, M. (2019). Technology driven inequality leads to poverty and resource depletion. Ecological Economics, 160, 215-226. Kraay, A., Raddatz, C. (2005). Poverty traps, aid, and growth.
The World Bank. Drechsel, P., Gyiele, L., Kunze, D., & Cofie, O. (2001). Population density, soil nutrient depletion, and economic growth in sub-Saharan Africa. Ecological economics, 38(2), 251-258. Rockstrom, J. (2000). Water resources management in smallholder farms in Eastern and Southern Africa: an overview. Physics and Chemistry of the Earth, Part B: Hydrology, Oceans and Atmosphere, 25(3), 275-283. Rockström, J., et al. “Planetary boundaries: exploring the safe operating space for humanity.” Ecology and society 14.2 (2009). Guo, J. H., Liu, X. J., Zhang, Y., Shen, J. L., Han, W. X., Zhang, W. F., ... & Zhang, F. S. (2010). Significant acidification in major Chinese croplands. Science, 1182570. Carvalho, F. P. (2006). Agriculture, pesticides, food security and food safety. Environmental science & policy, 9(7-8), 685-692. Pizzol, M., Smart, J. C., & Thomsen, M. (2014). External costs of cadmium emissions to soil: a drawback of phosphorus fertilizers. Journal of cleaner production, 84, 475-483. Roberts, T. L. (2014). Cadmium and phosphorous fertilizers: the issues and the science. Procedia Engineering, 83, 52-59. Schnug, E., & Lottermoser, B. G. (2013). Fertilizer-derived uranium and its threat to human health. Environmental Science & Technology 2013 47 (6), 2433-2434 DOI: 10.1021/es4002357 Nziguheba, G., Zingore, S., Kihara, J., Merckx, R., Njoroge, S., Otinga, A., ... & Vanlauwe, B. (2016). Phosphorus in smallholder farming systems of sub-Saharan Africa: implications for agricultural intensification. Nutrient cycling in agroecosystems, 104(3), 321-340. Verde, B., Matusso, J. (2014). Phosphorus in sub-Sahara african soils-strategies and options for improving available soil phosphorus in smallholder farming systems: a review. Acad. Res. J. Agric. Sci. Res, 2(1), 1-5. Cordell, D., Drangert, J. O., White, S. (2009). The story of phosphorus: global food security and food for thought. Global environmental change, 19(2), 292-305. Rockström, J. (2003). Resilience building and water demand management for drought mitigation. Physics and Chemistry of the Earth, Parts A/B/C, 28(20-27), 869-877. Burney, J. A., Naylor, R. L. (2012). Smallholder irrigation as a poverty alleviation tool in sub-Saharan Africa. World Development, 40(1), 110-123. Altieri, M. A. (2002). Agroecology: the science of natural resource management for poor farmers in marginal environments. Agriculture, ecosystems & environment, 93(1-3), 1-24. Bünemann, E. K., Bongiorno, G., Bai, Z., Creamer, R. E., De Deyn, G., de Goede, R., & Pulleman, M. (2018). Soil quality–A critical review. Soil Biology and Biochemistry, 120, 105-125 Parr, J. F., Papendick, R. I., Hornick, S. B., & Meyer, R. E. (1992). Soil quality: attributes and relationship to alternative and sustainable agriculture. American Journal of Alternative Agriculture, 7(1-2), 5-11. Verhulst, N., Govaerts, B., Verachtert, E., Castellanos-Navarrete, A., Mezzalama, M., Wall, P., ... & Sayre, K. D. (2010). Conservation agriculture, improving soil quality for sustainable production systems. Advances in soil science: food security and soil quality, 1799267585, 137-208. Thrupp, L. A. (2000), Linking Agricultural Biodiversity and Food Security: the Valuable Role of Agrobiodiversity for Sustainable Agriculture. International Affairs, 76: 283-297. doi:10.1111/1468-2346.00133 Barrett, C. B., Swallow, B. M. (2006). Fractal poverty traps. World development, 34(1), 1-15. Barrett, C. B. (2008). Poverty traps and resource dynamics in smallholder agrarian systems. 
Economics of poverty, environment and natural-resource use, 17-40. Tittonell, P., Giller, K. E. (2013). When yield gaps are poverty traps: The paradigm of ecological intensification in African smallholder agriculture. Field Crops Research, 143, 76-90. Akhtar, M., ‐Hassan, F.‐u., Ahmed, M., Hayat, R., and Stöckle, C. O. (2016) Is Rainwater Harvesting an Option for Designing Sustainable Cropping Patterns for Rainfed Agriculture?. Land Degrad. Develop., 27: 630–640. doi: 10.1002/ldr.2464. Rockström, J., Karlberg, L., Wani, S. P., Barron, J., Hatibu, N., Oweis, T., ... & Qiang, Z. (2010). Managing water in rainfed agriculture -The need for a paradigm shift. Agricultural Water Management, 97(4), 543-550. International Energy Agency (2006) Energy for cooking in developing countries. World energy outlook 2006. Paris: International Energy Agency. pp 420–445 Mekonnen, A., Köhlin, G. (2008). Biomass fuel consumption and dung use as manure: evidence from rural households in the Amhara Region of Ethiopia. Environment for Development Discussion Paper-Resources for the Future (RFF), (08-17). Nigussie, A., Kuyper, T. W., & de Neergaard, A. (2015). Agricultural waste utilisation strategies and demand for urban waste compost: evidence from smallholder farmers in Ethiopia. Waste management, 44, 82-93. Druilhe, Z., Barreiro-Hurlé, J. (2012). Fertilizer subsidies in sub-Saharan Africa (No. 12-04). ESA Working paper. Kataria, K., Curtiss, J., Balmann, A. (2012). Drivers of agricultural physical capital development: Theoretical framework and hypotheses (No. 122). Centre for European Policy Studies. Loreau, M., Holt, R. D. (2004). Spatial flows and the regulation of ecosystems. The American Naturalist, 163(4), 606-615. Enfors, E. I., Gordon, L. J. (2008). Dealing with drought: The challenge of using water system technologies to break dryland poverty traps. Global Environmental Change, 18(4), 607-616. Enfors, E. (2013). Social–ecological traps and transformations in dryland agro-ecosystems: using water system innovations to change the trajectory of development. Global Environmental Change, 23(1), 51-60. Yosef, B. A., Asmamaw, D. K. (2015). Rainwater harvesting: An option for dry land agriculture in arid and semi-arid Ethiopia. International Journal of Water Resources and Environmental Engineering, 7(2), 17-28. Geiger, F., Bengtsson, J., Berendse, F., Weisser, W. W., Emmerson, M., Morales, M. B., Eggers, S. (2010). Persistent negative effects of pesticides on biodiversity and biological control potential on European farmland. Basic and Applied Ecology, 11(2), 97-105. Savci, S. (2012). An agricultural pollutant: chemical fertilizer. International Journal of Environmental Science and Development, 3(1), 73. Bationo, A. (Ed.). (2004). Managing nutrient cycles to sustain soil fertility in Sub-Saharan Africa. CIAT. DeAngelis, D. (2012). Dynamics of nutrient cycling and food webs (Vol. 9). Springer Science & Business Media. Kihanda, F.M. (1996) The role of farmyard manure in improving maize production in sub-humid highlands of central Kenya. Ph. D. Thesis. University of Reading. UK. McConville, J., Drangert, J., Tidåker, P., Neset, T., Rauch, S., Strid, I., Tonderski, K. (2017) Closing the food loops: guidelines and criteria for improving nutrient management, Sustainability: Science, Practice and Policy, 11:2, 33-43, DOI: 10.1080/15487733.2015.11908144 Pretty, J. 2008. Agricultural sustainability: concepts, principles and evidence. Phil. Trans. R. Soc. B (2008) 363, 447–465. Probert, M., Okalebo, E., Simpson, J.R. 
(1995) The use of manure on smallholders in semi-arid eastern Kenya. Experimental Agriculture 31: 371-381 Wanjekeche, E., Mwangi, T., Powon, P. and Khaemba, J. (1999) Management practices and their effects on nutrient status of farmyard manure in West Pokot district, Kenya. Paper presented at the 17th Conference of the Soil Science Society of East Africa, Kampala, Uganda. 6th -10th August 1999. Ito, M., Matsumoto, T., Quinones, M.A. Conservation tillage practice in sub-Saharan Africa: The experience of Sasakawa Global 2000, Crop Protection, Volume 26, Issue 3, 2007, Pages 417-423, ISSN 0261-2194, http://dx.doi.org/10.1016/j.cropro.2006.06.017. Asmamaw, D. K. (2017) A Critical Review of the Water Balance and Agronomic Effects of Conservation Tillage under Rain‐fed Agriculture in Ethiopia. Land Degrad. Develop., 28: 843–855. doi: 10.1002/ldr.2587. Busari, M. A., Kukal, S. S., Kaur, A., Bhatt, R., & Dulazi, A. A. (2015). Conservation tillage impacts on soil, crop and the environment. International Soil and Water Conservation Research, 3(2), 119-129. Weight, D., Kelly, V. A., 1999. “Fertilizer Impacts on Soils and Crops of Sub-Saharan Africa,” Food Security International Development Papers 54050, Michigan State University, Department of Agricultural, Food, and Resource Economics. Fischer, K., and F. Hajdu. 2015. Does raising maize yields lead to poverty reduction? A case study of the Massive Food Production Programme in South Africa. Land Use Policy 46:304–313. Snapp, S. S., Blackie, M. J., Gilbert, R. A., Bezner-Kerr, R., & Kanyama-Phiri, G. Y. (2010). Biodiversity can support a greener revolution in Africa. Proceedings of the National Academy of Sciences, 107(48), 20840-20845. Gerber, A. Short-Term Success versus Long-Term Failure: A Simulation-Based Approach for Understanding the Potential of Zambia’s Fertilizer Subsidy Program in Enhancing Maize Availability. Sustainability 2016, 8(10):1036 Enfors, E., Barron, J., Makurira, H., Rockström, J., & Tumbo, S. (2011). Yield and soil system changes from conservation tillage in dryland farming: A case study from North Eastern Tanzania. Agricultural Water Management, 98(11), 1687-1695. Fowler, R., Rockstrom, J. Conservation tillage for sustainable agriculture: an agrarian revolution gathers momentum in Africa. Soil and tillage research. 2001. 61(1), 93-108. Chen, R., Ye, C., Cai, Y., Xing, X., & Chen, Q. (2014). The impact of rural out-migration on land use transition in China: Past, present and trend. Land Use Policy, 40, 101-110. De Brauw, A., Mueller, V., & Lee, H. L. (2014). The role of rural–urban migration in the structural transformation of Sub-Saharan Africa. World Development, 63, 33-42. Hunter, L. M., Nawrotzki, R. , Leyk, S. , Maclaurin, G. J., Twine, W. , Collinson, M., Erasmus, B. (2014), Rural Outmigration, Natural Capital, and Livelihoods in South Africa. Popul. Space Place, 20: 402-420. doi:10.1002/psp.1776 Beckage, B., Gross, L. J., Lacasse, K., Carr, E., Metcalf, S. S., Winter, J. M., & Kinzig, A. (2018). Linking models of human behaviour and climate alters projected climate change. Nature Climate Change, 8(1), 79.
--- abstract: 'We perform an exact spherical geometry finite-size diagonalization calculation for the fractional quantum Hall ground state in three different experimentally relevant GaAs-$\mbox{Al}_{x} \mbox{Ga}_{1-x}$As systems: a wide parabolic quantum well, a narrow square quantum well, and a heterostructure. For each system we obtain the Coulomb pseudopotential parameters entering the exact diagonalization calculation by using the realistic subband wave function from a self-consistent electronic structure calculation within the local density approximation (LDA) for a range of electron densities. We compare our realistic LDA pseudopotential parameters with those from widely used simpler model approximations in order to estimate the accuracies of the latter. We also calculate the overlap between the exact numerical ground state and the analytical Laughlin state as well as the excitation gap as a function of density. For the three physical systems we consider the calculated overlap is found to be large in the experimental electron density range. We compare our calculated excitation gap energy to the experimentally obtained activated transport energy gaps after subtracting out the effect of level broadening due to collisions. The agreement between our calculated excitation gaps and the experimental measurements is excellent.' address: - 'Department of Physics, University of Maryland, College Park, Maryland 20742' - 'AT&T Bell Laboratories, Murray Hill, NJ 07974' - 'Department of Physics, University of Maryland, College Park, Maryland 20742' author: - 'M. W. Ortalano' - Song He - 'S. Das Sarma' title: 'Realistic Calculations of Correlated Incompressible Electronic States in GaAs–$\mbox{Al}_{x} \mbox{Ga}_{1-x}$As Heterostructures and Quantum Wells' --- Background ========== The fractional quantum Hall effect (FQHE) has been observed in high mobility GaAs-$\mbox{Al}_{x} \mbox{Ga}_{1-x}$As quantum structures at low temperatures and in strong magnetic fields [@TSUI; @STORM; @CHANG; @SHAYEGAN; @WILLET1; @BOEB; @WILLET2]. This effect produces quantized plateaus in the Hall resistivity concurrent with minima in the longitudinal resistivity at special values of the electron density. A well formed electron correlation driven energy gap separating the ground state from the excited states occurring at these special densities and magnetic fields is the underlying reason for the FQHE phenomenon. The special density and magnetic field needed for the FQHE correspond to a Landau level filling factor $ \nu=p/q$ where q is an odd integer in the simplest situation. Laughlin’s theory [@LAUGHLIN] of the FQHE, valid for filling fractions of the form $\nu = 1/m$, where $m$ is an odd integer, is based on the following two-dimensional (2D) many-body wave function: $$\psi _{m} \left(z_{1}, \ldots ,z_{N} \right) = \prod_{i<j} \left(z_{i}-z_{j} \right)^{m} \exp \left(-1/4 \sum_{j=1}^{N} |z_{j}|^{2} \right)$$ where $z_{i}=x_{i}-iy_{i}$ is the complex representation of the $i^{th}$ electron’s 2D position vector. The Laughlin state describes a droplet of an incompressible correlated 2D electron liquid. At these special values of the filling fraction, the system has an excitation gap that separates the ground state from the excited states. The elementary excitations are fractionally charged anyons. In a single layer 2D system, Laughlin’s theory explains the FQHE at $\nu=1/m$ and $\nu=1-1/m$, where m is an odd integer. 
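As a side illustration (ours, not part of the original analysis), the Laughlin wave function quoted above can be evaluated directly for a few particles. The short Python sketch below computes the unnormalised $|\psi_{m}|^{2}$ for a random configuration and for a configuration in which two particles nearly coincide, making the $m$-fold zeros of the Jastrow factor visible; all names and parameter values are ours.

```python
import numpy as np

def laughlin_psi(z, m):
    """Unnormalised Laughlin wave function psi_m for complex coordinates
    z_i = x_i - i*y_i, with the magnetic length set to one."""
    pairs = [(z[i] - z[j]) ** m for i in range(len(z)) for j in range(i + 1, len(z))]
    return np.prod(pairs) * np.exp(-0.25 * np.sum(np.abs(z) ** 2))

rng = np.random.default_rng(0)
N, m = 4, 3
xy = rng.normal(scale=2.0, size=(N, 2))
z = xy[:, 0] - 1j * xy[:, 1]

print("generic configuration:       |psi_3|^2 =", abs(laughlin_psi(z, m)) ** 2)

# Bring particles 0 and 1 close together: the m-fold zero suppresses the amplitude.
z_close = z.copy()
z_close[1] = z_close[0] + 1e-2
print("nearly coincident particles: |psi_3|^2 =", abs(laughlin_psi(z_close, m)) ** 2)
```

Because of the Jastrow factor $(z_i-z_j)^m$, the probability density drops by roughly a factor $|\Delta z|^{2m}$ when a pair separation $|\Delta z|$ becomes small, which is the correlation hole underlying the incompressible Laughlin liquid.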
The hierarchy construction extends this theory to filling fractions of the form $\nu=p/q$ where $q$ is an odd integer [@PRANGE; @HALPERIN1; @HALDANE3; @HALPERIN2; @JAIN]. Direct numerical calculations involving the exact diagonalization of a small ($4 \sim 10$) number of interacting 2D electrons have verified the Laughlin theory extremely well. The geometry of choice in most numerical simulations is the translationally invariant, rotationally invariant spherical geometry [@HALDANE3]. Even a small number of electrons ($N_{e} \le 10$) can yield an accurate picture of the physical system. In this geometry, a magnetic monopole is placed at the center of a sphere of radius $R$, producing a radial magnetic field $B$. By the Dirac quantization condition, the total flux $4 \pi R^{2} B$ must be an integral multiple, $2S$, of the elemental flux $hc/e$. From the hierarchy construction a relationship between $2S$ and the filling factor $\nu$ can be found. For $\nu=1/3$, $2S=3(N_{e}-1)$. The appropriate unit of length is $l_{c}=\sqrt{ \hbar c/e B}$. If all of the electrons in the system are placed in the lowest Landau level and the coupling between Landau levels is ignored, the resulting many-body problem is exactly soluble. The Hilbert space is of finite dimension and an exact finite dimensional Hamiltonian matrix can be written down. This matrix can be diagonalized by standard techniques to find the energy eigenvalues and the eigenvectors. It is for this reason that exact finite size diagonalization has become such an important tool for studying the FQHE. In fact, such numerical studies have been instrumental in confirming Laughlin’s many-body wave function [@FANO; @HALDANE1; @HALDANE4]. One direction that research in finite size diagonalization studies of the FQHE has taken is increasing the system size, [*i.e.*]{} increasing the number of electrons in the numerical simulation. In this paper, we take a different approach and place the emphasis on quantitatively improving the model used to describe the electron-electron interaction in the system, making it more realistic with respect to the experimental systems. Historically, the first studies of the FQHE used a pure 2D Coulomb potential, and the pure $1/r$ Coulomb interaction, where $r$ is the separation between 2D electrons, is still the most popular model for exact finite size diagonalization studies. It was later pointed out that the finite layer thickness in quantum structures would cause the short range part of the Coulomb interaction to become softened. This could cause the ground state of the system to lose its incompressibility for very thick layers, eventually destroying the FQHE. Zhang and Das Sarma [@SDS5] and He [*et al.*]{} [@SONG] investigated this ‘finite thickness’ phenomenon using a simple variant of the Coulomb interaction, namely $$V(\vec{r})=\frac{e^{2}}{\kappa} \frac{1}{ \sqrt{r^{2}+ \lambda ^{2}}}$$ where the length scale $\lambda$ represents the finite extent of the electron wave function in the $z$ direction and $\vec{r}$ is the 2D position vector. Recently, the FQHE has been studied using more sophisticated models for the electron-electron interaction [@SDS5; @SONG; @BELKHIR2]. The most accurate approximation that one can make in this respect is to do a self-consistent electronic structure calculation within the framework of the local density approximation (LDA) to describe the interaction.
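To make the finite-thickness softening quantitative, the following sketch (our own illustration, in units $e^{2}/\kappa = 1$ with lengths measured in magnetic lengths) compares the pure $1/r$ interaction with the $1/\sqrt{r^{2}+\lambda^{2}}$ form above for a few separations and softening lengths.

```python
import numpy as np

def v_coulomb(r):
    """Pure 2D Coulomb interaction in units e^2/kappa = 1."""
    return 1.0 / r

def v_soft(r, lam):
    """Finite-thickness interaction 1/sqrt(r^2 + lambda^2) with softening length lam."""
    return 1.0 / np.sqrt(r ** 2 + lam ** 2)

r = np.array([0.1, 0.5, 1.0, 2.0, 5.0])          # separations in units of l_c
for lam in (0.5, 1.0, 2.0):
    print(f"lambda = {lam}: V_soft / V_Coulomb =", np.round(v_soft(r, lam) / v_coulomb(r), 3))
# The ratio approaches 1 for r >> lambda but is strongly reduced for r << lambda,
# i.e. only the short-range part of the interaction is softened.
```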
In this paper we perform such a calculation for a wide parabolic quantum well (PQW), a narrow square quantum well (SQW), and a heterostructure, and we use the realistic LDA electronic structure to compute the effective electron-electron interaction entering the exact diagonalization study. We use Haldane’s spherical geometry [@HALDANE3] to do a fully spin polarized finite size FQHE diagonalization calculation for six electrons, which should be sufficient for our purpose. The many-body Hamiltonian matrix for the FQHE is constructed using these LDA-Coulomb matrix elements. Using the Lanczos method, we calculate the eigenvalues and the eigenvectors of this fairly sparse FQHE Hamiltonian matrix, following standard techniques. The overlap with the Laughlin $\nu =1/3$ state is found by diagonalizing the Hamiltonian matrix using the pseudopotential parameters appropriate for this state and then calculating the inner product between this state and the exact numerical ground state found using the LDA pseudopotential parameters. The excitation gap is determined by looking at the size of the cusp at the relevant filling factor. In order to compare this calculated excitation gap to relevant experimental transport data, we subtract out the level broadening due to collisions, $\Gamma$. Using $\mu$, the experimentally determined value for the mobility, $\Gamma$ can be determined from $$\Gamma = \left( \frac{\hbar}{2 \tau_{s}} \right) = \left( \frac{\hbar e}{2m^{*} \mu} \right).$$ However, this equation is not quite correct [@SDS2] for the single particle level broadening if the scattering is strongly peaked in the forward direction as it is in modulation doped GaAs structures. We estimate the correct $\Gamma$ ([*i.e.*]{} the single particle broadening) using the transport data by employing a simple theory [@SDS2] which accounts for this forward scattering correction. We then take the calculated excitation gap and subtract out $2 \Gamma$ when comparing with experiment. Local Density Approximation =========================== We begin with a description of the procedure for a self-consistent electronic structure calculation at zero temperature and in zero magnetic field. We assume that the effective mass and the static dielectric constant do not vary over the width of the quantum well and that interface grading and dielectric mismatch are negligible. We solve the effective-mass, single-particle effective Schrödinger equation for a particle in a quantum well and Poisson’s equation for the electrostatic potential due to the free electric charge self-consistently. Assuming separability of the planar and perpendicular degrees of freedom, the three dimensional Schrödinger equation reduces to $$\left( -\frac{\hbar ^{2}}{2m^{*}} \frac{d^{2}}{dz^{2}} + V_{EFF}(z) \right) \xi _{i}(z) = E_{i} \xi _{i}(z)$$ where $\xi_{i}(z)$ and $E_{i}$ are the subband wave functions and energies respectively, and the effective one electron potential energy $V_{EFF}(z)$ is given by $$\label{potential} V_{EFF}(z) = V_{W}(z) + V_{H}(z) + V_{XC}(z)$$ with $V_{W}(z)$ being the quantum well confinement potential, $V_{H}(z)$ the self-consistent Hartree potential, and $V_{XC}(z)$ is the exchange-correlation potential. 
For the bare confining potential of a quantum well of width $a$, we take $$\begin{aligned} V_{W}(z) & = & \left \{ \begin{array}{ll} V_{0} \theta \left(|z| -a/2 \right), & \mbox{ for a SQW}\\ V_{0} \theta \left(|z|-a/2 \right)+ \alpha z^{2} \theta \left( a/2 - |z| \right ) , & \mbox{ for a PQW} \end{array} \right.\end{aligned}$$ with $V_{0}$ being the barrier height for a square quantum well and the barrier height above the edge of the parabolic portion for a parabolic quantum well, and $\alpha$ is the curvature of the parabolic quantum well. Poisson’s equation for the Hartree potential is given by $$\frac{d^{2}V_{H}(z)}{dz^{2}}=-\frac{4\pi e^{2}}{\kappa} \left( n(z)-n_{I}(z) \right)$$ where $\kappa$ is the background dielectric constant for GaAs, $n(z)$ is the electron density computed from the effective single particle subband wave functions and $n_{I}(z)$ is the density of donor impurities. We do not include $n_{I}(z)$ explicitly but include it via the boundary conditions in the solution of Poisson’s equation. The areal density is determined from the subband wave functions by $$n(z) = 2 \sum_{i}^{imax} N_{i} | \xi _{i}(z)|^{2}$$ where $N_{i}$ is the occupancy of the $i$th subband and is given by $$N_{i}= \int \frac{kdk}{2 \pi} \theta \left( E_{F}-E_{i} - \frac{k^{2}}{2m^{*}} \right)= \frac{m^{*}}{2 \pi} \left( E_{F} -E_{i} \right) .$$ The chemical potential $E_{F}$ is determined by the relation $$N_{s} = \int dz n(z) = 2 \sum_{i}^{imax} N_{i} = \frac{1}{2 \pi} \sum_{i}^{imax} 2m^{*} \left( E_{F}-E_{i} \right)$$ where $N_{s}$ is the total surface density. The above equation is inverted to give $E_{F}$ and $imax$. Many-body effects beyond the Hartree approximation are included by means of the density functional theory in the local density approximation (LDA) [@HK; @KS; @SK]. A chief concern of density functional theory is the calculation of the exchange-correlation energy functional, $V_{XC}[n]$. This functional of the electron density contains all those interaction parts of the energy functional which in general are unknown. The local density approximation consists of replacing the functional $V_{XC}[n]$ with a function $V_{XC}(n)$ whose value at a given point in space $z_{0}$, where the density is $n(z_{0})$, is determined as though the density was constant and equal to $n(z_{0})$ everywhere. The validity of this approximation requires that the variation of the electron density be small over distances of the order of a Fermi wavelength. This condition is in general violated in most semiconductor quasi-2D systems. However, there is considerable evidence that this approximation when used in these systems gives excellent agreement with experiment [@SDS1; @ANDO1; @ANDO2; @ANDO3; @SDS4]. For the exchange-correlation potential, we used the parametrization due to Hedin and Lundqvist [@HL]: $$V_{XC}(z) = - \left( 1+0.7734 x \ln (1+x^{-1}) \right) \left( \frac{2}{\pi \beta r_{s}} \right) Ry^{*}$$ where $\beta=(4/9 \pi)^{1/3}$, $x=r_{s}/21$ and $$r_{s} = \left( \frac{4}{3} \pi a^{*^{\scriptstyle 3}} n(z) \right) ^{-1/3}$$ with $a^{*}$ and $Ry^{*}$ being the effective Bohr radius and the effective Rydberg respectively in GaAs. The self-consistent procedure is to start with an initial guess for the electron density $n(z)$. The Hartree and exchange-correlation potentials are then computed for this density. The Schrödinger equation is then solved numerically to obtain $\xi_{i}$ and $E_{i}$. 
A new density is then computed and compared to the previous $n(z)$ through $$\eta = \frac{\int dz \left| n_{new}(z) - n_{old}(z) \right|}{\int dz n_{old}}.$$ If $\eta$ is larger than some specified tolerance, the new density is then mixed with the old density in the form $n(z)=n_{old}(z)(1-f)+n_{new}(z)f$ where f is a suitably chosen number between zero and one. This density is used as input to the calculation and the procedure is iterated until $\eta$ is smaller than the tolerance. That is, convergence is achieved when the previous density and the new density do not vary much. The above procedure is correct for quantum wells. For a heterostructure [@SDS1], however, this procedure requires modification since $m^{*}=m^{*}(z)$ and $\kappa = \kappa(z)$. The Schrödinger equation takes the form $$\left( -\frac{\hbar^{2}}{2} \frac{d}{dz} \frac{1}{m^{*}(z)} \frac{d}{dz} + V_{EFF}(z) \right) \xi _{i}(z) = E_{i} \xi _{i}(z)$$ with $V_{EFF}(z)$ still being given by equation (\[potential\]) where $V_{W}$ is $$V_{W}(z)=V_{0} \theta \left (-z \right) .$$ Poisson’s equation for a position dependent dielectric constant is $$\frac{d}{dz} \kappa (z) \frac{dV_{H}}{dz}= -4 \pi e^{2} \left( n(z)-n_{I}(z) \right).$$ The remaining pieces of the self-consistent calculation for heterostructures are unchanged from the quantum well case. The main uncontrolled approximation we are making in applying this LDA procedure to FQHE calculations is the assumption that the applied external magnetic field does not appreciably affect the LDA results. Because the applied magnetic field is in the $z$ direction it is not unreasonable to assume that the single particle Schrödinger equation in the $z$ variable is not substantially modified by the magnetic field. But we assume uncritically that $V_{XC}(z)$ has no magnetic field dependence, which should be a reasonable approximation for subband quantization arising from $z$ confinement. Pseudopotentials ================ The basic ingredients entering the finite size FQHE diagonalization study are the Coulomb pseudopotential parameters, $V_{m}$, introduced by Haldane [@HALDANE3; @HALDANE4]. Once all the $V_{m}$’s are known, the FQHE Hamiltonian is completely defined. The pseudopotential parameters are the energies of pairs of particles with relative angular momentum $m$. They are given by [@HALDANE4] $$V_{m} = \int_{0}^{ \infty } qdq \tilde{V}(q) \left( L_{n} \left(\frac{q^{2}}{2} \right) \right)^{2} L_{m} \left(q^{2} \right) \exp{ \left(-q^{2} \right)}$$ where $\tilde{V}(q)$ is the Fourier transform of the electron interaction potential, $V(r)$, and $n$ is the Landau level index. For small (large) $m$, $V_{m}$ describes the short (long) range part of the interaction. If the electrons are fully spin-polarized, then only $V_{m}$ with odd $m$ are relevant. For the density as determined from an LDA calculation, where $|\xi (z)|^{2}$ represents the density profile in the $z$ direction, the relevant equation for $\tilde{V}(q)$ is given by $$\tilde{V}(q) = \frac{2 \pi e^{2}}{\kappa q} \int dz_{1} \int dz_{2} | \xi (z_{1})|^{2} |\xi (z_{2})|^{2} \exp{(-q|z_{1}-z_{2}|)}.$$ Various approximations to the electron wave function in Eq. (18) give rise to different pseudopotential parameters $V_{m}$. The simplest approximation that one can make is to take the electron-electron interaction to be a pure 2D Coulomb interaction. 
In this case, $$\tilde{V}(q)=\frac{2 \pi e^{2}}{\kappa} \frac{1}{q}.$$ In order to take into account the effect of finite layer thickness in a quasi-2D electron system, a useful and simple approximation for the electron-electron interaction is [@SDS5] the finite-$\lambda$ model $$V(\vec{r})=\frac{e^{2}}{\kappa} \frac{1}{ \left(r^{2}+ \lambda ^{2} \right) ^{1/2} }$$ where $\lambda$ is the effective half-width of the electron layer in the $z$ direction and $\vec{r}$ is the 2D position vector. In momentum space, $$\tilde{V}(q)=\frac{2 \pi e^{2}}{\kappa} \frac{ \exp \left(-q \lambda \right)}{q}.$$ For an infinite barrier square quantum well of width $d$ [@SDS3], $$\tilde{V}(q) = \frac{2 \pi e^{2}}{\kappa q} \frac{1}{(qd)^{2}+ 4 \pi ^{2}} \left( 3qd +\frac{8 \pi ^{2}}{qd}-\frac{32 \pi ^{4} \left(1-\exp \left(-qd \right) \right )}{(qd)^{2}((qd)^{2}+ 4 \pi ^{2})} \right).$$ For a heterostructure, the Fang-Howard variational result [@ANDO1] is $$\begin{aligned} \tilde{V}(q) & = & \frac{2 \pi e^{2}}{\kappa_{avg} q} \left( \frac{1}{16} \left(1+ \kappa_{rel} \right) \left( 1 + \frac{q}{b} \right ) ^{-3} \left(8+\frac{9q}{b}+\frac{3q^{2}}{b^{2}} \right) \right. \nonumber \\ & & \mbox{} \left. + \frac{1}{2} \left( 1- \kappa_{rel} \right) \left( 1+ \frac{q}{b} \right) ^{-6} \right)\end{aligned}$$ with $\kappa_{avg}=(\kappa_{sc} + \kappa_{ins})/2$ being the average dielectric constant and $\kappa_{rel} = \kappa_{ins} / \kappa_{sc}$ being the relative dielectric constant of the insulating and semiconductor materials and $b=3/z_{0}$ where $z_{0}$ is the average extent of the electron wave function in the $z$ direction. In terms of the density, $b$ is given by $$b= \left( 48 \pi m^{*} e^{2} N^{*} / \kappa_{sc} \hbar ^{2} \right)^{1/3}$$ where $$N^{*}= N_{d}+ \frac{11}{32} N_{s}$$ with $N_{d}$ being the depletion charge density in GaAs and $N_{s}$ is the 2D electron density in the layer. A major focus of our work is to determine how accurate these approximate models are when used in a FQHE calculation. In particular, pseudopotential parameters for these simple model approximations will be compared to those calculated using the self-consistent LDA calculation. The eigenstates of a many-body Hamiltonian are unchanged if the Hamiltonian (or the potential in the Hamiltonian) is shifted by a constant amount. This suggests [@SONG] that differences of the $V_{m}$ would be a useful quantity to look at. The f-parameters are defined [@SONG] in terms of the pseudopotential parameters by $$f_{m} = \frac{V_{3}-V_{m}}{V_{1}-V_{3}}.$$ $f_{1}=-1$ and $f_{3}=0$ for any pair potential. The Laughlin $\nu=1/3$ state is the exact nondegenerate ground state for a hard core model Hamiltonian [@HALDANE4]. In terms of pseudopotential parameters, the hard core model is given by $\{ V_{1},V_{3},V_{5}, \ldots \} = \{V_{1},0,0, \ldots \}$ and its f-parameters are $\{f_{1},f_{3},f_{5}, \ldots \} = \{-1,0,0, \ldots \}$. A large deviation from these values implies that the system is not well represented by the hard core model and consequently the ground state of the system may not be incompressible. Our goal in this paper is to investigate the ground state incompressibility in increasingly more realistic approximations for the Coulomb pseudopotentials. Parabolic Quantum Well ====================== In this section we show the results obtained for a wide parabolic quantum well. A PQW is constructed by grading the Al concentration in such a way as to give the conduction band edge a parabolic shape. 
As the areal electron density in the well is increased, the half width at half maximum of the density, $\lambda$, increases. Shayegan [*et al.*]{} [@SHAYEGAN] reported that the FQHE excitation gap decreases dramatically when $\lambda/l_{c} \approx 3.5 \mbox{ to } 5$. This would indicate that the FQHE is becoming weakened and that the ground state of the system is no longer incompressible. From the physical parameters given in Shayegan [*et al.*]{} [@SHAYEGAN], we take $V_{0}=276 \mbox{ meV}$, $\alpha= 5.33 \times 10^{-5} \mbox{ meV}/ \mbox{\AA}^{2}$, and $a=3000 \mbox{\AA}$. The LDA pseudopotential parameters were calculated using these values for several densities. In Fig. \[IA\], we show our calculated LDA $V_{m}$ for the experimentally determined carrier densities of Shayegan [*et al.*]{} [@SHAYEGAN] compared with the $V_{m}$ for a pure Coulomb interaction. In Fig. \[IAA\], $V_{m}$ for two relevant approximate models, the infinite well model and the finite-$\lambda$ model, are shown. For $m$ greater than approximately $12$, $V_{m}$ for the different models agree well with the LDA pseudopotentials. For small $m$ the pure Coulomb and the infinite well model seem to overestimate $V_{m}$, while the finite-$\lambda$ model underestimates $V_{m}$. In Figs. \[IB\] and \[IBB\] we show the corresponding f-parameters for these pseudopotential parameters. The f-parameters in the finite-$\lambda$ model rise more rapidly with increasing density than the f-parameters for LDA and the other models. This would give the appearance that for large densities the ground state would no longer be incompressible in the finite-$\lambda$ model, as has been concluded [@SONG] in the literature. Using the LDA pseudopotential parameters, we studied the $\nu=1/3$ FQHE state employing the finite size exact diagonalization technique. Figs. \[IC\] and \[ID\] show the calculated overlap with the Laughlin $\nu=1/3$ state, and the calculated bare excitation gap, $\Delta$, and the gap minus the level broadening, $\Delta -2 \Gamma$, as a function of electron density. Also shown is the gap as measured experimentally [@SHAYEGAN] by Shayegan [*et al.*]{} The agreement between the experimental results and our calculation is very good. For a pure Coulomb interaction, $\Delta \approx 14.2 \mbox{ K}$ and $\Delta -2 \Gamma \approx 11.4 \mbox{ K}$, while using the finite-$\lambda$ model, $\Delta \approx 2.9 \mbox{ K to } 1.4 \mbox{ K}$ and $\Delta -2 \Gamma \approx 0.1 \mbox{ K to } 0.0 \mbox{ K}$. The overlap of the LDA result with the Laughlin state is found to be quite large for all densities. This is to be contrasted with the finite-$\lambda$ model, where the overlap is $ \approx 0.8 \mbox{ to } 0.4$ for the given range of densities. For a pure Coulomb interaction the overlap with the Laughlin state is also quite large. We also studied the subband dependence of the PQW results in an artificial model calculation which bears no resemblance to reality. Pseudopotential parameters were calculated assuming that (a) only the lowest subband was occupied, (b) only the first excited subband was occupied, and (c) only the second excited subband was occupied. We then performed a FQHE calculation using the parameters for (a)-(c). The difference between these results and our full LDA results was quite small (less than $1 \%$) for the overlap with the Laughlin $\nu=1/3$ state. The differences for the gap were larger (as much as $30 \%$).
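For readers who wish to reproduce the qualitative behaviour of the model f-parameters discussed above, the sketch below evaluates the pseudopotential integral numerically for the pure Coulomb and finite-$\lambda$ interactions in the lowest Landau level ($n=0$); the overall normalisation of $V_{m}$ cancels in $f_{m}$, so only relative values matter. This is our own illustration with an arbitrary $\lambda = 1\,l_{c}$, not the LDA calculation used for the figures.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

def V_m(m, qVq, n=0):
    """V_m = int_0^inf dq [q V(q)] [L_n(q^2/2)]^2 L_m(q^2) exp(-q^2), with q in
    units of 1/l_c; the combination q*V(q) is passed in so the q -> 0 limit is finite."""
    integrand = lambda q: (qVq(q) * eval_laguerre(n, 0.5 * q ** 2) ** 2
                           * eval_laguerre(m, q ** 2) * np.exp(-q ** 2))
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return value

q_times_v_coulomb = lambda q: 2.0 * np.pi                             # q * V(q), pure 2D Coulomb
q_times_v_finite = lambda q, lam=1.0: 2.0 * np.pi * np.exp(-q * lam)  # finite-lambda model

for label, qVq in [("pure Coulomb", q_times_v_coulomb), ("finite-lambda", q_times_v_finite)]:
    V1, V3 = V_m(1, qVq), V_m(3, qVq)
    f = {m: (V3 - V_m(m, qVq)) / (V1 - V3) for m in (1, 3, 5, 7, 9)}
    print(label, {m: round(val, 3) for m, val in f.items()})
```

By construction $f_{1}=-1$ and $f_{3}=0$ for both interactions; the interesting comparison is how quickly the $f_{m}$ for $m\geq5$ grow away from the hard-core values in the softened model.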
Heterostructure =============== For the LDA calculation, we took the physical parameters to be those appropriate for a typical GaAs-$\mbox{Al}_{x} \mbox{Ga}_{1-x}$As heterostructure: $V_{0}=276 \mbox{ meV}$, $\kappa_{sc} = 12.8$, $\kappa_{ins}=12.1$, $m^{*}_{sc}=0.068 \mbox{ } m_{0}$, and $m^{*}_{ins}=0.088 \mbox{ } m_{0}$. In Fig. \[IIA\] we show the pseudopotential parameters for electron densities between $1 \times 10^{10}$ and $3 \times 10^{11} \mbox{cm}^{-2}$. From Fig. \[IIAA\] it is clear that the approximate variational model [@ANDO1] is fairly reliable when compared to the LDA pseudopotentials, especially for large densities. Figs. \[IIB\] and \[IIBB\] show the corresponding f-parameters. These parameters are fairly constant for all densities greater than $ \approx 1 \times 10^{11} \mbox{cm}^{-2}$. For smaller densities, the f-parameters deviate more strongly from the hard core model f-parameters. The overlap with the Laughlin state as a function of density is shown in Fig. \[IIC\]. As expected, it is quite large, especially for the larger densities. The finite-$\lambda$ model also gives a large overlap with the Laughlin state, $ \approx 0.9 \mbox{ to } 0.99$ for this range of densities. The disagreement between the LDA results and the finite-$\lambda$ model is larger for smaller densities. Figure \[IID\] shows calculated gaps $\Delta$ and $\Delta - 2 \Gamma$ as a function of density. The agreement between our ‘subtracted’ gap and the experimental measurement [@WILLET1] of Willet [*et al.*]{} is very good. For comparison, a pure Coulomb interaction gives $\Delta \approx 14.2 \mbox{ K}$ and $\Delta -2 \Gamma \approx 13.8 \mbox{ K}$, while the finite-$\lambda$ model gives $\Delta \approx 12.8 \mbox{ K to } 5.7 \mbox{ K}$ and $\Delta -2 \Gamma \approx 12.7 \mbox{ K to } 5.3 \mbox{ K}$. Square Quantum Well =================== We consider a typical narrow SQW with a width of $139 \mbox{\AA}$ and $V_{0}=276 \mbox{ meV}$, for a typical range of densities, $N_{S} = 1 \times 10^{10} \mbox{ to } 5 \times 10^{11} \mbox{cm}^{-2}$. The LDA pseudopotentials are shown in Fig. \[IIIA\]. For this well and this range of densities, the calculated $V_{m}$ show very little density variation. The pseudopotential parameters for the other models are shown in Fig. \[IIIAA\]. As in the PQW case, the infinite well model overestimates the $V_{m}$ while the finite-$\lambda$ model underestimates the $V_{m}$. However, for all $m$ greater than 4, the $V_{m}$ for all of the models are approximately equal. The f-parameters are shown in Figs. \[IIIB\] and \[IIIBB\]. These parameters for the LDA model are almost constant for the given range of densities and they are close to being equal in all of the models. These parameters do not rise above one for any of the models. Since the f-parameters remain small for all $m$, it is reasonable to assume that the ground state should be incompressible over this range of densities in a square quantum well. The overlap of the exact numerical wave function with the $\nu=1/3$ Laughlin state is shown in Fig. \[IIIC\]. It also shows almost no variation with density and it is very close to the overlap computed using either the pure Coulomb or the finite-$\lambda$ model. The gap, as shown in Fig. \[IIID\], shows almost no variation with density. It is very close to the pure Coulomb value, $\Delta \approx 14.2 \mbox{ K}$. Using the finite-$\lambda$ model, $\Delta \approx 13.8 \mbox{ K}$ for this range of densities.
Thus, for a square quantum well, all of the approximations for the pseudopotential should work well in FQHE calculations. Even denominator FQHE: $\nu=5/2$ ================================= The first unambiguous observation of an even denominator filling factor in a single layer system was made by Willet [*et al.*]{} [@WILLET2]. Magnetotransport experiments carried out in a high mobility GaAs-AlGaAs heterostructure showed [@WILLET2] a plateau in the Hall resistivity concurrent with a deep minimum in the longitudinal resistivity corresponding to a filling factor of $5/2$. Tilted field experiments [@EISEN1] on the $5/2$ state have shown that it is rapidly destroyed by increasing the Zeeman energy, which indicates that the spin degree of freedom may be important in understanding this state. In analogy with the Laughlin state for odd denominator filling factors, Haldane and Rezayi [@HALDANE6] have proposed a ‘hollow core’ model wave function that may describe the physics of the $\nu=5/2$ state. This spin-singlet wave function represents an incompressible state for $\nu=1/2$. However, this hollow-core wave function requires a substantially reduced short range repulsion between the electrons relative to a pure 2D Coulomb interaction. From the physical parameters for the heterostructure of Willet [*et al.*]{} [@WILLET2] we calculate the LDA pseudopotential parameters for the first Landau level ($n=1$). These parameters are shown in Table 1 and in Fig. \[IVA\]. Figure \[IVA\] also shows the effect on the pseudopotential parameters for the variational model when the electron density is varied by 20% and also the effect of varying the relative dielectric constant, $\kappa_{ins}/\kappa_{sc}$, from $0$ to $1.5$. Our reason for varying the system parameters ([*i.e.*]{} electron density and background dielectric constants) is to check whether such parameter modifications could produce an incompressible hollow core state at $\nu=5/2$. Assuming that the lowest Landau level is completely filled and inert, we perform a finite size diagonalization calculation for a system of eight electrons in the spherical geometry. Shown in Fig. \[IVB\] is the excitation spectrum as a function of total angular momentum $L$ for total spin $S$ equal to 4. We find that the ground state is in the $L=0$ $S=4$ sector. The ground state energy for $L=0$ $S=4$ is, however, close to the energies found for the other $L=0$ sectors. The overlap of our wave function from the finite size diagonalization calculation with the hollow core model is quite small ($5 \times 10^{-3}$), indicating that this model is not a good candidate for the $5/2$ state. At this stage, therefore, we conclude, in agreement with earlier investigations [@MAC1] of this issue, that the $5/2$ FQHE phenomenon as observed in ref.  remains unexplained theoretically, and in particular, the hollow core model proposed in ref.  is not quantitatively consistent with the system parameters of the experimental sample in ref. . Conclusion ========== In summary, we have obtained realistic Coulomb pseudopotential parameters for FQHE calculations in 2D GaAs–$\mbox{Al}_{x} \mbox{Ga}_{1-x}$As quantum structures using self-consistent LDA electronic subband structure results.
We compare the LDA pseudopotential parameters with those from a number of simpler model approximations ([*e.g.*]{} the pure 2D Coulomb model, the finite-$\lambda$ model, the infinite well model, and the variational model) to estimate the quantitative accuracy of the simpler models for various systems and different electron densities. Our most realistic calculations yield FQH excitation gaps which, when corrected for the level broadening effect, are in excellent quantitative agreement with the experimentally determined activation gaps as obtained from transport data. For the $\nu=5/2$ FQHE observed in ref.  our calculations show that the hollow core model of ref.  is quantitatively inconsistent with the LDA Coulomb pseudopotentials for the experimental sample parameters of ref. . Our calculated realistic Coulomb pseudopotentials for various systems should enable future FQHE finite size exact diagonalization calculations to be quantitatively more realistic. This work is supported by the US-ONR. D. C. Tsui, H. L. St[ö]{}rmer, and A. C. Gossard, [*Phys. Rev. Lett.*]{} [**48**]{}, 1559 (1982). H. L. St[ö]{}rmer, A. M. Chang, D. C. Tsui, J. C. M. Hwang, A. C. Gossard, and W. Wiegmann, [*Phys. Rev. Lett.*]{} [**50**]{}, 1953 (1983). A. M. Chang, P. Berglund, D. C. Tsui, H. L. St[ö]{}rmer, and J. C. M. Hwang, [*Phys. Rev. Lett.*]{} [**53**]{}, 997 (1984). M. Shayegan, J. Jo, Y. W. Suen, M. Santos, and V. J. Goldman, [*Phys. Rev. Lett.*]{} [**65**]{}, 2916 (1990). R. L. Willet, H. L. St[ö]{}rmer, D. C. Tsui, A. C. Gossard, and J. H. English, [*Phys. Rev. B*]{} [**37**]{}, 8476 (1988). G. S. Boebinger, A. M. Chang, H. L. St[ö]{}rmer, and D. C. Tsui, [*Phys. Rev. Lett.*]{} [**55**]{}, 1606 (1985). R. Willet, J. P. Eisenstein, H. L. St[ö]{}rmer, D. C. Tsui, A. C. Gossard, and J. H. English, [*Phys. Rev. Lett.*]{} [**59**]{}, 1776 (1987). R. B. Laughlin, [*Phys. Rev. Lett.*]{} [**50**]{}, 1395 (1983). , edited by R. E. Prange and S. M. Girvin (Springer, New York, 1990). B. I. Halperin, [*Helv. Phys. Acta.*]{} [**56**]{}, 775 (1984). F. D. M. Haldane, [*Phys. Rev. Lett.*]{} [**51**]{}, 605 (1983). B. I. Halperin, [*Phys. Rev. Lett.*]{} [**52**]{}, 1583 (1984). J. K. Jain, [*Phys. Rev. B*]{} [**41**]{}, 7653 (1990). G. Fano, F. Ortolani, and E. Colombo, [*Phys. Rev. B*]{} [**34**]{}, 2670 (1986). F. D. M. Haldane and E. H. Rezayi, [*Phys. Rev. Lett.*]{} [**54**]{}, 237 (1985). F. D. M. Haldane, in [*The Quantum Hall Effect*]{} (Ref. 9), Chap. 8. F. C. Zhang and S. Das Sarma, [*Phys. Rev. B*]{} [**33**]{}, 2903 (1986). Song He, F. C. Zhang, X. C. Xie, and S. Das Sarma, [*Phys. Rev. B*]{} [**42**]{}, 11376 (1990). D. J. Yoshioka, [*J. Phys. Soc. Jpn.*]{} [**55**]{}, 885 (1986); Song He, S. Das Sarma, and X. C. Xie, [*Phys. Rev. B*]{} [**47**]{}, 4394 (1993); L. Belkhir and J. K. Jain, [*Solid State Commun.*]{} [**94**]{}, 107 (1995). Frank Stern and S. Das Sarma, [*Phys. Rev. B*]{} [**30**]{}, 840 (1984). P. Hohenberg and W. Kohn, [*Phys. Rev.*]{} [**136**]{}, B864 (1964). W. Kohn and L. J. Sham, [*Phys. Rev.*]{} [**140**]{}, A1133 (1965). L. J. Sham and W. Kohn, [*Phys. Rev.*]{} [**145**]{}, 561 (1966). S. Das Sarma and Frank Stern, [*Phys. Rev. B*]{} [**32**]{}, 8442 (1985). T. Ando, A. B. Fowler, and F. Stern, [*Rev. Mod. Phys.*]{} [**54**]{}, 437 (1982). T. Ando, [*J. Phys. Soc. Jpn.*]{} [**51**]{}, 3893 (1982). T. Ando, [*Phys. Rev. B*]{} [**13**]{}, 3468 (1976). S. Das Sarma and B. Vinter, [*Phys. Rev. B*]{} [**23**]{}, 6832 (1981), [**26**]{}, 960 (1982), [**28**]{}, 3639 (1983). L. Hedin and B. I.
Lundqvist, [*J. Phys. C*]{} [**4**]{}, 2064 (1971). S. Das Sarma and B. A. Mason, [*Annals Phys. (NY)*]{} [**163**]{}, 78 (1985). J. P. Eisenstein, R. L. Willet, H. L. St[ö]{}rmer, D. C. Tsui, A. C. Gossard, and J. H. English, [*Phys. Rev. Lett.*]{} [**61**]{}, 997 (1988). F. D. M. Haldane and E. H. Rezayi, [*Phys. Rev. Lett.*]{} [**60**]{}, 956 (1988). A. H. MacDonald, D. Yoshioka, and S. M. Girvin, [*Phys. Rev. B*]{} [**39**]{}, 8044 (1989); J. P. Eisenstein and A. H. MacDonald, unpublished; L. Belkhir, X. G. Wu, and J. K. Jain, [*Phys. Rev. B*]{} [**48**]{}, 15245 (1993); Gautam Dev, X. C. Xie, and B. A. Mason, [*Phys. Rev. B*]{} [**51**]{}, 10905 (1995).

   m    $V_{\mbox{m}}$
  ----  ----------------
   0    0.47665957508
   1    0.37332084804
   2    0.35230370587
   3    0.28844405415
   4    0.24996235081
   5    0.22361694066
   6    0.20414646380
   7    0.18900642725
   8    0.17679961714
   9    0.16668746764
   10   0.15813248355
   11   0.15077224603
   12   0.14435235363
   13   0.13868828532
   14   0.13364252111
   15   0.12911019556
   16   0.12500975822
   17   0.12127670242
   18   0.11785925293
   19   0.11471532800
   20   0.11181031734

  : Table 1: LDA pseudopotential parameters $V_{m}$ for the first Landau level ($n=1$), calculated for the heterostructure sample parameters of Willet [*et al.*]{} [@WILLET2].
--- abstract: 'In the distributed optimization and iterative consensus literature, a standard problem is for $N$ agents to minimize a function $f$ over a subset of Euclidean space, where the cost function is expressed as a sum $\sum f_i$. In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost function of the individual agents should remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change of an individual’s cost function only results in unsubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP, which achieves differential privacy and convergence to the optimal value. Our analysis reveals the dependence of the achieved accuracy and the privacy levels on the parameters of the algorithm. We observe that to achieve $\epsilon$-differential privacy, the accuracy of the algorithm has the order of $O(\frac{1}{\epsilon^2})$.' author: - | Zhenqi Huang  Sayan Mitra  Nitin Vaidya\ {zhuang25, mitras, nhv}@illinois.edu\ Coordinated Science Laboratory\ University of Illinois at Urbana-Champaign\ Urbana, IL 61801 bibliography: - 'Privacy.bib' - 'sayan1.bib' title: Differentially Private Distributed Optimization ---
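The fragment above gives only the abstract, and the authors' algorithm is not reproduced here. Purely as a hedged illustration of the problem setting, and not of the method proposed in the paper, the sketch below runs a generic consensus-plus-gradient iteration in which each agent adds Laplace noise to the state it broadcasts; the quadratic cost functions, mixing matrix, step sizes, and noise scale are all our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 5, 200
theta = rng.uniform(-1.0, 1.0, size=N)       # private data: f_i(x) = (x - theta_i)^2
x = np.zeros(N)                              # each agent's local estimate of the minimiser
W = np.full((N, N), 1.0 / N)                 # doubly stochastic mixing matrix (complete graph)
noise_scale = 0.1                            # larger noise -> stronger privacy, worse accuracy

for t in range(1, T + 1):
    msg = x + rng.laplace(scale=noise_scale, size=N)   # agents broadcast noisy states
    x = W @ msg - (1.0 / t) * 2.0 * (x - theta)        # consensus step + local gradient step

print("minimiser of sum_i f_i:", theta.mean())
print("agent estimates       :", np.round(x, 3))
```

In this toy setting the injected noise is exactly what limits the final accuracy, mirroring the accuracy-privacy trade-off summarised in the abstract.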
--- author: - 'Sanjam Garg[^1]' - 'Shafi Goldwasser[^2]\' - Prashant Nalini Vasudevan bibliography: - 'abbrev1.bib' - 'crypto.bib' - 'refs.bib' title: Formalizing Data Deletion in the Context of the Right to be Forgotten --- [^1]: EECS, UC Berkeley. Email: `{sanjamg,prashvas}@berkeley.edu`. Supported in part by AFOSR Award FA9550-19-1-0200, an AFOSR YIP Award, NSF CNS Award 1936826, DARPA and SPAWAR under contract N66001-15-C-4065, a Hellman Award, and research grants by the Okawa Foundation, Visa Inc., and the Center for Long-Term Cybersecurity (CLTC, UC Berkeley). The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. [^2]: Simons Institute for the Theory of Computing, UC Berkeley. Email: `shafi@theory.csail.mit.edu`. Supported in part by the C. Lester Hogan Chair in EECS, UC Berkeley, and Fintech@CSAIL.
--- abstract: 'We present a simple physical model for the minimum mass of bound stellar clusters as a function of the galactic environment. The model evaluates which parts of a hierarchically-clustered star-forming region remain bound given the time-scales for gravitational collapse, star formation, and stellar feedback. We predict the initial cluster mass functions (ICMFs) for a variety of galaxies and we show that these predictions are consistent with observations of the solar neighbourhood and nearby galaxies, including the Large Magellanic Cloud and M31. In these galaxies, the low minimum cluster mass of $\sim10^2\Msun$ is caused by sampling statistics, representing the lowest mass at which massive (feedback-generating) stars are expected to form. At the high gas density and shear found in the Milky Way’s Central Molecular Zone and the nucleus of M82, the model predicts that a mass $>10^2\Msun$ must collapse into a single cluster prior to feedback-driven dispersal, resulting in narrow ICMFs with elevated characteristic masses. We find that the minimum cluster mass is a sensitive probe of star formation physics due to its steep dependence on the star formation efficiency per free-fall time. Finally, we provide predictions for globular cluster (GC) populations, finding a narrow ICMF for dwarf galaxy progenitors at high redshift, which can explain the high specific frequency of GCs at low metallicities observed in Local Group dwarfs like Fornax and WLM. The predicted ICMFs in high-redshift galaxies constitute a critical test of the model, ideally-suited for the upcoming generation of telescopes.' author: - | Sebastian Trujillo-Gomez[^1], Marta Reina-Campos and J. M. Diederik Kruijssen\ Astronomisches Rechen-Institut, Zentrum f[ü]{}r Astronomie der Universit[ä]{}t Heidelberg, Monchhofstra[ß]{}e 12-14, D-69120 Heidelberg, Germany bibliography: - 'merged.bib' date: 'Accepted 2019 July 9. Received 2019 June 17; in original form 2019 March 11' title: A model for the minimum mass of bound stellar clusters and its dependence on the galactic environment --- \[firstpage\] stars: formation — globular clusters: general — galaxies: evolution — galaxies: formation –– galaxies: star clusters: general Introduction {#sec:intro} ============ Star clusters are potentially powerful tools for understanding the assembly of galaxies in a cosmological context. For instance, the properties of globular cluster (GC) systems are tightly correlated not only to their host galaxies [@BrodieStrader06; @Kruijssen14], but also to their inferred dark matter halo masses [@Blakeslee97; @Harris17; @Hudson18]. Likewise, young stellar clusters provide detailed information about the recent star formation conditions in their host galaxies [@PortegiesZwart10; @Longmore14; @Chilingarian18]. Clusters are also ideal tracers of gravity and probe the detailed mass distribution of dark matter haloes [@Cole12; @Erkal15; @Alabi16; @Contenta18; @vanDokkum18b]. In order to fully exploit stellar clusters as tracers of galaxy and structure formation, we must understand how their birth environments give rise to their initial properties (including masses, ages, structure and chemical composition) and how these evolve across cosmic time. Understanding the relation between star and cluster formation has important implications for the hierarchical formation and evolution of galaxies. 
A common hypothesis for the origin of GCs considers them products of regular star formation in the extreme conditions in the interstellar medium (ISM) of $z \sim 2-3$ galaxies [e.g. @KravtsovGnedin05; @Elmegreen10; @Shapiro10; @Kruijssen15b; @Reina-Campos19]. Within this framework, GCs correspond to the dynamically-evolved remnants of massive clusters formed at high redshift [e.g. @Forbes18; @Kruijssen19a]. Recent studies show that GCs are excellent tracers of the assembly histories of galaxies, and in particular of the Milky Way [e.g. @Kruijssen18b; @Myeong18]. Naturally, the properties of GC populations can only be predicted given a complete model for their initial demographics, of which the initial cluster mass function (ICMF) is an essential component. Many fundamental aspects of the process of star cluster formation are still poorly understood, including the fraction of stars that form in clusters, the ICMF, and their dependence on the large-scale galactic environment. For example, it is a widespread assumption in the literature that the initial mass function of bound star clusters follows a power-law with a logarithmic slope $\sim -2$ [@ZhangFall99; @Bik03; @Hunter03; @McCradyGraham07; @Chandar10; @PortegiesZwart10]. This was later revised to include a high-mass truncation [cf. @Schechter76], with additional evidence of a strong environmental dependence of the truncation mass [e.g. @Gieles06; @Larsen09; @Adamo15; @Johnson17; @Reina-Campos17; @Messa18]. Despite all the effort that has been put into understanding and modelling the environmental dependence of the high-mass end, the low-mass truncation is still assumed to be $\sim 10^2\Msun$ [e.g. @LadaLada03; @Lamers05] across all environments. Recently, @Reina-Campos17 [hereafter ] developed a model for the maximum mass of stellar clusters that simultaneously includes the effect of stellar feedback and centrifugal forces. The model predicts the upper mass scale of molecular clouds (and by extension, that of star clusters) by considering how much mass from a centrifugally-limited region (containing a ‘Toomre mass’, see @Toomre64) can collapse before stellar feedback halts star formation. The authors find that the resulting upper truncation mass of the ICMF depends on the gas pressure, where environments with higher gas pressures are able to form more massive clusters. Local star-forming discs (such as the Milky Way and M31) with low ISM surface densities are predicted to have much lower truncation masses than (nuclear) starbursts and $z \sim 2$ clumpy discs, where stellar feedback is slow relative to the collapse time-scale. Together with a theoretical model predicting an increase of the cluster formation efficiency (CFE) with gas pressure [@Kruijssen12b], these results reproduce observations of young massive clusters (YMCs) in the local Universe [e.g. @Adamo15; @Messa18]. The general implication of these results is that cluster properties are shaped by the galactic environment. This environmental coupling hints at the exciting prospect of using clusters to trace the evolution of their host galaxies. In addition to allowing the use of clusters as tracers of galaxy assembly, the above models also provide the initial conditions for studies of cluster dynamical evolution [e.g. @LamersGieles06; @Lamers10; @Baumgardt19], as well as for sub-grid modelling of star cluster populations in cosmological simulations.
Together with the environmentally-dependent modelling of dynamical evolution including tidal shocking and evaporation, these cluster formation models enable the formation and evolution of the entire star cluster population to be followed from extremely high redshift down to $z=0$ [@Pfeffer18; @Kruijssen19a]. Self-consistently forming and evolving the entire cluster population in cosmological simulations of a representative galaxy sample is currently an intractable problem due to the extremely high resolution required, although case studies are promising [@kim18; @li18]. Despite the recent progress on the theory of GC formation, many of the observed properties of GC populations still remain a puzzle. The GC mass function has a close to log-normal shape with a characteristic peak at $\sim 10^5\Msun$ [e.g. @Harris91; @Jordan07], whereas the young cluster mass function (CMF) continues as a power law down to much lower masses [e.g. @ZhangFall99; @Hunter03; @Johnson17]. This difference can be explained if the majority of low-mass clusters are disrupted over several Gyr due to dynamical effects [@Spitzer87; @Gnedin99; @FallZhang01; @BaumgardtMakino03; @LamersGieles06; @Kruijssen15b]. However, recent observations of GC systems in nearby dwarf galaxies seem to challenge this scenario. @Larsen12 [@Larsen14] determined the chemical properties of GCs around a number of Local Group dwarf galaxies. These studies found that a strikingly large fraction ($\sim 20-50$ per cent) of low metallicity stars in Fornax and in WLM belong to their GCs (which have a characteristic mass of $\sim 10^5\Msun$). This is extremely high compared to the typical fraction of $0.1$ per cent found in Milky Way-mass galaxies, and it is also the largest GC specific frequency ever observed. This feature seems to extend to every dwarf galaxy where GC and field star metallicities have been determined, and contradicts the existence of a universal power-law ICMF down to a common lower mass limit of $\sim10^2\Msun$ [@Larsen18]. In these dwarf galaxies, the traditionally assumed universal [@Schechter76] ICMF extending down to $\sim 10^2\Msun$ requires the majority of the low-mass clusters to have been disrupted after a Hubble time of dynamical evolution, thus returning their mass to the field population. This would allow at most $10$ per cent of the low-metallicity stars to reside in the surviving GCs, contrary to the much larger observed fraction of $20{-}50$ per cent. In this paper, we examine the possibility that the low-mass end of the ICMF is not universal, but is instead determined by the environmentally dependent minimum mass of a bound star cluster. We develop a model for the dependence of the minimum cluster mass on galactic birth environment. The model is based on the hierarchical nature of star formation in molecular clouds regulated by stellar feedback, combined with empirical input on the structure and scaling relations of clouds in the local Universe. By estimating the time-scale for stellar feedback to halt star formation in relation to the collapse time of clouds with a spectrum of masses, we can predict the range of cloud masses that can achieve the minimum star formation efficiency needed to remain bound after the remaining gas is blown out by feedback. This minimum mass scale emerges naturally as the largest scale that must collapse and merge into a single bound object, which corresponds to the bottom of the hierarchy of young stellar structure in galaxies. The paper is organised as follows. 
In Section \[sec:model\], we present the derivation of the minimum bound cluster mass as a function of cloud properties as well as global galaxy observables. Section \[sec:uncertainties\] presents an estimate of the dominant uncertainties. Section \[sec:environments\] illustrates the predicted variation of the minimum mass and the width of the ICMF across the broad range of observed galaxies. In Section \[sec:predictions\], we make predictions of the full ICMF and compare these with observational estimates in the solar neighbourhood, the Large Magellanic Cloud (LMC), M31, the Antennae galaxies, and galactic nuclei including the Central Molecular Zone (CMZ) of the Milky Way, and the nucleus of M82. This section also discusses the effect of the minimum mass on the inferred CFE. In Section \[sec:GCs\], we illustrate how the model can be used to reconstruct the galactic environment that gave rise to the populations of GCs in the Fornax dSph, and also to predict the ICMFs in the high-redshift environments that will be within reach of the next generation of observational facilities. Lastly, Section \[sec:conclusions\] summarises our results. Model {#sec:model} ===== We begin by assuming that the ICMF follows a power law with exponential truncations at both the high- and low-mass ends: $$\frac{{\rm d}N}{{\rm d}M} \propto M^{\beta} \exp\left( -\frac{\Mmin}{M} \right) \exp\left( -\frac{M}{\Mmax} \right) , \label{eq:CMF}$$ where $\beta = -2$ as expected from gravitational collapse in hierarchically structured clouds [e.g. @Elmegreen96; @Guszejnov18], $\Mmin$ is the minimum cluster mass scale, and $\Mmax$ is the maximum cluster mass scale, which we determine from the mass of the largest molecular cloud that can survive disruption by feedback or galactic centrifugal forces, according to the model by . This ICMF introduces three different regimes. At $M\gg\Mmax$, bound clusters are extremely unlikely to form due to the disruptive effects of galactic dynamics and stellar feedback, inhibiting the collapse of the largest spatial scales. At $M\ll\Mmin$, bound clusters must be part of a larger bound part of the hierarchy, because the attained star formation efficiencies are very high, causing them to merge into a single bound object of a higher mass. In between these mass scales, self-similar hierarchical growth imposes a power law ICMF. When describing star and cluster formation in a disc in hydrostatic equilibrium [cf. @krumholzmckee05; @Kruijssen12b], $\Mmax$ can be expressed in terms of the ISM surface density $\SigmaISM$, the angular velocity of the rotation curve $\Omega$, and the Toomre $Q$ parameter of the galactic gas disc. In this section, we outline a model to derive the minimum bound cluster mass $\Mmin$ and in Section \[sec:minmass\] we present the analytical formalism. In Section \[sec:IMF\], we include the impact of sampling the stellar initial mass function (IMF) in low-mass molecular clouds, and in Section \[sec:global\] we formulate the minimum mass in terms of global galaxy properties. Following @Kruijssen12b and , we model star and cluster formation as a continuous process that takes place when overdense regions within molecular clouds and their substructures collapse due to local gravitational instability. The collapse leads to fragmentation and the formation of stars until the newly formed stellar population deposits enough feedback energy and momentum in the local gas reservoir to stop the gas supply and the corresponding star formation. 
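As a quick numerical illustration of equation (\[eq:CMF\]) before deriving the feedback time-scale below (ours, with arbitrary fiducial mass scales rather than values derived later in the paper), the sketch below evaluates the doubly-truncated power law and shows the three regimes described above.

```python
import numpy as np

def icmf(M, M_min=1.0e2, M_max=1.0e6, beta=-2.0):
    """Unnormalised dN/dM of equation (1): a power law with exponential
    truncations below M_min and above M_max (masses in Msun)."""
    return M ** beta * np.exp(-M_min / M) * np.exp(-M / M_max)

masses = np.logspace(1, 8, 8)                # fiducial grid of cluster masses
for M, dndm in zip(masses, icmf(masses)):
    print(f"M = {M:9.2e} Msun   dN/dM ~ {dndm:.3e}")
# The distribution is exponentially suppressed below M_min and above M_max,
# and follows the beta = -2 power law in between.
```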
The critical time-scale that defines how much gas is converted into stars is determined by the time required for stellar feedback to halt star formation. This ‘feedback time-scale’ determines the total star formation efficiency through the relation $$\epsfb \equiv \frac{M_*(t_{\rm SF})}{\Mc} , \label{eq:epsfb1}$$ where $M_*(t_{\rm SF})$ is the mass of stars formed after a feedback time-scale $t_{\rm SF}$ and $\Mc$ is the cloud mass. This feedback-regulated star formation efficiency also determines the ability of the star cluster to stay bound after the residual gas is expelled. The detailed role of various stellar feedback processes in cloud disruption is still a highly debated topic in the literature [@Korpi99; @JoungMacLow06; @Thompson05; @Murray10; @Dobbs11; @Dale12; @Kruijssen19b]. These processes include photoionisation, radiation pressure, stellar winds, and supernova (SN) explosions. The detailed treatment of each of these processes is well beyond the scope of our model. We therefore use the fact that the integrated specific momentum output of each of these mechanisms is similar [e.g. @Agertz13] and use feedback by SN explosions as a phenomenological proxy for the complete array of feedback processes. In what follows, we will use the minimum lifetime of O- and B-type stars as the time delay until the first SNe, $t_{\rm OB} \simeq 3\Myr$. This is strictly an upper limit on the delay for the onset of the energetic effects of feedback. We refer the reader to Section \[sec:earlyfeedback\] for a discussion of the uncertainties related to this assumption. To obtain the minimum star cluster mass within a galaxy with a given set of characteristic global properties (such as the ISM surface density and the angular velocity), we must calculate the range of cloud masses in which the star formation efficiency is guaranteed to be large enough for the stars to collapse into a single cluster that remains bound after the remaining gas is expelled by stellar feedback [@Hills80; @Lada84; @Kroupaetal01]. Motivated by the comprehensive exploration of parameter space in idealised $N$-body simulations [e.g. @BaumgardtKroupa07], this condition can be written in terms of the minimum local star formation efficiency needed to form a bound cluster ($\epsilon_{\rm min}$) as $$\epsfb \geq \epsilon_{\rm min} , \label{eq:epsfb2a}$$ such that star-forming regions that do not convert a sufficiently large fraction of their gas mass into stars by the time gas is expelled will not form bound clusters. Because the maximum star formation efficiency is limited by feedback from protostellar outflows disrupting protostellar cores (the formation sites of individual stars), which is adiabatic and therefore undisruptive, this condition can be written $$\epsfb \geq \epsbound\epscore , \label{eq:epsfb2}$$ where $\epscore$ is the limiting efficiency of star formation within protostellar cores, and $\epsbound$ is the minimum fraction of cloud mass that must condense into molecular cores to obtain a bound cluster. This expression is one of the key ingredients of our model. Idealised $N$-body simulations find values $\epsbound \approx 0.4$ across a broad variety of cluster properties and environments [@BaumgardtKroupa07]. Observations of protostellar cores find $\epscore \approx 0.5$ [e.g. @Enoch08], such that the star formation efficiency should be $\epsfb \geq 0.2$ in order to guarantee collapse into a single bound cluster. Summarising, the procedure to obtain the minimum bound cluster mass is as follows. 1.
Derive the total star formation time-scale, $\tfb$ from the time required for stellar feedback to over-pressure a collapsing gas cloud of density $\rhoc$ embedded in a galactic disc with ISM surface density $\SigmaISM$. 2. Express the mean volume density in terms of the cloud surface density and mass by assuming spherical symmetry. 3. Obtain the total star formation efficiency of the cloud, $\epsfb$, as a function of cloud mass and surface density by multiplying the ratio of the star formation time-scale and the free-fall time by the empirical star formation efficiency per free-fall time. Because low mass clouds have higher gas densities, the integrated star formation efficiency will be a decreasing function of cloud mass. 4. Compare the total star formation efficiency ($\epsfb$) to the efficiency required for the cluster to remain bound after stellar feedback blows out the residual gas ($\epsbound\epscore$). The maximum cloud mass that reaches this threshold efficiency will set the minimum scale for collapse into a single bound cluster, because [*lower mass scales are part of a larger bound part of the hierarchy*]{}, i.e. they are guaranteed to merge into larger bound structures before star formation is halted by feedback. This scale then defines the bottom of the merger hierarchy. 5. The minimum bound cluster mass as a function of cloud mass, cloud surface density, and ISM surface density is then obtained by multiplying the threshold cloud mass by the minimum required efficiency to remain bound, $\epsbound\epscore$. In the following section, we follow the above procedure to derive the minimum cluster mass. The minimum mass of a bound star cluster {#sec:minmass} ---------------------------------------- Finding the minimum star cluster mass amounts to solving equations (\[eq:epsfb1\]) and (\[eq:epsfb2\]) simultaneously for $\Mmin = M_*$. In other words, we must find the range of cloud (and resulting cluster) masses where star formation is efficient enough to reach $\epsfb \geq \epsbound\epscore$, such that the local stellar population is guaranteed to remain gravitationally bound, even after any residual gas reservoir is expelled by stellar feedback. Because the star formation efficiencies are defined locally, we are interested in the largest mass scale at which boundedness is certain to be achieved. Lower-mass aggregates will be part of a larger bound structure, such that the minimum cluster mass is set by the largest structure that [*must*]{} be gravitationally bound. The first step is to obtain the feedback-regulated star formation efficiency as a function of the cloud mass. The integrated local star formation efficiency can be expressed in terms of the specific star formation efficiency per free-fall time as [@Kruijssen12b] $$\epsilon_{\rm SF} = \epsff \frac{ t_{\rm SF} }{ t_{\rm ff} } , \label{eq:5}$$ where $t_{\rm SF}$ is the total duration of the star formation process in the cloud, and $\epsff$ and $\tff$ are the star formation efficiency per free-fall time and the mean cloud free-fall time, respectively. Motivated by detailed measurements of molecular clouds in the Milky Way [e.g. @Evans14; @Lee16], as well as across many nearby galaxies [@Leroy17; @Utomo18], we assume a fiducial constant value $\epsff = 0.01$ (see Section \[sec:uncertainties\] for a discussion of the uncertainty on this number). Following @Kruijssen12b, the duration of star formation within a gas reservoir of density $\rhoc$, i.e. 
$\tfb$, is set by the time it takes for stellar feedback to pressurise the gas and stop the supply of fresh material. This time-scale can be calculated by comparing the external confining pressure of the ISM (or parent molecular cloud) to the gas pressure within the feedback-affected region. The duration of star formation is then obtained by adding the time delay between the onset of star formation ($\tsn$) and the first SN explosion, and the time between the first SN and pressure equilibrium with the ISM ($t_{\rm eq}$), i.e. $$\label{eq:t_fb_def} \tfb = \tsn + t_{\rm eq} .$$ We start by writing the ambient pressure of the ISM at the disc midplane as $$\label{eq:pressure1} P_{\rm ISM} = \phi_{\rm P} \frac{\pi}{2} G ~\SigmaISM^2 ,$$ where $\SigmaISM$ is the surface density of the ISM of the galaxy, and $\phi_{\rm P} \approx 3$ is a correction due to the gravity of the stars [@krumholzmckee05]. The outward pressure exerted by stellar feedback is [@Kruijssen12b] $$\begin{aligned} \label{eq:pressure2} P_{\rm fb} & = \frac{ E_{\rm fb} }{ V } \nonumber \\ & = \phifb \epsfb \rhoc ~t_{\rm eq} ,\end{aligned}$$ where $E_{\rm fb}$ is the total energy injected by stellar feedback, $V$ is the region volume, $\rhoc$ is the mean cloud density, $\phi_{\rm fb} = 3.2 \times 10^{32} \erg \s^{-1} \Msun^{-1}$ is the mean rate of SN energy injection per unit stellar population mass, $\epsfb$ is the total fraction of gas mass that is converted into stars during the lifetime of the cloud (equation \[eq:5\]), and $t_{\rm eq}$ is the time it takes for the cloud to reach pressure equilibrium with the ISM after the death of the first massive (OB-type) star. By equating the inward and outward pressures we can then solve for the time required to reach pressure equilibrium and stop gas accretion, $t_{\rm eq}$, using equations (\[eq:pressure1\]) and (\[eq:pressure2\]), $$\label{eq:t_eq} t_{\rm eq} = \frac{ \pi \phi_{\rm P} G ~\SigmaISM^2 }{ 2 \phifb \epsfb \rhoc } .$$ The time-scale corresponding to the duration of star formation, $\tfb$, may then be defined as the time required for stellar feedback to cut the fresh gas supply and halt star formation. Using equations (\[eq:t\_fb\_def\]), (\[eq:pressure1\]), and (\[eq:pressure2\]), we obtain $$\label{eq:2} \tfb = \tsn + \frac{ \pi \phiP G ~\SigmaISM^2 }{ 2 \phifb \epsfb \rhoc } ,$$ where $\tsn$ is the time delay before the first SN (corresponding to an OB-type progenitor star) explodes. The next step is to express the molecular cloud density in terms of its mass and mean surface density. Assuming spherical symmetry, the mean cloud gas density is $$\rhoc = \frac{ \Mc }{ \frac{4}{3}\pi\Rc^3 } . \label{eq:rho_c_rad}$$ Expressing the cloud radius in terms of the mean cloud surface density, $\Sigmac$, yields $$\label{eq:mean_sd} \Rc = \left( \frac{ \Mc } { \pi \Sigmac } \right)^{1/2}.$$ Substituting this expression into equation (\[eq:rho\_c\_rad\]) gives $$\rhoc = \frac{3}{4} \left( \frac{ \pi \Sigmac^3 }{ \Mc } \right)^{1/2} . \label{eq:rho_c}$$ The final expression for the duration of star formation $\tfb$ can now be obtained in terms of the cloud mass and surface density by substituting equations (\[eq:5\]) and (\[eq:rho\_c\]) into equation (\[eq:2\]), $$\tfb = \tsn + \frac{2\pi}{3} \frac{ \phiP G ~\SigmaISM^2 \tff }{ \phifb \epsff \tfb} \left(\frac{ \Mc }{ \pi \Sigmac^3 }\right)^{1/2}, \label{eq:t_sf_quad}$$ where the free-fall time can be written in terms of the cloud mass and surface density using equation (\[eq:rho\_c\]), i.e.
$$\tff = \sqrtsign{ \frac{3 \pi}{32 G \rhoc} } = \sqrtsign{ \frac{\pi^{1/2}}{8G} } \left( \frac{\Mc}{\Sigmac^3} \right)^{1/4} \equiv \mathcal{C}\left( \frac{\Mc}{\Sigmac^3} \right)^{1/4} , \label{eq:t_ff}$$ where the final equality defines the constant $\mathcal{C}$. The star-formation time-scale can now be obtained by solving the quadratic equation (\[eq:t\_sf\_quad\]) after substituting equation (\[eq:t\_ff\]) for $\tff$. The result is $$\tfb = \frac{\tsn}{2} \left[ 1 + \sqrtsign{ 1 + \frac{ 8\pi^{1/2}\mathcal{C} }{3} \frac{ \phiP G ~\SigmaISM^2 }{ \phifb \epsff \tsn^2 } \frac{ \Mc^{3/4} }{ \Sigmac^{9/4} } } \right] . \label{eq:t_fb}$$ At a constant cloud surface density, which is typically observed within a given galactic environment [e.g. @Heyer09; @Sun18], two cloud mass regimes emerge. For large cloud masses, the second term inside the square root in equation (\[eq:t\_fb\]) becomes $\gg 1$, and the star formation time-scale increases with cloud mass, as $\tfb \propto \Mc^{3/8}$ at fixed $\Sigmac$. Physically, this describes the regime where the time required to build up enough SN energy to pressurise the cloud increases with cloud mass, because more massive clouds have lower volume densities (and hence lower integrated star formation efficiencies and SN energy per unit cloud mass). For lower cloud masses, the second term inside the square root becomes $\ll 1$, and the feedback time-scale $\tfb \to \tsn$. This corresponds to the physical regime where the cloud mass is low enough (and hence its density and integrated star formation efficiency high enough) that the first SN provides enough energy density to overpressure the cloud and halt star formation. In this regime, the duration of star formation is set by the time delay until the first SN, which is determined by massive-star evolution combined with the sampling of the IMF. We will include this effect in the following section. Finally, we are now able to write the condition to form a bound star cluster in terms of the molecular cloud mass $\Mc$ and surface density $\Sigmac$ by substituting equations (\[eq:5\]), (\[eq:t\_ff\]), and (\[eq:t\_fb\]) into equation (\[eq:epsfb2\]), $$\begin{multlined} \epsff \frac{\tsn}{2\mathcal{C}} \frac{\Sigmac^{3/4}}{\Mc^{1/4}} \left[ 1 + \sqrtsign{ 1 + \frac{ 8\pi^{1/2}\mathcal{C} }{3} \frac{ \phiP G ~\SigmaISM^2 }{ \phifb \epsff \tsn^2 } \frac{ \Mc^{3/4} }{ \Sigmac^{9/4} } } \right] \\ \geq \epsbound \epscore . \end{multlined} \label{eq:boundcond}$$

Impact of IMF sampling on the feedback time-scale {#sec:IMF}
-------------------------------------------------

In deriving the feedback time-scale, we implicitly excluded the effects of stochastically sampling the stellar IMF. Naturally, the derivation of the feedback time-scale, $\tfb$, in Section \[sec:minmass\] breaks down for arbitrarily low cloud masses, because as $\Mc \downarrow 0$ the cloud mass becomes too small to produce even a single massive star. This should lead to a rapid rise in the delay time $\tsn$ as the cloud mass decreases, because it takes longer to build up enough cluster mass ($\MOB$) to produce a massive star. We can account for this effect by writing the characteristic delay time until the first SN as $$\tsn = \tsnO + \Delta t, \label{eq:tsn_IMF}$$ where $\Delta t$ is the delay from the onset of star formation until the cluster has enough mass to contain at least one massive OB-type star, and $\tsnO$ is the time interval between this moment and the onset of stellar feedback. We assume $\tsnO = 3\Myr$ based on the shortest lifetime of a star more massive than $8\Msun$ [e.g.
@Ekstrom12]. To calculate the delay $\Delta t$ in this low-mass regime we write the minimum stellar mass $\MOB$ that must be formed for the cluster to contain at least one massive star as $$\MOB = \epsilon \Mc = \epsff \frac{\Delta t}{\tff} \Mc , \label{eq:M_OB}$$ where $\epsilon$ is the integrated star formation efficiency of the cloud. For low enough masses the mass of the cloud becomes smaller than $\MOB$ and $\Delta t > \tff/\epsff$. This corresponds to the regime of such low cloud masses that no massive stars can be produced and star formation cannot be stopped by stellar feedback. We assume that such low-mass clouds will continue to accrete until they reach sufficiently high masses to form a massive star. The value of $\MOB$ can then be obtained by solving the system of integral equations (for a given choice of stellar IMF) $$\begin{aligned} \int^{\infty}_{8M_{\odot}} \Phi \frac{{\rm d}N(m)}{{\rm d}m} {\rm d}m & = 1 , \\ \int^{\infty}_{0.08M_{\odot}} \Phi \frac{{\rm d}N(m)}{{\rm d}m} m ~{\rm d}m & = \MOB,\end{aligned}$$ for $\MOB$ and the normalisation of the IMF, $\Phi$. These two equations simply state that there is one massive star in the cluster and that the total mass under the IMF is $\MOB$. The relevant integration limits are the hydrogen-burning mass limit, $0.08\Msun$, and the minimum mass of a B star, $8\Msun$. Solving the equations above for a @chabrier03 IMF gives $\MOB = 99\Msun$. The time $\Delta t$ required to form at least one massive star is then obtained by inverting equation (\[eq:M\_OB\]), $$\Delta t = \frac{\MOB}{\epsff} \frac{\tff}{\Mc} . \label{eq:delta_t}$$ Substituting equation (\[eq:t\_ff\]) to express the free-fall time, the general expression for the time delay between the onset of star formation and the first SN in a cloud of mass $\Mc$ and surface density $\Sigmac$ is $$\label{eq:t_sn} \tsn = \tsnO + \mathcal{C} \frac{\MOB}{\epsff} \left(\Sigmac \Mc \right)^{-3/4} .$$ This means that the delay time increases as the cloud mass and surface density decrease, as expected from the corresponding changes of the star formation rates and free-fall times.

Dependence on global galactic environment {#sec:global}
-----------------------------------------

To relate the minimum bound cluster mass to its galactic star-forming environment, we should express the condition to form a bound cluster (equation \[eq:boundcond\]) in terms of the properties of the host galaxy. This condition already has a dependence on the mean ISM surface density $\SigmaISM$, in addition to the dependence on the cloud mass $\Mc$ and surface density $\Sigmac$. The mean cloud surface density can be written as a function of the global ISM surface density in the host galaxy using equation (9) from @Kruijssen15b, $$\label{eq:f_sigma} f_{\Sigma} = \frac{\Sigmac}{\SigmaISM} = 3.92 \left( \frac{10 - 8f_{\rm mol}}{2} \right)^{1/2} ,$$ where the global molecular gas fraction, $f_{\rm mol}$, is a function of the ISM surface density, $\SigmaISM$, parameterised using equation (73) of @krumholzmckee05 as $$f_{\rm mol} \approx \left[1 + 0.025~\left(\frac{\SigmaISM}{10^2\Msunpc2}\right)^{-2} \right]^{-1} .$$ This relation implies that in galaxies with high ISM surface densities ($\SigmaISM \ga 100\Msunpc2$), the ISM becomes nearly fully molecular, $f_{\rm mol} \sim 1$, and $\fSigma \sim 4$. For the low ISM surface densities characteristic of nearby spirals, $\SigmaISM \sim 10\Msunpc2$, the density contrast is $\fSigma \approx 7.7$.
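As a brief aside, the value $\MOB = 99\Msun$ adopted in Section \[sec:IMF\] can be reproduced with a minimal numerical sketch of the two integral conditions above. The example below is illustrative only (it is not the code used in this work): the specific Chabrier-like single-star IMF parametrisation and the $120\Msun$ upper integration limit are assumptions not stated explicitly in the text.

```python
import numpy as np
from scipy.integrate import quad

m_c, sigma = 0.079, 0.69        # assumed lognormal parameters of the Chabrier-like IMF (m <= 1 Msun)
k_high = 0.158 * np.exp(-(np.log10(1.0 / m_c))**2 / (2 * sigma**2))  # continuity at 1 Msun

def dN_dm(m):
    """Unnormalised IMF dN/dm: lognormal below 1 Msun, Salpeter-like power law (dN/dm ~ m^-2.3) above."""
    xi_logm = np.where(m <= 1.0,
                       0.158 * np.exp(-(np.log10(m / m_c))**2 / (2 * sigma**2)),
                       k_high * m**-1.3)
    return xi_logm / (m * np.log(10.0))

m_lo, m_sn, m_hi = 0.08, 8.0, 120.0                         # Msun; 120 Msun cutoff is assumed
n_massive, _ = quad(dN_dm, m_sn, m_hi)                      # unnormalised number of stars above 8 Msun
m_total, _   = quad(lambda m: m * dN_dm(m), m_lo, m_hi)     # unnormalised total stellar mass

# Requiring exactly one star above 8 Msun fixes the normalisation; the total mass is then M_OB.
M_OB = m_total / n_massive
print(M_OB)   # of order 1e2 Msun, i.e. of the same order as the value quoted in the text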
The expressions for the free-fall (equation \[eq:t\_ff\]) and the OB star (equation \[eq:t\_sn\]) time-scales now become $$\label{eq:tff_sigma} \tff(\SigmaISM,\Mc) = \mathcal{C} \frac{ \Mc^{1/4} }{ \left(f_{\Sigma}\SigmaISM \right)^{3/4} } ,$$ and $$\label{eq:tob_sigma} \tsn(\SigmaISM,\Mc) = \tsnO + \mathcal{C} \frac{\MOB}{\epsff}\left(f_{\Sigma} \SigmaISM \Mc \right)^{-3/4} .$$ We may now rewrite the condition to form a bound cluster (equation \[eq:boundcond\]) solely in terms of the galactic ISM surface density. Using equations (\[eq:f\_sigma\]), and (\[eq:tob\_sigma\]), equation (\[eq:boundcond\]) can be rewritten to obtain the condition for the threshold cloud mass, $\Mth$, at which the total feedback-regulated star formation efficiency, $\epsfb$, is large enough for the stars to remain bound after gas expulsion. Writing the functional dependence of the feedback time-scale implicitly we obtain $$\begin{aligned} & \epsbound \epscore = \frac{\epsff}{2\mathcal{C}} \tsn(\SigmaISM,\Mth) \frac{ \fSigma^{3/4} \SigmaISM^{3/4} }{ \Mth^{1/4} } \times \nonumber \\ & \left[ 1 + \sqrtsign{ 1 + \frac{ 8\pi^{1/2} \mathcal{C} G \phiP }{ 3 ~\phifb \epsff } \frac{ \Mth^{3/4} }{ \tsn^2(\SigmaISM,\Mth) \fSigma^{9/4} \SigmaISM^{1/4} } } \right] , \label{eq:boundcondISM}\end{aligned}$$ with $\tsn(\SigmaISM,\Mth)$ given by equation (\[eq:tob\_sigma\]) with $\Mc = \Mth$. This expression can be solved numerically to obtain $\Mth$ as a function of only the ISM surface density $\SigmaISM$. To visualise the variation of the star formation time-scale with cloud mass, we show in Figure \[fig:time-scales\] the free-fall and feedback time-scales. As an example, we choose here the fiducial case of the solar neighbourhood environment and we assume an ISM surface density of $\SigmaISM = 13\Msun\pc^{-2}$ [@KennicuttEvans12]. Figure \[fig:time-scales\] shows the emergence of two regimes in the behaviour of the star formation time-scale. For large cloud masses, $\tfb$ increases with mass because the feedback energy per unit cloud mass decreases (see discussion of equation \[eq:t\_fb\] in Section \[sec:minmass\]), requiring that star formation proceeds for longer so that stellar feedback can match the inward pressure. Towards the regime of low cloud masses, the star formation time-scale first begins to saturate near its minimum value of $\tfb \sim 3 \Myr$ (the SN delay of the most massive OB star) and then rises again with decreasing mass due to the growing delay until the formation of the first massive star. ![Predicted star formation time-scales as a function of cloud mass in the environment of the solar neighbourhood. The solid and dashed lines show the free-fall time and the time until SN feedback stops star formation, respectively. Clouds in the regime where $\tfb \gg \tff$ will form stars for several free-fall times and may reach a star formation efficiency high enough to remain bound after the remaining gas is expelled. Clouds with $\tfb < \tff$ will form stars for less than a free-fall time, becoming unbound after gas expulsion. The dotted line shows the effect of ignoring the time delay until the formation of a massive star introduced by the sampling of the IMF.[]{data-label="fig:time-scales"}](timescales.pdf){width="1.0\columnwidth"} The dependence of the condition to form a bound cluster on the ISM surface density is shown in Figure \[fig:boundcond\_mass\]. 
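Before turning to that figure, we note that the numerical solution of equation (\[eq:boundcondISM\]) is straightforward to sketch. The example below is not the authors' code; the working units, root-finding bracket, and function names are illustrative assumptions. It evaluates $\epsfb = \epsff\tfb/\tff$ from equations (\[eq:t\_ff\]), (\[eq:t\_sn\]), (\[eq:t\_fb\]), and (\[eq:f\_sigma\]), using the fiducial parameters quoted in the text ($\epsff = 0.01$, $\epsbound\epscore = 0.2$, $\tsnO = 3\Myr$, $\MOB = 99\Msun$, $\phiP = 3$, $\phifb = 3.2\times10^{32}\erg\s^{-1}\Msun^{-1}$), and locates the threshold mass at which $\epsfb$ drops to $\epsbound\epscore$.

```python
import numpy as np
from scipy.optimize import brentq

# Working units: Msun, pc, Myr.
G      = 4.50e-3                                     # pc^3 Msun^-1 Myr^-2
C      = np.sqrt(np.sqrt(np.pi) / (8.0 * G))         # so that t_ff = C * (M / Sigma_c^3)^(1/4)  [Myr]
PHI_P  = 3.0                                         # stellar-gravity pressure correction
PHI_FB = (3.2e32 * 1e-7 / 1.989e30) * (3.156e13**3 / 3.086e16**2)   # erg/s/Msun -> pc^2 Myr^-3
EPS_FF = 0.01                                        # SFE per free-fall time (fiducial)
EPS_OK = 0.4 * 0.5                                   # eps_bound * eps_core
T_SN0  = 3.0                                         # Myr, delay between massive-star formation and first SN
M_OB   = 99.0                                        # Msun, stellar mass hosting one >8 Msun star (Chabrier IMF)

def f_sigma(sig_ism):
    """Cloud-to-ISM surface density contrast, equation (eq:f_sigma) with the f_mol parameterisation."""
    f_mol = 1.0 / (1.0 + 0.025 * (sig_ism / 100.0) ** -2)
    return 3.92 * np.sqrt((10.0 - 8.0 * f_mol) / 2.0)

def eps_fb(m_cloud, sig_ism):
    """Integrated star formation efficiency eps_ff * t_fb / t_ff for a cloud of mass m_cloud [Msun]."""
    sig_c = f_sigma(sig_ism) * sig_ism
    t_ff = C * (m_cloud / sig_c**3) ** 0.25
    t_sn = T_SN0 + C * (M_OB / EPS_FF) * (sig_c * m_cloud) ** -0.75
    x = (8.0 * np.sqrt(np.pi) * C / 3.0) * PHI_P * G * sig_ism**2 \
        / (PHI_FB * EPS_FF * t_sn**2) * m_cloud**0.75 / sig_c**2.25
    t_fb = 0.5 * t_sn * (1.0 + np.sqrt(1.0 + x))
    return EPS_FF * t_fb / t_ff

def minimum_cluster_mass(sig_ism):
    """M_min = eps_bound * eps_core * M_th, with M_th the largest mass at which eps_fb >= 0.2.
    Valid below the very high surface densities at which all cloud masses remain bound."""
    m_th = brentq(lambda m: eps_fb(m, sig_ism) - EPS_OK, 10.0, 1e9)
    return EPS_OK * m_th

print(minimum_cluster_mass(13.0))   # solar-neighbourhood-like input: of order 1e2 Msun
```

For a solar-neighbourhood-like surface density this sketch returns a minimum mass of order $10^2\Msun$, in line with the values quoted in Section \[sec:predictions\].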
The figure shows the total feedback-regulated star formation efficiency, $\epsfb$, as a function of cloud mass for $\SigmaISM = [1,10,10^2,10^3,10^4]\Msun\pc^{-2}$. At a given ISM surface density, when $\epsfb$ crosses into the single-object collapse region, all cloud masses below this threshold mass will collapse into a single bound cluster. The largest cloud mass scale $\Mth$, which limits where the stars are guaranteed to collapse into a single bound object, increases with host galaxy ISM surface density. For $\SigmaISM < 10\Msunpc2$, this mass is just below $10^3 \Msun$ and it increases rapidly to more than $10^5\Msun$ for $\SigmaISM > 10^{3.3}\Msunpc2$, following an asymptotic dependence of approximately $\Mth\propto\SigmaISM^{3}$. These threshold cloud mass values are marked by filled circles for each line corresponding to a fixed ISM surface density in Figure \[fig:boundcond\_mass\]. For very large ISM surface densities, $\SigmaISM > 10^{3.5}\Msunpc2$, all cloud scales form bound clusters. Due to the hierarchical nature of molecular cloud structure, all scales below the threshold bound mass will remain bound and will eventually merge. This implies that for a given host galaxy environment, the minimum mass of bound stellar clusters in our model is given by $$\Mmin(\SigmaISM) = \epsbound \epscore \Mth \simeq 0.2\Mth . \label{eq:Mmin_Mth}$$ The minimum cluster mass is indicated in Figure \[fig:boundcond\_mass\] for each $\SigmaISM$ using filled circles, with numerical values displayed along the top axis. The absolute minimum limit on the minimum cluster mass is set by $\MOB$ (see Section \[sec:IMF\]). This can be understood by examining the behaviour of equation (\[eq:boundcondISM\]) when $\Mth \downarrow 0$, which results in the limiting condition $\epsbound\epscore\Mth \geq \MOB$. In the regime of very large ISM surface densities, $\SigmaISM > 10^{3.5}\Msunpc2$, all cloud scales merge hierarchically into a single bound cluster, and this process is limited only by the fraction of the Toomre mass that is able to collapse under the influence of feedback (i.e. the maximum cluster mass predicted by the model). In this regime we set $\Mmin = \Mmax$, where $\Mmax$ is corrected relative to its classical form to account for the effect of IMF sampling (see Appendix \[sec:appendix\]). The maximum cluster mass can be expressed as a function of the ISM surface density $\SigmaISM$, the disc angular rotation velocity $\Omega$, and the @Toomre64 stability parameter defined as $$Q \equiv \frac{ \kappa \vdispISM }{ \pi G \SigmaISM } , \label{eq:Q}$$ where $\kappa$ is the epicyclic frequency, $$\kappa \equiv \sqrt{2} \frac{V}{R}\sqrt{1 + \frac{{\rm d} \ln V}{{\rm d} \ln R}} = \sqrt{2}\Omega ,$$ where $V$ is the circular velocity at a galactocentric radius $R$, and the last equality holds for a flat rotation curve. This model thus introduces a dependence of the minimum mass on disc angular velocity and $Q$ in the regime of very high ISM surface density. ![Dependence of the feedback-regulated integrated star formation efficiency, $\epsfb$, on the cloud mass, $\Mc$, and ISM surface density, $\SigmaISM$. Clouds in the regime $\epsfb > 0.2$ convert gas into stars so efficiently that they are guaranteed to collapse into a single bound object prior to residual gas expulsion. The circles mark the threshold cloud mass below which all stars must collapse into a single bound object, $\Mth$. The stellar mass of this object then becomes the bottom of the hierarchy of subclusters that populate the ICMF.
The minimum mass of a bound stellar cluster is then $\Mmin = \epsfb \Mth$ and its values are indicated by the top axis. The dotted lines show the effect of neglecting the delay time until the formation of the first massive star on the resulting minimum mass. The thick black line indicates the prediction for the conditions in the solar neighbourhood.[]{data-label="fig:boundcond_mass"}](boundcond_mass.pdf){width="1.02\columnwidth"} Model uncertainties from empirically-derived parameters {#sec:uncertainties} ======================================================= Star formation efficiency ------------------------- Our model for the environmental dependence of the minimum cluster mass hinges on the feedback-regulated total star formation efficiency, $\epsfb$, because it determines the maximum cloud mass ($\Mth$) and associated cluster mass ($\Mmin$) below which the star formation efficiency is so high that the young stars must collapse into a larger bound part of the hierarchy before the gas is lost. The main sources of uncertainty in $\Mmin$ are thus the empirically derived star formation efficiency per free-fall time, $\epsff$, and the product $\epsbound\epscore$. The star formation efficiency per free-fall time of the cloud enters in equation (\[eq:boundcondISM\]) with a scaling $\epsfb \propto \epsff$ (assuming for simplicity that $\tsn = \tsnO$ and neglecting the subdominant second term inside the square root). Many estimates of $\epsff$ are available in the literature. For instance, @Utomo18 obtain the largest and most direct sample of measurements of $\epsff$ in external galaxies using CO observations at the scales of typical GMCs. The authors find a mean value $\epsff = 0.7 \pm 0.3$ per cent across the sample of 14 galaxies. However, larger values, $\epsff = 1.5 - 2.5$ per cent, are found in studies of individual Milky Way clouds [@Evans14; @Lee16; @Barnes17], while lower values, $0.3 - 0.36$ per cent, are observed in M51 [@Leroy17]. Taken together, these results imply an uncertainty in the efficiency per free-fall time of at least a factor of $\sim 2$ in each direction. Beyond the uncertainty in the determination of $\epsff$ in local galaxies, there is also the additional possibility that the star formation efficiency $\epsff$ is not universally constant, but instead varies as a function of time and cloud properties. For instance, in numerical simulations of isolated molecular clouds including stellar feedback, @Grudic18 find that $\epsff$ correlates with cloud surface density. Observational studies where star-forming regions in the Milky Way are matched to the nearest molecular clouds find a much larger scatter in the star formation efficiency $\sigma_{\log \epsff} \sim 1$ [@Evans14; @Lee16; @Vutisalchavakul16] compared to all other methods, including YSO counting and pixel statistics. @Lee16 explain this large scatter using a model with a strong time-dependence of the star formation efficiency. However, @Ochsendorf17 find that, when applied to the same data set, the cloud matching method tends to overestimate the median and the scatter in $\epsff$ compared to the YSO counting method because it associates the entire flux from a star-forming region to its nearest GMC, even in regions where there is no overlap or physical association. Indeed, different cloud matching studies using the same sample obtain mean efficiencies that are different by a factor of almost $\sim 10$ due to a difference in the sensitivity of the observations [@Krumholz19]. 
This bias in the cloud matching studies is also consistent with the expectation that star formation relations observed at galactic scales break down below the spatial and temporal scales of individual clouds due to statistical under-sampling of independent regions [@KruijssenLongmore14; @Kruijssen18a]. Following the recent review of observational measurements of $\epsff$ by @Krumholz19, we adopt the constant value $\log \epsff = -2$ with a systematic uncertainty $\sim 0.5$ dex, which is consistent with all other methods except cloud matching. Implementing a time dependent $\epsff$ would require following the time evolution of the structure of the gas within clouds, and this is beyond the scope of the simple model presented here. For future work, it would be interesting to explore the effect of a dependence on cloud surface density [@Grudic18]. At present, the only measurement of $\epsff$ in the high-surface density regime (in the Central Molecular Zone of the MW) yields $\epsff \approx 1.8$ per cent [@Barnes17], consistent with the values found for lower surface density clouds in the solar neighbourhood [e.g. @Evans14; @Heyer16; @Ochsendorf17]. According to equation (\[eq:boundcondISM\]), and neglecting again the second term inside the square root, at a fixed ISM surface density in the regime $\SigmaISM \ga 10^2\Msunpc2$ the total star formation efficiency scales approximately as $\propto \Mc^{-1/4}$ (i.e. the region below the turnover for each of the dotted lines of Figure \[fig:boundcond\_mass\]). This implies that, for a fixed value of $\epsbound\epscore$, a factor of two change in $\epsff$ results in a factor of $\sim 2^4 = 16$ difference in the threshold cloud mass in the high ISM surface density regime ($\SigmaISM \ga 10^2\Msunpc2$). This scaling requires that the effect of IMF sampling is minor, which means the dependence of $\Mmin$ on $\epsff$ is largest for ISM surface densities $\SigmaISM \ga 10^2\Msunpc2$. The sensitivity of the total star formation efficiency $\epsfb$ to an uncertainty of a factor of two in $\epsff$ is illustrated in the left panel of Figure \[fig:boundcond\_mass\_eps\_ff\]. Moreover, the panel also shows that the minimum mass in low ISM surface density environments, $\SigmaISM \la 10^2\Msunpc2$ [such as in local disc galaxies; @Kennicutt98], is much more robust, because the scaling of $\epsfb$ with cloud mass there is much steeper (approximately $\propto \Mc^{-1}$) due to the effect of IMF sampling discussed in Section \[sec:IMF\]. The right panel of Figure \[fig:boundcond\_mass\_eps\_ff\] shows the explicit dependence of the minimum cluster mass on $\epsff$ for lines of constant ISM surface density, indicating the conditions across various galactic environments including the solar neighbourhood, the Antennae galaxies, and the CMZ (see Section \[sec:environments\] for a description of the parameters used). The minimum cluster mass is a sensitive probe of the physics of star formation in high surface density environments like the CMZ. ![image](boundcond_mass_epsff.pdf){width="1.0\columnwidth"} ![image](minmass_vs_epsff.pdf){width="1.0\columnwidth"} Because the value of $\epsbound$ is reasonably well constrained by simulations, the uncertainty in the product $\epsbound\epscore$ is dominated by the limiting efficiency in molecular cores, $\epscore$. Also known as the core-to-star efficiency, its value is constrained by observations, analytical models and simulations to the range $\epscore \simeq 0.3 - 0.7$ [@MatznerMcKee00; @Enoch08; @FederrathKlessen12; @FederrathKlessen13].
This relative uncertainty is therefore about a factor of 2 smaller than the $\epsff$ uncertainty and its effect on the minimum mass is in the opposite direction. Assuming $\epscore > 0.5$ lowers $\Mmin$ for a fixed $\epsff$, while assuming $\epsff > 0.01$ increases $\Mmin$ by the same amount for a fixed $\epscore$. Increasing both parameters by a factor of 2 would result in an overall change in $\Mmin$ of less than a factor of 2. To summarise, we highlight the following points: 1. The minimum cluster mass predicted by our model for *high ISM surface density environments* ($\SigmaISM \ga 10^2\Msunpc2$) is very sensitive to the assumed star formation efficiency per free-fall time, $\epsff$, and gas conversion efficiency in pre-stellar cores, $\epscore$. More stringent constraints on these values from future observations will be necessary to make more precise minimum cluster mass predictions in environments with ISM surface densities $\SigmaISM \ga 10^2\Msunpc2$. 2. Despite the sensitivity of the minimum mass to $\epsff$, the relative *scaling* of the minimum cluster mass with ISM surface density that was obtained in the previous section (i.e. the slope of the lines in Figure \[fig:boundcond\_mass\_eps\_ff\]) is a robust prediction of our model in the regimes where the ISM surface density $\SigmaISM \ll 10^2\Msunpc2$ (e.g. quiescent discs) and $\SigmaISM \gg 10^2\Msunpc2$ (e.g. galactic nuclei). 3. A benefit of the sensitivity of the minimum cluster mass to the star formation efficiency per free-fall time $\epsff$ in the high surface density regime is that $\Mmin$ is an independent observational probe of the small-scale physics of star formation.

The effect of radiation and stellar winds on the feedback timescale {#sec:earlyfeedback}
-------------------------------------------------------------------

The model presented here assumes that the feedback energy injection begins with the explosion of the first supernova, a time $\tsnO = 3\Myr$ after the formation of the first massive star (see equation \[eq:t\_fb\_def\]). However, stellar winds and radiation could have a significant role in cloud disruption due to their nearly instantaneous onset compared to the delayed effect of SNe. As a result of this, the relative role of each of these processes in regulating star formation is still highly debated in the literature [for a review, see @Krumholz19]. To determine the sensitivity of the ICMF model to the effect of early feedback from radiation and stellar winds, we must consider both the total momentum injected and the timescale over which it stops star formation. The momentum injection rates from stellar winds, radiation, and SNe are all comparable [@Agertz13]. As shown in Figure \[fig:time-scales\], the feedback timescale at low cloud masses ($\la 10^5\Msun$ in the solar neighbourhood) is dominated by the delay in the formation of the first massive star due to IMF sampling. In the majority of parameter space populated by galaxies, this sets the threshold cloud mass below which all scales remain bound. Indeed, neglecting the IMF delay, these clouds shut off their star formation very quickly after the onset of feedback, with $\tfb \sim \tsn$ (dotted line in Figure \[fig:time-scales\]). Increasing the energy injection parameter $\phifb$ would have a negligible effect on the duration of star formation because feedback is already extremely efficient in clouds with low enough masses to be affected by IMF sampling (the left branch of the $\tfb$ curve in Figure \[fig:time-scales\]).
![Impact of the timescale for the onset of feedback, $\tsnO$ on the predicted minimum cluster mass. We show a reproduction of Figure \[fig:boundcond\_mass\] with values of $\tsnO = 3\Myr$ (fiducial model; solid lines), and $\tsnO = 1\Myr$ (dot-dashed lines). The threshold cloud mass (where $\epsfb = \epsbound\epscore$ and the lines cross into the shaded region) is sensitive to the assumed value of $\tsnO$ only for high ISM surface densities, $\SigmaISM \ga 10^2\Msunpc2$. At lower surface densities the efficiency is driven mainly by the feedback onset delay caused by IMF sampling in low mass clouds. The thick black line shows the prediction in the solar neighbourhood.[]{data-label="fig:boundcond_mass_tsn0"}](boundcond_mass_tsn0.pdf){width="1.0\columnwidth"} The termination of star formation in simulated clouds can occur before the first SNe explode, $\tfb < \tsn$, when radiative feedback is included [@Grudic18]. This is also expected from simplified analytical arguments [@Murray10]. However, it is difficult to derive a single timescale for the termination of star formation by radiation and stellar winds due to their highly nonlinear dependence on cloud structure. For massive clusters, all of these mechanisms become important [@Krumholz19]. It is also challenging to constrain the importance of radiation and stellar winds using observations of individual clouds because their entire evolutionary sequence is not observable. However, methods that rely on the statistics of star formation and molecular gas tracers across entire galaxies can constrain the mean evolutionary timescales of clouds [@KruijssenLongmore14; @Kruijssen18a]. Using this approach, @Kruijssen19b and @Chevance19 obtain the typical duration of star formation across several nearby spirals, $t_{\rm SF} - \tsn \sim 3 \Myr$. Such a short timescale implies that early feedback from radiation and stellar winds, and not SNe, regulate the star formation process in the conditions typically found in the local universe. In these conditions, our model predicts that molecular clouds with masses $\Mc \la 10^6\Msun$ will have a feedback stage duration $t_{\rm fb} = 3-4\Myr$ after forming a massive star. This is in broad agreement with the observed value. In summary, two factors limit the effect of early feedback on the minimum cluster mass. First, the energy injection rate due to SNe alone effectively halts star formation on a very short timescale at the low cloud masses which define the bottom of the cluster hierarchy. This makes the model insensitive to the increase in the energy injection due to stellar winds and radiation. Second, the observed feedback timescale is $\sim 3\Myr$ in nearby spirals, in agreement with the total duration of star formation at the cloud masses that set the minimum cluster mass. To illustrate the effect of assuming a shorter feedback timescale, Figure \[fig:boundcond\_mass\_tsn0\] shows the integrated star formation efficiency and minimum cluster mass for $\tsnO = 1\Myr$. A factor of three reduction in the feedback onset time results in negligible change in the minimum cluster mass for gas surface densities $\SigmaISM \la 10^2\Msunpc2$ due to the dominance of the feedback delay due to IMF sampling. At larger surface densities early feedback reduces the integrated star formation efficiency and the resulting minimum mass by up to a factor of $\sim 10$. 
However, at large surface densities, SNe, direct radiation pressure, and ionisation become less effective [@Krumholz19], and this effect could increase the star formation efficiency in this regime. Although improved constraints on the feedback timescale will reduce this uncertainty in the future, Figure \[fig:boundcond\_mass\_tsn0\] shows that the results will not change qualitatively. The minimum cluster mass across the observed range of galactic environments {#sec:environments} =========================================================================== The model described in Section \[sec:model\] defines the ICMF as a function of three parameters of the host galaxy. The low-mass truncation $\Mmin$ is determined by the gas surface density $\SigmaISM$ using equations (\[eq:tob\_sigma\])-(\[eq:Q\]), and the high-mass truncation $\Mmax$ is given by $\SigmaISM$, the angular rotation velocity of the disc $\Omega$ (or the epicyclic frequency), and Toomre $Q$ using equations (\[eq:tfb\_Mmax\]) - (\[eq:Mmax\]) (see discussion below). In this section, we explore the behaviour of the ICMF truncation masses, $\Mmin$ and $\Mmax$, within the three-dimensional parameter space spanned by these parameters. ![image](CMF_paramspace_Q.pdf){width="\textwidth"} The top row of Figure \[fig:paramspace\] shows the dependence of the minimum cluster mass on the ISM surface density and angular velocity (obtained from solving equation \[eq:boundcondISM\]). The columns show, from left to right, the results for three different Toomre parameter values $Q \in [0.5, 1.5, 3.0]$. The range of the colorbar has an upper limit at $M=10^{10}\Msun$ for clarity. The symbols reproduced in each panel represent observations of star-forming galaxies and starbursts from @Kennicutt98, high-redshift galaxies from @Tacconi13, the solar neighbourhood, and the Milky Way’s CMZ. For the solar neighbourhood, we consider an ISM surface density $\SigmaISM = 13\Msun\pc^{-2}$ and $\Omega = 0.029\Myr^{-1}$ (see Section \[sec:global\]). For the CMZ, we use $\SigmaISM \sim 10^3\Msun\pc^{-2}$ [@Henshaw16] and calculate the angular velocity using the enclosed mass profile from @Kruijssen15a. The minimum cluster mass depends mainly on the ISM surface density across most of the parameter space occupied by galaxies. The dependence on the angular velocity only becomes significant in the top right region, mostly corresponding to galactic nuclei, where both the ISM surface density and angular velocity are high. This is the region where $\Mmin = \Mmax$, because the entire cloud hierarchy merges into a single bound object. There is a large variation of the minimum mass with galactic environment, with the sequence of observed galaxies, from local discs to high-redshift galaxies, spanning $\sim 5$ orders of magnitude in minimum stellar cluster mass. The physical mechanisms setting the minimum cluster mass also vary with the galactic environment. For low ISM surface densities ($\SigmaISM \la 10^2\Msunpc2$), the nearly constant minimum mass $\Mmin \sim 10^{2-2.5}\Msun$ is caused by the delay in the formation of the first massive star setting a fixed lower limit to the cloud mass that produces a bound cluster (see Figure \[fig:boundcond\_mass\]). For ISM surface densities typical of high-redshift galaxies ($\SigmaISM > 10^2\Msunpc2$), the minimum bound cluster mass scales with the mean ISM surface density approximately as $\Mmin \propto \SigmaISM^3$. 
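This scaling can be obtained directly from equation (\[eq:boundcondISM\]): in the limit where the IMF-sampling delay is negligible ($\tsn \approx \tsnO$) and the second term inside the square root is still small, the threshold condition reduces to $\epsbound\epscore \approx (\epsff\tsnO/\mathcal{C})\,(\fSigma\SigmaISM)^{3/4}\,\Mth^{-1/4}$, i.e. $$\Mth \approx \left( \frac{\epsff \tsnO}{\mathcal{C}\,\epsbound\epscore} \right)^{4} \left( \fSigma \SigmaISM \right)^{3} \propto \epsff^{4}\,\SigmaISM^{3} ,$$ where the final proportionality uses the fact that $\fSigma$ approaches a constant value at high ISM surface densities. The same expression makes explicit the $\epsff^{4}$ sensitivity of the threshold cloud mass discussed in Section \[sec:uncertainties\].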
Physically, this corresponds to the regime dominated by the steep dependence of the free-fall time on the ISM surface density ($\epsfb \propto \tff^{-1} \propto \SigmaISM^{3/4}/\Mc^{1/4}$) in the first term on the right-hand side of equation (\[eq:boundcondISM\]). As the ISM surface density increases, increasingly massive clouds ($\Mc \propto \SigmaISM^3$) achieve such high star formation efficiencies ($\epsfb$) that they must collapse into a single bound cluster. At very large ISM surface densities ($\SigmaISM \ga 2\times10^3\Msunpc2$) the scaling becomes increasingly steeper because the second term inside the square root in equation (\[eq:boundcondISM\]) becomes important, causing the minimum of the $\epsfb$ versus $\Mcloud$ curve (see Figure \[fig:boundcond\_mass\]) to approach the threshold value $\epsbound\epscore$ at an increasing rate. This is the regime where stellar feedback becomes increasingly inefficient and the feedback timescale grows with cloud mass, allowing for higher integrated star formation efficiencies ($\epsfb$). For the largest observed surface densities ($\SigmaISM > 4\times10^3\Msunpc2$), the minimum of the $\epsfb$ curve is entirely above the threshold efficiency (see Figure \[fig:boundcond\_mass\]), and all cloud masses will collapse into a single bound object limited in mass only by the maximum cluster mass. This mass is determined by the collapse of the largest unstable scale in the model (see Section \[sec:global\]). As can be seen in Figure \[fig:paramspace\], the minimum cluster mass depends on the Toomre parameter only in the regime of high ISM surface density and angular velocity, where $\Mmin = \Mmax$ and the dependence is inherited from the behaviour of the maximum mass. Because changing $Q$ only shifts the boundary of this regime, most observed galaxies with low shear (low angular velocity) have a minimum cluster mass that depends only on the ISM surface density. The middle row of Figure \[fig:paramspace\] shows the maximum cluster mass predicted by the model. In order to be consistent with the minimum cluster mass model presented in Section \[sec:model\], we have updated the original model to include the effect of IMF sampling. The modifications are described in Appendix \[sec:appendix\]. As discussed in Section \[sec:global\], at gas surface densities above $\sim 4\times 10^3\Msunpc2$ the minimum and maximum cluster mass are equal because the entire cloud mass spectrum will collapse into a single bound object at the maximum mass scale. We combine the minimum mass model with the model for the maximum cluster mass to predict the full width of the ICMF and its dependence on the galactic environment. The bottom panels of Figure \[fig:paramspace\] show the logarithmic width of the ICMF as a function of ISM surface density and angular velocity for values of $Q \in [0.5, 1.5, 3.0]$. Because of the intrinsically different dependence of the minimum and maximum mass scales on the ISM surface density and angular velocity, the predicted width of the ICMF shows a large non-monotonic variation within the region of parameter space populated by observed galaxies. The model predicts that galaxies with gas surface densities $10 \la \SigmaISM \la 100\Msunpc2$ and slow rotation, $\Omega \la 0.03\Myr^{-1}$, such as local quiescent discs, will have relatively broad mass functions. On the other extreme, galactic environments with either high gas ISM surface densities, $\SigmaISM \ga 2\times 10^3\Msunpc2$ (e.g. massive high-redshift discs), or fast rotation, $\Omega \ga 0.5\Myr^{-1}$ (e.g.
galactic nuclei) should have narrow ICMFs.

Comparison to observed cluster mass functions in the local Universe {#sec:predictions}
===================================================================

After exploring the general predictions of our minimum cluster mass model for the range of observed galaxy properties, we turn our attention to the detailed predictions for the ICMF in nearby galaxies where observational constraints are currently available. As a result of observational systematics that are hard to correct for with current data, it is extremely challenging to determine observationally the abundance of low-mass ($\la 10^3\Msun$) clusters in nearby galaxies, including the Milky Way. None the less, current observations provide valuable upper limit constraints for the low-mass truncation of the ICMF predicted by the model. The predictions we provide here for $\Mmin$ and for the full ICMF should become testable with upcoming observational facilities, such as 30-m class telescopes, in the near future. Here we assume the model ICMF defined as a power law with index $\beta=-2$ and exponential truncations at the minimum ($\Mmin$, equation \[eq:boundcondISM\]) and maximum ($\Mmax$, obtained from the maximum cluster mass model modified to include the effects of IMF sampling; see Appendix \[sec:appendix\]) mass scales, as described in equation (\[eq:CMF\]). Since the observed CMF evolves rapidly after several million years due to dynamical effects, which are not included in the model [e.g. @BaumgardtMakino03; @Lamers05; @Kruijssen12c], we restrict the comparison to the mass functions of observed young clusters in two separate age ranges, $\tau \la 10\Myr$ and $\tau \la 100\Myr$, where $\tau$ is the cluster age. In observational samples where these ranges are not available, we use the two lowest age bins. Clusters in the youngest bin should be least affected by dynamical disruption, retaining the initial mass distribution, while the older clusters should allow a more complete statistical sampling of the high mass tail of the ICMF (which is also the most insensitive to tidal disruption). However, the youngest age bin is also the most strongly affected by contamination by unbound associations [@bastian12; @kruijssen16]. In the following sections, we compare the model predictions to observations of the young CMF. These are chosen to represent a broad range of star-forming conditions, including the solar neighbourhood and the LMC as examples of low-ISM surface density environments, and the Antennae galaxies and the Milky Way’s CMZ representing conditions of high density and high shear.

The solar neighbourhood {#sec:sn}
-----------------------

The Milky Way is an ideal place to test predictions for the low-mass turnover of the ICMF. The deepest limits can be obtained in the solar neighbourhood, such that the turnover of the CMF at the minimum mass might be detectable. To determine the observed CMF in the solar neighbourhood we use the @Kharchenko05 cluster catalogue. The catalogue contains homogeneously determined cluster membership, distances and apparent magnitudes. For cluster masses we use estimates from @Lamers05 [and H. J. G. L. M. Lamers, private communication]. We then calculate the CMF in two age bins following the procedure in @Piskunov08, with mass bins chosen to reduce sampling errors. We choose the normalisation of the theoretical ICMF to yield the same total number of clusters in the relevant mass range as the catalogue in the surveyed area. The result is shown in Figure \[fig:CMF\_MW\].
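The normalisation and Monte Carlo sampling of the model ICMF are simple to sketch. The example below is illustrative only (it is not the code used to produce the figures): it assumes a doubly truncated power law of the form ${\rm d}N/{\rm d}M \propto M^{-2}\exp(-\Mmin/M)\exp(-M/\Mmax)$, consistent with the description of equation (\[eq:CMF\]) given above, and the grid choices and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def icmf(m, m_min, m_max):
    """Assumed ICMF shape: beta = -2 power law with exponential truncations at both ends."""
    return m**-2.0 * np.exp(-m_min / m) * np.exp(-m / m_max)

def draw_clusters(n_obs, m_min, m_max, m_lo=10.0, m_hi=1e8, n_grid=4096):
    """Draw n_obs cluster masses by inverse-transform sampling on a logarithmic mass grid."""
    m = np.logspace(np.log10(m_lo), np.log10(m_hi), n_grid)
    pdf = icmf(m, m_min, m_max)
    cdf = np.cumsum(pdf * np.gradient(m))
    cdf /= cdf[-1]
    return np.interp(rng.random(n_obs), cdf, m)

# Illustrative call with the solar-neighbourhood truncation masses derived below and an
# arbitrary observed-sample size; the drawn masses are then binned like the catalogue.
masses = draw_clusters(n_obs=300, m_min=1.1e2, m_max=2.8e4)
```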
The main issues affecting the precise determination of the CMF in the Milky Way are incompleteness due to dust extinction for the faintest clusters as well as uncertainties in membership determination. In addition, many clusters lack mass estimates due to the low number of available member stars. These systematic effects are difficult to quantify, so the data for the lowest mass clusters should be interpreted with caution. The error bars we quote represent the statistical uncertainty and are therefore only lower limits on the total uncertainties. ![image](CMF_MW.pdf){width="100.00000%"} To obtain our model prediction for the minimum cluster mass in the solar neighbourhood environment, we use the observed ISM surface density, $\SigmaISM =13\Msun~\pc^{-2}$ [@KennicuttEvans12], and angular velocity (assuming a flat rotation curve), $\Omega =0.029\Myr^{-1}$ [@Bland-HawthornGerhard16]. Using these values and the typical observed ISM velocity dispersion, $\sigma_{\rm ISM} = 10\kms$ [@HeilesTroland03], we derive the Toomre $Q$ parameter (equation \[eq:Q\]). Figure \[fig:CMF\_MW\] shows a direct comparison of the observed CMF for clusters of ages $\tau < 10$ and $\tau < 100\Myr$ with the prediction of our model. To include the effects of discrete sampling in the tails of the distribution function, we also produce Monte Carlo samples by drawing clusters from the predicted mass distribution until the total number of clusters in the sample matches the total observed number. The model predicts a low-mass truncation of the ICMF at a minimum mass $\Mmin = 1.1\times10^2\Msun$, and a maximum cluster mass $\Mmax = 2.8\times10^4\Msun$. It is challenging to determine the completeness limit of Milky Way cluster catalogues. For this reason, the apparent low-mass turnover in the observed CMF in Figure \[fig:CMF\_MW\] is hard to interpret, as it could be caused by incompleteness. At low cluster masses, significant systematic errors arise due to large extinction corrections and a lack of mass estimates because of the low number of stars detected in the faintest clusters. In fact, @Cantat-Gaudin18 recently used *Gaia* data to show that the current cluster catalogues in the solar neighbourhood could be highly incomplete. Taking into account the large uncertainties at the lowest masses, both age bins in the observed CMF agree well with our model in the regime $M \geq 10^3\Msun$, and are qualitatively consistent with the predicted turnover at low masses.

M31 {#sec:m31}
---

M31 is by far the nearest massive extragalactic disc galaxy, making the determination of the CMF more straightforward and less prone to systematics than observations of the cluster population in the solar neighbourhood. To predict the ICMF for M31, we use the sum of the molecular and atomic gas surface densities, $\SigmaISM = 9.3\Msunpc2$ (A. Schruba et al., in prep.), an angular velocity of $\Omega = 0.021\Myr^{-1}$, and a Toomre parameter of $Q = 2.1$. The angular velocity and Toomre parameter are derived for a galactocentric radius of $12\kpc$ corresponding to the star-forming ring where most of the cluster formation takes place. The rotation curve at this radius is approximately flat with a circular velocity $V_{\rm flat} = 250\kms$ [@Corbelli10], and a gas velocity dispersion $\sigma_{\rm ISM} = 9\kms$ [@Braun09]. Previous work showed that the predicted high-mass truncation of the ICMF in M31 has a large uncertainty due to the large variation in the gas conditions and the star formation rate (SFR) over the last $300\Myr$.
Specifically, @Lewis15 show that the SFR in the star-forming ring varied by up to a factor of $\sim 4$ over the last $300\Myr$ with respect to the SFR during the most recent $25\Myr$. We include this effect in the uncertainty in the ICMF by correcting the gas surface density as follows. We assume that $\SigmaISM \propto \Sigma_{\rm SFR}$ and use the ratio of the peak SFR surface densities in each age bin considered to recover the peak gas surface density during the formation epoch of each cluster sample. The peak ISM surface density during a past epoch of cluster formation is then $$\SigmaISM(\tau_{\rm min}< t <\tau_{\rm max}) = \SigmaISM \times \frac{ \Sigma_{\rm SFR}(\tau_{\rm min}< t <\tau_{\rm max}) } { \Sigma_{\rm SFR}(0<t<25\Myr) } ,$$ where $\tau_{\rm min}$ and $\tau_{\rm max}$ are the bracketing ages of the cluster sample, and the $\Sigma_{\rm SFR}$ values for $[\tau_{\rm min},\tau_{\rm max}] = [10,100]$ and $[100,300] \Myr$ are taken at a galactocentric radius of $12\kpc$ from Figure 6 of @Lewis15. The uncertainty in the gas surface density during the formation of the clusters of ages $\tau \in [\tau_{\rm min},\tau_{\rm max}]$ then lies in the range $[\SigmaISM(0<t<25\Myr), \SigmaISM(\tau_{\rm min}< t <\tau_{\rm max})]$. Figure \[fig:CMF\_M31\] shows our predictions along with the observations by @Johnson17. The predicted minimum cluster mass is in the range $1.1{-}1.2\times10^2\Msun$ and the maximum cluster mass is $4.2\times10^3{-}1.6\times10^6\Msun$ for clusters with ages up to $100 \Myr$. The minimum mass is in the range $1.1{-}1.2\times10^2\Msun$, while the maximum mass is $4.2\times10^3{-}2.1\times10^7\Msun$ for clusters with ages up to $300\Myr$. The spread of these mass scales comes exclusively from the variation of the conditions in the ISM inferred from the spatially-resolved star formation history. ![image](CMF_M31.pdf){width="100.00000%"} The model is able to simultaneously fit the mass functions of clusters with ages $10< \tau <100\Myr$ and $100 < \tau < 300\Myr$ in M31. Unfortunately, the 50 per cent completeness limit of the @Johnson17 data is well above the lower limit on the predicted minimum cluster mass of $\Mmin = 1.5\times10^2\Msun$. In spite of M31 having a lower gas ISM surface density and a lower SFR than the Milky Way, the dependence of the minimum mass on the ISM surface density is quite shallow in this region of the parameter space (see Figure \[fig:paramspace\]), resulting in a behaviour very similar to that of the solar neighbourhood. The LMC {#sec:lmc} ------- Because it is located only $\sim 50\kpc$ away, the LMC allows for the deepest determination of the ICMF beyond the Milky Way. With a star formation rate of $0.26\Msun~{\rm yr}^{-1}$ [@Kennicutt95], which is about ten times lower than in the Milky Way (but much higher than the solar neighbourhood alone), the LMC presents a unique opportunity to test the ICMF model within a dwarf galaxy environment. This is also interesting, because the GC populations in nearby dwarfs are strikingly different from those of massive galaxies like the Milky Way (see Section \[sec:intro\]). To apply our ICMF model to the LMC, we consider an ISM surface density of $\SigmaISM = 9.9\Msunpc2$ [@Staveley-Smith03], a gas velocity dispersion of $\sigma_{\rm ISM} = 15.8\kms$ [@Kim98], and the flat region of the rotation curve from @Kim98 to derive an angular velocity of $\Omega \sim 0.031\Myr^{-1}$. 
For these parameters, the model predicts an ICMF with a low-mass truncation $\Mmin = 1.1\times10^2\Msun$ and a maximum cluster mass $\Mmax = 4.5\times10^4\Msun$. ![image](CMF_LMC.pdf){width="100.00000%"} Figure \[fig:CMF\_LMC\] shows the predicted ICMF and the observed young CMF derived from the @Popescu12 catalogue of LMC clusters for clusters with ages $\tau < 10\Myr$ and $\tau < 100\Myr$. We use the mass completeness limit determined from the edge of the fading region in Figure 16 of @Popescu12 for each age bin. The model is normalised to contain the same integrated number of clusters as the observations for each of the two age ranges. @Popescu12 observe clusters in the LMC down to the lowest masses available for extragalactic objects, i.e. $M\sim 10^{2.6}\Msun$ for cluster ages $<10\Myr$. However, this is still insufficiently deep to reach the turnover predicted by our model under the conditions of star formation in the LMC. Regardless, the ICMF of clusters with ages $< 100\Myr$ in our model shows very good agreement with the data above the completeness limit ($M \ga 10^3\Msun$ for this age range).

The Antennae galaxies {#sec:antennae}
---------------------

The Antennae galaxies are the nearest example of a pair of merging massive disc galaxies. They have been the subject of many studies due to their relative proximity of $\sim 20\Mpc$. Their interaction is driving a starburst with a star formation rate of $20\Msun~{\rm yr}^{-1}$ [@Zhang01] and resulting in the formation of very massive young clusters [@ZhangFall99; @Whitmore10]. This is the ideal environment to study the effect of extreme ISM conditions during mergers on the ICMF. To derive the predicted ICMF, we use the average observed gas ISM surface density and velocity dispersion across the discs, $\SigmaISM \sim 200\Msun\pc^{-2}$ and $\sigma_{\rm ISM} \sim 30\kms$ [@Zhang01]. For the angular velocity we adopt $\Omega\sim 0.07\Myr^{-1}$, based on a rough estimate of the gas rotation velocity gradient ($\sim 67\kms\kpc^{-1}$ in a linearly rising rotation curve, which corresponds to $\sim 0.07\Myr^{-1}$) from the gas velocity field in @Hibbard01.
The distribution of Monte Carlo samples obtained from the model agrees very well with the observations down to the completeness limit, which is well above the predicted minimum bound cluster mass. The young CMF in the Antennae is perfectly fit by a power-law in the range $\sim 10^4{-}10^6\Msun$. This range is well above the minimum mass predicted by our model, $\Mmin = 2.2\times10^2\Msun$, and consistent with the statistically observable maximum mass given the size of the cluster sample. The predictions for the minimum cluster mass are thus consistent with the limits provided by observations in the regime of local galaxies with high gas and star formation surface densities. The CMZ of the Milky Way and the M82 nuclear starburst {#sec:cmzs} ------------------------------------------------------ Circumnuclear (starbursting) rings are another example of extreme star formation environments commonly found in nearby massive galaxies. They are characterised by a narrow ring of dense molecular gas located near the centre of the galaxy. These are ideal environments to test very high galactic shear ($\Omega \ga 1\Myr^{-1}$) and ISM surface density ($\SigmaISM \ga 5\times 10^2\Msunpc2$) conditions where our model predicts narrow CMFs (see Figure \[fig:paramspace\]). The CMZ is the region located within the central $\sim 500\pc$ of the Milky Way. It has a surprisingly high molecular gas surface density given its low SFR $\sim 0.09\Msun\yr^{-1}$ [@Longmore13; @Barnes17]. Its high gas surface density ($\sim 100$ times higher than in the solar neighbourhood), as well as its location in a region dominated by shearing motions makes it ideal for studying the star-forming conditions in an environment similar to that of high-redshift galaxies [@Kruijssen13]. To predict the ICMF in the CMZ, we consider an ISM surface density $\SigmaISM \sim 10^3\Msun\pc^{-2}$ and a gas velocity dispersion of $\sigma_{\rm ISM} = 5\kms$ [@Henshaw16], and use the enclosed mass profile from @Kruijssen15a to obtain the angular velocity and the Toomre $Q$ parameter at a radius of $60\pc$ [the innermost radius of the molecular stream, cf. @Molinari11; @Kruijssen15a], resulting in $\Omega = 2.04\Myr^{-1}$ and $Q = 1.32$. The left panel of Figure \[fig:CMF\_CMZ\] shows the model prediction for the mass function in the CMZ. In this case, because of its location near the galactic centre, the model predicts that the maximum cloud mass is limited by the mass enclosed in the region unstable to centrifugal forces, with $\Mmax = 3.0\times10^4\Msun$. In addition, very high gas surface densities allow more massive clouds to collapse into single bound clusters, with a minimum cluster mass $\Mmin = 3.2\times10^3\Msun$. This combination of high ISM surface density and strong shear thus results in a very narrow ICMF, with a width of less than one decade in mass. ![image](CMF_CMZ.pdf){width="100.00000%"} Due to high extinction towards the galactic centre, it is difficult to obtain the young CMF in the CMZ. To compare with the model, we use the masses of the only young clusters found in the region (the Arches and Quintuplet) from @PortegiesZwart10, and include as lower limits the mass estimates for the five embedded “proto-clusters” from Table 1 of @GinsburgKruijssen18. To normalise the predicted ICMF we use the SFR determination by @Barnes17, who find that several methods agree to within a factor of 2 with a mean value of the total SFR $= 0.09\Msunyr$. 
Multiplying this by the observed CFE in Sgr B2, $ 37 \pm 7$ per cent [@GinsburgKruijssen18], and by the width of the cluster age interval, $\sim 5\Myr$, the total mass of stars in young clusters is $\approx 1.66\times10^5\Msun$. Figure \[fig:CMF\_CMZ\] shows that the CMZ is one of the star formation environments in the Local Universe in which the ICMF is expected to deviate most strongly from the traditionally-assumed Schechter form with a lower limit at $10^2\Msun$. Despite the poor statistics in this region, our prediction is consistent with the masses of observed clusters and with the lower limits set by proto-clusters. As an example of a starburst environment in an external galaxy with a well-sampled mass function, we show in the right panel of Figure \[fig:CMF\_CMZ\] the observed young CMF in the central starburst region of M82, along with the predictions of our model. To obtain the predictions in the nuclear region, we use the median inferred molecular gas column density $\SigmaISM = 500\Msunpc2$ from @Kamenetzky12, an angular velocity of $\Omega = 0.23\Myr^{-1}$, and a gas velocity dispersion of $\sigma_{\rm ISM} = 60\kms$. The angular velocity is calculated for the $\sim 450\pc$ central region using the M82 mass model from @Martini18. The ISM velocity dispersion was calculated in the same region using the map of CO velocity dispersion in Fig. 5 of @Leroy15. We show the CMF obtained by @Mayya08 for clusters in the nuclear region with ages $\tau \leq 8\Myr$. The model predicts $\Mmin = 6.4\times10^2\Msun$ and $\Mmax = 1.5\times10^7\Msun$. The CMF prediction reproduces the observed mass function of young clusters in M82 down to the completeness limit of the observations, $M\sim 2\times10^4\Msun$, which lies above the theoretical minimum cluster mass. The cluster formation efficiency {#sec:cfe} -------------------------------- The fraction of stars that form in bound clusters relative to the field is key for understanding the formation of cluster populations and for their use as tracers of galaxy evolution [e.g. @Bastian08; @Adamo11; @Kruijssen12b; @Cook12; @Hollyhead16; @Johnson16; @Messa18]. To determine the cluster formation efficiency (CFE) observationally, the ICMF must be integrated down to its low-mass truncation, which is traditionally assumed to be $M\sim 10^2\Msun$ [@LadaLada03; @Lamers05]. To evaluate the effect of the environmental variation of the minimum cluster mass on the measurement of the cluster formation efficiency, we now compare the observationally determined CFE using the minimum mass model, equation (\[eq:boundcondISM\]), with the CFE obtained assuming the traditional $10^2\Msun$ truncation. Table \[tab:CFE\] summarises the values of the minimum and maximum cluster masses obtained using our model. To highlight the relative change in the CFE estimates that results from using the environmentally-dependent minimum cluster mass (equation \[eq:boundcondISM\]) instead of the traditional $10^2\Msun$ value, we show the ratio of the two estimates. Following @Bastian08, we use the following definition of the CFE for a cluster sample with an upper age limit $\tau$: $$\Gamma = \frac{ \int_{0}^{\infty} {\rm ICMF}(M, \Mmin, \Mmax) M~{\rm d}M }{ \tau \times {\rm SFR}(<\tau) } , \label{eq:Gamma}$$ where the ICMF is obtained using equation (\[eq:CMF\]) with the value of $\Mmin$ and $\Mmax$ taken from Table \[tab:CFE\]. To calculate the CFE using the traditional $10^2\Msun$ truncation we evaluate equation (\[eq:Gamma\]) with $\Mmin=10^2\Msun$. 
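A minimal numerical sketch of this comparison is given below. It is not the code used to produce Table \[tab:CFE\]; it assumes the doubly truncated power-law form of the ICMF described in Section \[sec:predictions\], whose normalisation cancels in the ratio of the two estimates.

```python
import numpy as np

def icmf(m, m_min, m_max):
    # assumed shape: dN/dM ~ M^-2 exp(-M_min/M) exp(-M/M_max); normalisation cancels in the ratio
    return m**-2.0 * np.exp(-m_min / m) * np.exp(-m / m_max)

def mass_integral(m_min, m_max, m_lo=1.0, m_hi=1e9, n=20000):
    """Numerator of equation (eq:Gamma): integral of M * dN/dM, evaluated on a logarithmic grid."""
    ln_m = np.linspace(np.log(m_lo), np.log(m_hi), n)
    m = np.exp(ln_m)
    integrand = m**2 * icmf(m, m_min, m_max)        # M * (dN/dM) * dM = M^2 * icmf * dlnM
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ln_m)))

def cfe_ratio(m_min, m_max, m_min_traditional=1e2):
    return mass_integral(m_min, m_max) / mass_integral(m_min_traditional, m_max)

print(cfe_ratio(3.2e3, 3.0e4))   # CMZ-like truncations: roughly 0.3, close to the CMZ entry in the table below
```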
Note that the denominator in equation (\[eq:Gamma\]) drops out when taking the ratio of the two CFE values, $\Gamma(\Mmin)/\Gamma(10^2\Msun)$.

  Environment           $\Mmin~[\Msunns]$   $\Mmax~[\Msunns]$   $\Gamma({\Mmin})/\Gamma({10^2\Msun})$
  --------------------- ------------------- ------------------- ---------------------------------------
  Solar neighbourhood   $1.1\times10^2$     $2.8\times10^4$     0.97
  M31                   $1.1\times10^2$     $8.2\times10^4$     0.98
  LMC                   $1.1\times10^2$     $4.5\times10^4$     0.98
  Antennae              $2.2\times10^2$     $2.5\times10^7$     0.93
  CMZ                   $3.2\times10^3$     $3.0\times10^4$     0.31
  M82                   $6.4\times10^2$     $1.5\times10^7$     0.86

  ----------------------------------------------------------------------------------------------------
  Notes: the “traditional” ICMF assumes a Schechter function with a lower mass limit at $10^2\Msun$.
  ----------------------------------------------------------------------------------------------------

The derived cluster formation efficiencies obtained with the environmentally dependent minimum mass model are $\sim 2-70$ per cent lower in the representative set of environments shown in Table \[tab:CFE\]. Taking the nucleus of M82 as a representative starburst with $\Mmin \gg 10^2\Msun$, the predicted CFE is $\sim 15$ per cent lower than using the traditional ICMF. In the CMZ the CFE is overestimated by up to $\sim 70$ per cent when assuming a traditional ICMF with $\Mmin=100\Msun$. The effect of the environmental variation of the minimum cluster mass should thus be taken into account when comparing observations with theoretical predictions of the CFE. Implications for GC formation {#sec:GCs} ============================= The ISM conditions in the progenitors of present-day galaxies are typically extremely difficult to study in detail at high redshift. Even when this is possible, the star-forming conditions in the progenitor can only be matched to their local counterparts in a statistical sense. As we have shown, the initial mass distribution of star clusters contains an imprint of their birth environment in the form of the variation of the minimum and maximum masses. It is possible that the present-day CMF of those clusters that survive for many billions of years could preserve a record of these conditions. The mass distribution of present-day GC populations offers a unique window into the physical environments of their host galaxies in the early Universe. There is now growing evidence that GCs can be understood as products of regular star formation in the high pressure conditions of high-redshift galaxies, shaped by billions of years of dynamical evolution within their host galaxy [@KravtsovGnedin05; @Elmegreen10; @Kruijssen15b; @Lamers17; @Forbes18; @Pfeffer18; @Kruijssen19a]. If the ICMF can be reconstructed from the observed globular cluster mass function (GCMF), then our model can be inverted to constrain the star-forming environments (i.e. gas surface density and angular velocity) that gave rise to such a mass function. The GCMF is therefore an ideal tool to probe the otherwise inaccessible early star-forming conditions in present-day galaxies. The progenitors of dwarf galaxies – the Fornax dSph --------------------------------------------------- The progenitors of dwarf galaxies are observationally inaccessible at high redshift, and the only information that may be used to constrain their formation and evolution comes from resolved star formation histories in the Local Group [e.g. @Weisz14a; @Weisz14b]. 
These provide information on the dwarfs’ ancient stellar populations, but do not constrain the star formation environments (gas surface density and angular velocity) that would be imprinted in the CMF as predicted by our model. Determining the ICMF may therefore provide additional constraints on the early star-forming conditions in galaxies. Several nearby dwarf galaxies are extremely efficient at forming GCs, especially at early cosmic epochs, compared to more massive galaxies like the Milky Way. @Larsen12 found that $\leq 20$ per cent of the stars with metallicity $[\rm{Fe/H}]<-2.0$ reside in Fornax’s GCs. Similarly large fractions are also observed in other dwarfs including IKN and WLM [@Larsen14]. These observations are very difficult to explain using a traditional Schechter ICMF with a minimum cluster mass of $10^2\Msun$, because the amount of mass lost from the surviving GCs can account for up to half of the total mass of low-metallicity stars in the entire galaxy, leaving little room for stars from the remnants of the numerous low-mass clusters that did not survive. Clearly, a narrow ICMF with a high minimum mass could explain this puzzling observation. To evaluate whether this scenario is feasible physically, we invert our model for the ICMF to find the star-forming conditions that led to the formation of the Fornax GCs $\sim 10-12$ Gyr ago [@deBoerFraser16]. This requires that we make an assumption about how well the current GC population represents the initial cluster mass distribution. Since the metal-poor Fornax GCs are all massive ($M \ga 10^5\Msun$), the mass loss due to evaporation and tidal shocks are expected to be sub-dominant (or similar) relative to mass loss due to stellar evolution [@Reina-Campos18]. Using the most conservative assumption of mass loss due to only stellar evolution, two extreme scenarios will bracket the range of possible ICMFs: 1. *Minimum CFE* model: none of the present field stars were originally born in clusters and the observed GCs represent the initial CMF (i.e. the CFE is the ratio of mass in GCs to mass in low-metallicity stars, $\sim 20$ per cent). 2. *Maximum CFE* model: all the low-metallicity field stars in the galaxy originated from disrupted bound clusters (i.e. the CFE is 100 per cent). We may then fit the inferred ICMFs from each model with equation (\[eq:CMF\]) to recover the minimum and maximum cluster masses. The minimum mass can be used to invert equation (\[eq:boundcondISM\]) and solve for the star-forming conditions (i.e. the ISM surface density and angular rotation velocity) of Fornax at the epoch of formation of its GC population. An additional constraint on the gas surface density and the angular velocity can be obtained from inverting equation (10) in (modified to account for IMF sampling following Appendix \[sec:appendix\]) for the maximum cluster mass. Together, these two independent constraints should significantly narrow down the range of physical conditions that produced the metal-poor Fornax GCs. ![Prediction for the ICMF in the Fornax dSph at the time of formation of its metal-poor GC population $10-12\Gyr$ ago. The histogram with error bars shows the observed GCMF of the four GCs with $[\rm{Fe/H}]<-2$ (corrected for mass loss due to stellar evolution, see @deBoerFraser16). The blue curve is the ICMF in the *minimum CFE* model, where the only clusters formed are the presently observed GCs (i.e. the CFE is $M_{\rm tot}^{\rm GCs}/M_{\rm Fornax}$). 
The red curve is the ICMF for the *maximum CFE* model, where *all* the low-metallicity stars in the galaxy came from disrupted low-mass clusters (i.e. the CFE is 100 per cent). The purple shaded region represents the range of models constrained by the CFE predicted by @Kruijssen12b for $Q$ in the range $0.5{-}3$. For reference, the dotted line shows the traditional ICMF with $\Mmin=10^2\Msun$.[]{data-label="fig:CMF_Fornax"}](CMF_Fornax_cfe.pdf){width="1.0\columnwidth"} Figure \[fig:CMF\_Fornax\] shows the observed GCMF and the resulting ICMF for each of the two bracketing scenarios. Here we used the *birth* GC masses derived by @deBoerFraser16 using color-magnitude diagrams and assuming a @Kroupa01 IMF. For the *minimum CFE* model we simply fit our model ICMF (equation \[eq:CMF\]) to the observed Fornax GCMF using the Maximum Likelihood Estimator [@Fisher1912] with uniform priors on $\Mmin$ and $\Mmax$ in the region defined by the condition $$M_{\rm GCs}^{\rm lower} \leq \int_0^{\infty} \mathrm{ICMF}(\Mmin,\Mmax,M) M {\mathrm d}M \leq M_{\rm GCs}^{\rm upper} ,$$ where $M_{\rm GCs}^{\rm lower}$ and $M_{\rm GCs}^{\rm upper}$ are the lower and upper estimates of the total mass of the four low-metallicity GCs from @deBoerFraser16 respectively. This condition merely states that the total mass under the ICMF should agree with the observed total mass of the GCs within the limits set by the observational errors. The best-fit model is shown in Figure \[fig:CMF\_Fornax\] and its parameters are $\Mmin = 1.4^{+1.4}_{-0.8}\times10^5\Msun$ and $\Mmax = 8.1^{+109.6}_{-5.6}\times10^5\Msun$, where the errors correspond to the parameter values for which the likelihood drops by a factor of $1/e$. These numbers show that the number of GCs in Fornax is too small to put meaningful constraints on any possible high-mass truncation of the ICMF. Next, to obtain the minimum cluster mass for the *maximum CFE* model, we decrease $\Mmin$ in the *minimum CFE* model – while holding $\Mmax$ fixed – until the total mass under the ICMF equals the total mass of Fornax at $[{\rm Fe/H}] < -2$, which is estimated to be $\sim 5$ times the total GC mass [@Larsen12; @deBoerFraser16]. Figure \[fig:CMF\_Fornax\] illustrates the budget problem of the traditional ICMF: to avoid overproducing the stellar mass of the entire galaxy, the minimum cluster mass should be larger than $\sim 900\Msun$, assuming a well-sampled ICMF. ![image](Fornax_conditions_min.pdf){width="95.00000%"} ![image](Fornax_conditions_max.pdf){width="95.00000%"} ![image](Fornax_conditions_cfe.pdf){width="95.00000%"} It is now possible to recover, using the two ICMFs in Figure \[fig:CMF\_Fornax\], the location in the parameter space of ISM surface density, angular velocity and Toomre $Q$ that produced the observed GCMF. This reconstruction of the ISM surface density and angular velocity is shown in the top and middle rows of Figure \[fig:Fornax\_conditions\] for the values $Q \in \{0.5, 1.0, 3.0\}$. Since the minimum and maximum masses have different environmental dependences (blue and orange shaded regions), they produce independent constraints on the environmental conditions for a fixed value of $Q$. The region where they overlap corresponds to the only physical solution for ISM surface density and angular velocity. Interestingly, Figure \[fig:Fornax\_conditions\] shows that only a very narrow range of conditions at fixed Toomre $Q$ yields a physical solution (where the minimum and maximum mass solution regions overlap). 
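For concreteness, the maximum-likelihood fit described above can be sketched numerically. The snippet below is only an illustration: the four cluster masses are placeholders rather than the @deBoerFraser16 birth masses, the double-truncated power law stands in for the exact form of equation (\[eq:CMF\]), and the total-mass condition and the $1/e$ likelihood intervals are omitted for brevity.

```python
import numpy as np

def icmf_shape(m, m_min, m_max):
    # Assumed ICMF shape (stand-in for equation eq:CMF).
    return m**-2.0 * np.exp(-m_min / m) * np.exp(-m / m_max)

def log_likelihood(masses, m_min, m_max):
    # Normalise the shape numerically so that it integrates to unity in mass.
    ln_m = np.linspace(np.log(1e2), np.log(1e8), 2000)
    m = np.exp(ln_m)
    norm = np.trapz(icmf_shape(m, m_min, m_max) * m, ln_m)
    return np.sum(np.log(icmf_shape(masses, m_min, m_max) / norm))

# Placeholder birth masses (Msun) standing in for the four metal-poor Fornax GCs.
gc_masses = np.array([1.8e5, 2.4e5, 3.0e5, 3.6e5])

# Simple grid search over (Mmin, Mmax) as a stand-in for the full MLE analysis.
m_min_grid = np.logspace(3.0, 5.5, 60)
m_max_grid = np.logspace(5.0, 8.0, 60)
log_l = np.array([[log_likelihood(gc_masses, lo, hi) for hi in m_max_grid]
                  for lo in m_min_grid])
i, j = np.unravel_index(np.argmax(log_l), log_l.shape)
print(f"best fit: Mmin ~ {m_min_grid[i]:.1e} Msun, Mmax ~ {m_max_grid[j]:.1e} Msun")
```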
The largest uncertainty remaining in the predictions comes from the uncertainty in the CFE. To obtain tighter, self-consistent constraints on the ICMF, we further include the @Kruijssen12b model for the CFE. This model predicts, given the ISM surface density, angular velocity, and $Q$, the fraction of stars that form in bound clusters relative to the total amount of star formation. We can adjust the minimum mass in our model ICMF until the total mass in clusters matches the @Kruijssen12b CFE prediction obtained from the ISM conditions corresponding to the chosen value of $\Mmin$ and the best-fitting value of $\Mmax$. In other words, we are solving the equation $$\Gamma^{\rm K12}(\SigmaISM,\Omega,Q) = \frac{ \int_{0}^{\infty} {\rm ICMF}(M_{\rm min},M_{\rm max},M) M {\rm d}M }{ M_{\rm Fornax} }$$ together with equation (\[eq:boundcondISM\]) and equation (26) of @Kruijssen12a for the CFE[^2] implicitly for $\Mmin$, $\SigmaISM$, and $\Omega$, for a fixed assumed value of $Q$. Here, the ICMF is given by equation (\[eq:CMF\]) with the maximum mass held constant at the value $\Mmax = 8.1\times10^5\Msun$ obtained from the MLE fit to the GCMF, and $M_{\rm Fornax} = 4.49\times10^6\Msun$ is the total mass in low-metallicity stars in the galaxy [@deBoerFraser16]. The solution for $Q$ values in the range $0.5 \leq Q \leq 3.0$ is given by a CFE between 71 and 80 per cent for minimum masses $\Mmin = 1.2 - 9.4\times 10^3\Msun$ including $1\sigma$ uncertainties. The range of solutions for the self-consistent, *constrained CFE* model (including uncertainties) is indicated with a purple shaded region in Figure \[fig:CMF\_Fornax\], and the recovered conditions are shown on the bottom row of Figure \[fig:Fornax\_conditions\]. Assuming the typical $Q \sim 0.5$ conditions of high-redshift galaxies, these results indicate that $\sim 10-12~\Gyr$ ago the progenitor of Fornax was forming stars in a high-surface density ISM (with $\SigmaISM \simeq 700{-}1100\Msunpc2$) and strong shearing motions ($\Omega \simeq 0.4{-}1.1\Myr^{-1}$). These conditions caused an increase in the minimum cluster mass (compared to nearby disc galaxies) to $\Mmin = 1.2{-}5.8\times10^3\Msun$, and a high cluster formation efficiency of $\Gamma \sim 80$ per cent. These recovered conditions are quite typical of local (nuclear) starburst galaxies, as indicated by the white circles in Figure \[fig:paramspace\]. This implies that a galactic environment similar to observed present-day nuclear starbursts could explain the large number of low-metallicity stars that belong to the GC systems of dwarf galaxies like Fornax, IKN, and WLM. Previously, @Kruijssen15b explained the high number of GCs in Fornax by an early galaxy merger that cut short the initial phase of rapid GC disruption due to tidal interactions with molecular clouds in the natal disc, instead redistributing the GCs into the gas-poor spheroid. With our model, an early galaxy merger is no longer required to explain the extremely high specific frequency at low metallicities in Fornax, even if it remains a possibility. An interesting implication of this result is that Fornax (and due to the similarities in GC populations, also IKN and WLM) may all have undergone significant subsequent expansion of their stellar components, presumably due to the change of gravitational potential following the blow-out of the residual gas by stellar feedback. We plan to investigate the physics driving this expansion further in a follow-up paper. 
High-redshift star-forming galaxies ----------------------------------- Because of their location in the high ISM surface density ($\SigmaISM \ga 10^2\Msunpc2$) region of the parameter space in Figure \[fig:paramspace\], high-redshift star-forming galaxies observed at $z=2-3$ are predicted by our model to have ICMFs with minimum masses that are factors of several to orders of magnitude larger that in local spirals. Furthermore, the maximum cluster mass model predicts that the largest gas clumps will have masses larger than $10^9\Msun$, with clusters as massive as $10^8\Msun$ that will quickly spiral into the center through dynamical friction. To make a prediction for the ICMF of this class of galaxies, we select zC406690 ($z = 2.196$) from the catalogue provided by @Tacconi13. This object was chosen by because it represents the average properties of high-redshift star-forming galaxies, its kinematics are dominated by rotation, and both its global properties and its molecular clump masses have been measured [@genzeletal11]. To obtain the ICMF, we proceed as in Section \[sec:predictions\]. We use the rotational velocity and the half-light radius listed in @Tacconi13 (i.e. $V_{\rm rot} = 224\kms$ and $R_{1/2} = 6.3\kpc$), as well as the peak molecular gas surface density (assuming a molecular gas mass $M_{\rm mol}=8.2\times10^{10}\Msun$ and an exponential gas disc with the same scale-length as the optical disc), $\SigmaISM = 9.3\times10^2\Msunpc2$. In addition, the rotation curve is assumed to be flat at the half-light radius in order to obtain the angular velocity. To calculate the Toomre $Q$ parameter we assume a velocity dispersion of $50\kms$ [@Reina-Campos17]. We obtain minimum and maximum masses of $\Mmin = 2.6\times10^3\Msun$ and $\Mmax = 8.9\times10^{10}\Msun$. Figure \[fig:CMF\_HZ\] shows the predicted ICMF for clusters with ages $\tau < 5\Myr$, as well as a mock measurement and uncertainties obtained with $10^4$ Monte Carlo samples. To normalise the ICMF, we calculate the total mass in clusters using the @Kruijssen12b model for the CFE and the observed star formation rate, $\rm SFR = 480\Msun~{\rm yr}^{-1}$ [@Tacconi13]. The predicted CFE is $\Gamma=68$ per cent. ![Predicted young CMF in the prototypical high-redshift galaxy zC406690. The prediction of our ICMF model (equation \[eq:CMF\]) is shown as a dashed line for clusters with ages $\tau < 5\Myr$. The solid line shows the result of Monte Carlo sampling the predicted ICMF. Because of its high gas surface density ISM, the minimum cluster mass ($\Mmin = 2.6\times10^3\Msun$) is predicted to be more than an order of magnitude larger than in the solar neighbourhood. []{data-label="fig:CMF_HZ"}](CMF_HZ.pdf){width="1.0\columnwidth"} To estimate the GCMF that will result from the evolution of the ICMF in zC406690 until the present day, the effects of cluster disruption and dynamical friction must be considered. For instance, @Kruijssen15b used an analytical treatment to estimate that for a $z=3$ galaxy with $\log(M_*/\Msun) = 10.7$, tidal shocking significantly reduces the number of GCs below $\sim 10^5\Msun$. At the high-mass end, clusters more massive than $\sim 10^6\Msun$ will be depleted by dynamical friction. These combined effects should produce a very narrow GCMF with a peak mass in the range $10^5 - 10^6\Msun$. 
However, because our model predicts a minimum mass $\Mmin=2.6\times10^3\Msun$, the contribution from disrupted low-mass clusters to the field star population will be reduced somewhat compared to the result of assuming the traditional environmentally-independent $10^2\Msun$ truncation in equation (\[eq:CMF\]). Typical galaxies at $z>1$ have clumpy morphologies in rest-frame UV images. Recent studies use multi-band *HST* photometry to determine the clump mass distributions in highly-magnified lensed galaxies at $z=1-3$, as well as in deep fields [@Adamo13; @Elmegreen13; @Wuyts14; @Dessauges-Zavadsky17; @Vanzella17b; @Vanzella17a; @JohnsonT17]. These studies suggest that the mass function of cluster complexes is truncated above a few times $10^8\Msun$. They also provide an upper limit on the minimum mass of cluster complexes of $\sim 10^{5.5}\Msun$. Considering that, because of the limited resolution, complexes will have masses at least as large or larger than bound clusters, these limits are consistent with our prediction for the extent of the ICMF shown in Figure \[fig:CMF\_HZ\]. Future studies of lensed high-redshift galaxies with the James Webb Space Telescope and with the next generation of 30-m class ground-based telescopes will probe the CMF in these objects even deeper, allowing for further constraints on GC formation at high redshift. Conclusions {#sec:conclusions} =========== In this paper, we present a model for the environmental dependence of the minimum mass of bound stellar clusters. The model evaluates the star formation efficiency within feedback-regulated molecular clouds in the context of a rotating galactic disc in hydrostatic equilibrium. In combination with the model for the maximum cluster mass from @Reina-Campos17, this enables us to predict the full ICMF as a function of the global properties of the host galaxy, namely the surface density of the ISM, the angular velocity of the disc, and the Toomre $Q$ stability parameter. We explore the environmental dependence of the minimum cluster mass and the full ICMF in a broad range of galactic environments from local spirals to high-redshift star-forming galaxies, and use it to make predictions for observed young CMFs. The model further allows the reconstruction of the star-forming conditions in local galaxies from their GC populations. Our conclusions are as follows. 1. The minimum cluster mass and the resulting total width of the ICMF are predicted to vary by orders of magnitude across the observed range of galaxy properties, from local quiescent discs to high-redshift clumpy star-forming galaxies (Figure \[fig:paramspace\]). The main driver of the minimum mass variation is the ISM surface density, with $\Mmin \propto \SigmaISM^3$ for $\SigmaISM \ga 3\times10^2\Msunpc2$, and no dependence on the disc angular speed across most of the parameter space. At very large gas surface densities ($\SigmaISM > 4\times10^3\Msunpc2$), the minimum mass saturates at the maximum mass predicted by , leading to very narrow ICMFs. These overall trends are largely insensitive to the value of Toomre $Q$. 2. The minimum cluster mass in high ISM surface density environments scales steeply with the assumed value of the star formation efficiency per free-fall time (Section \[sec:uncertainties\]). This implies that future observational evidence of a variation in $\Mmin$ will be a sensitive probe of this parameter, which plays a fundamental role in star formation theories. 3. 
We predict the full ICMF in several environments across parameter space where observational determinations of the young CMF have been performed. Despite large systematic uncertainties in the observations at low cluster masses, the model shows good agreement where the data is most robust, which is generally above $M \sim 10^3\Msun$. Although the observed turnover of the CMF in the solar neighbourhood at low masses is likely due to incompleteness of the cluster catalogues, it matches well the predicted value of the minimum cluster mass of $\Mmin \sim 1.1\times10^2\Msun$. *Gaia* data is expected to considerably reduce the uncertainties. The ICMF in the LMC and in M31 agree well in the power-law regime and high-mass truncation, but the predicted minimum mass lies below the completeness limits of the observations. 4. The ICMF model predicts considerably larger minimum cluster masses, $\Mmin \ga 10^{2.5}\Msun$, in starbursting environments (due to their ISM surface densities in excess of $10^2\Msunpc2$; see Figure \[fig:paramspace\]), and extremely narrow mass functions in high-shear environments found in galactic nuclei. This makes these systems ideal for testing our model. The predicted ICMFs agree very well with the limits set by observations of the Antennae galaxies, as well as in the nucleus of M82. In both cases the model predicts a minimum mass that is several times larger than in the solar neighbourhood, but still below the completeness limits. For the CMZ, we predict the most extreme deviation from the traditional mass function: a narrow ($\sim 1$ dex in mass) peak with a minimum cluster mass of $\Mmin \sim 3.2\times10^3\Msun$. This agrees with the limits set by the masses of young clusters and embedded proto-clusters in the region. 5. The model allows us to predict how the star-forming environments at high redshift shaped the observed GC populations around local galaxies. Conversely, it can be inverted to constrain the star-forming environment of the progenitors of local galaxies during the formation epoch of their GCs. Using this approach, we investigate the possibility that the large GC specific frequency (at low metallicities) in the Fornax dSph could be due to environmental conditions that led to a narrow ICMF at the time its GCs formed. We infer a narrow range of conditions for the ISM surface density and shear in the ISM of the progenitor of Fornax $\sim 10-12\Gyr$ ago. The model predicts that the galaxy must have been quite compact, with ISM surface densities $\SigmaISM > 700 \Msunpc2$ and angular velocities $\Omega > 0.4\Myr^{-1}$ (assuming $Q=0.5$). This implies that the central region was heavily dominated by dense gas and that its stellar component likely underwent considerable expansion to become the spatially extended galaxy that is observed at present. 6. The ICMF models predict that $\sim 80$ per cent of the low-metallicity stars (i.e. $[{\rm Fe/H}]<-2$) in the Fornax dSph formed in bound clusters, with a large minimum cluster mass of $\Mmin \sim 1.2{-}5.8\times10^3\Msun$. This is more than an order of magnitude larger than the traditionally assumed low-mass ICMF truncation at $10^2\Msun$. The dearth of low-mass clusters at formation explains the puzzling high specific frequency of GCs (relative to metal-poor field stars) observed in dwarf galaxies like Fornax, IKN and WLM by @Larsen12 [@Larsen14; @Larsen18]. 
As shown in this paper, modelling the environmental dependence of the CMF has many potential applications for understanding the formation of stellar clusters, as well as for reconstructing the star-forming conditions during galaxy evolution. Future observations of the low-mass regime of the CMF in nearby galaxies with upcoming observational facilities (e.g. the [*James Webb Space Telescope*]{} and 30-m class ground-based telescopes) will allow for the model presented here to be tested conclusively. Finally, the model is ideally-suited for implementation in sub-grid models for cluster formation and evolution in (cosmological) simulations of galaxy formation and evolution [e.g. @Pfeffer18; @Kruijssen19a; @li18]. For the foreseeable future, these models will be incapable of resolving the complete stellar cluster population down to the minimum mass scale. The presented model provides a physically-motivated way to account for the low-mass cluster population. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank the anonymous referee for a prompt and constructive review. We also thank Angela Adamo, Nate Bastian, Bruce Elmegreen, Cliff Johnson, and Anil Seth for illuminating discussions, as well as Henny Lamers for providing his mass and age estimates of clusters in the solar neighborhood. We gratefully acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme via the ERC Starting Grant MUSTANG (grant agreement number 714907). MRC is supported by a Fellowship from the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). JMDK gratefully acknowledges funding from the German Research Foundation (DFG) in the form of an Emmy Noether Research Group (grant number KR4801/1-1). The effect of IMF sampling on the maximum cluster mass {#sec:appendix} ====================================================== Here we show the effect of IMF sampling in low-mass clouds on the maximum cluster mass model from . Using equations (\[eq:tsn\_IMF\]) and (\[eq:delta\_t\]) to rewrite the SN timescale in equation 4 of , we obtain $$t_{\rm fb} = \frac{\tsn}{2} \left[ 1 + \sqrtsign{ 1 + \frac{ 4\pi^2 G^2 t_{\rm ff,g} Q^2 \SigmaISM^2 }{ \phifb \epsff \tsn^2 \kappa^2 } } \right] , \label{eq:tfb_Mmax}$$ with $$\tsn = t_{\rm OB,0} + \frac{ \MOB }{ \epsff } \frac{ t_{\rm ff,g} }{ M_{\rm GMC,max} } , \label{eq:tsn_Mmax}$$ where $M_{\rm GMC,max}$ is the maximum mass of a GMC, and $t_{\rm ff,g}$ is the vertical free-fall time of the gas at the midplane. Equation 9 in can then be solved implicitly for the maximum GMC mass that can condense out of the ISM before it is dispersed by SN feedback, delayed by IMF sampling. This yields $$M_{\rm GMC,max} = \frac{4\pi^5 G^2 \SigmaISM^3}{\kappa^4} \times \min\left[1, \frac{ t_{\rm fb}\left(M_{\rm GMC,max}\right) }{ t_{\rm ff,2D} }\right]^4 , \label{eq:Mgmcmax}$$ where $$t_{\rm ff, 2D} = \sqrt{\frac{2\pi}{\kappa}}$$ is the two-dimensional free-fall time of the shear-enclosed sheet of the ISM. ![image](MaxMass_appendix.pdf){width="68.00000%"} The effect of the delay in the SN detonations due to sampling of the IMF is shown in Figure \[fig:MaxMassIMF\], where the feedback timescale, the ratio of the feedback timescale to the 2D free-fall time of the ISM, and the maximum GMC mass are compared with the results. 
Including the IMF sampling delay has the effect of increasing the feedback timescale at very low ISM surface densities and high angular speeds, where the maximum GMC masses predicted by are small. In this regime, however, the collapse of the clouds is dominated by centrifugal forces. This results in a very small change in the maximum GMC mass with respect to . The GMC masses do increase significantly at very low surface densities $\SigmaISM \la 3\Msunpc2$ and angular speeds $\Omega \la 0.1\Myr^{-1}$. As can be seen in Figure \[fig:MaxMassIMF\], most observed galaxies do not occupy this region of the parameter space. The maximum cluster mass is determined by multiplying the integrated star formation efficiency ($\epsilon$) by the bound fraction of star formation ($\Gamma$) from @Kruijssen12b and by the maximum GMC mass, i.e. $$\Mmax = \epsilon~\Gamma(\SigmaISM,\kappa,Q)~M_{\rm gmc,max} . \label{eq:Mmax}$$ Because the @Kruijssen12b model also relies on the feedback timescale, it must be modified to include IMF sampling in low mass clouds. To do this, we use the feedback timescale of the maximum GMC mass (equation \[eq:tfb\_Mmax\]) for each position in the parameter space to calculate $\Gamma$. Figure \[fig:MaxMassIMF\] shows the effect of this modification on $\Gamma$ and on the maximum cluster mass. Since the bound fraction is weakly time-dependent [see fig. 4 of @Kruijssen12b], we evaluate it at the moment of completion of the star formation process, i.e. at $t = t_{\rm fb}$. While the bound fraction of stars forming in the most massive GMC increases considerably at low surface densities ($\SigmaISM \la 3\Msunpc2$) and at intermediate surface densities and high angular speeds ($\SigmaISM \la 10^2\Msunpc2$ and $\Omega \ga 0.1\Myr^{-1}$), this region is mostly unoccupied by observed galaxies. The overall effect of the IMF sampling delay on the maximum cluster mass is that of imposing a lower limit on the maximum cluster mass of $\Mmax \geq \MOB \sim 100\Msun$ in the feedback-limited region of the parameter space. This corresponds to the low surface density and angular speed region in the bottom row of Figure \[fig:MaxMassIMF\]. \[lastpage\] [^1]: E-mail: strujill@gmail.com [^2]: Note that in this paper we define the CFE as the bound fraction of star formation. This is not the same notation used by @Kruijssen12a, where the CFE also includes the effect of early cluster disruption by tidal shocks.
--- abstract: 'A search is made for periodic modulation in the X-ray flux from the low mass X-ray binary GX13+1 using Rossi X-ray Timing Explorer All Sky Monitor data collected over a period of almost seven years. From a filtered data set, which excludes measurements with exceptionally large error bars and so maximizes signal to noise, modulation is found at a period of 24.065 $\pm$ 0.018 days. The modulation is most clearly detectable at high energies (5 - 12 keV). Spectral changes are revealed as a modulation in hardness ratio on the 24 day period and there is a phase shift between the modulation in the 5 - 12 keV energy band compared to the 1.5 - 5 keV band. The high-energy spectrum of GX13+1 is unusual in displaying both emission and absorption iron line features and it is speculated that the peculiar spectral and timing properties may be connected.' author: - 'Robin H.D. Corbet' title: ' Long Term X-ray Variability in GX13+1: Energy Dependent Periodic Modulation' --- Introduction ============ The low mass X-ray binary GX13+1 is a bright persistent source which exhibits X-ray bursts (Fleischman 1985, Matsuba et al. 1995). Counterparts have been identified in the infra-red (Naylor, Charles & Longmore 1991, Garcia et al. 1992) and radio (Grindlay & Seaquist 1986) wavebands. The X-ray and radio fluxes of  are not correlated (Garcia et al. 1988). The IR counterpart has been seen to vary on timescales of days to tens of days but no definite orbital period has been found (e.g. Charles & Naylor 1992, Groot et al. 1996,  et al. 2002). However Groot et al. (1996) did find maximum power at a period of 12.6 days from observations spanning 18 days. From IR spectroscopy et al. (1999) derive a spectral type of K5III for the mass donating star. This classification implies a mass of 5M (Allen 1973) and so the mass-donor is the primary star.  is usually classified as an “atoll” source but has some characteristics such as the properties of its quasi-periodic oscillations (QPOs) which make it more similar to a “Z” source (Homan et al. 1998 and references therein). Schnerr et al. (2003) find that while  follows a track in an X-ray color-color diagram on timescales of hours apparently similar to atoll sources, the count rate and power spectrum change in ways unlike any other atoll or Z source. X-ray observations of  with CCD detectors have revealed the presence of a number of features that are attributed to iron (Ueda et al. 2001, Sidoli et al. 2002). These include an emission line near 6.4 keV, an absorption line at 7.0 keV and a deep absorption edge at 8.83 keV. Previously the only other X-ray binaries which had shown such iron absorption lines were the “superluminal” black hole candidates GRO J1655-40 (Ueda et al. 1998, Yamaoka et al. 2001) and GRS 1915+105 (Kotani et al. 2000, Lee et al. 2002). From one year of observations with the All Sky Monitor (ASM) on board the Rossi X-ray Timing Explorer (RXTE) a modulation of the soft (1.3 - 4.8 keV) flux at a period of 24.7 $\pm$ 1 day was reported (Corbet 1996; hereafter C96). Subsequent ASM observations, however, apparently failed to find confirmation of this periodic modulation.  et al. (2002) examined 5 years worth of ASM data and concluded that while there was evidence for quasi-periodicity on a timescale of 20-30 days this modulation was not consistently present. 
Here RXTE ASM light curves of GX13+1, covering just under seven years, are analyzed separated into three energy bands and different methods for dealing with variable data quality are investigated. It is concluded that the X-ray flux does indeed show a persistent modulation at a period of 24 days as initially reported. Observations ============ The RXTE ASM (Levine et al. 1996) consists of three similar Scanning Shadow Cameras, sensitive to X-rays in an energy band of approximately 1.5-12 keV, which perform sets of 90 second pointed observations (“dwells”) so as to cover 80% of the sky every 90 minutes. Light curves are available in three energy bands: 1.5 to 3.0 keV (“soft”), 3.0 to 5 keV (“medium”), and 5 to 12 keV (“hard”). The Crab produces approximately 75 counts/s in the ASM over the entire energy range. Observations of blank field regions away from the Galactic center indicate that background subtraction may produce a systematic uncertainty of about 0.1 counts/s (Remillard & Levine 1997). Two standard ASM light curve products are routinely available - one with flux measurements from individual dwells which preserves the 90s time resolution, and one which gives averages of all dwells performed during each day. The ASM light curve of considered here covers approximately 6.7 years (MJD 50094 to 52536). The overall light curve of  as obtained with the RXTE ASM is shown in Figure 1. The mean flux for the entire ASM energy range is 23 counts/s and no long term trend is obvious from the full energy-range light curve. In C96 it was reported that the overall flux level of was anti-correlated with the hardness ratio. However, further investigation suggests that the major contribution to this apparent effect is gain changes in two of the three SSCs that comprise the ASM. These instrumental effects cause slow drifts in hardness ratios dependent on source spectrum with only sources with spectra identical to the Crab showing no change (Remillard private communication). A change in the channel definitions used to define the three energy bands also occurred at MJD 51548.625. This can give discontinuous jumps in the count rates in different energy bands again dependent on source spectrum. These two instrumental effects can both be seen in Figure 1. The approximately linear trends in the soft and medium energy bands give exactly the apparent hardness-ratio/intensity correlation reported in C96. The anti-correlation reported in C96 is thus at least primarily an instrumental artifact. In order to remove these trends from the lower energy bands which, in addition to giving apparent hardness ratio changes, result in spurious low-frequency power in a periodogram, the low and medium energy bands were corrected by fitting two linear trends to the light curves before and after MJD 51548 and subtracting these. The high energy light curve does not have an obvious trend and so no correction was needed. Analysis and Results ==================== To investigate long term flux variability it is often convenient to use the daily averaged ASM light curves rather than the dwell light curves. The error on these flux measurements can vary significantly due to, for example, the different number of dwells covering a source from day to day. The errors on flux measurements in individual dwells can also vary depending on factors such as the location of the source in each SSC’s field of view, proximity to the position of the Sun, and the number and brightness of other sources in the field of view. 
When searching for periodic modulation in faint sources, for example, it can thus be advantageous to weight data points contributions to a power spectrum. In weak sources this weighting may reveal periodic modulation not otherwise easily detectable (e.g. Corbet, Finley & Peele 1999, Corbet et al. 1999). However, this procedure is not appropriate if the variations in source flux are significantly larger than typical data point errors. For the full energy range daily averaged light curve of  the standard deviation of the data points is 2.8 counts/s and the mean error on measurements is 0.7 counts/s. A simple direct weighting of all data points may thus not be appropriate for an analysis of . The light curve does, however, contain some points with exceptionally large error bars because, for example, the points were obtained with only a very small number of dwells or the observations were obtained when the source was at a small angular distance from the Sun. Another simple technique was therefore considered to improve signal to noise in period searches. While the data contributions to the power spectrum are not weighted, some points with errors larger than an arbitrary value are completely excluded from the calculation of the power spectrum. How this arbitrary value is chosen is discussed below. A further complication in weighting the data points is that the variation in the number of dwells per day is significantly non-random. The number of ASM observations made per day of  was investigated and a very strong modulation at a period of 52.6 days was found (Figure 2). This periodicity is also found in the size of the error bars on the daily averaged flux measurements. This 52.6 day periodicity is probably linked to the precession period of RXTE’s orbit. Note that, even if unweighted techniques are used to extract power spectra, if the dwell light curve is used instead of the daily averaged light curve, then a similar weighting will effectively result. In order to compare these different techniques for searching for periodic signals in the variable quality ASM data the effects of weighting and screening the data were investigated. In Figure 3 power spectra of the light curve of   calculated in five different ways are shown. The techniques employed were:\ (a) using unweighted daily averages\ (b) weighted daily averages\ (c) unweighted daily averages with points with large error bars removed\ (d) weighted dwell data\ (e) unweighted dwell data It can be seen that the unweighted procedure (a) shows a peak at near 24 days that is stronger compared to the weighted procedure (b). This peak strongly increases in significance when the points with large error bars are excluded (c). The power spectra obtained from the dwell data (d and e) show similar shapes to the weighted power spectrum with the weighted dwell data (d) almost identical to (b). To obtain the screened unweighted power spectrum (c) data points were removed based on the size of the error bars and the power spectrum calculated. This technique is based on the assumption that the peak exhibited in the unweighted power spectrum arises from a real modulation present in the data and that the parameters of this modulation can best be measured by utilizing a subset of the data which gives the highest overall signal to noise ratio. The filtering procedure was repeated with different data exclusion thresholds until the maximum ratio of signal peak at 24 days to average power was obtained. 
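A minimal sketch of this screening procedure applied to a daily-averaged light curve is given below. The file name and column layout are hypothetical, and the use of the astropy Lomb-Scargle implementation (the specific periodogram algorithm used in the analysis is not stated here) is an assumption made purely for illustration.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical daily-average light curve: time (MJD), rate and error (counts/s).
t, rate, err = np.loadtxt("asm_daily_lightcurve.txt", unpack=True)

def peak_to_mean(threshold, f_signal=1.0 / 24.0, fmin=1.0 / 200.0, fmax=0.5):
    """Unweighted periodogram of the points with errors below `threshold`,
    returning the ratio of the power near f_signal to the mean power."""
    keep = err < threshold
    freq = np.linspace(fmin, fmax, 5000)
    power = LombScargle(t[keep], rate[keep]).power(freq)
    near = np.abs(freq - f_signal) < 0.002
    return power[near].max() / power.mean()

# Repeat the screening with different exclusion thresholds and keep the one
# that maximises the signal-to-mean-power ratio near the 24 day period.
thresholds = np.arange(0.6, 3.0, 0.1)
best = max(thresholds, key=peak_to_mean)
print(f"optimum screening threshold ~ {best:.1f} counts/s")
```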
The distribution of error bar sizes is shown in Figure 4, with the optimum screening threshold of 1.4 counts/s marked. The peak of the error bar distribution is at approximately 0.4 counts/s, considerably less than the mean error of 0.7 counts/s noted above; this difference reflects the contribution of an extended tail on the distribution. It was found that only a relatively small number of points had to be excluded (8.2% of the original 1897 points). These excluded points all come from the extended tail and the “core” of the data is completely included. Thus, the presence of a strong peak at 24 days is unlikely to be an artifact of this screening procedure. The power spectra of each of the ASM energy bands were next investigated individually. For each band, data points were again iteratively filtered by the size of their error bars to maximize the signal near 24 days. The fraction of data excluded was 22%, 15% and 4.5% for the soft, medium, and hard bands respectively, with associated screening thresholds of 0.5, 0.5 and 1.0 counts/s. For comparison the mean and distribution peak of the errors in each band were: 0.4 and 0.19 (soft), 0.35 and 0.17 (medium), and 0.39 and 0.22 (hard). The larger fraction of data excluded for the softer bands might be due to the relatively larger effects of solar contamination at low energies, as the soft-band light curve in particular exhibits very large error bars when the source is close to the position of the Sun. However, other unknown effects might also be involved. Figure 5 shows the resulting power spectra in terms of relative power. That is, the power spectra are normalized by the mean flux in each energy band and so show relative modulation. Figure 6 shows details of the power spectra near the 24 day period but plotted as absolute modulation. From Figures 5 and 6 the following conclusions can be drawn: (i) some signal is present in all three energy bands near 24 days. However, significant independent detection of the signal could only be made in the hard (5 to 12 keV) and summed energy bands. (ii) The greatest [*relative*]{} modulation occurs in the soft band (1.5 to 3 keV). There are, however, other peaks in the soft power spectrum of almost comparable size to the signal near 24 days. (iii) The greatest [*absolute*]{} modulation occurs in the hard band (5 to 12 keV). The periodic modulation in each energy band was quantified by fitting sine waves to the light curves and the results of these fits are given in Table 1. Note that the periods derived from each energy band are all consistent within the errors with the value found from the summed energy band of 24.07 $\pm$ 0.02 days. This gives additional confidence that a real period has been detected. This period is also consistent with the value reported in C96 of 24.7 $\pm$ 1 days. The phase of maximum flux, however, differs between the soft (1.5 to 3 keV) and medium (3 to 5 keV) energy bands (which are consistent with each other) and the hard (5 to 12 keV) band which trails by 4.8 $\pm$ 0.7 days ($\Delta \phi = 0.20 \pm 0.03$). This phase difference is also clearly seen directly in the folded light curves (Figure 7). Due, at least in part, to this energy-dependent phase difference, the folded hardness ratio is also seen to be modulated on the 24 day period. These folded light curves are rather smooth, thus justifying the parameterization using sine wave fits. 
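The sine-wave parameterization summarized in Table 1 can be reproduced with a short least-squares fit, sketched below. The input file is hypothetical and the starting values are merely illustrative (chosen to be close to the summed-band values quoted in the text).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical screened light curve for one energy band: MJD, rate, error (counts/s).
t, rate, err = np.loadtxt("asm_band_screened.txt", unpack=True)

def sine_model(t, mean, amp, period, t_max):
    # Mean level plus a sinusoid whose maximum occurs at t_max (MJD).
    return mean + amp * np.cos(2.0 * np.pi * (t - t_max) / period)

p0 = [23.0, 0.6, 24.0, 51115.0]   # illustrative starting values (cf. Table 1)
popt, pcov = curve_fit(sine_model, t, rate, p0=p0, sigma=err, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
mean, amp, period, t_max = popt
print(f"period = {period:.3f} +/- {perr[2]:.3f} d, "
      f"amplitude = {amp:.2f} counts/s ({100.0 * amp / mean:.1f} per cent), "
      f"T_max = MJD {t_max:.1f} +/- {perr[3]:.1f}")
```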
To investigate the coherency of the modulation the width of the peak in the power spectrum was compared to that of a transform of a sine wave sampled with the same frequency as the actual data. For the peak in the transform of the summed energy bands we find a FWHM of 0.00031 day$^{-1}$ and the peak in the periodogram of the pure sine wave has a FWHM of 0.00033 day$^{-1}$. The width of the power spectrum peak is thus fully consistent with coherency. In order to further investigate the stability of the 24 day period we calculated power spectra for each energy band using a sliding box to investigate subsets of the data. Figure 8 shows that while the strength of the modulation is variable all data segments for the hard and summed energy bands show some signal near 24 days. This variability, along with the problem of determining the optimal method to calculate the power spectra, has probably contributed to the previous difficulty in determining the reality and properties of the modulation in GX13+1 ( et al. 2002). Comparison with Other Systems ----------------------------- In order to compare the properties of   the RXTE ASM light curves of several other sources were also investigated. These were the “GX atoll” sources (Hasinger & van der Klis 1989) which are not known to be pulsars or transient black hole systems: GX3+1, GX9+1 and GX9+9. In each case power spectra were constructed from the daily averaged light curves both with and without weighting. Some sources showed smooth variations on long timescales caused by intrinsic variability rather than the ASM instrumental effects noted in Section 2. Quadratic fits were therefore subtracted from the lightcurves before the power spectra were calculated. The resulting power spectra are shown in Figure 9. In the weighted power spectra GX3+1 shows prominent peaks at 51.3 and 54.0 days. These are both close to the sampling period of 52.6 days for . Thus these peaks may be an artifact. GX9+1 shows no significant peaks in the weighted power spectrum. GX9+9 shows peaks at 51.6 and 54.6 days which are again close to the 52.6 day sampling period for . In the unweighted power spectra the peaks near 52 days disappear for both GX3+1 and GX9+9. For GX9+1 a modest peak appears in the unweighted power spectrum at 59.5 days. However, when the data screening technique was employed for this source, it was found that exclusion of data actually reduced the relative height of the peak. This suggests that this peak in GX9+1 may be spurious. Thus none of the other “GX atoll” sources show the same type of timing properties in the RXTE ASM as . Discussion ========== Periodic modulation in X-ray binaries is known to arise from three types of underlying physical processes: neutron star rotation period, binary orbital period, and super-orbital modulation by a less clear physical mechanism (White, Nagase & Parmar 1995). Since neutron star rotation can be excluded because of the length of the period, two likely possibilities remain for  of orbital or super-orbital modulation. The modulation found for  is unusual with no directly comparable modulation found for other bright low mass X-ray binaries observed with the RXTE ASM. Modulation of X-ray flux from low-mass X-ray binaries on the orbital period is only seen for high inclination systems. In only a few cases (e.g. EXO 0748-676, Wolff et al. 2002 and references therein) are eclipses seen in bright sources as the mass-donating star occults the central X-ray source. 
X-ray dipping is seen for a number of sources with moderate inclination. These dips are caused by vertical structure in the accretion disk and are tied to the orbital period but are irregular in their morphology and phasing. Spectral changes are typically seen during the dips with the spectrum becoming harder. For still lower inclination systems X-ray modulation on the orbital period typically cannot be seen but optical modulation may be observed (e.g. van Paradijs & McClintock 1995) as varying aspects of the X-ray heated mass-donating star are observed. In all cases modulation on the orbital period should be intrinsically coherent as it is locked to the orbital period, but deviations from strict periodicity can arise due to, for example, changes in the phase at which dipping occurs. Super-orbital modulation has been seen in several systems with the strongest effects demonstrated by the high mass systems SMC X-1 and LMC X-4 and the intermediate mass system Her X-1 (e.g. Ogilvie & Dubus 2001). For low mass systems periodic super-orbital modulation appears to be rare and the best evidence for such modulation may come from X1820-30 which has a 172 day period (e.g. Chou & Grindlay 2001 and references therein). Since super-orbital modulation is likely not tied to an underlying good clock such as orbital modulation, but may instead be caused by a mechanism such as accretion disk precession forced by radiation pressure (e.g. Ogilvie & Dubus 2001 and references therein), modulation may have low coherence. In Cyg X-2, which initially appeared to show a “clean” 78 day super-orbital periodicity (Wijnands, Kuulkers, & Smale 1996), the modulation was later found to be more complex with the excursion times between X-ray minima characterized as a series of integer multiples of the 9.8 binary orbital period (Boyd & Smale 2001). If the modulation that is observed in  is orbital in origin then a requirement is that the mass-donating star must not overfill the predicted Roche lobe size.  et al. (1999) find that a mass donor of spectral type K5III, as found from their IR spectroscopy, would fill the Roche lobe if the orbital period is about 25 days. If this spectral type is correct then it implies that the modulation observed would have to be orbital in nature. The folded light curves (Figure 7) show a smooth modulation over the entire 24 day period. This indicates that the flux is being affected at all phases rather than the periodicity being caused by, for example, a sharp eclipse or dips restricted to a limited phase range. Although Accretion Disk Corona (ADC) sources, where the central X-ray source is not observed directly, can exhibit rather broad modulation the ADC sources have low X-ray luminosities and so are unlike which is estimated to have a luminosity of 4 $-$ 6 $\times$ 10$^{37}$ (d/7 kpc)$^2$ ergs s$^{-1}$ (1 - 20 keV, Matsuba et al. 1995). The mechanism for the periodic modulation in  is unclear. However, the “cleaner” modulation observed in the hard ASM energy band may be related to the unusual high energy features reported by Ueda et al. (2001) and Sidoli et al. (2002). The periodic modulation might be caused by material located in different parts of the binary system, such as different parts of accretion disk structure, at different energies. Note that Smith, Heindl, & Swank (2002) find from monitoring observations with the RXTE Proportional Counter Array orbital periods of 12.7 and 18.5 days for the black hole candidates 1E 1740.7-2942 and GRS 1758-258. 
Thus the presence of a long orbital period for  may be another common factor, along with the iron spectral features, between  and some black hole systems. It is noted that Schnerr et al. (2003) interpret their unusual timing and spectral results for  as showing the presence of an additional source of hard variable emission. This component could plausibly be identified with the periodic component of the flux that is seen most clearly in the hard band with the RXTE ASM. Schnerr et al. (2003) propose that this hard component might come from a jet and that variability could be caused by precession of the jet or to variations in jet activity itself. However, the periodicity seen in the hard component suggests that precession is unlikely to drive this variability. However, variable occultation of part of the jet by, for example, structure in the accretion disk could account for the variability if the system inclination is sufficiently high. It may be that a portion of the hard flux does originate in a physically separate region of the system such as a jet, while softer emission comes from the inner accretion disk and/or the surface of the neutron star. If so, this could account for the different timing properties of  found at different energies. Conclusion ========== The X-ray light curve of  shows strong evidence for the presence of a periodicity near 24 days with modulation properties that are energy dependent. The most likely origin for this is some type of orbital modulation if the mass donor is a Roche-lobe filling K5III star. Because of the unusual nature of this modulation and the somewhat non-standard technique used to maximize the signal in the ASM data it would be desirable to confirm the 24 day period through other observations and determine whether the modulation is present at other wavelengths. Unfortunately no other high-quality long term observations appear to exist. Although  was observed by the all-sky monitors on both Ariel V and Vela 5 the sensitivity of these experiments was significantly less than that of the RXTE ASM. The Ginga all-sky monitor, which was more sensitive than the Ariel V and Vela 5 instruments, did not include  in the objects for which light curves were produced (S. Kitamoto, private communication). Infra-red light curves have been obtained by several groups (Charles & Naylor 1992, Groot et al. 1996, Wachter 1996,  et al. 2002). While these show modulations on timescales of tens of days, which could be consistent with a 24 day period, the observations do not cover sufficient durations to demonstrate whether the 24 day period is also exhibited in the infra-red. The best prospect for confirmation of the 24 day period may come from further RXTE observations. If the RXTE ASM continues to operate for at least a few more years then the additional observations obtained will form a statistically independent data set that can be investigated for the presence of the 24 day period. Additional constraints on models that could account for the periodic modulation may come from an investigation of whether the high energy spectral features also vary on the 24 day period. Extended IR observations would be valuable as they may show whether the 24 day period exists at these wavelengths and so constrain the system inclination. I thank R.A. Remillard for useful comments on the properties of the RXTE ASM and S. Kitamoto for information on the Ginga ASM. 
Table 1: Results of the sine wave fits to the RXTE ASM light curves in each energy band.

  Energy Band   Mean Count Rate    Amplitude          Amplitude   Period                T$_{max}$
  (keV)         (counts/s)         (counts/s)         (%)         (days)                (MJD)
  ------------- ------------------ ------------------ ----------- --------------------- ---------------------
  1.5 $-$ 3     4.38 $\pm$ 0.01    0.16 $\pm$ 0.02    3.7         24.063 $\pm$ 0.019    51112.2 $\pm$ 0.6
  3 $-$ 5       7.64 $\pm$ 0.02    0.19 $\pm$ 0.03    2.5         24.062 $\pm$ 0.024    51112.5 $\pm$ 0.7
  5 $-$ 12      9.35 $\pm$ 0.02    0.30 $\pm$ 0.03    3.2         24.058 $\pm$ 0.015    51117.1 $\pm$ 0.5
  1.5 $-$ 12    23.10 $\pm$ 0.06   0.61 $\pm$ 0.08    2.7         24.065 $\pm$ 0.018    51115.3 $\pm$ 0.5
--- abstract: 'Trapped ions are a well-studied and promising system for the realization of a scalable quantum computer. Faster quantum gates would greatly improve the applicability of such a system and allow for greater flexibility in the number of calculation steps. In this paper we present a pulsed laser system, delivering picosecond pulses at a repetition rate of and resonant to the transition in for coherent population transfer to implement fast phase gate operations. The optical pulse train is derived from a mode-locked, stabilized optical frequency comb and inherits its frequency stability. Using a single trapped ion, we implement three different techniques for measuring the ion-laser coupling strength and characterizing the pulse train emitted by the laser, and show how all requirements can be met for an implementation of a fast phase gate operation.' address: - 'Institut für Quantenoptik und Quanteninformation, Österreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria' - 'Institut für Experimentalphysik, Universität Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria' author: - 'D Heinrich, M Guggemos, M Guevara-Bertsch, M I Hussain, C Roos and R Blatt' title: Ultrafast coherent excitation of a ion --- Introduction {#sec:intro} ============ Trapped ions are a promising system for the implementation of a scalable quantum computer [@Cirac1995; @Kielpinski2002; @Wineland2003; @Garcia-Ripoll2005a; @Haffner2008]. Two-qubit entangling gate operations have been demonstrated [@Schmidt-Kaler2003; @Leibfried2003; @Ballance2016; @Gaebler2016] and combined with single-qubit gates to build an elementary quantum processor [@Schmidt-Kaler2003a; @Schindler2013; @Debnath2016]. The entangling gate operations in these experiments rely on spectroscopically-resolved motional sidebands of the ion crystals, a requirement that limits the duration of a gate operation to more than the period of motion of the ions in the trap (typically a few $\si{\micro\s}$ or more). Overcoming this limitation would advance the development of a scalable quantum computer as it would allow one to increase the number of gate operations (computational steps) that can be completed within the coherence time of the ion-qubits. Two-qubit entangling gate operations in less than one trap period have been proposed by García-Ripoll, Zoller and Cirac in 2003 [@Garcia-Ripoll2003] using counter-propagating laser pulses. Several groups are working on its realization [@Mizrahi2014; @Hussain2016] but so far only single-qubit gate operations [@Madsen2006; @Campbell2010] and single-ion spin-motion entanglement [@Mizrahi2013] have been reported on time scales shorter than the ion oscillation period. Recently, creation of two-qubit entanglement by a train of ultrafast laser pulses within a few microseconds has been demonstrated in the ground-states of a pair of Yb$^+$ ions [@Wong-Campos2017]. Our goal, beyond the scope of this work, is to implement an ultrafast two-qubit phase gate operation [@Garcia-Ripoll2003; @Taylor2017] using resonant, counter-propagating laser pulses and to complete it in less than one trap period. The scheme uses pairs of $\pi$-pulses for applying ion-state dependent momentum kicks to a two-ion crystal. ![ Energy level scheme of showing the levels and transitions relevant to the experiment. Possible transitions are shown with their wavelengths and branching ratios [@Gerritsma2008; @Ramm2013], excited electronic states with their lifetimes [@Jin1993; @Barton2000; @Kreuter2005]. 
The bell curve-shaped bars in the upper-left corner are a representation of the pulsed laser’s spectral modes. []{data-label="fig:levelscheme"}](levelscheme){width="80.00000%"} \[fig:simplelevelscheme\] Figure \[fig:simplelevelscheme\] shows a level scheme of with the levels relevant to the gate operation. We encode each qubit in two Zeeman substates of the ($= {\left|\downarrow\right\rangle}$) and ($= {\left|\uparrow\right\rangle}$) states of a ion [@Schmidt-Kaler2003a]. The laser pulses resonantly excite the transition [@Bentley2015]. Depending on the qubits’ state, each ion will either not interact with the pulses at all (qubit state ${\left|\uparrow\right\rangle}$) or absorb one photon from the first pulse of each pair and emit a photon into the second pulse (qubit state ${\left|\downarrow\right\rangle}$), gaining a momentum of $2 \hbar \overrightarrow{k}$ in the direction of the first pulse, where $\overrightarrow{k}$ is the photons’ wave vector. By subjecting the ion crystal to a sequence of momentum kicks and times of free evolution of the crystal in the trap potential, the ions are forced to follow different, state-dependent trajectories through phase-space. The area enclosed by these trajectories corresponds to the phase the state acquires during the pulse sequence [@Leibfried2003]. When the relative phase between the state-pairs ${\left|00\right\rangle}$, ${\left|11\right\rangle}$ and ${\left|01\right\rangle}$, ${\left|10\right\rangle}$ is $\pi/2$ and the pulse sequence returns both the center-of-mass and the breathing mode of motion [@James1998] of the ion crystal to the initial state, the operation will be the desired geometric phase gate [@Garcia-Ripoll2003]. Both conditions can be met by carefully choosing the duration of the times of free evolution in the pulse sequence. In order to complete such a phase gate within one trap period, the rate $f_\pi$ at which pairs of $\pi$-pulses are applied to the ions must be much larger than the trap frequency $\nu$ [@Bentley2015]. In general, the larger $f_\pi$, the faster the gate operation can be completed. In order for the laser system to be suitable to implement such a fast gate operation, it should satisfy four requirements: (1) The system needs to provide pulsed laser light with a repetition rate much larger than the trap frequency of , in order to provide us with fine-grained control over the timing of pulse sequences. (2) The pulse length $\delta t$ has to be much shorter than the state’s lifetime of [@Jin1993] to avoid or at least minimize spontaneous decay. (3) The center frequency has to be resonant with the transition at and (4) the laser needs to have an intensity such that for every laser pulse $\Omega \, \delta t = \pi$, where $\Omega$ is the corresponding Rabi frequency. Alternatively to (3), non-resonant pulses can be used to apply state-dependent momentum kicks [@Campbell2010]. In order to realize this alternative, we want to be able to tune the laser in between the two fine structure components and such that Stark shifts cancel, while limiting overlap of the laser’s optical spectrum with either of the two transition frequencies. In this paper we describe and characterize a laser system which was designed to generate the light pulses for ultrafast quantum gate operations. Our laser system satisfies requirements (1) to (3), and we will show below, that the intensity of the laser is sufficiently high for generating pulses that flip the state of the ion with probability (requirement 4). 
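The correspondence between the enclosed phase-space area and the acquired phase can be illustrated with a short numerical sketch. The snippet below is only a toy example, not a realistic gate sequence of counter-propagating pulse pairs as in [@Garcia-Ripoll2003]: it assumes a single motional mode in dimensionless units, uses two momentum kicks separated by half a trap period so that the trajectory closes, and evaluates the enclosed area with the shoelace formula; the state-dependent phase discussed above is proportional to this area.

```python
import numpy as np

# Toy illustration (not from this work): single mode, dimensionless units,
# a deliberately simple two-kick sequence; proportionality constants omitted.

def rotate(x, p, phi):
    """Free harmonic evolution by phase phi (rotation in phase space)."""
    return (x * np.cos(phi) + p * np.sin(phi),
            -x * np.sin(phi) + p * np.cos(phi))

dp = 1.0                         # momentum kick (2*hbar*k in scaled units)
traj = [(0.0, 0.0)]              # start at the origin of phase space

x, p = 0.0, dp                   # first kick
traj.append((x, p))
for phi in np.linspace(0.0, np.pi, 200)[1:]:
    traj.append(rotate(x, p, phi))          # free evolution for half a trap period
x, p = traj[-1]
traj.append((x, p + dp))         # second kick closes the trajectory at the origin

# enclosed area via the shoelace formula; the acquired phase is proportional to it
pts = np.array(traj)
xs, ps = pts[:, 0], pts[:, 1]
area = 0.5 * abs(np.dot(xs, np.roll(ps, -1)) - np.dot(ps, np.roll(xs, -1)))
print(f"enclosed area = {area:.4f}  (analytic value pi*dp^2/2 = {np.pi * dp**2 / 2:.4f})")
```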
In order to do that, we present and characterize three methods to extract information on the Rabi frequency and compare them in terms of applicability and prerequisites. The methods allow us to measure the rotation angle per pulse $\theta = \Omega \, \delta t$ and are more accurate than simply deducing the Rabi frequency from a measurement of the laser’s optical intensity at the point of the ion [@Farrell1995; @James1998]. The paper is structured as follows: Section \[sec:lasersystem\] describes our laser setup in detail. In section \[sec:theory\] we introduce two methods that we developed to gain information on $\theta$ by carrying out measurements on a trapped ion, and we compare them to measuring $\theta$ by injecting single pulses between the two Ramsey zones of a Ramsey experiment [@Madsen2006]. Laser System {#sec:lasersystem} ============ Picosecond laser systems with repetition rates on the order of and a center wavelength of can be constructed by frequency-quadrupling the light generated by commercial lasers operating in the L-band of optical fiber communication (). An overview of the optical setup is provided in figure \[fig:setup\]. ![ Schematic setup of our laser system. Panels marked A, B, C, D are discussed in subsections \[sec:seed\], \[sec:manipulation\], \[sec:picking\] and \[sec:shg\], respectively. AWG: arbitrary waveform generator, BD: Beam dump, DM: dichroic mirror, EDFA: erbium-doped fiber amplifier, FC: optical frequency comb, PBS: polarizing beam splitter, PD: photo detector, PPLN: periodically poled lithium niobate, PPKTP: periodically poled potassium titanyl phosphate, SOA: Semiconductor optical amplifier, TOD: third order dispersion compressor. []{data-label="fig:setup"}](optical_setup) The system is seeded by an optical frequency comb, which is described in more detail in subsection \[sec:seed\]. Filter cavities serve to multiply the laser’s repetition rate. Subsequently, the desired laser wavelength range is selected by spectral filters and the laser output is amplified (subsection \[sec:manipulation\]). Fast and slow pulse picking elements enable the selection of arbitrary pulse sequences (subsection \[sec:picking\]). Finally, the laser frequency is quadrupled by two single-pass second-harmonic generation stages (subsection \[sec:shg\]). Measurements of the basic pulse characteristics are presented in subsection \[sec:pulseswitching\]. Seed laser {#sec:seed} ---------- The seed laser is a fiber-based optical frequency comb (Menlo Systems FC1500-250-WG) with a repetition rate of . The mode locked laser creates short pulses of pulse width with a center wavelength of and a spectral bandwidth of . We lock both the carrier envelope offset and the repetition rate to a frequency reference provided by a GPS-disciplined oven-controlled crystal oscillator (Menlo Systems GPS 6-12) with a fractional frequency instability of . Pulse manipulation {#sec:manipulation} ------------------ The pulses produced by the frequency comb need to be amplified and frequency-upconverted in order to create $\pi$-pulses on the ions. Furthermore, the spectral bandwidth should be limited to about ($\overset{\scriptscriptstyle\wedge}{=}\SI{0.5}{\nm}$ at ) to avoid residual off-resonant excitation of the transition, and the center wavelength should be resonant to the transition. In order to have a higher resolution of the pulse timings, we additionally multiply the repetition rate by a factor of 20. 
Erbium doped fiber (pre-)amplifiers (EDFAs) and semiconductor optical amplifiers (SOAs) are used to compensate insertion loss at various stages in the set-up (panels B in figure \[fig:setup\]). After the first preamplifier, the pulse train travels through a stretcher fiber which adds dispersion and stretches the pulses from to for chirped pulse amplification [@Strickland1985]. The pulse train then travels through a spectral filter which selects the new center wavelength of $4 \cdot \SI{393}{\nm} = \SI{1572}{\nm}$ and reduces the spectral bandwidth to . Two filter cavities with a free spectral range of each then increase the repetition rate from to by transmitting only the light’s spectral modes that are apart and suppressing all others. The second cavity’s purpose is to increase the extinction ratio and therefore equalize optical intensity of the output pulses. The subsequent fast pulse picker (bandwidth of ) will be described in more detail in the next subsection. Furthermore, we use a second spectral filter to compensate amplifier-induced frequency shifts [@Agrawal1989] of up to and further limit the bandwidth to . Next, a high power EDFA amplifies the pulse train from to a maximum average power of . A free-space third order dispersion compressor reduces the pulse width to (time bandwidth product 0.53) which is close to the transform limited pulse width of for a Gaussian-shaped pulse of the given bandwidth. Pulse picking {#sec:picking} ------------- In order to select the pulse sequences described in section \[sec:intro\] out of the pulse train, we need an optical element that is able to select pulses at this rate and to withstand up to of laser power after the high power EDFA. To satisfy both requirements, we chose a twofold approach using a fast switching element before the high power EDFA (where the average laser power can be limited to ) and a slow element after the amplifier to create the desired pulse sequences (see panel C in figure \[fig:setup\]). The fast element is a pulse picker (custom-made ModBox by Photline) which contains a Mach-Zehnder interferometer with an electro-optic modulator of bandwidth. Since its maximum optical input power is on the order of a few , we install it before the high power EDFA where the light intensity is sufficiently low. Considering the amplifier’s need to be seeded continuously with a maximally allowed dark time on the order of , we need an additional switching element after the amplifier with a high damage threshold and a switching time of less than . For this we use a Pockels cell (Leysop BBO-3-25-AR790) with a driver (custom-made by Bergmann Meßgeräte Entwicklung) that enables switching of the cell with a rise/fall time of at a maximum repetition rate of and a measured optical extinction ratio of . Both the pulse picker and the Pockels cell are controlled by an arbitrary waveform generator (Tektronix AWG 70002A) with a sample rate of which is synchronized with the seed laser by a RF signal derived directly from the laser’s pulse train. ![ Pulse patterns generated by using either only the pulse picker (top), only the Pockels cell (middle) or both (bottom), measured by detecting the residual light after the PPKTP crystal. The pulses labeled “idle” on both sides of the top panel are required to seed the high-power EDFA. The length of the dark time on either side of the “payload” is determined by two factors: The rise and fall times of the Pockels cell of and the minimum allowed time of between switching the cell on and off – i.e. 
between the start of the rise time and the start of the fall time. The inset in the bottom panel shows the zoomed-in payload signal. Every grid line corresponds to the location of a pulse in the original pulse train. For the reason for the different pulse heights see section \[sec:pulseswitching\] and especially figure \[fig:switch-on\]. []{data-label="fig:pulse_picking"}](pulse_picking){width="80.00000%"} Figure \[fig:pulse\_picking\] shows three oscilloscope traces recording an arbitrary pulse pattern (including a dark time) which was detected by a photodiode. To generate the three traces, either only the pulse picker, only the Pockels cell or both were used. It shows that the pulse picker can switch individual pulses in arbitrary sequences. Frequency up conversion {#sec:shg} ----------------------- ![image](conversion_efficiency_short) The fundamental light is converted to by frequency doubling the light twice in two separate nonlinear crystals (panel D in figure \[fig:setup\]). The first one is an MgO-doped PPLN (periodically-poled lithium niobate) crystal (Covesion MSHG1550-0.5-5), the second crystal is a PPKTP (periodically-poled potassium titanyl phosphate) crystal (custom-made by Raicol Crystals Ltd.). Since the doubling efficiency of the crystals scales with the square of the peak power, the measured extinction ratio of the Pockels cell/PPKTP crystal system is , i.e. almost the square of the Pockels cell extinction ratio. At the current maximum average light power of about and at a repetition rate of (, and therefore four times higher peak power) we can produce () of light, which corresponds to a conversion efficiency of (), and () of light, which corresponds to a conversion efficiency of the second frequency doubling step of (). The maximum total conversion efficiency (from to ) is (). Measurements of the light power and conversion efficiencies for four different repetition rates are presented in figure \[fig:conversion\_efficiency\]. The inset in panel (a) of that figure is a log-log graph of the same data. The graph, together with the line $y \propto x^4$ visualizes the expected fourth-power dependency of the light power on the light power. After each up-conversion of the laser pulses, the remaining fundamental light is split off by a dichroic mirror and in case of the 2nd SHG stage further suppressed by a shortpass filter (Thorlabs FESH0700). Next, the light is coupled into a polarization-maintaining single-mode fiber and sent to the ion trap. After the long fiber, we measure a pulse width of . Finally, a collimator equipped with a lens focuses the laser beam to a waist measured to be $w_0 = \SI{11.8+-0.3}{\micro\m}$ and directs the beam such that the ion is located in the beam waist. Unless stated otherwise, all following measurements and experiments presented in this paper were conducted at a high power EDFA output power of . During times when the Pockels cell was off and the pulses directed into a beam dump, the high power EDFA was seeded with pulses at repetition rate. Pulse switching characteristics {#sec:pulseswitching} ------------------------------- ![image](interferometer_and_ellipses) Our pulse picking scheme described in subsection \[sec:picking\] necessitates blocking the pulses with the fast pulse picker during the rise and fall time of the Pockels cell for typically . This results in an equally long dark time of the amplifiers after the pulse picker. 
We examined the characteristics of the pulses during the first nanoseconds after such dark time by measuring pulse areas and relative phase of the pulses. We characterize the phase shifts of the pulses by interfering them in a Michelson interferometer (see figure \[fig:interferometer\_and\_ellipses\](a)). At the beam splitter the input pulse train is split into two copies, one of which is temporally delayed by one pulse period $\tau_\text{pulse}$ with respect to the other before being recombined. In this way, every pulse $i$ interferes with its successive pulse $i+1$ and the total phase difference $\Delta \Phi_i$ of the two pulses at the output of the interferometer is a function of the phase difference $\Delta \phi_i$ of pulses $i$ and $i+1$, and the path length difference $\Delta x = 2x_2 - 2x_1 \approx c \cdot \tau_\text{pulse}$: $$\Delta \Phi_i = \Delta \phi_i + k \cdot \Delta x,$$ with $x_1$ ($x_2$) the length of interferometer arm 1 (2), $c$ the speed of light and $k$ the length of the wave vector. The interference pulses are detected on a fast photodiode and their pulse areas, which are functions of $\Delta \Phi_i$, extracted by integrating the photodiode’s signal over the pulse length. We can tune $\Delta x$ by manually moving one of the retro-reflecting mirrors which is fixed on a manual translation stage, but we can not deterministically change $\Delta x$ on a sub-wavelength scale which would have allowed to keep track of the changes in $\Delta \Phi_i$ due to changes in $\Delta x$. Nevertheless, we found its fluctuations to be much smaller than the light’s wavelength and consider $\Delta x$ to be constant on time scales of our measurements of about . For each measurement we therefore randomly choose $\Delta x$ and repeat the measurement for different $\Delta x$. Accordingly, the interference pulse areas of each measurement are both random but also correlated in size since $\Delta x$ is random but equal for all pulses in a given measurement, and if the pulse areas of two interference pulses $i$ and $j$ are different, there is also a change $\delta \phi_{i,j} = \Delta \phi_i - \Delta \phi_j$ in their phase differences. In order to determine $\delta \phi_{i,j}$ we repeatedly send a five-pulse pulse train into the interferometer at a rate of and average over 100 consecutive trains. We measure the interference pulse areas for different and random $\Delta x$ by moving the translation stage back and forth on the order of between two 100 pulse trains-long measurements. Figure \[fig:interferometer\_and\_ellipses\](b) shows two example interference signals, taken at . For every measurement at different $\Delta x$ we plot the pulse areas of two interference pulses against each other and present the data in panels (c) through (f) in figure \[fig:interferometer\_and\_ellipses\]. Every abscissa is the area of interference pulse 4; the ordinates are the areas of interference pulses 1 through 3, respectively. The data points fall on a straight line at a angle, if $\delta \phi_{i,j}$ between the two interference pulses is zero or an integer multiple of $2\pi$, and at a angle if $\delta \phi_{i,j} = (2n+1) \pi, n \in \mathbb{Z}$. In general, for any other value of $\delta \phi_{i,j}$ the data points are located on an ellipse. This allows us to extract $\delta \phi_{i,j}$ by fitting an ellipse to the data. 
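As a concrete illustration of this ellipse-based phase extraction, the following sketch generates synthetic interference data and recovers the phase difference from a general conic fit. It is a simplified stand-in for the actual analysis (the true phase is a placeholder and the signals are assumed to be already centred and normalized; measured pulse areas would first have their offset subtracted and be rescaled), and, like the measurement itself, it determines only the magnitude of the phase difference.

```python
import numpy as np

# Synthetic example of extracting delta_phi from "ellipse" data as in
# figure [fig:interferometer_and_ellipses](c)-(f); not the analysis code
# used for the measurements.

rng = np.random.default_rng(1)
delta_true = 0.35 * np.pi                       # placeholder value
Phi = rng.uniform(0.0, 2.0 * np.pi, 300)        # random path-length phase per run
x = np.cos(Phi) + 0.01 * rng.normal(size=Phi.size)
y = np.cos(Phi + delta_true) + 0.01 * rng.normal(size=Phi.size)

# General conic fit A x^2 + B xy + C y^2 + D x + E y + F = 0 via the SVD null space.
M = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
coeffs = np.linalg.svd(M)[2][-1]
if coeffs[0] < 0:                               # fix the overall sign of the null vector
    coeffs = -coeffs
A, B, C = coeffs[:3]

# For pure cosines: x^2 - 2 x y cos(delta) + y^2 = sin^2(delta),
# hence cos(delta) = -B / (2 sqrt(A C)).  arccos only returns |delta| in [0, pi],
# i.e. the sign of the phase difference is not determined (as noted in the text).
delta_fit = np.arccos(np.clip(-B / (2.0 * np.sqrt(A * C)), -1.0, 1.0))
print(f"delta (fit)  = {delta_fit / np.pi:.3f} pi")
print(f"delta (true) = {delta_true / np.pi:.3f} pi")
```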
We see a dependence of $\delta \phi_{i,j}$ on the length of the pulse period $\tau_\text{pulse} = 1 / f_\text{rep}$ if there was a long (here: ) dark time before the first pulse: For the maximum repetition rate of $f_\text{rep} = \SI{5}{\GHz}$ we find that between the first (second) and fourth interference peak (pulses 1&2 (2&3) and 4&5, respectively) $\delta \phi_{1,4} = \SI{0.74}{\pi}$ ($\delta \phi_{2,4} = \SI{0.12}{\pi}$). For later interference peaks ($i \geq 3$) $\delta \phi_{i,4}$ is vanishingly small. As a trend we observe that for larger $\tau_\text{pulse}$, $\delta \phi_{i,j}$ becomes smaller and is vanishing for $\tau_\text{pulse} \geq \SI{800}{\ps} = 1 / \SI{1.25}{\GHz}$. In addition to the phase shift, we observe repetition rate-dependent deviations of the pulse intensity of the first pulses after the dark time. Figure \[fig:switch-on\] shows the pulse areas of the pulses during the first after the dark time. In the case of repetition rate we observe that all pulses from the second onward are weaker by a factor of about 3 with respect to the first pulse. At repetition rate, the intensity of the second pulse decreases by about with respect to the first. At the other repetition rates we do not observe this effect. For all repetition rates we observe an increase in pulse intensity of about for the shown time of of pulses after the possible initial decrease. ![ Pulse areas of pulses during the first after a dark time at repetition rates from . Time axis relative to first pulse. Data points are the average of measurements, error bars are smaller than data point symbols. Average high power EDFA output power for this measurement was . []{data-label="fig:switch-on"}](switch-on_high_power) Both the phase shift and the change in pulse area will be problematic for experiments using coherent manipulations of the qubit. Since for our planned two-ion phase gate every pulse needs to act as a $\pi$-pulse, the pulse area needs to be constant, but due to the way we plan to create the counter-propagating pulse pairs, the phase shift does not: We intent to generate the two pulses of every required $\pi$-pulse pair by splitting a larger-area pulse. Therefore, every pulse pair will consist of identical copies and the two pulses will add up to a $2\pi$-rotation of the Bloch vector around the same axis regardless of phase difference with respect to other pulse pairs. For this reason only the intensity changes should pose an issue for our phase gate, the phase shifts should not. We believe that both effects are caused by a semiconductor optical amplifier (SOA2 in figure \[fig:setup\]) which serves as a preamplifier to the high power EDFA and are due to the finite carrier lifetime of the SOA of . This causes the dynamical behavior of the SOA to depend on the input signal of the past [@Manning1997] and is known as the “pattern effect”[@Rizou2016]. We therefore plan to replace the SOA with another fiber-based preamplifier. Methods to measure the Rabi frequency of a pulsed laser-ion system {#sec:theory} ================================================================== We have employed three different techniques to measure the rotation angle per pulse $\theta = \Omega \, \delta t$. The model for the ion-laser interaction used for simulating our experiments is presented in subsection \[sec:model\]. Subsection \[sec:results\] presents measurements using the three techniques. The first approach uses many pulses – ${\mathcal{O}(\num{e3})}$ – to pump the ion into the metastable state. 
Measuring the population in that state and fitting the model to the data allows $\theta$ to be extracted. Next, we use the same principle but in the regime of single pulses, allowing pulse dynamics and characteristics to be extracted. Finally, the third method is based on preparing the ion in a coherent superposition of the two qubit states. We then use single laser pulses to reduce and restore coherence between the qubit states by transferring population of one of the two states back and forth to a third state. Pulse model for simulation and fits {#sec:model} ----------------------------------- We calculate the interaction of a ion with a pulsed laser field resonant to the ion’s transition (see figure \[fig:levelscheme\]). We consider spontaneous decay of the state into the state with decay rate $\Gamma_\text{PS}$ and into the state with decay rate $\Gamma_\text{PD}$. For the experiments that we describe, the state does not need to be considered as any population in this state is pumped back to via on a timescale of with an laser. Therefore, any decay to effectively becomes a decay to and the modified decay rates are $$\Gamma_\text{PS} = (1 - p_{5/2}) \, \Gamma,$$ $$\Gamma_\text{PD} = p_{5/2} \, \Gamma,$$ where $\Gamma = \frac{1}{\tau} = \frac{1}{\SI{6.924+-0.0019}{\ns}}$ [@Jin1993] and $p_{5/2} = \num{0.0587+-0.0002}$ the probability of to decay to [@Gerritsma2008]. Consequently, we assume a three-level system and identify the state with ${\left|1\right\rangle}$, the state with ${\left|2\right\rangle}$ and the state with ${\left|3\right\rangle}$, as indicated in figure \[fig:levelscheme\]. Furthermore, we assume infinitesimally short pulses and neglect any decay of ${\left|3\right\rangle}$, since the state’s lifetime of [@Kreuter2005] is much longer than the cycle time of our experiments of about . The system’s excited state ${\left|2\right\rangle}$ decays spontaneously with rates $\Gamma_{21} \equiv \Gamma_\text{PS}$ and $\Gamma_{23} \equiv \Gamma_\text{PD}$ into state ${\left|1\right\rangle}$ and ${\left|3\right\rangle}$, respectively. To model the time evolution of the quantum state over one pulse period of duration $\tau_\text{pulse} = 1 / f_\text{rep}$ we start with a rotation of the Bloch vector of the ${\left|1\right\rangle}$ - ${\left|2\right\rangle}$ subsystem around the x-axis to account for the effect of a single pulse, followed by a rotation around the z-axis to account for the detuning during the dark time between two pulses. Next, a rotation of the Bloch vector of the ${\left|1\right\rangle}$ - ${\left|3\right\rangle}$ subsystem around its z-axis takes into consideration an effect of the pulses on this transition’s frequency. Finally, we account for spontaneous decay of state ${\left|2\right\rangle}$ during the pulse period by applying the appropriate Kraus operators to the quantum state. Hence, we start by applying an x-rotation operator $\mathcal{U_R}$ to a given density operator $\rho_0$: $$\rho' = \mathcal{U_R} \rho_0 \mathcal{U_R}^\dag,$$ with $$\mathcal{U_R} = \exp\left(\frac{\mathrm{i}}{2} \ \theta \ ({\left|2\right\rangle}{\left\langle1\right|} + {\left|1\right\rangle}{\left\langle2\right|})\right), \label{eq:R}$$ where $\theta$ is the rotation angle. Next, we allow for a possible detuning $\Delta$ of the laser light with respect to the transition and a shift $\Delta^\prime$ of the state ${\left|3\right\rangle}$ by applying a z-rotation $\mathcal{U_Z}$ to the result. 
Here, the angle of rotation is proportional to $\tau_\text{pulse}$, $\Delta$ and $\Delta^\prime$, respectively: $$\rho'' = \mathcal{U_Z} \rho' \mathcal{U_Z}^\dag,$$ with $$\mathcal{U_Z} = \exp(\mathcal{Z}),$$ $$\mathcal{Z} = \frac{\mathrm{i}}{2} \left[\Delta \ ({\left|1\right\rangle}{\left\langle1\right|} - {\left|2\right\rangle}{\left\langle2\right|}) + \Delta^\prime \ ({\left|1\right\rangle}{\left\langle1\right|} - {\left|3\right\rangle}{\left\langle3\right|})\right] \tau_\text{pulse}.$$ Finally, we calculate the Kraus operators $\mathcal{N}$, $\mathcal{D}$ to allow for spontaneous decay of the excited state ${\left|2\right\rangle}$: $$\mathcal{N} = {\left|1\right\rangle}{\left\langle1\right|} + \sqrt{1 - p - q} \, {\left|2\right\rangle}{\left\langle2\right|} + {\left|3\right\rangle}{\left\langle3\right|},$$ $$\mathcal{D} = \sqrt{p} \, {\left|1\right\rangle}{\left\langle2\right|} + \sqrt{q} \, {\left|3\right\rangle}{\left\langle2\right|},$$ $$\rho''' = \mathcal{N} \rho'' \mathcal{N}^\dag + \mathcal{D} \rho'' \mathcal{D}^\dag,$$ with $$p = 1 - \exp\left(-\Gamma_{21} \, \tau_\text{pulse}\right),$$ $$q = 1 - \exp\left(-\Gamma_{23} \, \tau_\text{pulse}\right).$$ Calculating the decay of the excited state only after applying the x- and z-rotation operators is acceptable, since the pulse length of is very short compared to the state’s lifetime of , and the z-rotation and decay do not influence each other. To find the density operator $\rho_n$ after a train of $n$ pulses we iteratively apply these operators $n$ times $$\rho_n = \mathcal{N} \, \mathcal{U_Z} \, \mathcal{U_R} \, \rho_{n-1} \, \mathcal{U_R}^\dag \, \mathcal{U_Z}^\dag \, \mathcal{N}^\dag + \mathcal{D} \, \mathcal{U_Z} \, \mathcal{U_R} \, \rho_{n-1} \, \mathcal{U_R}^\dag \, \mathcal{U_Z}^\dag \, \mathcal{D}^\dag \label{eq:simulation}$$ and finish the calculation by letting state ${\left|2\right\rangle}$ decay completely. From the resulting density operator we can easily calculate experimentally accessible observables such as populations ${\text{Tr}}({\left|1\right\rangle}{\left\langle1\right|}\rho_n)$, ${\text{Tr}}({\left|3\right\rangle}{\left\langle3\right|}\rho_n)$ and coherences ${\text{Tr}}({\left|1\right\rangle}{\left\langle3\right|}\rho_n)$, ${\text{Tr}}({\left|3\right\rangle}{\left\langle1\right|}\rho_n)$. To account for our observations described in section \[sec:pulseswitching\] we allow the first pulse in a pulse train to have a different (usually higher) peak power and therefore allow for its rotation angle $\theta^\text{1st}$ to be different. Additionally, we allow the first pulse to turn the Bloch vector of the subsystem around an axis in the equatorial plane rotated by $\delta \phi_{1,4}$ from the x-axis to account for its phase offset relative to the other pulses. We therefore replace $\mathcal{R}$ with $\mathcal{R}^\text{1st}$ in equation \[eq:R\]: $$\mathcal{R} \rightarrow \mathcal{R}^\text{1st} = \frac{\mathrm{i}}{2} \, \theta^\text{1st} \left(\cos(\delta \phi_{1,4}) \sigma_x^{12} + \sin(\delta \phi_{1,4}) \sigma_y^{12}\right).$$ Experimental results {#sec:results} -------------------- Experiments are conducted in the same linear Paul trap as described in [@Guggemos2015]. Its trap axis is aligned with the quantization axis defined by a bias magnetic field. Circularly polarized laser pulses that are sent through the holes in the trap’s endcap electrodes therefore couple only pairs of Zeeman states of the ground and excited state. ### Pumping into a dark state with many pulses. 
{#sec:rabi} We send pulse trains of between 500 and 5000 pulses to the Doppler-cooled and optically pumped ion and collect fluorescence photons on the transition in order to determine whether the ion has decayed into the dark state. We repeat the measurement 100 times to statistically determine the probability that the ion has decayed into the state. Due to the large number of pulses, the pulse switching effects described in section \[sec:pulseswitching\] and which are affecting only the first one or two pulses, can be disregarded. For the same reason the experiment does not necessitate the use of a pulse picker that is faster than the fundamental repetition rate. We measure the state population $P_\text{D}$ as a function of the laser detuning for different pulse train lengths. Figure \[fig:rabi\] shows data sets for repetition rates ranging from . Each data set consists of four data sub-sets which differ only in the number of pulses. In order to extract $\theta$ we fit the simulation in equation (\[eq:simulation\]) simultaneously to each of the four sub-sets such that we get a single value for $\theta$. The only other fit parameter is a detuning offset which is eliminated in the figure. For the repetition rate of (, , ) we obtain $\theta = \SI{0.227+-0.012}{\pi}$ (, , ). ![ Probability $P_\text{D}$ to find the ion in the state as a function of the detuning $\Delta$ of the laser frequency with respect to the transition frequency. Blue crosses using pulses, yellow diamonds pulses, green circles pulses, red squares pulses. The lines in each graph are a model fit to the respective data as described in the text, allowing the determination of $\theta$ (a) repetition rate, for which we find $\theta = \SI{0.227+-0.012}{\pi}$. (b) repetition rate, $\theta = \SI{0.323+-0.005}{\pi}$. (c) repetition rate, $\theta = \SI{0.363+-0.005}{\pi}$. (d) repetition rate, $\theta = \SI{0.345+-0.005}{\pi}$. []{data-label="fig:rabi"}](rabi) ### Pumping into a dark state with single pulses. {#sec:singlepulses} ![ Experimental sequence of dark state pumping with single pulses. The ion is Doppler-cooled and prepared in the state. A pulse train of $n$ pulses coherently drives the transition. The pulse train is repeated a total of $m=20$ times with a waiting time $t_\text{w}$ between each two repetitions. We choose $t_\text{w} = \SI{20}{\micro\s} \gg \tau = \SI{6.924}{\ns}$ with $\tau$ the state’s lifetime. Finally, we measure the state probability. []{data-label="fig:single_pulses_sequence"}](single_pulses_sequence) ![ Probability to excite the ion to with different number of pulses and detuning. Left column: experimental data; right column: simulation/fit. 1st row: repetition rate, $\theta^\text{1st} = \SI{0.353+-0.001}{\pi}$, $\theta = \SI{0.195+-0.018}{\pi}$, $\delta \phi_{1,4}=\SI{1.282+-0.001}{\pi}$. 2nd row: repetition rate, $\theta = \SI{0.312+-0.010}{\pi}$, $\delta \phi_{1,4}=\SI{0.361+-0.002}{\pi}$. 3rd row: repetition rate, $\theta = \SI{0.339+-0.007}{\pi}$, $\delta \phi_{1,4}=\SI{0.088+-0.003}{\pi}$. 4th row: repetition rate, $\theta = \SI{0.358+-0.003}{\pi}$, $\delta \phi_{1,4}=\SI{0.051+-0.001}{\pi}$. The asymmetry evident in the first two rows is due to the phase shift of the first pulse with respect to later pulses. []{data-label="fig:singlepulses"}](singlepulses) In order to implement our phase gate scheme we need to ensure that every single pulse is a $\pi$-pulse and it does not suffice to characterize an ensemble of hundreds of pulses. 
By instead sending only $n \leq 12$ pulses at a time to the ion we can gain crucial insights into single pulse dynamics and characteristics. In order to prevent the problem of having to measure very small state populations, we amplify the signal by repeating the pulse train $m = 20$ times as shown in the experimental sequence in figure \[fig:single\_pulses\_sequence\]. Between repetitions, a waiting time $t_\text{w} = \SI{20}{\micro\s}$ much larger than the state’s lifetime $\tau = \SI{6.924}{\ns}$ ensures that any population in that state has decayed and we accumulate population in the state [@Gerritsma2008]. Figure \[fig:singlepulses\] shows the probability for being in the state, inferred from our experimental data and fit/simulation side-by-side. We vary the detuning $\Delta$ and the number of pulses for different repetition rates. Free fit parameters are $\theta$, the phase offset of the first pulse $\delta \phi_{1,4}$ and a detuning offset. Since we know from earlier measurements described in section \[sec:pulseswitching\] that the first pulse in the case of repetition rate has a different area than the other pulses, we allow an additional fit parameter in that case: the rotation angle of the first pulse $\theta^\text{1st}$. All fit values are within three standard deviations of those acquired previously with long pulse trains (section \[sec:rabi\]), as well as those acquired with the Michelson interferometer (section \[sec:pulseswitching\]). Contrary to measuring $\delta \phi_{i,j}$ with the interferometer, this measurement is able to also determine the sign of $\delta \phi_{i,j}$ and therefore to distinguish between phases $\delta \phi_{i,j} = \Phi$ and $\delta \phi_{i,j} = -\Phi$ (the later being the same as $\delta \phi_{i,j} = 2\pi - \Phi$). We therefore assume that $\delta \phi_{1,4}$ at is about and not as the ellipse fit suggested. Furthermore, $\theta^\text{1st}$ is found to be a factor of about $\sqrt{3}$ larger than $\theta$ for the repetition rate, which is the expected amount for a three times larger pulse area (also compare with section \[sec:pulseswitching\]). ### Single pulse with area $\pi$. {#sec:pipulse} In order to check if a single pulse can act as a $\pi$-pulse we repeat the previous experiment with only a single pulse while varying the light power ($n=1$, $m=15$). From the experimentally determined state probability $P_\text{D}$ after the 15 repetitions of a single pulse we calculate the state probability $P_\text{P}$ after only one single pulse using $$P_\text{P} = \frac{1}{0.0587} \left(1 - \left(1 - P_\text{D}\right)^\frac{1}{m}\right).$$ Due to measurement fluctuations, this can sometimes lead to unphysical values of $P_\text{P} > 1$ if we measure a $P_\text{D}$ that happens to be larger than the maximum expectation value of $P_\text{D, max} = 1-(1-0.0587)^{15} \approx 0.60$. Since the excitation probability is a function of the sine squared of the Rabi frequency $\Omega$, we plot $P_\text{P}$ versus the square root of the light power, which is proportional to $\Omega$ (and $\theta$) in figure \[fig:pipulse\]. The data points should therefore follow the curve $$P_\text{P} = P_\text{P, max} \cdot \sin^2(\sqrt{P_\text{light}} / \omega),$$ where $\omega$ is a proportionality factor of dimension and $P_\text{P, max}$ is the maximum probability of a single pulse exciting the ion to the state. A fit of this curve to our data yields $P_\text{P, max} = \SI{96.4+-1.9}{\%}$, showing how close we are to achieving single pulse $\pi$-pulses. 
Please note that the maximum light power plotted in figure \[fig:conversion\_efficiency\] (e) () and figure \[fig:pipulse\] ($(\SI{7.6}{\sqrt{\mW}})^2 \approx \SI{58}{\mW}$) was measured at the same fundamental light power. The discrepancy is due to coupling losses in the fiber guiding the light to the ion. ![ Probability to excite the ion to with a single pulse as a function of light power. $P_\text{P, max}$ is the maximum excitation probability to the state and determined by a fit to the data. Measurement noise can cause unphysical values of $P_\text{P} > 1$ (see text). []{data-label="fig:pipulse"}](pi_pulse){width="80.00000%"} ### Ramsey contrast decay and revival. {#sec:ramsey} In experiments using coherent manipulations of the qubit it is important to know the effect of the manipulation on the relative phase of the two qubit states. The experiments of the previous three subsections used only one of the two qubit states and the relative phase appeared only as an unobservable global phase. Here however, the second qubit state serves as a phase reference, giving us access to the desired information. We let single pulses interact with the ion during the waiting time of a Ramsey experiment and monitor the coherence at the end of the Ramsey sequence in the following way: We start by bringing the ion into a coherent superposition of two states by a $\frac{\pi}{2}$-pulse on the transition with a laser as illustrated in figure \[fig:ramsey\_sequence\]: ![ Sequence of a Ramsey contrast decay and revival experiment. The ion is Doppler-cooled and prepared in the state. The first laser pulse creates a coherent superposition of the ion’s internal and states. A controlled and variable number of laser pulses act on the remaining state population, transferring population to the state and possibly back, potentially reducing the S-D coherence. We finally analyze how much coherence remains by varying the phase of the second laser pulse and measuring the state probability, while keeping the Ramsey time $t_R = \SI{10}{\micro\s}$ much shorter than the coherence time of the superposition (${\mathcal{O}(\SI{1}{\ms})}$). []{data-label="fig:ramsey_sequence"}](ramsey_sequence) $${\left|\psi\right\rangle} = \frac{1}{\sqrt{2}} \left({\left|1\right\rangle} + {\left|3\right\rangle}\right) \equiv \frac{1}{\sqrt{2}} \left({\left|{\text{4S$_{1/2}$}}\right\rangle} + {\left|{\text{3D$_{5/2}$}}\right\rangle}\right)$$ We then use single pulses of our laser system to coherently drive the transition. If there is any population remaining in the state after the pulses, this part of the population will undergo spontaneous decay and thus destroy the coherence. We finally analyze how much coherence of the original state remains and also extract the phase between the two states. In mathematical terms, let $\rho$ be the density matrix describing our system after the pulses and $\sigma_x^\text{SD}$, $\sigma_y^\text{SD}$ the Pauli matrices acting only on the states ${\left|{\text{4S$_{1/2}$}}\right\rangle}$ and ${\left|{\text{3D$_{5/2}$}}\right\rangle}$ of the three level system. The expected contrast $C$ and phase $\Phi$ can then be written as $$C = \sqrt{{\text{Tr}}^2(\sigma_x^\text{SD} \rho) + {\text{Tr}}^2(\sigma_y^\text{SD} \rho)}, \label{eq:contrast}$$ $$\Phi = \arg\left[{\text{Tr}}(\sigma_x^\text{SD} \rho) + i {\text{Tr}}(\sigma_y^\text{SD} \rho)\right]. 
\label{eq:phase}$$ Measuring $P_\text{D$_{5/2}$}$ as a function of the phase $\phi'$ of the second $\frac{\pi}{2}$-pulse allows us to fit a sinusoidal curve to the data and extract contrast and phase from the fit. If the contrast is equal to one, the ion was still in a fully coherent superposition of the and states and there was no spontaneous decay of the state. If the contrast is zero, all the initial state population had been transferred to the state and the inevitable spontaneous decay had destroyed all coherence. The phase, on the other hand, is expected to undergo jumps of value $\pi$ each time the ion’s state population is transferred to the state and back again to the state by the pulse train. This is due to the fact that a two-level system picks up a sign after a $2 \pi$ rotation ${\left|0\right\rangle} \overset{2 \pi}{\longrightarrow} -{\left|0\right\rangle}$. ![image](ramseys){width="80.00000%"} We repeat the measurement as a function of the number $n$ of laser pulses. Figure \[fig:ramseys\] (a) shows a data set where we plot the remaining coherence and phase against the number of pulses. For this set it took about 15 optical pulses at repetition rate to complete a $\pi$-pulse, which corresponds to $\theta \approx \SI{0.067}{\pi}$. Nevertheless, one can already see that we can drive the transition coherently and that each time we return to the state, the data are consistent with the observation of phase jumps by $\pi$. In figure \[fig:ramseys\] (b) we plot the same quantities of another data set taken at the same laser power as the measurements presented in the other subsections. We can observe the same structure as in the first data set but we need only about 2.5 optical pulses to complete one $\pi$-pulse on the transition. A fit of the simulation of the ion-light interaction to the experimental data allows us to extract $\theta = \SI{0.389+-0.005}{\pi}$.

Discussion {#sec:discussion}
----------

As shown above, we were able to deduce the rotation angle per pulse $\theta$ by letting an ion interact with hundreds of consecutive pulses and afterwards measuring the probability of finding the ion in the state. Using only single pulses allowed us to measure not only $\theta$ again, but also the phase offset of the first pulse after a dark time, i.e. a pause in the pulse train. By injecting single pulses into the waiting time between the two $\pi/2$-pulses of a Ramsey experiment, we were able to gain information on how the pulses influence the phase of the ${\left|{\text{4S$_{1/2}$}}\right\rangle}$ state as well as measure $\theta$.

\begin{tabular}{clcc}
 & Experiment & $\theta$ ($\pi$) & $\delta \phi_{1,4}$ ($\pi$)\\
\hline
 & interferometer & & 0.74 or 1.26\\
 & many pulses & $0.227 \pm 0.012$ & \\
 & single pulses & $0.195 \pm 0.018$ & $1.282 \pm 0.001$\\
\hline
 & interferometer & & 0.37\\
 & many pulses & $0.323 \pm 0.005$ & \\
 & single pulses & $0.312 \pm 0.010$ & $0.361 \pm 0.002$\\
\hline
 & interferometer & & 0.10\\
 & many pulses & $0.363 \pm 0.005$ & \\
 & single pulses & $0.339 \pm 0.007$ & $0.088 \pm 0.003$\\
\hline
 & interferometer & & 0.04\\
 & many pulses & $0.345 \pm 0.005$ & \\
 & single pulses & $0.358 \pm 0.003$ & $0.051 \pm 0.001$\\
\hline
 & Ramsey exp. & $0.377 \pm 0.005$ & \\
\end{tabular}

We now have three reliable ways to measure $\theta$ and the phase offset of the first pulses after a dark time. The methods produce the same results within their respective error margins but each has its advantages and disadvantages as described below. Table \[tab:fitsummary\] summarizes our results.
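All of these fits are based on the single-pulse map of subsection \[sec:model\]. The following minimal sketch implements equation (\[eq:simulation\]) for the three-level system and returns the $\text{D}_{5/2}$ population detected in the experiments; it is an illustration only (the rotation angle, detunings and pulse number are placeholder values), not the fitting code used for the numbers quoted above. The lifetime and branching ratio are those given in the text.

```python
import numpy as np

# Minimal sketch of the pulse-train model of subsection [sec:model];
# basis |1> = S, |2> = P, |3> = D.  Parameter values below are illustrative.

tau_P = 6.924e-9                  # P-state lifetime (s), from the text
Gamma = 1.0 / tau_P
p52   = 0.0587                    # branching ratio P -> D, from the text
G21   = (1.0 - p52) * Gamma       # effective P -> S decay rate
G23   = p52 * Gamma               # P -> D decay rate

def evolve(theta, n, f_rep, Delta=0.0, Delta_p=0.0):
    """Density matrix after n pulses, following equation (eq:simulation)."""
    tau = 1.0 / f_rep
    c, s = np.cos(theta / 2.0), 1j * np.sin(theta / 2.0)
    UR = np.array([[c, s, 0], [s, c, 0], [0, 0, 1]], dtype=complex)   # x-rotation per pulse
    UZ = np.diag(np.exp(0.5j * tau * np.array([Delta + Delta_p, -Delta, -Delta_p])))
    p = 1.0 - np.exp(-G21 * tau)
    q = 1.0 - np.exp(-G23 * tau)
    N = np.diag([1.0, np.sqrt(1.0 - p - q), 1.0]).astype(complex)     # Kraus operators
    D = np.zeros((3, 3), dtype=complex)
    D[0, 1], D[2, 1] = np.sqrt(p), np.sqrt(q)
    rho = np.zeros((3, 3), dtype=complex)
    rho[0, 0] = 1.0                                                   # start in |1> = S
    for _ in range(n):
        U = UZ @ UR
        rho = (N @ U @ rho @ U.conj().T @ N.conj().T
               + D @ U @ rho @ U.conj().T @ D.conj().T)
    # finally let any remaining P population decay completely (branching G21 : G23)
    P_pop = rho[1, 1].real
    rho[0, 0] += (G21 / Gamma) * P_pop
    rho[2, 2] += (G23 / Gamma) * P_pop
    rho[1, 1] = 0.0
    rho[0, 1] = rho[1, 0] = rho[1, 2] = rho[2, 1] = 0.0
    return rho

rho = evolve(theta=0.35 * np.pi, n=2000, f_rep=5e9)   # placeholder inputs
print(f"P_D after 2000 pulses: {rho[2, 2].real:.3f}")
```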
Using many pulses to pump into a dark state allows us to measure the laser-ion system’s rotation angle per pulse $\theta$ and requires neither the ability to pick single pulses nor a laser resonant to the , , linewidth, quadrupole transition. We therefore only need additional lasers that are readily available and only need to be stabilized to a linewidth of $\lesssim\SI{1}{\MHz}$, which can be achieved easily. The only parameters that need to be known to fit the data are the number of pulses $n$ and the well-known decay rates $\Gamma_\text{PS}$ and $\Gamma_\text{PD}$. From the fit we can extract $\theta$ and also the detuning between the transition frequency and a laser mode. This allows us to tune the laser into resonance with the transition. The method works well for any laser repetition rate $f_\text{rep}$ as long as it is much larger than the transition’s linewidth. Also, the rotation angle should satisfy $\theta \leq \pi$, since rotation angles $\theta' = \pi + \delta$ cannot be discerned from $\theta = \pi - \delta$. By using only single pulses to pump into a dark state (method 2) we can observe single pulse dynamics and gain additional information about the pulse characteristics, such as changing phase shifts and pulse powers. With these measurements we are able to reproduce results obtained both with long pulse trains and with the Michelson interferometer. The Ramsey contrast technique (method 3) is experimentally more challenging than the previous procedures. We need to be able to create a coherent superposition of the and states, which requires a few- linewidth laser. It is also necessary that any AC Stark shift has been canceled; otherwise it is not possible to transfer all state population to the state as required. The advantage is that we can immediately see if we have reached our goal and if one pulse suffices to do a $\pi$-pulse on the ion. By fitting an empirical model to the data we can also learn how many pulses are currently needed to do a $\pi$-pulse, if the power is not yet sufficient. Furthermore, the method allows one to track the phase of the population in the state because the state population serves as a phase reference. A fourth method [@Zanon-Willette2011] exists that we have not pursued systematically. It is based on measuring the fluorescence rate of the ion while it is interacting with a continuous pulse train. While it is experimentally easy to conduct, it requires precise knowledge of the detector’s total efficiency. As stated before, almost all experiments presented in this work were conducted at an average output power of the high power EDFA of which corresponds to light power. Nevertheless, we are able to increase the light power to $\gtrsim \SI{100}{\mW}$ as seen in figure \[fig:conversion\_efficiency\], which suffices to create pulses that act as $\pi$-pulses with probability.

Conclusion {#sec:conclusion}
==========

In summary, we have designed, set up and characterized a high repetition rate laser, derived from a stabilized optical frequency comb, which is suitable for the implementation of ultrafast quantum gate operations with trapped ions. We amplify the light at and shift the wavelength via cascaded SHG to , resonant to the transition in . We have demonstrated that we can pick arbitrary pulse sequences out of the pulse train and that our laser can coherently drive the transition in .
We have developed and applied three different techniques to measure the rotation angle per pulse $\theta = \Omega \, \delta t$ of a pulsed laser-ion system and shown that we can create approximate $\pi$-pulses on the transition with only a single optical pulse: a key-requisite to implementing a resonant, ultrafast, two-qubit phase gate operation. References {#references .unnumbered} ========== [10]{} url \#1[[\#1]{}]{}urlprefix\[2\]\[\][[\#2](#2)]{} Cirac J I and Zoller P 1995 [*Phys. Rev. Lett.*]{} [**74**]{} 4091–4094 Kielpinski D, Monroe C and Wineland D J 2002 [*Nature*]{} [**417**]{} 709–711 Wineland D J, Barrett M, Britton J, Chiaverini J, DeMarco B, Itano W M, Jelenkovic B, Langer C, Leibfried D, Meyer V, Rosenband T and Schatz T 2003 [*Philos. Trans. R. Soc. A Math. Phys. Eng. Sci.*]{} [**361**]{} 1349–1361 Garc[í]{}a-Ripoll J J, Zoller P and Cirac J I 2005 [*J. Phys. B At. Mol. Opt. Phys.*]{} [**38**]{} S567–S578 H[ä]{}ffner H, Roos C and Blatt R 2008 [*Phys. Rep.*]{} [**469**]{} 155–203 Schmidt-Kaler F, H[ä]{}ffner H, Riebe M, Gulde S, Lancaster G P T, Deuschle T, Becher C, Roos C F, Eschner J and Blatt R 2003 [*Nature*]{} [**422**]{} 408–411 Leibfried D, DeMarco B, Meyer V, Lucas D, Barrett M, Britton J, Itano W M, Jelenkovi[ć]{} B, Langer C, Rosenband T and Wineland D J 2003 [ *Nature*]{} [**422**]{} 412–415 Ballance C J, Harty T P, Linke N M, Sepiol M A and Lucas D M 2016 [*Phys. Rev. Lett.*]{} [**117**]{} 060504 Gaebler J P, Tan T R, Lin Y, Wan Y, Bowler R, Keith A C, Glancy S, Coakley K, Knill E, Leibfried D and Wineland D J 2016 [*Phys. Rev. Lett.*]{} [**117**]{} 060505 Schmidt-Kaler F, H[ä]{}ffner H, Gulde S, Riebe M, Lancaster G P T, Deuschle T, Becher C, H[ä]{}nsel W, Eschner J, Roos C F and Blatt R 2003 [*Appl. Phys. B*]{} [**77**]{} 789–796 Schindler P, Nigg D, Monz T, Barreiro J T, Martinez E, Wang S X, Quint S, Brandl M F, Nebendahl V, Roos C F, Chwalla M, Hennrich M and Blatt R 2013 [*New J. Phys.*]{} [**15**]{} 123012 Debnath S, Linke N M, Figgatt C, Landsman K A, Wright K and Monroe C 2016 [ *Nature*]{} [**536**]{} 63–66 Garc[í]{}a-Ripoll J J, Zoller P and Cirac J I 2003 [*Phys. Rev. Lett.*]{} [**91**]{} 157901 Mizrahi J, Neyenhuis B, Johnson K G, Campbell W C, Senko C, Hayes D and Monroe C 2014 [*Appl. Phys. B*]{} [**114**]{} 45–61 Hussain M I, Petrasiunas M J, Bentley C D B, Taylor R L, Carvalho A R R, Hope J J, Streed E W, Lobino M and Kielpinski D 2016 [*Opt. Express*]{} [**24**]{} 16638 Madsen M J, Moehring D L, Maunz P, Kohn R N, Duan L M and Monroe C 2006 [ *Phys. Rev. Lett.*]{} [**97**]{} 040505 Campbell W C, Mizrahi J, Quraishi Q, Senko C, Hayes D, Hucul D, Matsukevich D N, Maunz P and Monroe C 2010 [*Phys. Rev. Lett.*]{} [**105**]{} 090502 Mizrahi J, Senko C, Neyenhuis B, Johnson K G, Campbell W C, Conover C W S and Monroe C 2013 [*Phys. Rev. Lett.*]{} [**110**]{} 203001 Wong-Campos J D, Moses S A, Johnson K G and Monroe C 2017 [*Phys. Rev. Lett.*]{} [**119**]{} 230501 Gerritsma R, Kirchmair G, Z[ä]{}hringer F, Benhelm J, Blatt R and Roos C F 2008 [*Eur. Phys. J. D*]{} [**50**]{} 13–19 Ramm M, Pruttivarasin T, Kokish M, Talukdar I and H[ä]{}ffner H 2013 [ *Phys. Rev. Lett.*]{} [**111**]{} 023004 Jin J and Church D A 1993 [*Phys. Rev. Lett.*]{} [**70**]{} 3213–3216 Barton P A, Donald C J S, Lucas D M, Stevens D A, Steane A M and Stacey D N 2000 [*Phys. Rev. A*]{} [**62**]{} 032503 Kreuter A, Becher C, Lancaster G P T, Mundt A B, Russo C, H[ä]{}ffner H, Roos C, H[ä]{}nsel W, Schmidt-Kaler F, Blatt R and Safronova M S 2005 [*Phys. Rev. 
A*]{} [**71**]{} 032504 Taylor R L, Bentley C D B, Pedernales J S, Lamata L, Solano E, Carvalho A R R and Hope J J 2017 [*Sci. Rep.*]{} [**7**]{} 46197 Bentley C D B, Carvalho A R R and Hope J J 2015 [*New J. Phys.*]{} [**17**]{} 103025 James D F V 1998 [*Appl. Phys. B Lasers Opt.*]{} [**66**]{} 181–190 Farrell P M and MacGillivray W R 1995 [*J. Phys. A. Math. Gen.*]{} [**28**]{} 209–221 Strickland D and Mourou G 1985 [*Opt. Commun.*]{} [**56**]{} 219–221 Agrawal G P and Olsson N A 1989 [*IEEE J. Quantum Electron.*]{} [**25**]{} 2297–2306 Manning R J, Ellis A D, Poustie A J and Blow K J 1997 [*J. Opt. Soc. Am. B*]{} [**14**]{} 3204 Rizou Z V, Zoiros K E and Connelly M J 2016 [*J. Eng. Sci. Technol. Rev.*]{} [**9**]{} 198–201 Guggemos M, Heinrich D, Herrera-Sancho O A, Blatt R and Roos C F 2015 [*New J. Phys.*]{} [**17**]{} 103001 Zanon-Willette T, de Clercq E and Arimondo E 2011 [*Phys. Rev. A*]{} [**84**]{} 062502
Deep inelastic sum rules at the boundaries between perturbative and non-perturbative QCD

[**A.L. Kataev**]{}[^1]\
Institute for Nuclear Research of the Russian Academy of Sciences,\
117312 Moscow, Russia

[**Abstract**]{} The basis of renormalon calculus is briefly discussed. This method is applied to study the QCD predictions for three different sum rules of deep-inelastic scattering, namely for the Gross–Llewellyn Smith, Bjorken polarized and unpolarized sum rules. It is shown that the renormalon structures of these a priori different physical quantities are closely related. These properties give us a hint that the theoretical expressions of these three sum rules are similar both in the perturbative and non-perturbative sectors. Some phenomenological consequences of the new relations are discussed.

PACS: 12.38.Bx; 12.38.Cy; 13.85.Hd

[*Keywords: Perturbation theory; Renormalons; Deep-inelastic scattering sum rules*]{}

Introduction
============

The formulation of the quantitative method of renormalon calculus at a higher level of understanding takes its start from the important works of Refs. [@Zakharov:1992bx; @Mueller], devoted to the consideration of the $e^+e^-\rightarrow{\rm hadrons}$ process, and from the interesting work of Ref. [@Mueller:1993pa], devoted to the consideration of deep-inelastic scattering processes. After these studies a number of theoretical and practical developments appeared in the literature (see reviews of Refs. [@Beneke:1998ui]–[@Altarelli:1995kz]). In what is discussed below, we will consider aspects of renormalon calculus related to deep-inelastic scattering (DIS) sum rules. It is commonly expected that in the canonical renormalization schemes, say the $\overline{\rm MS}$ scheme, perturbative expansions in the small QCD coupling constant $a_s=\alpha_s/(4\pi)$ of theoretical expressions for physical quantities, defined in the Euclidean region, are asymptotic ones. This means that the difference of the total sums $$D(a)=1+\sum_{n\geq 1}d_na^n$$ and their finite sums $$D_{\rm k}(a)=1+\sum_{n=1}^{\rm{k}}d_na^{n}$$ satisfies the following property $${\rm lim}_{a\rightarrow 0}\bigg|\frac{D(a)-D_{\rm{k}}(a)}{a^{\rm k}}\bigg| \rightarrow 0~~~.$$ In other words, the difference between the total series and its finite sum is expressed as $$D(a)-D_{\rm k}(a)=O(a^{\rm{k}+1})~~~.$$ In this case the error of the truncation of the asymptotic series can be estimated by the last term of $D_{\rm{k}}(a)$, namely $d_{\rm{k}}a^{\rm{k}}$ [@Dingle]. In QCD one expects that in the $\overline{\rm MS}$ scheme the coefficient function for DIS sum rules, normalized to unity, can be approximated by the following asymptotic series [@Beneke:1998eq]: $$D(a_s)=1+\sum_{\rm {k}\geq 1}(\beta_0)^{\rm k} {\rm k}! \bigg(K_D^{\rm {UV}} (-1)^{\rm k}{\rm {k}}^a+K_D^{\rm{IR}}{\rm k}^b\bigg) a_s^{\rm{k+1}}~, \label{asymptotics}$$ where the sign-alternating series with coefficient $K_D^{\rm{UV}}$ is generated by the ultraviolet renormalons (UVR), the sign-constant asymptotic series with coefficient $K_D^{\rm{IR}}$ results from the consideration of infrared renormalons (IRR), and $a$ and $b$ are known numbers that depend on the ratio of the first two coefficients of the QCD $\beta$-function. Working within renormalon calculus we will demonstrate that the perturbative and non-perturbative contributions to certain DIS sum rules are related. In other words, we will show that the renormalon approach is working at the boundaries between these two regimes in QCD.
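The factorial growth in Eq. (\[asymptotics\]) can be made tangible with a short numerical illustration. The snippet below uses placeholder inputs ($\alpha_s=0.3$, $N_f=4$) and keeps only the factorial core of the series, dropping the $k^a$ and $k^b$ prefactors: the terms first decrease, reach a minimum near $k\sim 1/(\beta_0 a_s)$, and then grow without bound, which is the practical signature of an asymptotic series, with the minimal term setting the irreducible truncation ambiguity.

```python
import numpy as np
from math import factorial

# Illustration only (placeholder values, not from the article): the factorial
# core r_k = beta0^k * k! * a_s^(k+1) of the asymptotic series.

alpha_s = 0.3
a_s = alpha_s / (4.0 * np.pi)
beta0 = (11.0 / 3.0) * 3.0 - (4.0 / 3.0) * 0.5 * 4.0   # (11/3)C_A - (4/3)T_f N_f, N_f = 4

terms = np.array([beta0**k * factorial(k) * a_s**(k + 1) for k in range(1, 16)])
k_min = int(np.argmin(terms)) + 1

for k, t in enumerate(terms, start=1):
    print(f"k = {k:2d}   term = {t:.3e}")
print(f"smallest term at k = {k_min}, close to 1/(beta0*a_s) = {1.0 / (beta0 * a_s):.1f}")
# Beyond k_min the terms grow again; the size of the minimal term estimates the
# irreducible ambiguity of the truncated perturbative series.
```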
The aim of this article is threefold:

- to explain the basic stages of renormalon calculus in QCD, using simple language;

- to show that the asymptotic perturbative expansions of three DIS sum rules, namely the Gross–Llewellyn Smith (GLS), Bjorken polarized (Bjp) and Bjorken unpolarized (Bjunp) sum rules, may be universal. We will present arguments, based on the consideration of the results given in Refs. [@Broadhurst:1993ru],[@Broadhurst:2002bi], that these expansions are defined by the poles in the closely related Borel images of all three sum rules.

- to explain the features which follow from the consideration of the IRR poles in the Borel images of the three DIS sum rules. Moreover, our aim is to outline new consequences of the IRR calculus. They indicate the existence of relations between the twist-4 $1/Q^2$ non-perturbative contributions to the sum rules we are interested in [@Kataev:2005ci]. These results form the basis of the new QCD relations between the theoretical expressions for these three sum rules [@Kataev:2005ci], which seem to be supported by the experimental data within existing error bars. More critical tests of these relations are proposed.

Renormalon calculus and DIS sum rules
=====================================

Let us first express a perturbative QCD series in terms of a Borel integral as $$\begin{aligned} D(a_s)&=&\sum_{n=0}^{\infty}d_n a_s^{n+1} \\ \nonumber &=& \sum_{n=0}^{\infty} \frac{d_n}{\beta_0^{n+1}\,\Gamma(n+1)}\int_0^{\infty} {\rm exp}(-\delta/\beta_0 a_s)\,\delta^{n}\,d\delta \\ \nonumber &=& \int_0^{\infty} {\rm exp}(-\delta/\beta_0 a_s) \sum_{n=0}^{\infty}\frac{d_n}{\beta_0^{n+1}}\frac{\delta^{n}}{n !}\,d\delta \\ \label{Borel} &=& \int_0^{\infty} {\rm exp} (-\delta/\beta_0 a_s)\, B[D]({\delta})\,d\delta~,\end{aligned}$$ where $\beta_0=(11/3)C_A-(4/3)T_fN_f$ is the first coefficient of the QCD $\beta$-function, with $C_A=3$, $T_f=1/2$, $B[D](\delta)=\sum_{n\geq 0}d_n\,\delta^{n}/(\beta_0^{n+1}\,n!)$ is the Borel image of the series, and we have used $\int_0^{\infty}{\rm exp}(-\delta/\beta_0 a_s)\,\delta^{n}\,d\delta=\Gamma(n+1)\,(\beta_0 a_s)^{n+1}$. At this stage we define the DIS sum rules we will be interested in. The GLS sum rule of $\nu N$ DIS [@Gross:1969jf] has the following form $$\begin{aligned} \nonumber {\rm GLS}(Q^2)&=&\frac{1}{2}\int_0^1dx\bigg[F_3^{\nu n}(x,Q^2)+ F_3^{\nu p}(x,Q^2)\bigg] \\ &=&3 C_{\rm GLS}(Q^2)-\frac{\langle\langle O_1 \rangle\rangle}{Q^2}- O\bigg(\frac{1}{Q^4}\bigg)~~. \end{aligned}$$ In the Born approximation, this “measures” the number of valence quarks contained in the nucleon and can thus be considered as the [**baryon sum rule**]{}. In the $\overline{\rm MS}$ scheme, the twist-2 perturbative coefficient function $C_{\rm GLS}(Q^2)$ is calculated explicitly, including $a_s^2$ and $a_s^3$ terms [@Gorishnii:1985xm],[@Larin:1991tj]. The twist-4 matrix element of the $O(1/Q^2)$ non-perturbative contribution to the GLS sum rule is related to the matrix element calculated in Ref. [@Shuryak:1981kj] to be $$\langle\langle O_1 \rangle\rangle= \frac{8}{27}\langle\langle O^{\rm S}\rangle\rangle~~,$$ where $\langle\langle O^{\rm S}\rangle\rangle$ is defined by the following operator $$O_{\mu}= \overline{u}\tilde{G}_{\mu\nu}\gamma_{\nu}\gamma_5u+ (u\rightarrow d)~~,$$ where $$\label{def1} \tilde{G}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}G_{\alpha\beta}^a \frac{\lambda^a}{2}$$ and $$\langle P|O_{\mu}^{\rm S}|P\rangle =2p_{\mu}\langle\langle O^{\rm S} \rangle\rangle~~~.$$ The second sum rule, actively studied both in theory and experiment, is the Bjp sum rule [@Bjorken:1966jh], having the physical meaning of a [**polarized isospin sum rule**]{}.
Its theoretical expression can be defined as $$\begin{aligned} \nonumber {\rm Bjp}(Q^2)&=&\int_0^1dx\bigg[g_1^{lp}(x,Q^2)-g_1^{ln}(x,Q^2)\bigg] \\ &=& \frac{g_A}{6} C_{\rm Bjp}(Q^2)-\frac{\langle\langle O_{2} \rangle\rangle} {Q^2}-O\bigg(\frac{1}{Q^4}\bigg)~~.\end{aligned}$$ Here $g_A=1.26$ is the known $\beta$-decay constant. At the $a_s^3$ level its perturbative part differs from the one of the GLS sum rule by the absence of small “light-by-light”-type terms, proportional to the colour structure $d^{abc}d^{abc}$ [@Larin:1991tj]. The structure of the matrix element of the leading $O(1/Q^2)$ power correction was analytically calculated in Ref. [@Shuryak:1981pi], with useful corrections from the considerations of Ref. [@Ji:1993sv]. The final expressions are presented in a simple form in the review of Ref. [@Hinchliffe:1996hc], from which we can get: $$\langle \langle O_2\rangle\rangle =\frac{1}{6}\frac{8}{9} \bigg[ \langle\langle U^{\rm NS}\rangle\rangle -\frac{M_N^2}{4} \langle\langle V^{\rm NS}\rangle \rangle\bigg] ~~,$$ where $$\begin{aligned} \nonumber \langle P,S|U_{\mu}^{\rm NS}|P,S\rangle &=& 2M_N S_{\mu}\langle\langle U^{\rm NS} \rangle\rangle \\ \langle P,S|V_{\mu\nu,\sigma}^{\rm NS}|P,S\rangle &=& 2M_N\langle\langle V^{\rm NS} \rangle\rangle\{(S_{\mu}P_{\nu}-S_{\nu}P_{\mu})P_{\delta} \}_{S\{\nu,\sigma\}}\end{aligned}$$ and $\langle\langle U^{\rm NS} \rangle\rangle$ and $\langle\langle V^{\rm NS} \rangle\rangle$ are the reduced matrix elements of the local operators from Ref. [@Shuryak:1981pi], namely $$\begin{aligned} \nonumber U_{\mu}^{\rm NS}&=&g_s\big[\overline{u}\tilde{G}_{\mu,\nu}\gamma^{\nu}u -(u\rightarrow d)\big] \\ V^{\rm NS}_{\mu\nu,\sigma}&=&g_s\{\overline{u}\tilde{G}_{\mu\nu}\gamma_{\delta} u - (u\rightarrow d)\}_{S\{\nu,\delta\}}~~~,\end{aligned}$$ where $S\{\nu,\sigma\}$ stands for symmetrization over the given subscripts and $\tilde{G}_{\mu,\nu}$ is defined in Eq. (\[def1\]). In Ref. [@Balitsky:1989jb] the definition of Eq. (14) was used for the estimates of $O(1/Q^2)$ corrections to the Bjp sum rule, using the three-point function QCD sum rules technique. These calculations were then re-analyzed with the same method in Ref. [@Ross:1993gb]. The numerical results of these calculations will be discussed later. In the work of Ref. [@Stein:1994zk] a similar analysis was done with the help of the same method for the first term in the r.h.s. of Eq.
(14), while the term proportional to $(M_N^2/4)\langle\langle V^{\rm NS}\rangle\rangle$ was included into an $O(M_N^2/Q^2)$ kinematical power correction to the Bjp sum rule, which involves the second ($x^2$-weighted) moment of the leading-twist contribution to $g_1^{p-n}=g_1^p-g_1^n$ and the twist-3 matrix element, defined through the combination of $x^2$-weighted moments of the difference of structure functions $ g_1^{p-n}$ and $g_2^{p-n}=g_2^{p}(x,Q^2)-g_2^{n}(x,Q^2)$ as $$\label{d2} d_2^{p-n}=\int_0^1dx\, x^2\bigg(2g_1^{p-n}(x,Q^2)+3g_2^{p-n}(x,Q^2)\bigg)~~~.$$ Taking into account this decomposition, it is possible to rewrite the theoretical expression for the numerator of the $1/Q^2$ contribution in the way it was done, say, in the most recent experimental work of Ref.[@Deur:2004ti]: $$\mu_4^{p-n}=\frac{M_N^2}{9}\big(a_2^{p-n}+4d_2^{p-n}+4f_2^{p-n}\big)~~,$$ where $$a_2^{p-n}=\int_0^1dx\, x^2\big[g_1^{p}(x,Q^2)-g_1^{n}(x,Q^2)\big]$$ is the target mass correction and $$2M_N^2f_2^{p-n}S_{\mu}=-4M_N S_{\mu}\langle\langle U^{\rm NS} \rangle\rangle$$ is the twist-4 contribution, which is related to the definition used by us as $$\langle\langle O_2 \rangle\rangle = \frac{1}{6}\frac{8}{9}\langle\langle U^{\rm NS}\rangle\rangle=-\frac{1}{6}\frac{4}{9}M_N^2f_2^{p-n} ~~.$$ In other words we have the following relation $$M_N^2f_2^{p-n}=-2{\langle\langle U^{\rm NS}\rangle\rangle}~~.$$ It should be stressed that in the region where perturbation theory works well enough and the application of the operator-product expansion method is valid (say at $Q^2\geq 2~{\rm GeV}^2$), both target mass corrections and twist-3 terms are small, and we will neglect them in our further considerations [^2]. These features were revealed in the process of the analysis of Ref. [@Balitsky:1989jb]. The third sum rule, which was originally derived for purely theoretical purposes, is the Bjorken unpolarized sum rule[@Bjorken:1967px]. It can be written down as: $$\begin{aligned} \nonumber {\rm Bjunp}(Q^2) &=& \int_0^1dx\bigg[F_1^{\nu p}(x,Q^2)-F_1^{\nu n}(x,Q^2)\bigg] \\ \label{inp} &=&C_{\rm Bjunp}(Q^2)- \frac{\langle\langle O_3 \rangle \rangle}{Q^2}-O\bigg(\frac{1}{Q^4}\bigg)~~~.\end{aligned}$$ It may also be studied in the future as a valuable test of QCD in both the perturbative and non-perturbative sectors. As in the previous two cases, the coefficient function $C_{\rm Bjunp}(Q^2)$ has been calculated up to the next-to-next-to-leading-order $a_s^3$ corrections[@Gorishnii:1983gs],[@Larin:1990zw]. The twist-4 matrix element of this sum rule was evaluated in Ref.[@Shuryak:1981kj], with the following result: $$\label{def} \langle\langle O_3 \rangle \rangle = \frac{8}{9}\langle\langle O^{\rm NS}\rangle\rangle~~~,$$ where the reduced matrix element $\langle\langle O^{\rm NS}\rangle\rangle$ is defined through the dimension-5 operator $$O_{\mu}^{\rm NS}=\overline{u}\tilde{G}_{\mu\nu}\gamma_{\nu}\gamma_5u- \overline{d}\tilde{G}_{\mu\nu}\gamma_{\nu}\gamma_5d~~~,$$ via its nucleon matrix element $$\langle P|O_{\mu}^{\rm NS}|P\rangle=2p_{\mu}\langle\langle O^{\rm NS}\rangle\rangle~~~$$ and the application of Eq. (\[def\]). Let us now return to the renormalon calculus. The basic theoretical problem is how to define the Borel image $B[D](\delta)$ (or the Borel sum) of the integral in Eq. (\[Borel\]) for the quantities we are interested in. In QCD this problem is usually solved using perturbative methods and calculating the corresponding multiloop Feynman diagrams with a one-gluon line, dressed by the chains of fermion bubbles (the so-called renormalon-chain insertion).
These chains generate the sign-alternating asymptotic perturbative series, typical of the quantities under consideration, in powers of the expansion parameter $N_f a_s$ (where $N_f$ is the number of quark flavours). The contributions of these chains are gauge-invariant, but they do not reflect the whole picture of renormalon effects in QCD. The latter begin to manifest themselves only after application of the naive non-abelianization (NNA) ansatz[@Broadhurst:1994se], namely after the replacement $N_f\rightarrow -(3/2)\beta_0=N_f-33/2$ in the leading terms of the large-$N_f$ expansion. This procedure transforms the large-$N_f$ expansion into a large-$\beta_0$ expansion which, in addition to quark-bubble insertions into the renormalon chain, takes into account the contributions of gluon- and ghost-bubble insertions as well (though it neglects definite one-loop insertions into the gluon–quark–antiquark vertex, which should also be considered in a rigorous calculation of the coefficient $\beta_0$). The application of the NNA approach allowed the authors of Ref. [@Beneke:1994qe] to formulate the extension to higher orders of the BLM approach [@Brodsky:1982gc]. Technically, the work of Ref. [@Beneke:1994qe] supports the results of the first successful formulation of the BLM procedure at the next-to-next-to-leading order [@Grunberg:1991ac]. Moreover, these two works pushed ahead the study of the BLM procedure in higher orders [@Mikhailov:2004iq]. In principle, the relations between the results of Refs. [@Grunberg:1991ac; @Beneke:1994qe; @Mikhailov:2004iq] need more detailed consideration. In view of the lack of space we will avoid discussion of this subject here. The Borel images calculated by this procedure for the GLS and Bjp sum rules coincide and have the following form[@Broadhurst:1993ru]: $$B[C_{\rm Bjp}](\delta)=B[C_{\rm GLS}](\delta)= -\frac{(3+\delta) {\rm exp}(5\delta/3)}{(1-\delta^2)(1-\delta^2/4)}~~. \label{GLS}$$ They contain the IRR poles at $\delta=1$ and $\delta=2$ and the UVR poles at $\delta=-1$ and $\delta=-2$. Note that the $\delta=-1$ UVR pole in Eq. (\[GLS\]) is suppressed by a factor $(1/2){\rm exp}(-10/3)=0.018$, relative to the dominant IRR pole at $\delta=1$ [@Broadhurst:2002bi]. Therefore, in the asymptotic structure of the perturbative QCD effects in the expressions for $C_{\rm GLS}(Q^2) \approx C_{\rm Bjp}(Q^2)$ (where we neglect the small “light-by-light-type” effects contributing to $C_{\rm GLS}(Q^2)$) the sign-constant part in Eq. (\[asymptotics\]) dominates strongly over the sign-alternating contribution generated by the $\delta=-1$ UVR. The scheme dependence of these results is not so obvious. Indeed, the suppression of the $\delta=-1$ UVR with respect to the $\delta=1$ IRR is related to the application of the $\overline{\rm MS}$ scheme, which we are using throughout this work. In fact, in this scheme the IRR renormalons are not suppressed. However, there are procedures for which the situation is reversed: the IRR are absent, but UVR may exist. This feature manifests itself in models with a “frozen” coupling constant (see e.g. [@Shirkov:1997wi]).
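The suppression of the $\delta=-1$ UVR pole relative to the $\delta=1$ IRR pole quoted above can be checked directly from Eq. (\[GLS\]). The following short sympy sketch is an illustrative check, not part of the original analysis; it simply extracts the residues of the Borel image at the two leading poles:

```python
# Illustrative check of the pole structure of the Borel image in Eq. (GLS).
import sympy as sp

d = sp.symbols('delta')

# Borel image of the GLS/Bjp coefficient functions, Eq. (GLS)
B = -(3 + d)*sp.exp(sp.Rational(5, 3)*d) / ((1 - d**2)*(1 - d**2/4))

# sp.residue returns the coefficient of 1/(delta - pole); the text quotes the
# coefficient of 1/(1 - delta), i.e. the opposite sign at delta = +1.
res_irr = sp.residue(B, d, 1)     # leading IRR pole at delta = +1
res_uvr = sp.residue(B, d, -1)    # leading UVR pole at delta = -1

print(sp.simplify(res_irr))                # 8*exp(5/3)/3
print(sp.simplify(res_uvr))                # -4*exp(-5/3)/3
print(float(sp.Abs(res_uvr/res_irr)))      # ~0.018 = (1/2)*exp(-10/3)
```

The absolute value of the ratio of the two residues is $(1/2){\rm exp}(-10/3)\simeq 0.018$, in line with the statement above.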
Returning to the large-$N_f$ expansion of the perturbative expressions $$C_{\rm Bjp}(Q^2) = C_{\rm GLS}(Q^2)=1+\frac{C_F}{T_fN_f}\sum_{n=1}^ {\infty}r_n(T_fN_fa_s)^n~~,$$ where $C_F=4/3$, $T_f=1/2$ and $$r_n={\rm lim}_{\delta\rightarrow 0}\bigg(-\frac{4}{3}\frac{ d}{d\delta} \bigg)^{n-1}B[ C_{\rm Bjp}](\delta)~~,$$ we arrive at the following expansion in powers of $x=T_fN_fa_s$, namely $$\label{expansion} \sum_{n}r_n x^n=-3x +8x^2-\frac{920}{27}x^3+\frac{38720}{243}x^4+...~~~,$$ which is known in the $\overline{\rm MS}$ scheme up to $O(\alpha_s^9N_f^9)$ terms[@Broadhurst:1993ru]. Using now the traditional $\overline{\rm MS}$-scheme expansion in terms of the orders in $\alpha_s/\pi=4a_s$, one can compare the results of the explicit perturbative calculations of $$C_{\rm Bjp}(Q^2)=1+\sum_{n \geq 1}r_n\bigg(\frac{\alpha_s}{\pi}\bigg)^n~~,$$ namely the known numbers $$\begin{aligned} \label{r1} r_1&=&-1 \\ \label{r2} r_2&=&-4.5833+0.33333N_f \\ \label{r3} r_3&=& -41.440+7.6073N_f-0.17747N_f^2\end{aligned}$$ obtained at $O(\alpha_s^2)$ in Ref.[@Gorishnii:1985xm] and at $O(\alpha_s^3)$ in Ref.[@Larin:1991tj], with the results of applying the NNA procedure[@Broadhurst:1994se] to the large-$N_f$ expansion of Eq. (\[expansion\]) [^3]. Performing the shift $N_f\rightarrow N_f-33/2$ in the second, third and fourth terms in Eq. (\[expansion\]), we arrive at the following estimates in the $\overline{\rm{MS}}$ scheme [@Broadhurst:2002bi]: $$\begin{aligned} \label{r2NNA} r_2^{\rm NNA}&=&-5.5 +0.33333N_f \\ \label{r3nna} r_3^{\rm NNA}&=&-48.316+5.8565N_f-0.17747N_f^2\\ r_4^{\rm NNA}&=&-466.00+84.728N_f-5.1350~N_f^2+0.10374N_f^3~~~.\end{aligned}$$ Reasonable agreement can be observed between the sign structure and values of the NNA estimates and the results of the explicit calculations (compare the estimates of Eqs. (\[r2NNA\]) and (\[r3nna\]) with the numbers in Eqs. (\[r2\]) and (\[r3\]), respectively). As to the prediction for $r_4^{\rm NNA}$, it may serve as a guide for understanding the rate of growth of the coefficients of the perturbative series generated by the single renormalon-chain approximation. Consider now the Bjunp sum rule, which is defined in Eq. (\[inp\]). Within the large-$N_f$ expansion, its perturbative coefficient function $$C_{\rm Bjunp}(Q^2)=1+\sum_{n\geq 1}\tilde{r}_n \bigg(\frac{\alpha_s}{\pi}\bigg)^n~~~$$ was calculated in the $\overline{\rm MS}$ scheme up to $O(\alpha_s^9N_f^9)$ terms[@Broadhurst:2002bi]. Following the logic of our work, we present here the results for the first 4 terms only: $$\label{expansion2} \sum_{n}\tilde{r}_n x^n=-2x +\frac{64}{9}x^2-\frac{2480}{81}x^3+ \frac{113920}{729}x^4+ \dots$$ As was already mentioned above, the explicit results of the calculations of the perturbative contributions to the Bjunp sum rule are known up to the $O(\alpha_s^3)$ level. These results are: $$\begin{aligned} \label{tilder1} \tilde{r}_1&=&-2/3 \\ \label{tilder2} \tilde{r}_2&=& -3.8333+0.29630N_f \\ \label{tilder3} \tilde{r}_3&=& -36.155+6.3313N_f-0.15947N_f^2\end{aligned}$$ where $\tilde{r}_2$ was calculated in Ref.[@Gorishnii:1983gs] while $\tilde{r}_3$ was evaluated in Ref.[@Larin:1990zw]. Applying now the NNA procedure to the series of Eq.
(\[expansion2\]), we find that, in the $\overline{\rm{MS}}$ scheme, the estimated coefficients of the Bjunp sum rule have the following form[@Broadhurst:2002bi]: $$\begin{aligned} \label{2NNA} \tilde{r}_2^{\rm NNA}&=&-4.8889 +0.29630N_f \\ \label{3nna} \tilde{r}_3^{\rm NNA}&=&-43.414+5.2623N_f-0.15947N_f^2\\ \tilde{r}_4^{\rm NNA}&=&-457.02+83.094N_f-5.0360~N_f^2+0.10174N_f^3~~~.\end{aligned}$$ The estimate of Eq. (\[2NNA\]) is in agreement with its exact partner of Eq. (\[tilder2\]). The same situation holds for the $O(\alpha_s^3)$ corrections (compare Eq. (\[3nna\]) with Eq. (\[tilder3\])). It should be stressed that the similarity of the next-to-next-to-leading-order $\overline{\rm MS}$-scheme perturbative QCD contributions to the Bjp and Bjunp sum rules was previously noticed in Ref.[@Gardi:1998rf], although no explanation of this observation was given. Now, within the NNA procedure, it is possible to generalize this observation to higher orders. Indeed, the NNA estimates of the $O(\alpha_s^4)$ corrections to the Bjp and Bjunp sum rules have similar expressions as well. [**These facts may indicate the close similarity in the full perturbative structure of the QCD corrections to the Bjunp sum rule, the Bjp sum rule and the GLS sum rule**]{} (provided the “light-by-light-type” terms do not drastically modify the values of the perturbative terms in the latter case in the one-renormalon-chain approximation). Note that, generally speaking, starting from this order of perturbation theory the diagrams with a second renormalon chain begin to contribute to the quantities under consideration. These diagrams may influence the asymptotic behavior of the series considered[@Vainshtein:1994ff]. In view of this, it seems more rigorous to use, in phenomenological applications, the $\alpha_s^4$ terms estimated in Ref.[@Kataev:1995vh] using the PMS approach[@Stevenson:1981vj] and the effective-charges approach developed in Ref.[@Grunberg:1982fw]. However, since in this work we concentrate on the structure of the QCD expressions obtained in the one-renormalon-chain approximation, we will avoid more detailed discussion of the possible influence of multi-renormalon-chain contributions on the results of our studies. The similarity of the next-to-next-to-leading-order approximations for the Bjp and Bjunp sum rules observed in Ref.[@Gardi:1998rf] was attributed in Ref.[@Broadhurst:2002bi] to the fact that the dominant $\delta=1$ IRR contribution enters the Borel images of these sum rules with identical residues. Indeed, the Borel images in the Borel integrals of Eq. (7) for the Bjunp and Bjp sum rules turn out to be closely related[@Broadhurst:2002bi], namely $$\label{rens} B[ C_{\rm Bjunp}](\delta)=\bigg(\frac{2(1+\delta)}{3+\delta}\bigg) B[C_{\rm Bjp}](\delta)=- \frac{2{\rm exp}(5\delta/3)} {(1-\delta)(1-\delta^2/4)}~~~.$$ Comparing Eq. (27) with Eq. (\[rens\]) one can convince oneself that the residues of the poles at $\delta=1$ in these two expressions are really the same and are equal to the factor $-(8/3){\rm exp}(5/3)$. Notice also the [**absence**]{} of a $\delta=-1$ UVR pole and the [**existence**]{} in Eq. (\[rens\]) of a $\delta=-2$ UVR pole together with the leading $\delta=1$ IRR one.
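The NNA estimates quoted above and the equality of the $\delta=1$ residues can both be reproduced with a few lines of sympy. The sketch below is an illustrative cross-check, not part of the original derivation; it uses only the conventions stated above ($\alpha_s/\pi=4a_s$ and the overall prefactor $C_F/(T_fN_f)$ of the large-$N_f$ series):

```python
# Illustrative cross-check of the NNA estimates and of the delta = 1 residues.
import sympy as sp

Nf, delta = sp.symbols('N_f delta')
CF, Tf = sp.Rational(4, 3), sp.Rational(1, 2)

# leading large-N_f coefficients of Eqs. (expansion) and (expansion2),
# i.e. the series in x = T_f N_f a_s multiplied by the prefactor C_F/(T_f N_f)
r_hat = {'Bjp/GLS': [-3, 8, sp.Rational(-920, 27), sp.Rational(38720, 243)],
         'Bjunp':   [-2, sp.Rational(64, 9), sp.Rational(-2480, 81), sp.Rational(113920, 729)]}

for name, coeffs in r_hat.items():
    print(name)
    for n, r in enumerate(coeffs, start=1):
        # exact large-N_f coefficient of (alpha_s/pi)^n, using a_s = (alpha_s/pi)/4
        c = CF/(Tf*Nf) * r * (Tf*Nf)**n / 4**n
        # naive non-abelianization: N_f -> N_f - 33/2 in the leading large-N_f term
        c_nna = sp.expand(c.subs(Nf, Nf - sp.Rational(33, 2)))
        print(f'  r_{n}^NNA =', sp.N(c_nna, 5))

# the delta = 1 residues of the Borel images of Eqs. (27) and (47) coincide
B_bjp   = -(3 + delta)*sp.exp(sp.Rational(5, 3)*delta)/((1 - delta**2)*(1 - delta**2/4))
B_bjunp = -2*sp.exp(sp.Rational(5, 3)*delta)/((1 - delta)*(1 - delta**2/4))
print(sp.simplify(sp.residue(B_bjp, delta, 1) - sp.residue(B_bjunp, delta, 1)))  # -> 0
```

Running this reproduces, for instance, $r_2^{\rm NNA}=-5.5+0.33333N_f$ and $\tilde{r}_2^{\rm NNA}=-4.8889+0.29630N_f$, and returns a vanishing difference of the two $\delta=1$ residues.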
Thus we observe one more interesting fact: the structure of the Borel image related to the Bjunp sum rule is [**dual**]{} to the structure of the leading renormalon contributions to the Borel image of the $e^+e^-$ annihilation Adler D-function. Indeed, in the latter case the leading IRR manifests itself at $\delta=2$, while the leading UVR pole appears at $\delta=-1$ (the general structure of renormalon singularities in the $e^+e^-$ annihilation channel was analyzed in Ref.[@Zakharov:1992bx], while the concrete $\overline{\rm MS}$-scheme calculations of the corresponding Borel image were performed later in Refs.[@Beneke:1992ch] and [@Broadhurst:1992si]). The absence of the $\delta=1$ IRR in the Borel sum of the $e^+e^-$ annihilation channel is related to the absence of an $O(\Lambda^2/Q^2)$ non-perturbative power correction in the standard variant of the operator-product expansion formalism, applied to the theoretical expression for the $e^+e^-$ annihilation Adler D-function. Indeed, the existence of the lowest-dimension (dimension-4) quark and gluon condensates [@Shifman:1978bx] in this channel can be associated, in renormalon language, with the existence of the [**leading**]{} IRR pole, which in the case of “Borelization” of the Adler D-function appears at $\delta=2$. However, as was already discussed above, dimension-2 non-perturbative corrections enter into the theoretical expressions for the three DIS sum rules we are interested in. In the IRR language, this corresponds to the appearance of a $\delta=1$ IRR pole[@Mueller:1993pa], which manifests itself in the concrete results of Refs.[@Broadhurst:1993ru],[@Broadhurst:2002bi] (see Eqs. (27) and (47)). Thus, it should be stressed that the structure of singularities of the Borel sums (or images) is not universal and depends on the physical quantity under consideration.

IRR for DIS sum rules and the values of twist-4 corrections
===========================================================

In addition to controlling the sign-constant $n!$ growth of the asymptotic series, the existence of the $\delta=1$ IRR introduces an ambiguity in taking the Borel integral of Eq. (7) over this pole. In the case of the large-$\beta_0$ expansion and for the series we are interested in, this ambiguity was estimated in Ref.[@Beneke:2000kc]. Moreover, the $\delta=1$ IRR generates a negative power-suppressed correction which has the following expression: $$\label{ambig} \Delta C_{\rm sum~rules}\approx -\frac{32\,{\rm exp}(5/3)} {9 \beta_0}\frac{\Lambda^2_{\overline{\rm MS}}}{Q^2}~~.$$ Notice that it has the same negative sign as the residue of the $\delta=1$ IRR. This estimate may be matched with the definition of the twist-4 matrix elements in the sum rules we are interested in. Therefore, we make the assumption that the identical values and signs of the IRR-induced power-suppressed terms indicate that the twist-4 contributions to the expressions of the GLS, Bjp and Bjunp sum rules, normalized to unity, should have the same negative sign and similar values [@Kataev:2005ci]. This assumption is similar to the “universality” guess known in the renormalon-oriented literature[@Dokshitzer:1995qm].
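Before confronting this assumption with explicit determinations, it is instructive to get a feeling for the size of the IRR-induced term of Eq. (\[ambig\]). The short sketch below simply evaluates that formula; the values of $\Lambda_{\overline{\rm MS}}$ and $N_f$ used here are illustrative inputs chosen for this example, not numbers taken from the text:

```python
# Rough size of the delta = 1 IRR ambiguity of Eq. (ambig).
# Lambda_MSbar and N_f below are illustrative inputs (assumed), not values from the text.
import math

Nf = 4
beta0 = 11 - 2.0/3.0*Nf            # (11/3)C_A - (4/3)T_f N_f with C_A = 3, T_f = 1/2
Lambda2 = 0.3**2                   # assumed Lambda_MSbar ~ 300 MeV, so Lambda^2 in GeV^2

for Q2 in (2.0, 5.0, 10.0):        # Q^2 in GeV^2
    dC = -32*math.exp(5.0/3.0)/(9*beta0) * Lambda2/Q2
    print(f'Q^2 = {Q2:5.1f} GeV^2 :  Delta C ~ {dC:+.3f}')
```

With these illustrative inputs the ambiguity is at the level of a few per cent of the leading-order term for $Q^2$ of a few ${\rm GeV}^2$, i.e. of the same order as the twist-4 terms discussed below.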
Let us check this assumption, considering the following expressions for the sum rules we are interested in $$\begin{aligned} \label{A} {\rm GLS}(Q^2)& =&3\bigg[1-4a_s-O(a_s^2) - \frac{\rm A}{Q^2}\bigg]~~~, \\ \label{Bp} {\rm Bjp}(Q^2)&=&\frac{g_A}{6}\bigg[1-4a_s-O(a_s^2)-\frac{\rm {B} }{Q^2} \bigg]~~, \\ \label{C} {\rm Bjunp}(Q^2)&=&\bigg[1-\frac{8}{3}a_s - O(a_s^2)-\frac{ \rm C}{Q^2}\bigg]~~~,\end{aligned}$$ where ${\rm A} = \langle\langle O_1 \rangle\rangle/3$, ${\rm B}= \langle\langle O_2\rangle\rangle(6/g_A)$ and ${\rm C}= \langle\langle O_3\rangle\rangle$, and compare in Table 1 the results of different theoretical and phenomenologically based evaluations of the twist-4 parameters ${\rm A},~{\rm B}$ and ${\rm C}$.

0.1 in

  ---------------------------------------------------------------------------------------------------------------------
                                            ${\rm A}~[{\rm GeV}^2]$   ${\rm B}~[{\rm GeV}^2]$   ${\rm C}~[{\rm GeV}^2]$
  ---------------------------------------- ------------------------- ------------------------- -------------------------
  QCD sum rules (Ref.[@Braun:1986ty])       $0.098 \pm 0.049$         —                         $0.133 \pm 0.065$
  QCD sum rules (Ref.[@Balitsky:1989jb])    —                         $0.063 \pm 0.031$         —
  QCD sum rules (Ref.[@Ross:1993gb])        $0.158 \pm 0.078$         $0.223 \pm 0.118$         $0.16 \pm 0.08$
  QCD sum rules (Ref.[@Stein:1994zk])       —                         $0.025 \pm 0.012$         —
  Instanton model (Ref.[@Balla:1997hf])     $0.078 \pm 0.039$         $0.087 \pm 0.043$         —
  Instanton model (Ref.[@Weiss:2002vv])     —                         —                         $0.16 \pm 0.08$
  Experiment (Ref.[@Sidorov:2004sg])        —                         $0.098 \pm 0.028$         —
  Experiment (Ref.[@Sid])                   $0.04 \pm 0.13$           —                         —
  ---------------------------------------------------------------------------------------------------------------------

  : *The results for twist-4 contributions to the GLS, Bjp and Bjunp sum-rule expressions of Eqs. (49)–(51).*

In the case of the GLS and Bjunp sum rules the results of the original application of the three-point function QCD sum-rules method gave $\langle\langle O^{\rm S}\rangle\rangle = 0.33~{\rm GeV^2}$ and $\langle\langle O^{\rm NS}\rangle\rangle = 0.15~{\rm GeV^2}$, with over 50$\%$ error bars[@Braun:1986ty], while the three-point function estimate of the twist-4 parameter of the Bjp sum rule resulted in the value $M_N^2f_2^{p-n} = - 0.18 \pm 0.09~ {\rm GeV^2}$, valid in the region where nucleon target mass corrections of $O(M_N^2/Q^2)$ and the twist-3 contribution may be neglected[@Balitsky:1989jb]. As was already mentioned, these calculations were re-examined using the three-point function QCD sum-rules approach in Refs.[@Ross:1993gb] and [@Stein:1994zk]. In the first case the result turned out to be larger in magnitude than the original one from Ref.[@Balitsky:1989jb], namely $M_N^2f_2^{p-n} = - 0.634 \pm 0.317~ {\rm GeV^2}$ [@Ross:1993gb], while in the latter case it was considerably smaller, namely $M_N^2f_2^{p-n}=-0.07\pm 0.035~{\rm GeV^2}$ [@Stein:1994zk], although within the $50\%$ theoretical uncertainty we adopt for all calculations within the three-point function QCD sum-rules approach, this value does not disagree with the results obtained in Ref.[@Balitsky:1989jb]. The relatively large difference between the central values of the estimates of Refs.[@Ross:1993gb] and [@Balitsky:1989jb] is explained by the fact that in the former analysis additional corrections to the perturbative side of the corresponding QCD sum rules are included and the continuum contribution accompanying the nucleon pole on the low-energy side of the sum rule is explicitly retained.
This leads to better stability of the extracted value of the matrix element with respect to the Borel parameter and increases its central value. Note, however, that the theoretical error of the three-point function QCD sum-rules result of Ref. [@Ross:1993gb] is considerably underestimated. We fix it as a 50$\%$ uncertainty, which in our opinion is typical of all three-point function QCD sum-rules results. In Table 1 we present the estimates of twist-4 corrections to the different DIS sum rules, obtained with the help of the three-point function QCD sum-rules approach, and compare them with the results of the application of a different theoretical approach, based on the picture of the QCD vacuum as a “medium” of instantons[@Shuryak:1981ff]. This picture was further developed into the method of Ref.[@Diakonov:1983hh] and applied to estimating the twist-4 contributions to the GLS and Bjp sum rules in Ref.[@Balla:1997hf], while the number for the twist-4 contribution to the Bjunp sum rule which follows from this approach was presented in Ref.[@Weiss:2002vv]. In the absence of estimates of theoretical uncertainties within this approach, we apply to these numbers the conservative $50\%$ uncertainty as well. All these results support the original three-point function QCD sum-rules calculations of the twist-4 corrections to the GLS and Bjunp sum rules[@Braun:1986ty] and to the Bjp sum rule[@Balitsky:1989jb], though an additional three-point function QCD sum-rules cross-check of the results of Ref.[@Balitsky:1989jb] may be rather useful. The experimentally motivated value of the twist-4 contribution to the Bjorken polarized sum rule, $M_N^2f_2^{p-n}=-0.28 \pm 0.08~{\rm GeV}^2$ [@Sidorov:2004sg], was obtained by integrating over $x$ the numerator of the $h(x)/Q^2$ contributions, extracted from the fits of world-average data for $g_1^{p}(x,Q^2)$ and $g_1^{n}(x,Q^2)$ performed in Ref. [@Leader:2002ni]. From the results of Table 1 one can see that the agreement with the QCD sum-rules calculations of Ref.[@Balitsky:1989jb] and the instanton-based calculations of Ref.[@Balla:1997hf] is more than qualitative. The experimentally inspired estimate of the twist-4 contribution to the GLS sum rule was obtained only recently[@Sid], as a result of integrating over $x$ the twist-4 contribution $h(x)/Q^2$ extracted in the works of Ref.[@Kataev:1997nc], which were devoted to the analysis of the $xF_3$ data of the CCFR collaboration. One can see that the central value of this contribution is negative (it enters the sum rule with a negative sign) but has rather large uncertainties. So, at present we cannot obtain even qualitative information from this estimate, and additional work on its improvement is needed. To conclude, we present the final results for the GLS, Bjp and Bjunp sum rules, where for definiteness the twist-4 matrix elements are estimated using the central values of the three-point function QCD sum-rules results from Refs.[@Braun:1986ty] and [@Balitsky:1989jb]: $$\begin{aligned} {\rm GLS}(Q^2)& =&3\bigg[1-4a_s-O(a_s^2) - \frac{0.098~{\rm GeV^2}}{Q^2}\bigg]~~~, \\ {\rm Bjp}(Q^2)&= &\frac{g_A}{6}\bigg[1-4a_s-O(a_s^2)-\frac{0.063~{\rm GeV^2}}{Q^2} \bigg]~~, \\ {\rm Bjunp}(Q^2)&=&\bigg[1-\frac{8}{3}a_s - O(a_s^2)-\frac{0.133~{\rm GeV^2}}{Q^2}\bigg]~~~.\end{aligned}$$ It should be stressed that the twist-4 terms all have the same negative sign and, within existing theoretical uncertainties, are in agreement with each other.
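For completeness, the normalization conversions used to fill Table 1 from the matrix-element values quoted above can be checked with a few lines of Python. This is an illustrative sketch; the inputs are the central values and errors cited in the text, and small differences with respect to the table entries are due to rounding:

```python
# Sketch: normalization conversions quoted in the text, applied to the cited values (GeV^2).
g_A = 1.26

# GLS and Bjunp: <<O_1>> = (8/27)<<O^S>> and <<O_3>> = (8/9)<<O^NS>>, with the
# Braun-Kolesnichenko inputs <<O^S>> = 0.33 GeV^2, <<O^NS>> = 0.15 GeV^2
O_S, O_NS = 0.33, 0.15
print('<<O_1>> =', round(8/27*O_S, 3), '   <<O_3>> =', round(8/9*O_NS, 3))

# Bjp: B = <<O_2>>*(6/g_A) with <<O_2>> = -(1/6)(4/9) M_N^2 f_2^{p-n},
# i.e. B = -(4/(9 g_A)) M_N^2 f_2^{p-n}; compare with the B column of Table 1
conv = -4/(9*g_A)
for label, m2f2, err in [('Balitsky et al.', -0.18, 0.09),
                         ('Ross, Roberts  ', -0.634, 0.317),
                         ('Stein et al.   ', -0.07, 0.035),
                         ('Experiment     ', -0.28, 0.08)]:
    print(f'{label}:  B = {conv*m2f2:.3f} +- {abs(conv)*err:.3f}')
```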
This agreement was anticipated by the identical value of the ambiguity generated by the $\delta=1$ IRR pole of the Borel images of all three sum rules (see Eq. (48)). Moreover, as follows from the results of the application of the single-renormalon-chain approximation in the perturbative sector presented in Sec. 2, we may expect a similar asymptotic behavior of the perturbative corrections to all three sum rules (compare Eqs. (33)–(36) with Eqs. (42)–(45)). It is interesting that a similar property manifests itself in the perturbative series under investigation at the $O(\alpha_s^3)$ level, studied within scheme-invariant approaches in Ref.[@Kataev:1995vh]. These facts suggest that the sum rules we are interested in are closely related and that, in the region where we can neglect target mass corrections, twist-3 contributions to the Bjp sum rule and quark-mass-dependent corrections (say at $Q^2 \geq 2~{\rm GeV}^2$), we can write down the following basic relation [@Kataev:2005ci]: $${\rm Bjp}(Q^2)\approx (g_A/18){\rm GLS}(Q^2)\approx (g_A/6){\rm Bjunp}(Q^2)~~~. \label{basic}$$ In the next section we present more detailed considerations of the experimental consequences of these relations than those briefly outlined in Ref.[@Kataev:2005ci].

IRR-inspired relations and experiment
======================================

In order to test whether our basic relation Eq. (\[basic\]) is respected by experiment, we first present the results of the extraction of the GLS sum rule obtained by combining CCFR neutrino DIS data with the data of other neutrino DIS experiments in the range $1~{\rm GeV}^2 < Q^2 < 15 ~{\rm GeV^2}$ [@Kim:1998ki]. It is known that the weighted extraction of $\alpha_s(M_Z)$ from these data results in the rather rough value $\alpha_s(M_Z)= 0.115^{+0.009}_{-0.012}$, which is in agreement with $\alpha_s(M_Z)=0.115 \pm 0.001~({\rm stat}) \pm 0.005~({\rm syst}) \pm 0.003~({\rm twist})\pm 0.0005~({\rm scheme})$, extracted in Ref.[@Chyla:1992cg] from the previous CCFR data for the GLS sum rule at $Q^2=3~{\rm GeV}^2$[@Leung:1992yx]. However, for our purposes we do not need to re-extract $\alpha_s(M_Z)$ from the GLS sum rule results of Ref.[@Kim:1998ki]; we will instead use those results directly, as presented in Table 2.

0.1 in

   $Q^2$ \[${\rm GeV^2}$\]   GLS sum rule
  ------------------------- ---------------------------
   2.00                      $2.49 \pm 0.08 \pm 0.14$
   3.16                      $2.55 \pm 0.08 \pm 0.10$
   5.01                      $2.78 \pm 0.06 \pm 0.19$
   7.94                      $2.82 \pm 0.07 \pm 0.19$
   12.59                     $2.80 \pm 0.13 \pm 0.18$

  : *The results for the GLS sum rule from Ref. [@Kim:1998ki]*

To estimate the values of the Bjp sum rule from the results of Table 2 we use our main equation (\[basic\]) and compare them with the available experimental data for the Bjp sum rule. The results of these studies are presented in Table 3.

0.1 in

  ----------------------------------------------------------------------------------------------------------------
   $Q^2$ \[${\rm GeV^2}$\]   Bjp estimated from Table 2     Bjp SR (exp)
  ------------------------- ------------------------------ -------------------------------------------------------
   2.00                      $0.174 \pm 0.006 \pm 0.010$    $0.169\pm 0.025$  \[Ref.[@Abe:1998wq]\]
   3.16                      $0.178 \pm 0.004 \pm 0.007$    $0.164 \pm 0.023$  \[Ref.[@Abe:1998wq]\]
   5.01                      $0.195 \pm 0.004\pm 0.013$     $0.181 \pm 0.012~({\rm stat})\pm 0.018~({\rm syst})$ \[Ref.[@Adeva:1998vw]\]
   7.94                      $0.197 \pm 0.005 \pm 0.013$    —
   12.5                      $0.196 \pm 0.009 \pm 0.013$    $0.195 \pm 0.029$  \[Ref.[@Adeva:1997is]\]
  ----------------------------------------------------------------------------------------------------------------

  : *The comparison of the results of the application of Eq. (\[basic\]) with direct experimentally motivated numbers*

One can see that, though the central values of the estimated numbers for the Bjp sum rule are higher than the results of the SLAC E143 collaboration [@Abe:1998wq], they agree within error bars. It is also interesting to compare the results of Table 3 with the value of the Bjp sum rule extracted in Ref.[@Altarelli:1996nm] from the SLAC and SMC data, ${\rm Bjp}(3~{\rm GeV^2})=0.177 \pm 0.018$, which, within error bars, does not contradict the value ${\rm Bjp}(3~{\rm GeV^2})=0.164 \pm 0.011$ used in the work of Ref.[@Ellis:1995jv]. It is rather encouraging that within error bars these results agree with our estimate obtained from the GLS sum rule value at $Q^2= 3.16~{\rm GeV^2}$. The same feature holds for the Bjp sum rule at $Q^2= 5~{\rm GeV^2}$, namely for the SMC result of Ref.[@Adeva:1998vw]. Thus we think that, within existing uncertainties, our approximate IRR-inspired basic equation (\[basic\]) is supported by the existing experimental data.

Conclusions
===========

We demonstrated that the existing phenomenological data do not contradict the basic relation of Eq. (\[basic\]) and therefore support [**the reliability of the one-renormalon-chain approximation for the theoretical quantities under consideration**]{}. For more detailed studies, we may rely on the appearance of Neutrino Factory data for all the sum rules entering Eq. (\[basic\]). In fact a Neutrino Factory may provide rather useful data not only for the GLS and Bjp sum rules, but for the Bjunp sum rule as well (for a discussion of this possibility see Refs.[@Mangano:2001mj],[@Alekhin:2002pj]). Another interesting application of the relation of Eq. (\[basic\]) is to analyze the sources of its possible violation in the lower-energy region of $Q^2\approx 1~{\rm GeV^2}$, where one may compare the CCFR data for the GLS sum rule at the energy point $Q^2=1.26~{\rm GeV^2}$ [@Kim:1998ki] with the JLAB data for the Bjp sum rule at $Q^2=1.10~{\rm GeV}^2$ [@Deur:2004ti]. To conclude, we would like to emphasize that the problems considered by us in this work are complementary to the considerations of Ref.[@Brodsky:1995tb]. In the latter analysis, the GLS and Bjp sum rules were determined at the high-energy point $Q^2=12.33~{\rm GeV}^2$ from the generalized Crewther relation constructed in Refs.[@Broadhurst:1993ru], [@Brodsky:1995tb],[@Crewther:1972kn], using the extension of the BLM approach of Ref.[@Brodsky:1982gc] and the analysis of $e^+e^-$ annihilation data from Ref.[@Mattingly:1993ej]. Certainly, the renormalon-chain insertions are absorbed in this approach into the BLM scale. However, a consideration of the high-twist effects within this language is still missing. It may be of interest to think of the possibility of evaluating high-twist contributions to the Crewther relation, which relates, in the Euclidean region we are working in, the massless QCD perturbative contributions to the Adler D-function of $e^+e^-$ annihilation with the perturbative corrections to the GLS and Bjp sum rules. [**Acknowledgements**]{} I am grateful to D.J. Broadhurst for a productive collaboration. It is a pleasure to express my personal thanks to S.I. Alekhin, G. Altarelli, Yu.L. Dokshitzer, J. Ellis, G. Grunberg, A.V. Sidorov and V.I. Zakharov for useful discussions at various stages of this work.
This article grew up from my talk at the 19th Rencontre de Physique de la Vallèe d’Aoste (27 February-5 March,2005, La Thuile, Aosta Valley, Italy). I would like to thank its organizers, G. Bellettini and M. Greco for their invitation. This work is supported by RFBR Grants N03-02-17047, 03-02-17177 and N 05-01-00992. It was continued during the visit to CERN. I have real pleasure in thanking the members of the CERN Theory Group for hospitality. In its final form the work was completed during the visit to ICTP (Trieste). I am grateful to a referee for constructive advise [99]{} V. I. Zakharov, Nucl. Phys. B [**385**]{}, 452 (1992). A. H. Mueller, CU-TP-573, Proc. Int. Conference “QCD-20 Years Later” Ed. by P.M. Zerwas and H.A. Kastrup (World Scientific, 1992) pp.162-171. A. H. Mueller, Phys. Lett. B [**308**]{}, 355 (1993). M. Beneke, Phys. Rep.  [**317**]{}, 1 (1999) \[arXiv:hep-ph/9807443\]. M. Beneke and V. M. Braun, In the “Boris Ioffe Festschrift : At the Frontiers of Particle Physics/Handbook of QCD”, Ed. by M. Shifman (World Scientific, 2001), pp. 1719–1773 \[arXiv:hep-ph/0010208\]. G. Altarelli, CERN/TH-95-309, Proc. Int. School of Subnuclear Physics: 33rd Course “Vacuum and Vacua: The Physics of Nothing”, Ed. A. Zichichi, 1995, pp. 221–248. R. B. Dingle, Asymptotic expansions: their derivation and interpretation, Ch. XXI (Academic Press, London and New York, 1973). M. Beneke, V. M. Braun and N. Kivel, Phys. Lett. B [**443**]{}, 308 (1998) \[arXiv:hep-ph/9809287\]. D. J. Broadhurst and A. L. Kataev, Phys. Lett. B [**315**]{}, 179 (1993) \[arXiv:hep-ph/9308274\]. D. J. Broadhurst and A. L. Kataev, Phys. Lett. B [**544**]{}, 154 (2002) \[arXiv:hep-ph/0207261\]. A. L. Kataev, Pisma  Zhetf. [**81**]{}, 744 (2005), arXiv:hep-ph/0505108. D. J. Gross and C. H. Llewellyn Smith, Nucl. Phys. B [**14**]{}, 337 (1969). S. G. Gorishny and S. A. Larin, Phys. Lett. B [**172**]{}, 109 (1986). S. A. Larin and J. A. M. Vermaseren, Phys. Lett. B [**259**]{}, 345 (1991). E. V. Shuryak and A. I. Vainshtein, Nucl. Phys. B [**199**]{}, 451 (1982). J. D. Bjorken, Phys. Rev.  [**148**]{}, 1467 (1966). E. V. Shuryak and A. I. Vainshtein, Nucl. Phys. B [**201**]{}, 141 (1982). X. D. Ji and P. Unrau, Phys. Lett. B [**333**]{}, 228 (1994) \[arXiv:hep-ph/9308263\]. I. Hinchliffe and A. Kwiatkowski, Annu. Rev. Nucl. Part. Sci.  [**46**]{}, 609 (1996) \[arXiv:hep-ph/9604210\]. I. I. Balitsky, V. M. Braun and A. V. Kolesnichenko, Phys. Lett. B [**242**]{} 245 (1990) \[Erratum-[*ibid.*]{}  B [**318**]{}, 648 (1993)\] \[arXiv:hep-ph/9310316\]. G. G. Ross and R. G. Roberts, Phys. Lett. B [**322**]{}, 425 (1994) \[arXiv:hep-ph/9312237\]. E. Stein, P. Gornicki, L. Mankiewicz, A. Schafer and W. Greiner, Phys. Lett. B [**343**]{}, 369 (1995) \[arXiv:hep-ph/9409212\]. A. Deur [*et al.*]{}, Phys. Rev. Lett.  [**93**]{}, 212001 (2004) \[arXiv:hep-ex/0407007\]. J. D. Bjorken, Phys. Rev.  [**163**]{}, 1767 (1967). K. G. Chetyrkin, S. G. Gorishny, S. A. Larin and F. V. Tkachov, Phys. Lett. B [**137**]{}, 230 (1984). S. A. Larin, F. V. Tkachov and J. A. M. Vermaseren, Phys. Rev. Lett.  [**66**]{}, 862 (1991). D. J. Broadhurst and A. G. Grozin, Phys. Rev. D [**52**]{}, 4082 (1995) \[arXiv:hep-ph/9410240\]. M. Beneke and V. M. Braun, Phys. Lett. B [**348**]{} 513 (1995) \[arXiv:hep-ph/9411229\]. S. J. Brodsky, G. P. Lepage and P. B. Mackenzie, Phys. Rev. D [**28**]{}, 228 (1983). G. Grunberg and A. L. Kataev, Phys. Lett. B [**279**]{} (1992) 352. S. V. Mikhailov, arXiv:hep-ph/0411397. C. N. Lovett-Turner and C. J. Maxwell, Nucl. 
Phys. B [**452**]{}, 188 (1995) \[arXiv:hep-ph/9505224\]. D. V. Shirkov and I. L. Solovtsov, Phys. Rev. Lett.  [**79**]{}, 1209 (1997) \[arXiv:hep-ph/9704333\]. E. Gardi and M. Karliner, Nucl. Phys. B [**529**]{}, 383 (1998) \[arXiv:hep-ph/9802218\]. A. I. Vainshtein and V. I. Zakharov, Phys. Rev. Lett.  [**73**]{}, 1207 (1994) \[Erratum-[*ibid.*]{}  [**75**]{}, 3588 (1995)\] \[arXiv:hep-ph/9404248\]. A. L. Kataev and V. V. Starshenko, Mod. Phys. Lett. A [**10**]{}, 235 (1995) \[arXiv:hep-ph/9502348\]. P. M. Stevenson, Phys. Rev. D [**23**]{}, 2916 (1981). G. Grunberg, Phys. Rev. D [**29**]{}, 2315 (1984). M. Beneke, Nucl. Phys. B [**405**]{}, 424 (1993). D. J. Broadhurst, Z. Phys. C [**58**]{}, 339 (1993). M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B [**147**]{}, 385 (1979). Y. L. Dokshitzer, G. Marchesini and B. R. Webber, Nucl. Phys. B [**469**]{}, 93 (1996) \[arXiv:hep-ph/9512336\]. V. M. Braun and A. V. Kolesnichenko, Nucl. Phys. B [**283**]{}, 723 (1987). E. V. Shuryak, Nucl. Phys. B [**203**]{}, 93 (1982). D. Diakonov and V. Y. Petrov, Nucl. Phys. B [**245**]{}, 259 (1984). J. Balla, M. V. Polyakov and C. Weiss, Nucl. Phys. B [**510**]{}, 327 (1998) \[arXiv:hep-ph/9707515\]. C. Weiss, J. Phys. G [**29**]{}, 1981 (2003) \[ arXiv:hep-ph/0210132\]. A. V. Sidorov and C. Weiss, arXiv:hep-ph/0410253. E. Leader, A. V. Sidorov and D. B. Stamenov, Phys. Rev. D [**67**]{}, 074017 (2003) \[arXiv:hep-ph/0212085\]. A. V. Sidorov, private communication. A. L. Kataev, A. V. Kotikov, G. Parente and A. V. Sidorov, Phys. Lett. B [**417**]{}, 374 (1998) \[arXiv:hep-ph/9706534\];\ A. L. Kataev, G. Parente and A. V. Sidorov, Nucl. Phys. B [**573**]{}, 405 (2000) \[arXiv:hep-ph/9905310\] ;\ A. L. Kataev, G. Parente and A. V. Sidorov, Phys. Part. Nucl.  [**34**]{}, 20 (2003) \[Fiz. Elem. Chast. Atom. Yadra [**34**]{}, 43 (2003)\] \[arXiv:hep-ph/0106221\]. J. H. Kim [*et al.*]{}, Phys. Rev. Lett.  [**81**]{}, 3595 (1998) \[arXiv:hep-ex/9808015\]. J. Chyla and A. L. Kataev, Phys. Lett. B [**297**]{}, 385 (1992) \[arXiv:hep-ph/9209213\]. W. C. Leung [*et al.*]{}, Phys. Lett. B [**317**]{}, 655 (1993). K. Abe [*et al.*]{} \[E143 collaboration\], Phys. Rev. D [**58**]{}, 112003 (1998) \[arXiv:hep-ph/9802357\]. B. Adeva [*et al.*]{} \[Spin Muon Collaboration\], Phys. Rev. D [**58**]{}, 112002 (1998). B. Adeva [*et al.*]{} \[Spin Muon Collaboration (SMC)\], Phys. Lett. B [**412**]{}, 414 (1997). G. Altarelli, R. D. Ball, S. Forte and G. Ridolfi, Nucl. Phys. B [**496**]{}, 337 (1997) \[arXiv:hep-ph/9701289\]. J. R. Ellis, E. Gardi, M. Karliner and M. A. Samuel, Phys. Lett. B [**366**]{}, 268 (1996) \[arXiv:hep-ph/9509312\]. M. L. Mangano [*et al.*]{}, arXiv:hep-ph/0105155. S. I. Alekhin and A. L. Kataev, J. Phys. G [**29**]{}, 1993 (2003) \[arXiv:hep-ph/0209165\]. S. J. Brodsky, G. T. Gabadadze, A. L. Kataev and H. J. Lu, Phys. Lett. B [**372**]{}, 133 (1996) \[arXiv:hep-ph/9512367\]. R. J. Crewther, Phys. Rev. Lett.  [**28**]{}, 1421 (1972) and Phys. Lett. B [**397**]{}, 137 (1997) \[arXiv:hep-ph/9701321\]. A. C. Mattingly and P. M. Stevenson, Phys. Rev. D [**49**]{}, 437 (1994) \[arXiv:hep-ph/9307266\]. [^1]: E-mail:kataev@ms2.inr.ac.ru [^2]: For completeness we note that there is a minor difference between the the $O(M_N^2/Q^2)$ coefficients of the $\int_0^1 dxx^2 g_1^{p-n}$ terms in Ref.[@Balitsky:1989jb] and Ref.[@Deur:2004ti]. In the former and latter cases they are equal to (10/9) and 1 respectively. 
[^3]: It is worth noting that a similar NNA analysis was performed previously, in Ref.[@Lovett-Turner:1995ti], for the $e^+e^-$ annihilation Adler $D$-function.
--- abstract: 'We report on the optical and structural properties of plastic scintillators irradiated with neutron beams produced by the IBR-2 reactor of the Frank Laboratory of Neutron Physics in JINR, Dubna. Blue UPS-923A and green plastic scintillators were irradiated with neutron fluence ranging from 10$^{13}$ to 10$^{17}$ n/cm$^2$. Discolouring in the plastic scintillators was observed after irradiation. The effects of radiation damage on the optical and structural properties of the samples were characterized by conducting light yield, light transmission, light fluorescence and Raman spectroscopy studies. The results showed that neutron radiation induced damage in the material. The disappearance of the Raman peak features in green scintillators at frequencies of 1165.8, 1574.7 and 1651.2 cm$^{-1}$ revealed significant structural alterations due to neutron bombardment. Losses in fluorescence intensity, light yield and light transmission in the plastic scintillators were observed.' address: - 'Joint Institute for Nuclear Research, Dubna, Russia, 141980' - 'School of Physics, University of the Witwatersrand, Johannesburg, Wits 2050, South Africa' - 'DST-NRF Centre of Excellence in Strong Materials, University of the Witwatersrand, Johannesburg, Wits 2050, South Africa' - 'Department of Physics and Astronomy, Botswana International University of Science and Technology, Private Bag 16, Palapye, Botswana' - 'School of Physics and Institute for Collider Particle Physics, University of the Witwatersrand, Johannesburg, Wits 2050, South Africa' - 'iThemba LABS, National Research Foundation, PO Box 722, Somerset West 7129, South Africa' - 'Institute for Scintillation Materials, Kharkov, Ukraine' author: - 'V. Baranov' - 'Yu.I. Davydov' - 'R. Erasmus' - 'C.O. Kureba' - 'N. Lekalakala' - 'T. Masuku' - 'J.E. Mdhluli' - 'B. Mellado' - 'G. Mokgatitswane' - 'E. Sideras-Haddad' - 'I. Vasilyev' - 'P.N. Zhmurin' title: Effects of neutron radiation on the optical and structural properties of blue and green emitting plastic scintillators --- Neutron radiation ,Neutron fluence ,Radiation damage ,Scintillator ,Polystyrene Introduction {#intro} ============ Plastic scintillators are employed within high energy particle detectors due to their desirable properties such as high optical transmission and fast rise and decay times [@Knoll:1300754]. The generation of fast signal pulses enables efficient data capturing. They are used to detect the energies and reconstruct the path of the particles through the process of luminescence due to the interaction of ionising radiation. Compared to inorganic crystals, plastic scintillators are organic crystals that are easily manufactured and therefore cost effective when covering large areas such as the ATLAS detector [@chen].\ In the ATLAS detector of the Large Hadron Collider (LHC), there is a hadronic calorimeter known as the Tile Calorimeter that is responsible for detecting hadrons, taus and jets of quarks and gluons through the use of plastic scintillators. These particles deposit large quantities of energy and create an immensely detrimental radiation environment. The neutrons mostly coming from the shower tails contribute to the counting rates and degradation of plastic scintillators through neutron capture. Monte-Carlo calculations have been performed to estimate doses and particle fluences currently experienced at different regions of the detector, operating at a nominal luminosity of 10$^{34}$ cm$^{-2}$s$^{-1}$. 
The maximum neutron fluence in the Tile Calorimeter barrel was estimated at around 10$^{12}$ n/cm$^{2}$ per year [@Angela]. The LHC intends to increase its luminosity by a factor of up to ten by 2022, and this will drastically impact the radiation environment in the ATLAS detector.\ The interaction of ionising radiation with plastic scintillators results in damage to the scintillator material. According to Sonkawade [*[et al]{}*]{}. [@sonkawade], during irradiation the properties of scintillators are altered significantly, depending on the structure of the target material, the fluence and the nature of the radiation. Some of these structural modifications have been ascribed to the scissoring of the polymer chain, intensification of cross-linking, breakage of bonds and formation of new chemical bonds. This damage results in a significant decrease in the light yield of the scintillator and, as a result, errors are introduced into the captured data.\ Studies on proton-irradiated plastic scintillators conducted by the Wits High Energy Physics (Wits-HEP) group have been reported in the literature [@SAIP; @1742-6596-645-1-012019; @NIMB]. This paper extends the study to focus on the effects of non-ionising radiation (i.e. neutrons). Compared to the interaction of ionising radiation, the interaction of non-ionising radiation with matter is more interesting since the particles interact indirectly with the atoms of the material. When materials are bombarded with neutrons, collision cascades are created within the material that result in point defects and dislocations. A Primary Knock-on Atom (PKA) is created when the kinetic energy from the collision is transferred to the displaced lattice atom. The knock-on atoms lose energy with each collision and that energy in turn ionizes the material [@Bisanti]. Neutron irradiation allows for bulk probing of materials since neutrons are highly penetrating particles. Experimental Details {#methods} ==================== Commercial blue scintillators UPS-923A [@sci] and recently synthesized green scintillators [@velmo; @yu] were investigated. The samples were prepared at the Institute for Scintillation Materials (ISMA, Kharkov). They were cut and polished to dimensions of 2 x 2 cm with 6 mm thickness. Table \[table:properties\] lists some important properties of the plastic scintillators under study.

  --------------------- ----------------------------- -----------------------------
  Scintillator           Blue UPS-923A                 Green
  --------------------- ----------------------------- -----------------------------
  Manufacturer           Institute for Scintillation   Institute for Scintillation
                         Materials                     Materials
  Base                   Polystyrene                   Polystyrene
  Primary fluor          2$\%$ PTP                     3HF
  Secondary fluor        0.03$\%$ POPOP
  Light Output           60 ($\%$ Anthracene)
  Wavelength of Max.     425                           530
  Emission (nm)
  Rise time (ns)         0.9                           0.9
  Decay time (ns)        3.3                           7.6
  --------------------- ----------------------------- -----------------------------

  : Properties of the scintillators under study.[]{data-label="table:properties"}

Channel number 3 of the IBR-2 reactor, as schematically shown in Figure \[subfig:b\], located at the Frank Laboratory of Neutron Physics (FLNP) at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, was used to irradiate the samples [@bulav; @shabalin]. [0.49]{} ![Layout of the IBR-2 spectrometer complex, and the irradiation facility at the channel No.
3 of the IBR-2 reactor experimental hall, the view from the external biological shield side: (1)-massive part of the irradiation facility, (2)-transport beam, (3)-metallic container for sample fastening, (4)-samples, (5)-railway [@bulav].](L1 "fig:") [0.49]{} ![Layout of the IBR-2 spectrometer complex, and the irradiation facility at the channel No. 3 of the IBR-2 reactor experimental hall, the view from the external biological shield side: (1)-massive part of the irradiation facility, (2)-transport beam, (3)-metallic container for sample fastening, (4)-samples, (5)-railway [@bulav].](L2 "fig:") The samples were subjected to a beam of neutrons for 432 hours, the duration of the reactor cycle during the October 2017 run (9$-$27 October 2017). The reactor operated at an average power of 1875 kW with the samples placed at various positions away from the reactor core to achieve various neutron fluences. The neutron fluences ranged approximately between 10$^{13}$ and 10$^{17}$ n/cm$^2$. During the irradiation only neutrons with energy $E>1$ MeV were monitored; these neutrons account for about a quarter of the total flux. Hereafter we quote the number of fast neutrons with energy $E>1$ MeV, although the actual total number of neutrons is a factor of four higher. As shown in Figure \[fig:samples\], the discolouration of the samples is evident after irradiation. [1]{} ![Neutron-irradiated samples: blue scintillators UPS-923A (a $-$ f) and green scintillators (i $-$ vi). From the left column to the right column: non-irradiated, 10$^{13}$, 10$^{14}$, 10$^{15}$, 10$^{16}$ and 10$^{17}$ n/cm$^2$.[]{data-label="fig:samples"}](samples_after_irr "fig:") The effects of radiation damage on the optical and structural properties of the samples were characterized by conducting light yield, light transmission, light fluorescence and Raman spectroscopy measurements. Transmission spectroscopy studies were conducted using the Varian Cary 500 spectrophotometer located at the University of the Witwatersrand. Light transmission was measured relative to transmission in air over a wavelength range of 300-800 nm. The spectrophotometer consists of a lamp source and a diffraction grating to produce a differential wavelength spectrum of light. A tungsten lamp was used to produce light in the visible spectrum and a deuterium lamp was used to produce light in the ultraviolet spectrum.\ Light fluorescence measurements of the neutron-irradiated plastic scintillators were conducted at the University of the Witwatersrand using the Horiba LabRAM HR Raman spectrometer. Light emission resulting from the luminescence phenomenon was excited in the plastic scintillators using a laser excitation wavelength ($\lambda_{ex}$) of 244 nm, operating at a power of $\sim$20 mW. A laser spot size of 0.7 $\mu$m provided energy for molecular excitations to occur. A grid of 11 x 11 points (121 acquisition spots) was mapped across a surface area of 200 x 200 $\mu$m using a motorised X-Y stage. This allowed for an average representative spectrum to be determined largely free from local variations introduced by surface features such as scratches.\ The light yield measurements were conducted at the European Organisation for Nuclear Research (ATLAS experiment) using a light-tight box set-up shown in Figure \[fig:box\]. The plastic scintillators were excited with $\beta$-electrons emitted by a source with average energies of 0.54 MeV and 2.28 MeV.
The source scanned over the sample in the X$-$Y direction whilst emitting radiation in the Z direction. The light emitted by the plastic scintillators through fluorescence was detected by a photomultiplier tube (PMT). The signal generated by the PMT was further processed through electronics and digitized. To minimize background signals, like those coming from the interaction of the $\beta$-electrons with the PMT, a light transmitter was used to transport the light produced by the scintillators to the cathode of the PMT. In addition, the light transmitter was covered with aluminium foil to block $\beta$-electrons from the source. [0.68]{} ![A photograph of the light-box set-up used to measure the light yield.[]{data-label="fig:box"}](box "fig:"){height="8cm" width="10cm"} Structural properties of the plastics were characterized using Raman spectroscopy at the University of the Witwatersrand. The Horiba LabRAM HR Raman spectrometer was used to obtain the Raman spectra for the non-irradiated control samples as well as the irradiated samples. A 785 nm diode laser was used to excite the Raman modes and the spectrograph was calibrated via the zeroth-order reflection of a white light source from the grating. Results and Discussion ====================== Raman Spectroscopy Results and Analysis --------------------------------------- Raman spectroscopy measurements were performed with the aim of assessing changes in the structure and morphology of the irradiated samples. Raman spectra were obtained for the irradiated and non-irradiated samples using the Horiba LabRAM HR Raman spectrometer, with a laser excitation wavelength of 785 nm. This laser wavelength was chosen as it was found that with a green excitation wavelength the higher-fluence samples gave a very prominent background fluorescence that was of sufficient intensity to mask the Raman peaks. With the longer excitation wavelength there were no problems with background fluorescence and hence better-quality spectra were obtained. The only limitation of the 785 nm wavelength is that, due to detector limitations on the instrument, Raman peaks can only be measured up to 2000 cm$^{-1}$. The -C-H and =C-H vibrational modes between 2800 and 3200 cm$^{-1}$ could thus not be measured. Raman spectroscopy showed small structural changes when comparing the non-irradiated and irradiated samples. Figure \[subfig:aramspec\] shows the background-subtracted Raman spectra of the green and blue emitting plastic scintillators. Investigations were conducted with the aim of determining the radiation-sensitive peaks. Intensities of the Raman peaks for the green and blue emitters were plotted relative to peaks 12 and 8, respectively, in order to assess changes in the species present. Peaks 12 and 8 typically represent aromatic ring structures, which influence the scintillation properties of the scintillator. The C-C bonds present in the structure give rise to a cloud of delocalized $\pi$-electrons that are prone to excitation by incident energetic particles. The results are shown in Figure \[subfig:cb\]. The ratio of most species found in the styrene backbone of the samples to that found in the benzene ring shows a decrease after irradiation. [0.49]{} ![Background-subtracted Raman spectra of irradiated and non-irradiated plastic scintillator samples for green plastic scintillators and blue plastic scintillators UPS-923A. Plot of intensities of peaks relative to peak 12 (non-irradiated) for green and peak 8 (non-irradiated) for blue scintillators.
NB: The Raman spectra in (a) and (b) have been vertically offset for better visual presentation.](green "fig:") [0.49]{} ![Background-subtracted Raman spectra of irradiated and non-irradiated plastic scintillator samples for green plastic scintillators and blue plastic scintillators UPS-923A. Plot of intensities of peaks relative to peak 12 (non-irradiated) for green and peak 8 (non-irradiated) for blue scintillators. NB: The Raman spectra in (a) and (b) have been vertically offset for better visual presentation.](blue "fig:") [0.49]{} ![Background-subtracted Raman spectra of irradiated and non-irradiated plastic scintillator samples for green plastic scintillators and blue plastic scintillators UPS-923A. Plot of intensities of peaks relative to peak 12 (non-irradiated) for green and peak 8 (non-irradiated) for blue scintillators. NB: The Raman spectra in (a) and (b) have been vertically offset for better visual presentation.](greenb "fig:") [0.49]{} ![Background-subtracted Raman spectra of irradiated and non-irradiated plastic scintillator samples for green plastic scintillators and blue plastic scintillators UPS-923A. Plot of intensities of peaks relative to peak 12 (non-irradiated) for green and peak 8 (non-irradiated) for blue scintillators. NB: The Raman spectra in (a) and (b) have been vertically offset for better visual presentation.](blueb "fig:") Some of the important molecular vibrational assignments are provided in Table \[table:assignments\]. Peaks are assigned to their corresponding vibrational groups [@spectro; @menezess].

  ---------------- ----------- ---------------------------------------------------
  Green            Blue        Vibrational assignment
  ---------------- ----------- ---------------------------------------------------
  1                1           $\tau$(CH$_3$)
  2-4              2           $\delta$(C-C) aliphatic
  5, 8-11, 13-17   4-7, 9-14   $\nu$(C-C) alicyclic or aliphatic chain vibration
  12               8           C-C stretches, breathing mode, aromatic rings
  6, 7             3           ring deformation mode, aromatic
  18               15          $\delta$(CH$_3$)
  19               16          $\delta$(CH$_2$) or $\delta$(CH$_3$) asymmetric
  20               –
  21               17          C-C aromatic stretching
  22               18          CCH quadrant stretches, aromatic rings
  23               –
  ---------------- ----------- ---------------------------------------------------

  : Raman peak number and vibrational assignments for key features in green scintillators and blue scintillators UPS-923A.[]{data-label="table:assignments"}

From the results shown in Figure \[subfig:cb\], there is a significant decrease in the Raman intensities of the aromatic breathing modes as well as the alicyclic/aliphatic chain vibrations belonging to the aromatic benzene ring. These changes are more prominent in the green emitting samples. It appears that the benzene ring structure undergoes a significant amount of damage. This could be a result of strong dehydrogenation due to C-H bond breaking in the benzene ring and the emission of different C$_{x}$H$_{y}$ groups with increasing absorbed neutron fluence, as described by Torrisi, 2002 [@torrisi]. This causes a decline in the number of Raman-active modes and hence could account for the observable effects. The Raman intensities of the corresponding vibrational modes have been quantified and reported in Tables \[table:green int\] and \[table:blue int\] for the green and blue emitting samples, respectively.
  ----------------- ------------------ -------------------- --------------- -----------------
  Functional         Peak (cm$^{-1}$)   Fluence (n/cm$^2$)   Intensity       $\%$ Int. loss
  group                                                      (arb. units)
  ----------------- ------------------ -------------------- --------------- -----------------
  C-C stretches,     1003.6             non-irradiated       9898.9
  breathing mode,                       10$^{13}$            9779.8          1.2
  aromatic ring                         10$^{14}$            9779.8          1.2
                                        10$^{15}$            9740.1          1.6
                                        10$^{16}$            9621.0          2.8
                                        10$^{17}$            9501.9          4.0
  CCH quadrant       1604.9             non-irradiated       2197.9
  stretches,                            10$^{13}$            2081.9          5.3
  aromatic ring                         10$^{14}$            2032.1          7.5
                                        10$^{15}$            2156.5          1.9
                                        10$^{16}$            2082.5          5.3
                                        10$^{17}$            2032.7          7.5
  ----------------- ------------------ -------------------- --------------- -----------------

  : Raman intensity values of green emitting scintillators at wavenumbers 1003.6 and 1604.9 cm$^{-1}$.[]{data-label="table:green int"}

  ----------------- ------------------ -------------------- --------------- -----------------
  Functional         Peak (cm$^{-1}$)   Fluence (n/cm$^2$)   Intensity       $\%$ Int. loss
  group                                                      (arb. units)
  ----------------- ------------------ -------------------- --------------- -----------------
  C-C stretches,     1003.6             non-irradiated       11550.4
  breathing mode,                       10$^{13}$            11294.4         2.2
  aromatic ring                         10$^{14}$            11422.4         1.1
                                        10$^{15}$            11208.9         2.9
                                        10$^{16}$            11294.4         2.2
                                        10$^{17}$            10824.8         6.3
  CCH quadrant       1604.9             non-irradiated       2053.9
  stretches,                            10$^{13}$            2021.7          1.6
  aromatic ring                         10$^{14}$            2045.8          0.4
                                        10$^{15}$            1997.6          2.7
                                        10$^{16}$            1925.1          6.3
                                        10$^{17}$            1884.9          8.2
  ----------------- ------------------ -------------------- --------------- -----------------

  : Raman intensity values of blue emitting samples at wavenumbers 1003.6 and 1604.9 cm$^{-1}$.[]{data-label="table:blue int"}

It has been observed that, for the green scintillators, the Raman peak features at frequencies 1165.8, 1574.7 and 1651.2 cm$^{-1}$ appear to be more sensitive to neutron radiation. These bands become less intense as the neutron fluence increases and finally disappear. They are most likely related to the fluor dopant 3HF (3-hydroxyflavone) [@velmo; @yu], although the primary peaks of 3HF do not match up that well with the peaks in the green scintillator spectrum [@teslova]. It could be that the fluors are a “mix” of 3HF derivatives. Clearly the neutron irradiation does not have a major impact on the polystyrene, since all the peaks are still present even for the higher fluences. The corresponding Lorentzian fits of the non-irradiated and higher-fluence spectra are shown in Figure \[fig:lorentz\] and the peak intensity values are reported in Table \[table:peaks int\]. [0.49]{} ![Lorentzian line-shape fitted function at regions 1140-1230 cm$^{-1}$ and 1560-1662 cm$^{-1}$, showing the disappearance of Raman modes at 1165.8, 1574.7 and 1651.2 cm$^{-1}$, which are present in the non-irradiated Raman spectrum of green scintillators.[]{data-label="fig:lorentz"}](lorea "fig:") [0.49]{} ![Lorentzian line-shape fitted function at regions 1140-1230 cm$^{-1}$ and 1560-1662 cm$^{-1}$, showing the disappearance of Raman modes at 1165.8, 1574.7 and 1651.2 cm$^{-1}$, which are present in the non-irradiated Raman spectrum of green scintillators.[]{data-label="fig:lorentz"}](loreb "fig:") Several possible explanations for these effects can be offered. The coupling interaction between the individual vibrations can result in the formation of a combined band [@spectro]. A slight broadening of the closest peaks due to the formation of new chemical bonds might also have contributed to the observed changes. Sonkawade [*[et al]{}*]{}. [@sonkawade] observed the formation of new bands, ascribed to cross-linking of the polymer chains, after neutron irradiation of polyaniline. Furthermore, chain scission and dehydrogenation of the polymer, as described by Evans [*[et al]{}*]{}. [@evans], could be the cause of the structural alterations. These observations support the findings reported in this paper.
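For clarity, the “$\%$ Int. loss” entries in the tables above are simply the relative decrease of a peak intensity with respect to the non-irradiated sample. A minimal sketch, using two values taken from the tables, is:

```python
# Sketch: how the "% Int. loss" columns are obtained from the measured peak intensities.
def pct_loss(i_nonirr, i_irr):
    return 100.0*(i_nonirr - i_irr)/i_nonirr

# 1003.6 cm^-1 aromatic-ring breathing mode, non-irradiated vs 10^17 n/cm^2
print(f'green: {pct_loss(9898.9, 9501.9):.1f} %')    # ~4.0 %
print(f'blue : {pct_loss(11550.4, 10824.8):.1f} %')  # ~6.3 %
```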
The luminescence properties of the fluors depend on their $\pi$-electron systems staying intact; if those systems are damaged, the fluorescence properties are adversely affected. At the moment there is not enough literature evidence to make a firm link between the actual peak and damage to the $\pi$-electron systems.

  ------------------ -------------------- ------------------------ ----------------
  Peak (cm$^{-1}$)   Fluence (n/cm$^2$)   Intensity (arb. units)   $\%$ Int. loss
  1165.8             non-irradiated       166.6                    
                     10$^{13}$            164.4                    1.3
                     10$^{14}$            107.8                    35.3
                     10$^{15}$            126.3                    24.2
                     10$^{16}$            130.7                    21.5
                     10$^{17}$            0                        100
  1574.7             non-irradiated       319.1                    
                     10$^{13}$            263.3                    17.5
                     10$^{14}$            131.7                    58.7
                     10$^{15}$            175.8                    44.9
                     10$^{16}$            111.8                    64.9
                     10$^{17}$            0                        100
  1651.2             non-irradiated       141.8                    
                     10$^{13}$            99.9                     29.5
                     10$^{14}$            72.5                     48.9
                     10$^{15}$            70.9                     50.0
                     10$^{16}$            19.3                     86.4
                     10$^{17}$            0                        100
  ------------------ -------------------- ------------------------ ----------------

  : Raman intensity values of green scintillators at wavenumbers 1165.8, 1574.7 and 1651.2 cm$^{-1}$.[]{data-label="table:peaks int"}

Fluorescence Spectroscopy Results and Analysis
----------------------------------------------

Fluorescence spectra of the plastic scintillator samples are shown in Figure \[fig:fluorescence\]. A two-peak feature is observed in Figure \[subfig:bf\] for the blue scintillators UPS-923A. The wavelength regions of 310-375 nm and 375-520 nm correlate with fluorescence of the polystyrene base and of the fluor dopants, respectively. The two-peak feature is observed since the fluorescence is predominantly from the benzene ring structures. However, the results for the green emitting scintillators in Figure \[subfig:af\] reveal only a single fluorescence peak around 529 nm. For all samples, the decrease in fluorescence intensity becomes prominent as the neutron fluence increases. There could be degradation of the aromatic benzene ring structures in the polymer base matrix and damage to the Förster energy transfer mechanisms due to C-H bond breaking within the benzene ring. This damage has been reported in the literature by Torrisi, 2002 [@torrisi], to be the cause of the reduction in luminescence yield. The Raman spectroscopy results reported in Tables \[table:green int\] and \[table:blue int\] affirm these changes, and are consistent with the fluorescence results. Damage to the benzene ring directly affects the scintillation process in the material.

[0.49]{} ![Fluorescence spectra for green scintillators and blue UPS-923A scintillators.[]{data-label="fig:fluorescence"}](bluef "fig:")

[0.49]{} ![](bluef2 "fig:")

Light Transmission Results and Analysis
---------------------------------------

Transmission spectroscopy was conducted using the Varian Cary 500 spectrophotometer, with the light transmission measured relative to transmission in air over a range of 300-800 nm. The results are shown in Figure \[fig:trans\]. All the non-irradiated blue scintillators UPS-923A and green scintillators have an absorption edge starting at around 410 nm, which completely falls off at around 400 nm. As the neutron fluence increases, the formation of an absorptive tint is observed: the absorption edge drops and shifts to longer wavelengths. This effect is ascribed to the production of free radicals induced by radiation damage. These free radicals form absorption centers within the samples, resulting in competition for light absorption and a loss of transparency in the scintillators.
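As a rough way of quantifying the shift of the absorption edge described above, one can locate, for each fluence, the wavelength at which the transmission first reaches a fixed fraction of its long-wavelength plateau. The sketch below is not the procedure used in this work: the 50% criterion, the file names and the two-column data layout are illustrative assumptions only.

```python
import numpy as np

def absorption_edge(wavelength_nm, transmission, threshold=0.5):
    """Estimate the absorption-edge wavelength as the shortest wavelength at which
    the transmission reaches `threshold` times its plateau value above 600 nm."""
    order = np.argsort(wavelength_nm)
    wl, tr = wavelength_nm[order], transmission[order]
    plateau = tr[wl > 600].mean()      # red/near-IR region assumed flat
    above = tr >= threshold * plateau
    return wl[np.argmax(above)]        # first index where the condition holds

# Hypothetical files: wavelength (nm), transmission relative to air, one file per fluence.
for label, fname in [("non-irradiated", "blue_T_0.txt"), ("1e17 n/cm2", "blue_T_1e17.txt")]:
    wl, tr = np.loadtxt(fname, unpack=True)
    print(label, "edge ~", round(absorption_edge(wl, tr), 1), "nm")
```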
[0.49]{} ![Light transmission spectra for green scintillators and blue scintillators UPS-923A.[]{data-label="fig:trans"}](greenT "fig:")

[0.49]{} ![](blueT "fig:")

From the light transmission plots, it is evident that the blue scintillators UPS-923A possess the highest light transmission and appear to be the most radiation tolerant compared to the green light emitters. Overall, the blue scintillator remains more transparent at all wavelengths but becomes opaque to blue light after damage, whereas the green scintillators lose transmission in the blue region but remain transparent above 550 nm.

Light Yield Results and Analysis
--------------------------------

The assessment of the light yielded by irradiated plastic scintillator samples was performed by testing their response to a $\beta$-electron source. The signal generated by the photomultiplier tube (PMT) was measured as a function of the radiation source position. Figure \[subfig:back\] illustrates a 2D mapping of the signal measured in the X and Y direction over a pair of samples. The position of the samples is approximated by the black and yellow boxes (color legend only available on electronic version), representing irradiated and non-irradiated samples, respectively.

[0.49]{} ![2D mapping of the PMT signal with source position, indicating signal regions corresponding to regions on the experimental set-up and plots of area cut with entries of high signal values \[color legend only available on electronic version\].](2D_map "fig:")

[0.49]{} ![](Area_cuts "fig:")

Before analysis, the background was subtracted from the rest of the mappings. The background region, marked with a red box in Figure \[subfig:back\], was selected on one of the 2D mapping surfaces where the signal is not contaminated by the signal from the samples. A pair consisting of an irradiated (10$^{13}$ n/cm$^2$) blue test sample and the non-irradiated blue reference sample was used. The mean signal value of 0.1309 (mean/entry) over the selected area was used as the background value. The area-cut plots shown in Figure \[subfig:area\] were selected from the 2D mappings using entries with high signal values. In order to analyse the light yield loss as a function of neutron fluence, the ratio of the corrected signals from the reference and test samples was calculated and plotted against the neutron fluence, as shown in Figure \[fig:rly\]. A light yield loss is observed in all irradiated samples as the neutron fluence increases, and it is prominent at a fluence of 10$^{17}$ n/cm$^2$. The light yield results are consistent with the fluorescence and light transmission results.

[1]{} ![Light yield against neutron fluence for the blue UPS-923A scintillator and the green scintillator.[]{data-label="fig:rly"}](relative "fig:"){height="8cm" width="14cm"}

The radiation damage experienced by plastic scintillators may be influenced by a number of factors such as the total absorbed dose and the dose rate. The influence of dose rate in predicting the lifetime of plastic scintillators has not yet been studied with neutron radiation. On the other hand, radiation damage studies have been conducted using proton beams provided by the 6 MV EN Tandem accelerator at iThemba LABS, Gauteng.
Several dose rates were used, and it was observed that plastic scintillators exposed to high dose rates degraded more than those exposed to low dose rates.

Conclusion
==========

The effects of neutron radiation on the structural and optical properties of blue scintillators UPS-923A and green scintillators were studied. According to the results obtained, irradiation to high neutron fluences with energies $E>1$ MeV marginally affected the structural properties and strongly affected the optical properties. The effects of neutron irradiation on the Raman spectra, fluorescence, light transmission and light yield at different fluences have been demonstrated. A discolouration in the samples emerged. It is prominent at a fluence of 10$^{16}$ n/cm$^2$ and continues with an increase in fluence. This is attributed to the formation of free radicals due to dehydrogenation induced by the neutron bombardment. These free radicals combine to produce cross-links which result in the formation of a three-dimensional network and cause discolouration. Raman spectroscopy revealed that the blue scintillators UPS-923A maintain their structural characteristics after irradiation, with a slight intensity loss in some species. However, for the green scintillators, it was found that the Raman peak features at frequencies 1165.8, 1574.7 and 1651.2 cm$^{-1}$ appear to be more radiation sensitive and die out with an increase in neutron fluence. Radiation damage decreases the transmittance of light, the luminescence intensity, and the light yield at relatively high fluences. The optical properties are altered by the presence of radiation-induced radicals. These excited species form when bonds break within the polymer. Free radicals initiate chemical reactions that alter the structure of the polymer backbone of the plastic. More effects of neutron damage are observed as irradiation progresses to high fluences. Furthermore, the degradation of the polymer base matrix, which results in damage to the $\pi$-electron structure of the benzene ring, largely contributes to the observed modifications.

Acknowledgements
================

The authors are grateful to the staff of the Institute for Scintillation Materials (ISMA) in Kharkov, Ukraine for providing plastic scintillators for this study, the technical team at the IBR-2 reactor of the Frank Laboratory of Neutron Physics in Dubna, Russia for providing neutron irradiation, the University of the Witwatersrand for making the equipment available for analysis, as well as Dr. Oleg Solovianov for his help with the light yield measurements at CERN. This project was funded by BIUST, the SA-CERN consortium, NRF-SA and SA-JINR.

References
==========

[00]{}

G. F. Knoll, Radiation Detection and Measurement, 3$^{rd}$ ed., John Wiley $\&$ Sons Inc, Michigan, 1999 (Chapter 8, pp. 220-222).

M. Chen, Queen’s University, PHYS 352: Measurement, Instrumentation and Experiment Design \[Online\]. Available: http://www.physics.queensu.ca/phys352/lect19.pdf, 2011.

A. Vasilescu, Overview on the radiation environment in ATLAS and CMS SCT and the irradiation facilities used for damage tests. ROSE/TN/97-3, 1997.

R. G. Sonkawade [*et al.*]{}, Effects of gamma ray and neutron radiation on polyaniline conducting polymer. Indian Journal of Pure $\&$ Applied Physics 48 (2010) 453-456.

H. Jivan [*et al.*]{}, Radiation hardness of plastic scintillators for the Tile Calorimeter of the ATLAS detector, Proceedings of SAIP2014, University of Johannesburg, 978-0-620-65391-6, 2014, pp. 199-205.

H.
Jivan [*et al.*]{}, Radiation hardness of plastic scintillators for the Tile Calorimeter of the ATLAS detector, J. Phys.: Conf. Ser. 645 (1) (2015), https://doi.org/10.1088/1742-6596/645/1/012019.

H. Jivan [*et al.*]{}, Radiation damage effects on the optical properties of plastic scintillators. Nucl. Instrum. Methods Phys. Res. Section B Beam Interactions Mater. At. 409 (2017) 224-228.

P. Bisanti, F. Borsa, V. Tognetti (Eds.), Magnetic properties of matter, World Scientific, Turin, 1986, pp. 385-408.

Institute for Scintillation Materials (ISMA), Available: <http://www.isma.kharkov.ua/eng/>.

E.S. Velmozhnaya [*et al.*]{}, The new radiation-hard plastic scintillators with diffusion enhancers and 3-hydroxyflavone derivatives, Funct. Mater. 23 (4) (2006) 650-656.

Yu.A. Gurkalenko [*et al.*]{}, The plastic scintillator activated with fluorinated 3-hydroxyflavone, Funct. Mater. 24 (2) (2017) 244-249.

M. Bulavin [*et al.*]{}, Irradiation facility at the IBR-2 reactor for investigation of material radiation hardness, Nucl. Instrum. Methods B 343 (2015) 26-29.

E. P. Shabalin [*et al.*]{}, Spectrum and density of neutron flux in the irradiation beam line no. 3 of the IBR-2 reactor, Phys. Particles Nucl. Lett. 12 (2) (2015) 336-343.

R. M. Silverstein [*et al.*]{}, Spectrometric identification of organic compounds, 7th ed., John Wiley $\&$ Sons Inc., New York, 2005 (Chapter 2, p. 82).

D. B. Menezess [*et al.*]{}, Glass transition of polystyrene (PS) studied by Raman spectroscopic investigation of its phenyl functional groups, Mater. Res. Express 4 (1) (2017) 015-303.

L. Torrisi, Radiation damage in polyvinyltoluene (PVT), Radiation Phys. Chem. 63 (2002) 89-92.

T. Teslova [*et al.*]{}, Raman and surface-enhanced Raman spectra of flavone and several hydroxy derivatives, J. Raman Spectroscopy 38 (7) (2007) 802-818.

D. Evans [*et al.*]{}, Irradiation of plastics: damage and gas evolution, MRS Bull. 22 (1997) 36-40.
---
abstract: 'Two-dimensional (2D) vector matter waves in the form of soliton-vortex and vortex-vortex pairs are investigated for the case of attractive intracomponent interaction in two-component Bose-Einstein condensates. Both attractive and repulsive intercomponent interactions are considered. By means of a linear stability analysis we show that soliton-vortex pairs can be stable in some regions of parameters while vortex-vortex pairs turn out to be always unstable. The results are confirmed by direct numerical simulations of the 2D coupled Gross-Pitaevskii equations.'
author:
- 'A. I. Yakimenko$^{1,2}$, Yu. A. Zaliznyak$^1$, and V. M. Lashkin$^{1}$'
title: 'Two-dimensional nonlinear vector states in Bose-Einstein condensates'
---

Introduction
============

Multicomponent Bose-Einstein condensates (BECs) have been the subject of growing interest in recent years as they open intriguing possibilities for a number of important physical applications, including coherent storage and processing of optical fields [@op1; @op2], quantum simulation [@qubit], quantum interferometry, *etc*. Experimentally, multicomponent BECs can be realized by simultaneous trapping of different species of atoms [@DifAtom1; @DifAtom2] or of atoms of the same isotope in different hyperfine states. Magnetic trapping freezes the spin dynamics [@MagTrap1; @MagTrap2], while in optical dipole traps all hyperfine states are liberated (spinor BECs) [@OpTrap1]. Theoretical models of multicomponent BECs in the mean-field approximation are formulated in the framework of coupled Gross-Pitaevskii (GP) equations [@Dalfovo], and the order parameter of multicomponent BECs is described by a multicomponent vector. As in the scalar condensate case, various types of nonlinear matter waves have been predicted in multicomponent BECs. They include, in addition to ground-state solutions [@ground1; @ground2; @ground3], structures which are peculiar to multicomponent BECs only, such as bound states of dark-bright [@darkbright] and dark-dark [@darkdark], dark-gray, bright-gray, bright-antidark and dark-antidark [@darkgrey] complexes of solitary waves, domain wall solitons [@wall1; @wall2; @wall3], soliton molecules [@Molecul], and symbiotic solitons [@symbiotic]. Two-dimensional (2D) and three-dimensional (3D) vector solitons and vortices have been considered in Refs. [@Skryabin; @Garcia1; @Garcia2] for the case of repulsive condensates. Attractive intracomponent interactions have, on the other hand, received less attention, and only one-dimensional vector structures have been studied so far [@attract1D; @Dutton]. The two-dimensional and 3D cases, however, demand special attention since the phenomenon of collapse is possible in attractive BECs. Interactions between the atoms in the same and in different states can be controlled (including changing the sign of the interactions) via a Feshbach resonance. Theoretical and experimental studies have shown that the intercomponent interaction plays a crucial role in the dynamics of nonlinear structures in multicomponent BECs. Recently, two-component BECs with tunable intercomponent interaction were realized experimentally [@exp1; @exp2]. Note that in nonlinear optics, where similar model equations (without the trapping potential) are used to describe soliton-induced waveguides [@Kivshar], the nonlinear coefficients are always of the same sign.
The aim of this paper is to study 2D nonlinear localized vector structures in the form of soliton-vortex and vortex-vortex pairs in a binary mixture of disc-shaped BECs with attractive intracomponent and attractive or repulsive intercomponent interactions. Then, by means of a linear stability analysis, we investigate the stability of these structures and show that pairs of soliton and single-charged vortex can be stable both for attractive and repulsive interactions between different components. Vortex-vortex pairs turn out to be always unstable. The results are confirmed by direct numerical simulations of the 2D coupled Gross-Pitaevskii equations. The paper is organized as follows. In Sec.  \[sec2\] we formulate a model and present basic equations. The cases of attractive and repulsive intercomponent interactions are considered in Secs.  \[sec3\] and  \[sec4\] respectively. The conclusions are made in Sec.  \[sec5\]. Basic equations {#sec2} =============== We consider a binary mixture of BECs, consisting of two different spin states of the same isotope. We assume that the nonlinear interactions are weak relative to the confinement in the longitudinal (along $z$-axis) direction. In this case, the BEC is a “disk-shaped” one, and the GP equations take an effectively 2D form $$\label{GP1} i\hbar\frac{\partial \Psi_1}{\partial t}=\left[-\frac{\hbar^2}{2M}\nabla^2 +V_\mathrm{ext}(\mathbf{r})+g_{11}|\Psi_1|^2+g_{12}|\Psi_2|^2\right]\Psi_1,$$ $$\label{GP2} i\hbar\frac{\partial \Psi_2}{\partial t}=\left[-\frac{\hbar^2}{2M}\nabla^2 +V_\mathrm{ext}(\mathbf{r})+g_{21}|\Psi_1|^2+g_{22}|\Psi_2|^2\right]\Psi_2,$$ where $M$ is the mass of the atoms, $V_\mathrm{ext}(\mathbf{r})=M\omega_{\perp}^{2}(x^{2}+y^{2})/2$ is the harmonic external trapping potential with frequency $\omega_\perp$ and $\nabla^2=\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}$ is the 2D Laplacian. Atom-atom interactions are characterized by the coupling coefficients $g_{ij}=4\pi\hbar^2 a_{ij}/M$, where $a_{ij}=a_{ji}$ are the $s$-wave scattering lengths for binary collisions between atoms in internal states $|i\rangle$ and $|j\rangle$. Note that $g_{11}=g_{22}$ and $g_{12}=g_{21}$. Introducing dimensionless variables $(x,y)\to(x/l,y/l)$, $t\to \omega_{\perp} t$, $\Psi_j\to\Psi_j\sqrt{\hbar \omega_\perp/(2|g_{11}|)}$, $B_{12}=-g_{12}/|g_{11}|$, $B_{11}=-g_{11}/|g_{11}|$, where $l=\sqrt{\hbar/(M\omega_{\perp})}$, one can rewrite equations (\[GP1\]) and (\[GP2\]) as $$\label{main1} i\frac{\partial \Psi_1}{\partial t}=\left[-\nabla^2 +x^{2}+y^{2}-|\Psi_1|^2-B_{12}|\Psi_2|^2\right]\Psi_1,$$ $$\label{main2} i\frac{\partial \Psi_2}{\partial t}=\left[-\nabla^2 +x^{2}+y^{2}-B_{12}|\Psi_1|^2-|\Psi_2|^2\right]\Psi_2.$$ In what follows we consider attractive interaction between atoms of the same species and set $B_{11}=B_{22}=1$. We neglect the spin dynamics (assuming magnetic trapping) so that the interaction conserves the total number $N_{j}$ ($j=1,2$) of particles of each component $$\label{N} N_j=\int{|\Psi_j|^2d^2\textbf{r}},$$ and energy $$\label{E} E=E_1+E_2-\frac12 B_{12}\int{|\Psi_1|^2|\Psi_2|^2d^2\textbf{r}},$$ where $$E_j=\int\left\{|\nabla\Psi_j|^2- \frac12|\Psi_j|^4+(x^{2}+y^{2})|\Psi_j|^2\right\} d^2\textbf{r}.$$ Attractive intercomponent interaction {#sec3} ===================================== Stationary soliton-vortex pairs ------------------------------- We look for stationary solutions of Eqs. 
(\[main1\]) and (\[main2\]) in the form $$\label{Psi} \Psi_j(\mathbf{r},t)=\psi_j(r)e^{-i\mu_j t+i m_j\varphi}$$ where $m_j$ is the topological charge (vorticity) of the $j$-th component, $\mu_j$ is the chemical potential, $r=\sqrt{x^2+y^2}$ and $\varphi$ is the polar angle. Substituting Eq. (\[Psi\]) into Eqs. (\[main1\]) and (\[main2\]), we have $$\label{stat1} \mu_1\psi_1+\Delta_r^{(m_1)}\psi_1-r^2\psi_1+(|\psi_1|^2+B_{12}|\psi_2|^2)\psi_1=0,$$ $$\label{stat2} \mu_2\psi_2+\Delta_r^{(m_2)}\psi_2-r^2\psi_2+(B_{12}|\psi_1|^2+|\psi_2|^2)\psi_2=0,$$ where $\Delta_r^{(m)}=d^2/d r^2+(1/r)(d/d r)-m^2/r^2$. As was pointed out the inter-component interaction may be varied over wide range, however, the strength of the intercomponent interaction is weaker than the intracomponent counterpart in most experiments with two-component BECs. Thus $B_{12}$ can be considered as the free parameter from the range $-1\le B_{12}\le 1$. We find that the qualitative behavior of vector state characteristics does not change when varying $B_{12}$. To make it definite we further fixed the strength of intercomponent interaction at $B_{12}=\pm0.5$ for attractive and repulsive cases respectively. In this section we consider the case of attractive interatomic interaction $B_{12}>0$. ![Normalized numbers of atoms $N_{1}$ and $N_{2}$ of each component versus chemical potential $\mu_2$ at fixed $\mu_1$ for vector soliton-vortex pair ($m_1=0$, $m_2=1$) (attractive intercomponent interaction).[]{data-label="EDDS_N1N2"}](Fig1.eps){width="3.4in"} ![Existence domain for vector state $m_1=0$ and $m_2=1$ on the $(\mu_1,\mu_2)$ plane for attractive intercomponent interaction $B_{12}=0.5$. Open circles correspond to numerically found existence boundaries. Dashed curves outline the variational predictions. At upper and lower boundaries of the existence domain the vector states degenerate into the pure scalar states with $N_1=0$ and $N_2=0$ respectively. The solid line with triangles indicates the stability boundary.[]{data-label="Figure1"}](Fig2.eps){width="3.4in"} ![image](Fig3.eps){width="6.8in"} ![Typical dependence of maximum growth rates for $L=1$ and $L=2$ azimuthal modes as functions of $\mu_2$ at fixed $\mu_1$, (here $\mu_1=-2$), $B_{12}=0.5$. Note that the widest instability domain has $L=2$ mode.[]{data-label="Figure3"}](Fig4.eps){width="3.4in"} ![Snapshots of unstable evolution for vector pair $m_1=0$, $m_2=1$, $\mu_1=-2$, $\mu_2=0.5$, $B_{12} = 0.5$. The absolute values of $|\psi_1|$ and $|\psi_2|$ are shown in grayscale: the darker regions correspond to higher amplitudes.[]{data-label="Evolution1"}](Fig5.eps){width="3.4in"} First, we present a variational analysis. Stationary solutions of Eqs. (\[stat1\]) and (\[stat2\]) in the form Eq. (\[Psi\]) realize the extremum of the energy functional $E$ under the fixed number of particles $N_{1}$ and $N_{2}$. We take trial functions $\psi_{j}$ in the form $$\label{trial} \psi_j(r)=h_j\left(\frac{r}{a_j}\right)^{|m_j|}\exp{\left(- \frac{r^2}{2a_j^2}+im_j\varphi\right)},$$ which correspond to the localized state $(m_{1},m_{2})$ with vorticities $m_{1}$ and $m_{2}$ for $|1\rangle$ and $|2\rangle$ components respectively, $a_j$ and $h_j$ are unknown parameters to be determined by the variational procedure. The parameters $h_1$ and $h_2$ can be excluded using the normalization conditions (\[N\]), which yield the relation $N_j= m_j!\pi h_j^2a_j^2$. Thus, the only variational parameters are $a_1$ and $a_2$. Substituting Eq. (\[trial\]) into Eq. 
(\[E\]), we get for the functional $E$ $$\label{E}\nonumber E=E_1+E_2+E_{12},$$ where $$E_j=\frac{N_j}{ a_j^2}\left(m_j+1-\frac{N_j^2(2m_j)!}{\pi4^{1+m_j}(m_j!)^2}\right)+N_ja_j^2(m_j+1),$$ and $$E_{12}=-\frac12B_{12}N_1N_2\frac{a_1^{2|m_1|}a_2^{2|m_2|} }{(a_1^2+a_2^2)^{1+|m_1|+|m_2|}}\frac{(m_1+m_2)!}{\pi m_1!m_2!}.$$ By solving the variational equations $\partial E/\partial a_j=0$ at fixed $N_j$ one finds the parameters of approximate solutions with different $m_{1}$ and $m_{2}$. We will focus on one particular configuration with $m_{1}=0$ and $m_{2}=1$ which corresponds to the pair soliton-vortex. The results of the variational analysis for this case and $B_{12}>0$ are given in Fig. \[EDDS\_N1N2\] and Fig. \[Figure1\] by dashed lines. These results were the starting point for numerical analysis. The equations (\[stat1\]) and (\[stat2\]) were discretized on the equidistant radial grid and the resulting system was solved by the stabilized iterative procedure similar to that described in Ref. [@Petviashvili86]. The appropriate initial guesses were based on the variational approximation. The numerical results are shown in Fig. \[EDDS\_N1N2\] and Fig. \[Figure1\] by open circles. It is seen that the variational results exhibit a good agreement with numerical calculations. The stationary vector states form two-parameter family with parameters $\mu_1$ and $\mu_2$. In the Fig. \[EDDS\_N1N2\] the number of atoms for each component of the stationary vector state $(0,1)$ is represented as a function of the chemical potential $\mu_2$ at fixed value of $\mu_1$. The existence domain is bounded and its boundaries are determined by the condition $\left.\mu_2\right|_{N_1=0}\le\mu_2\le\left.\mu_2\right|_{N_2=0}$. For each value of $\mu_1$, where the solution exists, a dependence similar to one presented in Fig. \[EDDS\_N1N2\] can be found. This allows one to reconstruct an existence domain of the vector pair $(0,1)$ on ($\mu_1$, $\mu_2$) plane, which is shown in the Fig. \[Figure1\]. As is known, (see e.g. [@MihalacheMalomedPRA06]) for the two-dimensional scalar solitary structures in BEC with attraction, the chemical potential is bounded from above $\mu<\mu_\mathrm{{max}}=2(m+1)$, where $m$ is the topological charge, and $N\to 0$ when $\mu\to\mu_\mathrm{{max}}$. One can see from Fig. \[Figure1\] that the value of $\mu_\mathrm{{max}}$ is reduced in the presence of the second component if the intercomponent interaction is attractive ($B_{12}>0$). Both components vanish at the point $(\mu_1, \mu_2)=(2m_1+2,2m_2+2)$. Examples of soliton-vortex $(0,1)$ radial profiles are given in Fig. \[Figure2\]. It is interesting to note that when the amplitude of the vortex component is sufficiently high, a soliton profile develops a noticeable plateau. Such a deviation from gaussian-like shape leads to comparable divergence of numerical and variational dependencies in Fig. \[EDDS\_N1N2\]. Other vector states as $(0,2)$, $(-1,1)$, $(-2,2)$ *etc.* were also found, but they all turn out to be always unstable (see below). ![The same as in Fig. \[Figure1\] for repulsive intercomponent interaction ($B_{12}=-0.5$).[]{data-label="Figure4"}](Fig6.eps){width="3.4in"} ![Numerically found solutions of Eqs. (\[stat1\]) and (\[stat2\]) for repulsive intercomponent interactions at fixed $\mu_1=-2$ from stable (right panel) and unstable (left panel) regions.[]{data-label="Figure5"}](Fig7.eps){width="3.4in"} ![Same as in Fig. \[Figure3\] for $B_{12}=-0.5$. 
Note that in contrast to the case of attractive intercomponent interaction the stability threshold is determined by $L=1$ mode.[]{data-label="Figure6"}](Fig8.eps){width="3.4in"} ![Development of snake-type instability ($L=1$) for vector pair $m_1=0$, $m_2=1$, $\mu_1=-2$, $\mu_2=3.5$. Intercomponent interaction is repulsive, $B_{12} = -0.5$. The absolute values of $|\psi_1|$ and $|\psi_2|$ are shown for different times.[]{data-label="Evolution2"}](Fig9.eps){width="3.4in"} Stability of stationary solutions --------------------------------- The stability of the vector pairs can be investigated by the analysis of small perturbations of the stationary states. We take the wave functions in the form $$\Psi_j(\textbf{r} ,t)= \left\{\psi_j(r)+\varepsilon_j(\textbf{r} ,t)\right\}e^{-i\mu_j t+im_j\varphi},$$ where the stationary solutions $\psi_j(r)$ are perturbed by small perturbations $\varepsilon_j(\textbf{r} ,t)$, and linearize Eqs. (\[main1\]) and (\[main2\]) with respect to $\varepsilon_j$. The basic idea of such a linear stability analysis is to represent a perturbation as the superposition of the modes with different azimuthal symmetry. Since the perturbations are assumed to be small, stability of each linear mode can be studied independently. Presenting the perturbations in the form $$\varepsilon_j(\textbf{r} ,t)=u_j(r)e^{i\omega t+i L\varphi} +v_j^*(r)e^{-i\omega^* t-i L\varphi},$$ we get the following linear eigenvalue problem $$\label{eq:lineareigen} \left(\begin{array}{cccc} \hat L_{12}^{(+)} & \alpha_1 & \beta_{12} & \beta_{12} \\ -\alpha_1 & -\hat L_{12}^{(-)} & -\beta_{12} & -\beta_{12} \\ \beta_{12} & \beta_{12} & \hat L_{21}^{(+)} & \alpha_2\\ -\beta_{12} & -\beta_{12} & \alpha_2 & -\hat L_{21}^{(-)} \\ \end{array}\right) U= \omega U,$$ where $U=(u_1,v_1,u_2,v_2)$ is the vector eigenmode and $\omega$ is an (generally, complex) eigenvalue, $\alpha_j=\psi_j^2$, $\beta_{12}=B_{12}\psi_1\psi_2$, $\hat L_{ij}^{(\pm)}=\mu_i+\Delta_r^{(m_i\pm L)}-r^2+2\psi_i^2+B_{ij}\psi_j^2$. An integer $L$ determines the number of the azimuthal mode. Nonzero imaginary parts in $\omega$ imply the instability of the state $|\psi_{1},\psi_{2}\rangle$ with $\gamma_{L}=\max |\mathrm{Im}\,\omega|$ being the instability growth rate. Employing a finite difference approximation, we numerically solved the eigenvalue problem (\[eq:lineareigen\]). Typical dependencies of the growth rate $\gamma_{L}$ of the azimuthal perturbation modes $L=1$ and $L=2$ on $\mu_2$ at fixed $\mu_1$ are shown in Fig. \[Figure3\] for the state $(0,1)$. Note that above some critical value of $\mu_2$ all growth rates vanish granting the stability of the vector pair $(0,1)$ against the azimuthal perturbations. For the case of attractive intercomponent interaction $B_{12}>0$ the stability boundary is determined by $L=2$ mode. Similar dependencies of the growth rate $\gamma_{L}$ on $\mu_2$ have been obtained for other values of $\mu_1$. We have performed the numerical calculation of $\gamma_{L}$ for values of the azimuthal index up to $L=5$. In all studied cases azimuthal stability is always defined by vanishing of the growth rate of $L=2$ mode. The corresponding stability threshold is given in Fig. \[Figure1\] by filled triangles. Note that in the degenerate scalar case $\psi_{1}=0$ for single-charge vortex $m_2=1$ the stability threshold $\mu_2=2.552$ coincides with the value obtained in Ref. [@MihalacheMalomedPRA06]. 
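For orientation, a minimal numerical sketch (not the authors' code) of how such growth rates can be computed is given below: the radial operators $\Delta_r^{(m)}$ are discretized with central differences on a uniform grid staggered away from $r=0$, the block operator of Eq. (\[eq:lineareigen\]) is assembled exactly as printed, and $\gamma_L = \max|\mathrm{Im}\,\omega|$ is read off from its spectrum. The stationary profiles $\psi_{1,2}$ are replaced here by the Gaussian/vortex trial functions of Eq. (\[trial\]) with made-up amplitudes and widths, and the boundary treatment is deliberately crude, so the numbers produced are purely illustrative.

```python
import numpy as np

def radial_laplacian(r, m):
    """Central-difference matrix for d^2/dr^2 + (1/r) d/dr - m^2/r^2 on a uniform grid."""
    N, dr = len(r), r[1] - r[0]
    D = np.zeros((N, N))
    for j in range(N):
        D[j, j] = -2.0 / dr**2 - m**2 / r[j]**2
        if j > 0:
            D[j, j - 1] = 1.0 / dr**2 - 1.0 / (2 * dr * r[j])
        if j < N - 1:
            D[j, j + 1] = 1.0 / dr**2 + 1.0 / (2 * dr * r[j])
    return D

def growth_rate(psi1, psi2, r, mu1, mu2, m1, m2, B12, L):
    """Assemble the block operator of the linear eigenvalue problem and return max |Im w|."""
    N = len(r)
    R2 = np.diag(r**2)
    a1, a2 = np.diag(psi1**2), np.diag(psi2**2)
    b = np.diag(B12 * psi1 * psi2)
    def Lop(mu_i, m_i, psi_i2, Bpsi_j2, sign):
        return (mu_i * np.eye(N) + radial_laplacian(r, m_i + sign * L)
                - R2 + 2 * np.diag(psi_i2) + np.diag(Bpsi_j2))
    L12p = Lop(mu1, m1, psi1**2, B12 * psi2**2, +1)
    L12m = Lop(mu1, m1, psi1**2, B12 * psi2**2, -1)
    L21p = Lop(mu2, m2, psi2**2, B12 * psi1**2, +1)
    L21m = Lop(mu2, m2, psi2**2, B12 * psi1**2, -1)
    M = np.block([
        [ L12p,  a1,    b,     b   ],
        [-a1,   -L12m, -b,    -b   ],
        [ b,     b,     L21p,  a2  ],
        [-b,    -b,     a2,   -L21m],
    ])
    w = np.linalg.eigvals(M)
    return np.abs(w.imag).max()

# Toy input: Gaussian (m1 = 0) and vortex (m2 = 1) trial profiles as placeholders
# for the true stationary solutions; all amplitudes and widths are made-up numbers.
r = (np.arange(400) + 0.5) * 0.05
psi1 = 2.0 * np.exp(-r**2 / (2 * 1.0**2))
psi2 = 1.5 * (r / 1.5) * np.exp(-r**2 / (2 * 1.5**2))
for L in (1, 2):
    print("L =", L, " gamma_L ~", growth_rate(psi1, psi2, r, -2.0, 0.5, 0, 1, 0.5, L))
```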
For the vector states $(0,2)$, $(-1,1)$, $(-2,2)$, the growth rates are found to be nonzero in the entire existence domain and these pairs appear to be always unstable. To verify the results of the linear analysis, we solved numerically the dynamical equations (\[main1\]) and (\[main2\]) initialized with our computed vector solutions. Numerical integration was performed on a rectangular Cartesian grid with a resolution of $512^2$ by means of the standard split-step Fourier technique (for details see e.g. [@KivsharAgrawal]). In full agreement with the linear stability analysis, the states $(0,1)$ perturbed by azimuthal perturbations with different $L$ survive over very long times provided that the corresponding $\mu_{1}$ and $\mu_{2}$ belong to the stability region. On the other hand, Fig. \[Evolution1\] shows the temporal development of the azimuthal $L=2$ instability for the vector state $(0,1)$ with $\mu_{1}=-2$ and $\mu_{2}=0.5$ (i.e. in the instability region). One can see the two humps which appear on the initially smooth ring-like intensity distribution. Further, the vortex profile is deformed, and the vortex and the fundamental soliton both split into two filaments which then collapse. Note that the unstable $(0,1)$ pair in BECs with repulsive intracomponent interaction [@Garcia1] does not collapse but undergoes complex dynamics in which one component is trapped by the other.

Repulsive intercomponent interaction {#sec4}
====================================

In this section we present results for the case of repulsive interactions $B_{12}<0$ between different components. The existence domain and the stable and unstable branches on the $(\mu_1, \mu_2)$ plane for the state $(0,1)$ are shown in Fig. \[Figure4\]. It is seen that the repulsive intercomponent interaction leads to an increase of the maximum chemical potential $\mu_\mathrm{{max}}$ for each component compared to the value $\mu=4$ of the pure scalar solution. The stability properties of the vector states were investigated by the linear stability analysis described in the preceding section. The states $(0,2)$, $(-1,1)$, $(-2,2)$ are always unstable, as in the case of $B_{12}>0$. Figure \[Figure5\] shows examples of the radial profiles of unstable and stable $(0,1)$ states. In Fig. \[Figure6\] we plot the growth rates $\gamma_{L}$ as functions of $\mu_{2}$ at fixed $\mu_{1}$ for the azimuthal perturbations with $L=1$ and $L=2$. The growth rates vanish if $\mu_{2}$ exceeds some critical value. In contrast to the attractive intercomponent interaction case, it is seen that the stability boundary is controlled by the elimination of the snake-type instability (i.e. the azimuthal perturbation with $L=1$). Indeed, the repulsion between the components naturally leads to a spatial separation of the condensate species. This relative motion destroys the vector state, as seen from Fig. \[Evolution2\].

Conclusions {#sec5}
===========

In conclusion, we have analyzed the stability of 2D vector matter waves in the form of soliton-vortex and vortex-vortex pairs in two-component Bose-Einstein condensates with attractive interactions between atoms of the same species. Both attractive and repulsive intercomponent interactions are considered. We have performed a linear stability analysis and showed that, in both cases, only soliton-vortex pairs $(0,1)$ can be stable in some regions of parameters. Namely, at a fixed number of atoms in the soliton component, the number of atoms in the vortex component should be less than some critical value.
No stabilization regions have been found for vortex-vortex pairs and they turn out to be always unstable. The results of the linear analysis have been confirmed by direct numerical simulations of the 2D coupled Gross-Pitaevskii equations. [99]{} Z. Dutton and L. V. Hau, Phys. Rev. A [**70**]{}, 053831 (2004). C. Liu, Z. Dutton, C. H. Behroozi, and L. V. Hau, Nature (London) 409, 490 (2001); D. F. Phillips, A. Fleischhauer, A. Mair, R. L. Walsworth, and M. D. Lukin, Phys. Rev. Lett. [**86**]{}, 783 (2001). K. T. Kapale and J. P. Dowling, Phys. Rev. Lett. [**95**]{}, 173601 (2005). G. Modugno, G. Ferrari, G. Roati, R.J. Brecha, A. Simoni, and M. Inguscio, Science [**294**]{}, 1320 (2001); G. Modugno, M. Modugno, F. Riboli, G. Roati, and M. Inguscio, Phys. Rev. Lett. [**89**]{}, 190404 (2002); G. Modugno, G. Roati, F. Riboli, F. Ferlaino, R.J. Brecha, and M. Inguscio, Science [**297**]{}, 2240 (2002). M. Mudrich, S. Kraft, K. Singer, R. Grimm, A. Mosk, and M. Weidemuller, Phys. Rev. Lett. [**88**]{}, 253001 (2002). D.S. Hall, M.R. Matthews, J.R. Ensher, C.E. Wieman, and E.A. Cornell, Phys. Rev. Lett. [**81**]{}, 1539 (1998). P. Maddaloni, M. Modugno, C. Fort, F. Minardi, and M. Inguscio, Phys. Rev. Lett. [**85**]{}, 2413 (2000). M. Barrett, J. Sauer, and M. S. Chapman, Phys. Rev. Lett. [**87**]{}, 010404 (2001). F. Dalfovo, S. Giorgini, L. P. Pitaevskii and S. Stringari, Rev. Mod. Phys. [**71**]{}, 463 (1999). H. Pu and N.P. Bigelow, Phys. Rev. Lett. [**80**]{}, 1130 (1998). T.-L. Ho and V.B. Shenoy, Phys. Rev. Lett. [**77**]{}, 3276 (1996). B.D. Esry, C.H. Greene, J.P. Burke, Jr., and J.L. Bohn, Phys. Rev. Lett. [**78**]{}, 3594 (1997). B. P. Anderson, P. C. Haljan, C. A. Regal, D. L. Feder, L. A. Collins, C. W. Clark, and E. A. Cornell, Phys. Rev. Lett. [**86**]{}, 2926 (2001); Th. Busch and J. R. Anglin, ibid. [**87**]{}, 010401 (2001). P. Öhberg and L. Santos, Phys. Rev. Lett. [**86**]{}, 2918 (2001). P.G. Kevrekidis, H.E. Nistazakis, D.J. Frantzeskakis, B.A. Malomed, and R. Carretero-González, Eur. Phys. J. D [**28**]{}, 181 (2004). S. Coen, and M. Haelterman, Phys. Rev. Lett. [**87**]{}, 140401 (2001). K. Kasamatsu, and M. Tsubota, Phys. Rev. Lett. [**93**]{}, 100402 (2004). P.G. Kevrekidis, H. Susanto, R. Carretero-González, B.A. Malomed, and D.J. Frantzeskakis, Phys. Rev. E [**72**]{}, 066604 (2005). V. M. Pérez-García, V. Vekslerchik, arXiv:nlin/0209036v1 (2002). V. M. Pérez-García and J. B. Beitia, Phys. Rev. A [**72**]{}, 033620 (2005). D. V. Skryabin, Phys. Rev. A [**63**]{}, 013602 (2000). J. J. García-Ripoll and V. M. Pérez-García, Phys. Rev. Lett. [**64**]{}, 4264 (2000). J. J. García-Ripoll and V. M. Pérez-García, Phys. Rev. A [**62**]{}, 033601 (2000). L. Li, B. A. Malomed, D. Mihalache, and W. M. Liu, Phys. Rev. E [**73**]{}, 066610 (2006). Z. Dutton and C. W. Clark, Phys. Rev. E [**71**]{}, 063618 (2005). G. Thalhammer *et al.*, Phys. Rev. Lett. [**100**]{}, 210402 (2008). S. B. Papp, J. M. Pino, and C. Wieman, Phys. Rev. Lett. [**101**]{}, 040402 (2008). Yu. S. Kivshar and G. Agrawal, [*Optical Solitons: From Fibers to Photonic Crystals*]{} (Academic Press, San Diego, 2003). V.I. Petviashvili and V.V. Yan’kov, Rev. Plasma Phys. Vol. 14, Ed. B.B. Kadomtsev, (Consultants Bureau, New York, 1989), p 1. D. Mihalache, D. Mazilu, B.A. Malomed, F. Lederer, Phys. Rev. A [**73**]{}, 043615 (2006). Yu. S. Kivshar and G. Agrawal, [*Optical Solitons: From Fibers to Photonic Crystals*]{} (Academic Press, San Diego, 1995).
--- abstract: 'We define a stratification of Deligne–Lusztig varieties and their parahoric analogues which we call the Drinfeld stratification. In the setting of inner forms of $\operatorname{GL}_n$, we study the cohomology of these strata and give a complete description of the unique closed stratum. We state precise conjectures on the representation-theoretic behavior of the stratification. We expect this stratification to play a central role in the investigation of geometric constructions of representations of $p$-adic groups.' address: - | Department of Mathematics\ MIT\ Cambridge, MA 02139, USA - | Mathematisches Institut\ Universität Bonn\ Endenicher Allee 60\ 53115 Bonn, Germany author: - 'Charlotte Chan and Alexander B. Ivanov' bibliography: - 'bib\_ADLV\_CC.bib' title: 'The Drinfeld stratification for $\operatorname{GL}_n$' --- Introduction ============ Like the classical upper half plane, its nonarchimedean analogue—the Drinfeld upper half plane—appears naturally in a wide range of number theoretic, representation theoretic, and algebro-geometric contexts. For finite fields, the $\ell$-adic étale cohomology of the Drinfeld upper half plane ${\mathbb P}^1(\overline {\mathbb F}_q) \smallsetminus {\mathbb P}^1({\mathbb F}_q)$ with coefficients in nontrivial rank-1 local systems, is known to realize the cuspidal irreducible representations of $\operatorname{GL}_2({\mathbb F}_q)$. One can generalize this to $\operatorname{GL}_n({\mathbb F}_q)$ by projectivizing the complement of all rational sub-vector spaces of $V = \overline {\mathbb F}_q^{\oplus n}$. This is the Drinfeld upper half space for ${\mathbb F}_q$. In this paper, we consider a stratification of the Drinfeld upper half space induced by “intermediate” Drinfeld upper half spaces of smaller dimension sitting inside ${\mathbb P}(V)$. In earlier work [@CI_ADLV], we proved that for inner forms of $\operatorname{GL}_n$, Lusztig’s *loop Deligne–Lusztig set* [@Lusztig_79] is closely related to a finite-ring analogue of the Drinfeld upper half space. This allowed us to endow this set with a scheme structure (a statement which is still conjectural for any group outside $\operatorname{GL}_n$) and define its cohomology. Under a regularity condition, we prove in [@CI_ADLV] that the cohomology of loop Deligne–Lusztig varieties for inner forms of $\operatorname{GL}_n$ realize certain irreducible supercuspidal representation and describe these within the context of the local Langlands and Jacquet–Langlands correspondences. In [@CI_loopGLn], we are able to relax this regularity condition to something quite general by using highly nontrivial input obtained by studying the cohomology of a stratification—the *Drinfeld stratification*—which comes from the aforementioned stratification of the Drinfeld upper half space. In [@CI_MPDL], we studied a class of varieties $X_h$ associated to parahoric subgroups of a(ny) connected reductive group $G$ which splits over an unramified extension. We define a stratification in this general context as well; the Drinfeld stratification is indexed by twisted Levi subgroups of $G$. The purpose of this paper is to initiate the study of these strata and, in due course, supply the necessary input for the final step in [@CI_loopGLn]. We focus on the setting of inner forms of $\operatorname{GL}_n$ and prove the first foundational representation-theoretic traits of the cohomology of the Drinfeld stratification: irreducibility (Theorem \[t:inner\_prod\]) and a special character formula (Proposition \[t:very\_reg\]). 
Using Theorem \[t:inner\_prod\], in Section \[s:single\_degree\] we prove that the torus eigenspaces in the cohomology of the unique closed Drinfeld stratum is supported in a single (possibly non-middle) degree. Furthermore, this stratum is a *maximal variety* in the sense of Boyarchenko–Weinstein [@BoyarchenkoW_16]: the number of rational points on the closed Drinfeld stratum attains its Weil–Deligne bound. Our analysis relies on techniques developed in [@Chan_siDL] in the special case of division algebras and gives some context for what we expect to be the role of maximal varieties in these Deligne–Lusztig varieties for $p$-adic groups. In practice, it is sometimes only possible to work directly with the Drinfeld stratification of the parahoric Deligne–Lusztig varieties $X_h$ instead of with the entire $X_h$. In this paper, for example, the maximality of the closed stratum allows us to give an exact formula (Corollary \[c:dimension\]) for the formal degree of the associated representation of the $p$-adic group. We prove a comparison theorem in [@CI_loopGLn] relating the Euler characteristic of this stratum to that of $X_h$. This formal degree input, by comparison with Corwin–Moy–Sally [@CorwinMS_90], allows us to obtain a geometric supercuspidality result in [@CI_loopGLn]. We finish the paper with a precise formulation of some conjectures (Conjecture \[c:single\_degree\] and \[c:Xh\]) which describe what we expect to be the shape of the cohomology of the Drinfeld stratification and its relation to the cohomology of loop Deligne–Lusztig varieties. In the Appendix, we present an analysis of the fibers of the natural projection maps $X_h \to X_{h-1}$; we believe this could be a possible approach to proving Conjecture \[c:Xh\] and may be of independent interest. It would be interesting to see if the Drinfeld stratification plays a role in connections to orbits in finite Lie algebras, à la work of Chen [@Chen_2019]. Acknowledgements {#acknowledgements .unnumbered} ---------------- We would like to thank Masao Oi and Michael Rapoport for enlightening conversations. The first author was partially supported by the DFG via the Leibniz Prize of Peter Scholze and an NSF Postdoctoral Research Fellowship, Award No. 1802905. The second author was supported by the DFG via the Leibniz Preis of Peter Scholze. Notation ======== Let $k$ be a nonarchimedean local field with residue field ${\mathbb F}_q$ and let $\breve k$ denote the completion of the maximal unramified extension of $k$. We write ${\mathcal O}_{\breve k}$ and ${\mathcal O}_k$ for the rings of integers of $\breve k$ and $k$, respectively. For any positive integer $m$, let $k_m$ denote the degree-$m$ unramified extension of $k$. Additionally, we define $L \colonequals k_n$. *With the exception of Section \[s:drinfeld\]*, in this paper, $G$ will be an inner form of $\operatorname{GL}_n$ defined over $k$. Let $\sigma \in \operatorname{Gal}(\breve k/k)$ denote the Frobenius which induces the $q$th power automorphism on the residue field $\overline {\mathbb F}_q$. Abusing notation, we also let $\sigma$ denote the map $\operatorname{GL}_n(\breve k) \to \operatorname{GL}_n(\breve k)$ by applying $\sigma$ to each matrix entry. The inner forms of $\operatorname{GL}_n$ are indexed by an integer $0 \leq \kappa \leq n-1$; fix such an integer. Throughout the paper, we write $\kappa/n = k_0/n_0$, $(k_0,n_0) = 1$, and $\kappa = k_0 n'$. 
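For example, if $n = 6$ and $\kappa = 4$, then $k_0/n_0 = 2/3$, so $k_0 = 2$, $n_0 = 3$, and $n' = 2$; in general $n' = \gcd(n,\kappa)$ and $n = n_0 n'$.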
We define an element $b_{{\,{\rm cox}}}$ with $\operatorname{val}\det(b_{{\,{\rm cox}}}) = \kappa$ and set $G = J_{b_{{\,{\rm cox}}}}$ (the $\sigma$-stabilizer of $b_{{\,{\rm cox}}}$) with $k$-rational structure induced by the Frobenius $$F {\colon}\operatorname{GL}_n(\breve k) \to \operatorname{GL}_n(\breve k), \qquad g \mapsto b_{{\,{\rm cox}}}\sigma(g) b_{{\,{\rm cox}}}^{-1}.$$ Note that $G \cong \operatorname{GL}_{n'}(D_{k_0/n_0})$, where $D_{k_0/n_0}$ denotes the division algebra over $k$ of dimension $n_0^2$ with Hasse invariant $k_0/n_0$. Let $T$ denote the set of diagonal matrices in $G$. Let $x$ be the unique point in the intersection ${\mathcal A}(T) \cap {\mathscr{B}}(\operatorname{GL}_n, \breve k)^F$. Note that $T(k) \cong L^\times$. If $k$ has characteristic $p$, we let ${\mathbb W}(A) = A[\![\pi]\!]$ for any ${\mathbb F}_q$-algebra $A$ and write $[a_i]_{i \geq 0}$ to denote the element $\sum_{i \geq 0} a_i \pi^i \in {\mathbb W}(A)$. If $k$ has characteristic zero, we let ${\mathbb W}= W_{{\mathcal O}_k} \times_{\operatorname{Spec}{\mathcal O}_k} \operatorname{Spec}{\mathbb F}_q$, where $W_{{\mathcal O}_k}$ is the ${\mathcal O}_k$-ring scheme of ${\mathcal O}_k$-Witt vectors [@FarguesFontaine_book Section 1.2]. Following the notation of *op. cit. * we write the elements of ${\mathbb W}(A)$ as $[a_i]_{i \geq 0}$ where $a_i \in A$. We may now talk about ${\mathbb W}$ uniformly, regardless of the characteristic of $k$. As usual, we have the Frobenius and Verschiebung morphisms $$\begin{aligned} &\sigma {\colon}{\mathbb W}\to {\mathbb W}, \qquad [a_i]_{i \geq 0} \mapsto [a_i^q]_{i \geq 0}, \\ &V {\colon}{\mathbb W}\to {\mathbb W}, \qquad [a_i]_{i \geq 0} \mapsto [0,a_0,a_1,\ldots].\end{aligned}$$ For any $h \in {\mathbb Z}_{\geq 0}$, let ${\mathbb W}_h = {\mathbb W}/V^h {\mathbb W}$ denote the corresponding truncated ring scheme. For the benefit of the reader, we present a summary of the relationship between the various schemes appearing in this paper. Let $r \mid n'$ and let $h$ be a positive integer. We have $$\begin{tikzcd} S_h^{(r)} \ar[hook]{r} \ar{d} & S_h \ar{d} \\ X_h^{(r)} \ar[hook]{r} & X_h \end{tikzcd}$$ where the vertical maps are quotients by an affine space. We have $X_h = X_h(b_{{{\,{\rm cox}}}}, b_{{{\,{\rm cox}}}})$ and when $X_h(b,w) \cong X_h$, then $X_h(b,w)^{(r)} \cong X_h^{(r)}$ by definition. We have $$X_h^{(r)} = \bigsqcup_{g \in {\mathbb G}_h({\mathbb F}_q)/({\mathbb{L}}_h^{(r)}({\mathbb F}_q) {\mathbb G}_h^1({\mathbb F}_q))} g \cdot (X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1).$$ The $r$th Drinfeld stratum is $$X_{h,r} = X_h^{(r)} \smallsetminus \bigcup_{r \mid s \mid n', \, r < s} X_h^{(s)}$$ and the closure of $X_{h,r}$ in $X_h$ is $X_h^{(r)}$. The unique closed Drinfeld stratum is the setting $r = n'$; in this case, $$X_{h,n'} = X_h^{(n')} = \bigsqcup_{g \in {\mathbb G}_h({\mathbb F}_q)} g \cdot X_h^1, \qquad \text{where $X_h^1 = X_h \cap {\mathbb G}_h^1$}.$$ For any positive integer $m$, we let $[l]_m$ denote the unique representative of $l{\mathbb Z}/m{\mathbb Z}$ in the set $\{1, \ldots, m\}$. The Drinfeld stratification {#s:drinfeld} =========================== In this section only, we let $G$ be any reductive group over $k$ which splits over $\breve k$. Let $F$ denote a Frobenius associated to the $k$-rational structure on $G$. Fix a $k$-rational, $\breve k$-split maximal torus $T \subset G$, let $x \in {\mathcal A}(T) \cap {\mathscr{B}}(G, \breve k)^F$, and let $G_{x,0}$ be the attached parahoric model. 
Pick a $\breve k$-rational Borel subgroup $B \subset G_{\breve k}$ containing $T$ and let $U$ be the unipotent radical of $B$. Let $h \geq 1$ be an integer. There is a smooth affine group scheme ${\mathbb G}_h$ over ${\mathbb F}_q$ such that $${\mathbb G}_h({\mathbb F}_q) = G_{x,0}({\mathcal O}_k)/G_{x,(h-1)+}({\mathcal O}_k), \quad{\mathbb G}_h(\overline {\mathbb F}_q) = G_{x,0}({\mathcal O})/G_{x,(h-1)+}({\mathcal O})$$ (see [@CI_MPDL Section 2.5, 2.6] for more details). The subgroups $T, U$ have associated subgroup schemes ${\mathbb T}_h$, ${\mathbb U}_h$ of ${\mathbb G}_h$ such that $${\mathbb T}_h({\mathbb F}_q) = (T(k) \cap G_{x,0}({\mathcal O}_k))/G_{x,(h-1)+}({\mathcal O}_k), \quad {\mathbb T}_h(\overline {\mathbb F}_q) = (T(\breve k) \cap G_{x,0}({\mathcal O}))/G_{x,(h-1)+}({\mathcal O}),$$ and ${\mathbb U}_h(\overline {\mathbb F}_q) = (U(\breve k) \cap G_{x,0}({\mathcal O}))/G_{x,(h-1)+}({\mathcal O})$ (note here that ${\mathbb U}_h$ is defined over $\overline {\mathbb F}_q$ but may not be defined over ${\mathbb F}_q$ as $U$ may not be $k$-rational). The schemes $S_h$ and $X_h$ --------------------------- The central object of study is $X_h$: \[def:Xh\] Define $\overline {\mathbb F}_q$-scheme $$X_h \colonequals \{x \in {\mathbb G}_h : x^{-1}F(x) \in {\mathbb U}_h\}/({\mathbb U}_h \cap F^{-1}({\mathbb U}_h)).$$ $X_h$ comes with a natural action of ${\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$ by left- and right-multiplication: $$(g,t) \cdot x = gxt, \qquad \text{for $(g,t) \in {\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q),$ $x \in X_h$.}$$ In some contexts, it will be more convenient to study $S_h$: Define $\overline {\mathbb F}_q$-scheme $$S_h \colonequals \{x \in {\mathbb G}_h : x^{-1}F(x) \in {\mathbb U}_h\}.$$ So, $S_h$ is the closed subscheme of ${\mathbb G}_h$ obtained by pulling back ${\mathbb U}_h$ along the (finite étale) Lang map ${\mathbb G}_h \to {\mathbb G}_h, \, g \mapsto g^{-1}F(g)$. Note that $S_h$ comes with the same natural action of ${\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$ as $X_h$. Observe that since ${\mathbb U}_h \cap F^{-1}({\mathbb U}_h)$ is an affine space, the cohomology of $X_h$ and $S_h$ differs only by a shift, and in particular, for any $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$, we have $$H_c^*(X_h, \overline {\mathbb Q}_\ell)[\theta] = H_c^*(S_h, \overline {\mathbb Q}_\ell)[\theta]$$ as elements of the Grothendieck group of ${\mathbb G}_h({\mathbb F}_q)$. The scheme $X_h(b,w)$ {#s:Xhbw} --------------------- In this section, we further assume that $G$ is quasisplit over $k$ and $B \subset G$ is $k$-rational. In this section, we write $\sigma = F$ for our $q$-Frobenius associated to the $k$-rational structure on $G$. Let $b,w \in G(\breve k)$. Assume that $b,w$ normalize both subgroups $G_{x,0}({\mathcal O}_{\breve k})$, $G_{x,(h-1)+}({\mathcal O}_{\breve k})$ of $G(\breve k)$, and additionally assume that $w$ normalizes $T(\breve k)$. Define the $\overline {\mathbb F}_q$-scheme $$X_h(b,w) \colonequals \{x \in {\mathbb G}_h : x^{-1} b \sigma(x) \in {\mathbb U}_h w {\mathbb U}_h\}/{\mathbb U}_h,$$ where the condition $x^{-1} b \sigma(x) \in {\mathbb U}_h w {\mathbb U}_h$ means the following: For any lift $\widetilde x \in G$ of $x \in {\mathbb G}_h$, the element $\widetilde x^{-1} b \sigma(\widetilde x)$ is an element of $(U \cap G_{x,0}) w (U \cap G_{x,0}) G_{x,(h-1)+} \subset G$. 
More precisely, $X_h(b,w) = S_h(b,w)/{\mathbb U}_h$, where $S_h(b,w)$ is the reduced $\overline {\mathbb F}_q$-subscheme of ${\mathbb G}_h$ such that $S_h(b,w)(\overline {\mathbb F}_q)$ is equal to the image of $\{x \in G_{x,0}({\mathcal O}_{\breve k}) : x^{-1} b \sigma(x) \in (U(\breve k) \cap G_{x,0}({\mathcal O}_{\breve k})) w (U(\breve k) \cap G_{x,0}({\mathcal O}_{\breve k})) G_{x,(h-1)+}({\mathcal O}_{\breve k})\}$ in ${\mathbb G}_h(\overline {\mathbb F}_q)$. Note that $X_h(b,w)$ comes with a natural action by left- and right-multiplication of $G_h(b)$ and $T_h(w)$, where $G_h(b) \subset {\mathbb G}_h(\overline {\mathbb F}_q)$ is the image of $\{g \in G_{x,0}({\mathcal O}_{\breve k}) : b \sigma(g) b^{-1} = g\}$ and $T_h(w) \subset {\mathbb T}_h(\overline {\mathbb F}_q)$ is the image of $\{t \in T(\breve k) \cap G_{x,0}({\mathcal O}_{\breve k}) : w \sigma(t) w^{-1} = t\}$. The next lemma is a one-line computation; we record it for easy reference. \[l:change b\] Let $\gamma \in G_{x,0}({\mathcal O}_{\breve k})$. Then we have an isomorphism $$X_h(b,w) \to X_h(\gamma^{-1} b \sigma(\gamma),w), \qquad x \mapsto \overline \gamma x,$$ where $\overline \gamma$ is the image of $\gamma$ in the quotient ${\mathbb G}_h(\overline {\mathbb F}_q)$. \[l:Xbw to Xh\] Consider the morphism $F {\colon}({\mathbb G}_h)_{\overline {\mathbb F}_q} \to ({\mathbb G}_h)_{\overline {\mathbb F}_q}$ given by $g \mapsto b \sigma(g) b^{-1}$. If $w G_{x,0} b^{-1} = G_{x,0}$ and $F({\mathbb U}_h) = w {\mathbb U}_h b^{-1}$, then $$X_h(b,w) = X_h,$$ where $X_h$ is the $\overline {\mathbb F}_q$-scheme in Definition \[def:Xh\] associated to the group scheme $({\mathbb G}_h)_{\overline {\mathbb F}_q}$ endowed with the ${\mathbb F}_q$-rational structure associated to the $q$-Frobenius $F$. We have $$\begin{aligned} X_h(b,w) &= \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_h w {\mathbb U}_h b^{-1}\}/{\mathbb U}_h \\ &= \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_h F({\mathbb U}_h)\}/{\mathbb U}_h \\ &= \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_h\}/({\mathbb U}_h \cap F^{-1} {\mathbb U}_h) = X_h. \qedhere\end{aligned}$$ The Drinfeld stratification for $S_h$ {#s:gen_Drin_S} ------------------------------------- Let $L$ be a $k$-rational twisted Levi subgroup of $G$ and assume that $L$ contains $T$. Recall a $k$-rational subgroup $L \subset G$ is a *twisted Levi* if $L_{\overline k}$ is a Levi subgroup of $G_{\overline k}$. Note also that the condition that $L$ contains $T$ forces $L$ to be split over $\breve k$. Following [@CI_MPDL Section 2.6], the schematic closure $L_{x}$ in $G_{x,0}$ is a closed subgroup scheme defined over ${\mathcal O}_k$. Applying the “positive loop” functor to $L_{x}$, for $h \in {\mathbb Z}_{> 0}$, we can define a $\overline {\mathbb F}_q$-scheme ${\mathbb{L}}_h$ such that ${\mathbb{L}}_h(\overline {\mathbb F}_q)$ is the image of $L_x({\mathcal O}_{\breve k})$ in ${\mathbb G}_h(\overline {\mathbb F}_q)$. \[d:drinfeld Sh\] Define $$S_h^{(L)} \colonequals \{x \in {\mathbb G}_h : x^{-1}F(x) \in ({\mathbb{L}}_h \cap {\mathbb U}_h) {\mathbb U}_h^1\},$$ where $({\mathbb{L}}_h \cap {\mathbb U}_h) {\mathbb U}_h^1 \subset {\mathbb U}_h$ is the subgroup generated by ${\mathbb{L}}_h \cap {\mathbb U}_h$ and ${\mathbb U}_h^1$ (which is normalized by ${\mathbb{L}}_h \cap {\mathbb U}_h$). Note that the subscheme $S_h^{(L)}$ of $S_h$ is closed and stable under the action of ${\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$. 
\[d:drinfeld Sh\] Define $X_h^{(L)}$ to be the image of $S_h^{(L)}$ under the surjection $S_h \to X_h$. Recall that for any $\gamma \in G_{x,0}({\mathcal O}_{\breve k})$, we have $X_h(b,w) \cong X_h(\gamma^{-1} b \sigma(\gamma),w)$ via $x \mapsto \overline\gamma x$. If $F({\mathbb U}_h) = w {\mathbb U}_h b^{-1}$, then $X_h = X_h(b,w)$; in this setting, let $X_h(\gamma^{-1} b \sigma(\gamma),w)^{(L)}$ denote the image of $X_h^{(L)}$. Another subscheme of $S_h$ which we may associate to the twisted Levi subgroup $L \subset G$ is the intersection $$\begin{aligned} S_h \cap {\mathbb{L}}_h {\mathbb G}_h^1 &= \{x \in {\mathbb{L}}_h {\mathbb G}_h^1 : x^{-1}F(x) \in {\mathbb U}_h\} \\ &= \{x \in {\mathbb{L}}_h {\mathbb G}_h^1 : x^{-1}F(x) \in ({\mathbb{L}}_h \cap {\mathbb U}_h) {\mathbb U}_h^1\},\end{aligned}$$ where ${\mathbb{L}}_h {\mathbb G}_h^1$ denotes the subgroup scheme of ${\mathbb G}_h$ generated by ${\mathbb{L}}_h$ and ${\mathbb G}_h^1$ (which is normalized by ${\mathbb{L}}_h$). Note that $S_h \cap {\mathbb{L}}_h {\mathbb G}_h^1$ is stable under the action of ${\mathbb{L}}_h({\mathbb F}_q) {\mathbb G}_h^1({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$. \[l:ShP\] Let $L$ be a $k$-rational twisted Levi subgroup of $G$ containing $T$. Then $$S_h^{(L)} = \bigsqcup_{\gamma \in {\mathbb G}_h({\mathbb F}_q)/({\mathbb{L}}_{h}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q))} \gamma \cdot (S_h \cap {\mathbb{L}}_{h} {\mathbb G}_h^1).$$ Pick any $u \in {\mathbb U}_{h}(\overline {\mathbb F}_q) {\mathbb U}_h^1(\overline {\mathbb F}_q)$. By surjectivity of the Lang map, there exists $x \in {\mathbb{L}}_{h}(\overline {\mathbb F}_q) {\mathbb G}_h^1(\overline {\mathbb F}_q)$ and $y \in {\mathbb G}_h(\overline {\mathbb F}_q)$ such that $x^{-1} F(x) = u$ and $y^{-1} F(y) = u$. Then $$(xy^{-1})^{-1} F(xy^{-1}) = y x^{-1} F(x) F(y)^{-1} = y u F(y)^{-1} = y u u^{-1} y^{-1} = 1.$$ Therefore $xy^{-1} \in {\mathbb G}_h({\mathbb F}_q)$. The assertion now follows from the fact that the stabilizer of $S_h \cap {\mathbb{L}}_{h} {\mathbb G}_h^1$ in ${\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$ is ${\mathbb{L}}_{h}({\mathbb F}_q) {\mathbb G}_h^1({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$. By Lemma \[l:ShP\], we see: \[l:geom parabolic induction\] If $L$ is a twisted Levi subgroup of $G$ containing $T$, then for any character $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$ and for all $i \geq 0$, $$H_c^i(S_h^{(L)}, \overline {\mathbb Q}_\ell)[\theta] \cong \operatorname{Ind}_{{\mathbb{L}}_{h}({\mathbb F}_q) {\mathbb G}_h^1({\mathbb F}_q)}^{{\mathbb G}_h({\mathbb F}_q)}\big(H_c^i(S_h \cap {\mathbb{L}}_{h} {\mathbb G}_h^1, \overline {\mathbb Q}_\ell)[\theta]\big).$$

The case of $\operatorname{GL}_n$
=================================

In this paper, we study the varieties introduced in Section \[s:drinfeld\] in the special case when $G$ is an inner form of $\operatorname{GL}_n$. We emphasize that these varieties $S_h, X_h, X_h(b,w)$—at least *a priori*—depend on a choice of Borel subgroup containing the torus at hand. From now until the end of the paper, we work with the varieties associated with the Borel subgroup explicitly chosen in Section \[s:explicit\]. We explicate the Drinfeld stratification for $S_h$, $X_h$, and certain $X_h(b,w)$, and give a description in terms of Drinfeld upper half-spaces and ${\mathscr L}_h \subset {\mathbb W}_h^{\oplus n}$, a finite-ring analogue of an isocrystal.
Let $\sigma \in \operatorname{Gal}(\breve k/k)$ denote the $q$-Frobenius, which induces $x \mapsto x^q$ on the residue field $\overline {\mathbb F}_q$. Abusing notation, also let $$\sigma {\colon}\operatorname{GL}_n(\breve k) \to \operatorname{GL}_n(\breve k), \qquad (M_{i,j})_{i,j=1,\ldots,n} \mapsto (\sigma(M_{i,j}))_{i,j=1,\ldots,n}.$$ For $b \in \operatorname{GL}_n(\breve k)$, let $J_b$ be the $\sigma$-stabilizer of $b$: for any $k$-algebra $R$, $$J_b(R) \colonequals \{g \in \operatorname{GL}_n(R \otimes_k \breve k) : g^{-1} b \sigma(g) = b\}.$$ $J_b$ is an inner form of the centralizer of the Newton point of $b$ (which is a Levi subgroup of $\operatorname{GL}_n$), and we may consider $$\operatorname{GL}_n(\breve k) \to \operatorname{GL}_n(\breve k), \qquad g \mapsto b \sigma(g) b^{-1}$$ to be an associated $q$-Frobenius for the $k$-rational structure on $J_b$. If $b$ is *basic* (i.e. the Newton point of $b$ is central), then $J_b$ is an inner form of $\operatorname{GL}_n$ and every inner form arises in this way. If $\kappa = \kappa_{\operatorname{GL}_n}(b) \colonequals \operatorname{val}(\det(b))$, then $J_b(k) \cong \operatorname{GL}_{n'}(D_{k_0/n_0})$ where $\kappa/n = k_0/n_0$, $(k_0,n_0) = 1$, and $\kappa = k_0 n'$. Note that the isomorphism class of $J_b$ only depends on the $\sigma$-conjugacy class $[b] \colonequals \{g^{-1} b \sigma(g) : g \in \operatorname{GL}_n(\breve k)\}$. Fix an integer $0 \leq \kappa \leq n-1$. In the next sections, we will focus on representatives $b$ revolving around the *Coxeter representative* (Definition \[d:coxeter\]) and give explicit descriptions of the varieties $X_h$, $X_h(b,w)$, and their Drinfeld stratifications $\{X_h^{(r)}\}$, $\{X_h(b,w)^{(r)}\}$, where $r$ runs over the divisors of $n'$. The $X_h^{(r)}, X_h(b,w)^{(r)}$ are closed subvarieties of $X_h, X_h(b,w)$; we call the $r$th Drinfeld stratum $$\label{e:rth stratum} X_h^{(r)} \smallsetminus \Big(\bigcup_{\substack{r < r' \leq n' \\ r \mid r' \mid n'}} X_h^{(r')}\Big), \qquad X_h(b,w)^{(r)} \smallsetminus \Big(\bigcup_{\substack{r < r' \leq n' \\ r \mid r' \mid n'}} X_h(b,w)^{(r')}\Big)$$ so that the closure of the $r$th Drinfeld stratum is $X_h^{(r)}$, $X_h(b,w)^{(r)}$. Explicit parahoric subgroups of $G$ ----------------------------------- Set $$b_0 \colonequals \left(\begin{matrix} 0 & 1 \\ 1_{n-1} & 0 \end{matrix}\right), \quad \text{and} \quad t_{\kappa,n} \colonequals \begin{cases} \operatorname{diag}(\underbrace{1, \ldots, 1}_{n-\kappa}, \underbrace{\varpi, \ldots, \varpi}_\kappa) & \text{if $(\kappa,n)=1$,} \\ \operatorname{diag}(\underbrace{t_{k_0,n_0}, \ldots, t_{k_0,n_0}}_{n'}) & \text{otherwise.} \end{cases}$$ Fix an integer $e_{\kappa,n}$ such that $(e_{\kappa,n},n) = 1$ and $e_{\kappa,n} \equiv k_0$ mod $n_0$. If $\kappa$ divides $n$ (i.e. $k_0 = 1$), we always take $e_{\kappa,n} = 1$. \[d:coxeter\] The *Coxeter-type representative* attached to $\kappa$ is $b_{{{\,{\rm cox}}}} \colonequals b_0^{e_{\kappa,n}} \cdot t_{\kappa,n}$. Define $G \colonequals J_{b_{{{\,{\rm cox}}}}}$ with Frobenius $$F {\colon}\operatorname{GL}_n(\breve k) \to \operatorname{GL}_n(\breve k), \qquad g \mapsto b_{{{\,{\rm cox}}}} \sigma(g) b_{{{\,{\rm cox}}}}^{-1}$$ and define $T$ to be the set of diagonal matrices in $G$. Observe that $T$ is $F$-stable and that $T(k) \cong L^\times$.
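For instance (an illustrative example), take $n = 4$ and $\kappa = 2$, so that $k_0 = 1$, $n_0 = 2$, $n' = 2$, and $e_{\kappa,n} = 1$. Then $t_{2,4} = \operatorname{diag}(t_{1,2}, t_{1,2}) = \operatorname{diag}(1, \varpi, 1, \varpi)$ and $$b_{{\,{\rm cox}}} = b_0 \cdot t_{2,4} = \left(\begin{matrix} 0 & 0 & 0 & \varpi \\ 1 & 0 & 0 & 0 \\ 0 & \varpi & 0 & 0 \\ 0 & 0 & 1 & 0 \end{matrix}\right),$$ which indeed satisfies $\operatorname{val}(\det(b_{{\,{\rm cox}}})) = 2 = \kappa$. In this case $J_{b_{{\,{\rm cox}}}}(k) \cong \operatorname{GL}_2(D_{1/2})$, the group of invertible $2 \times 2$ matrices over the quaternion division algebra over $k$.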
Since $T$ is elliptic, the intersection ${\mathcal A}(T) \cap {\mathscr{B}}(G,\breve k)^F$ consists of a single point $x$ and $G_{x,0}$ consists of invertible matrices $(A_{i,j})_{1 \leq i,j \leq n}$ where $$A_{i,j} \in \begin{cases} {\mathbb W}& \text{if $[i]_{n_0} \geq [j]_{n_0}$}, \\ V{\mathbb W}& \text{if $[i]_{n_0} < [j]_{n_0}$}. \end{cases}$$ For technical reasons, we will need to write down the relationship between the Coxeter element $b_0^{e_{\kappa,n}}$ and the Coxeter element $b_0$. Define $\gamma$ to be the unique permutation matrix which a) fixes the first elementary column vector and b) has the property that $$\label{e:gamma} \gamma b_0^{e_{\kappa,n}} \gamma^{-1} = b_0.$$ Note that one can express $\gamma$ explicitly as well: it corresponds to the permutation of $\{1, \ldots, n\}$ given by $$i \mapsto [(i-1) e_{\kappa,n} + 1]_n.$$ An explicit description of $X_h$ {#s:explicit} -------------------------------- The choices in this section are the same as those from [@CI_ADLV Section 7.7]. In the setting of division algebras, these choices also appear in [@Chan_DLII; @Chan_siDL]. Let $U_{\rm up}, U_{\rm low} \subset G_{\breve k}$ denote the subgroups of unipotent upper- and lower-triangular matrices. Define $$\label{e:U} U \colonequals \gamma^{-1} U_{\rm low} \gamma, \qquad U^- \colonequals \gamma^{-1} U_{\rm up} \gamma.$$ Let ${\mathbb U}_h, {\mathbb U}_h^-$ be the associated subgroup schemes of ${\mathbb G}_h$. By [@CI_ADLV Lemma 7.12], we have an isomorphism of $\overline {\mathbb F}_q$-schemes $$\label{e:section} ({\mathbb U}_h \cap F {\mathbb U}_h^-) \times ({\mathbb U}_h \cap F^{-1} {\mathbb U}_h) \to {\mathbb U}_h, \qquad (g,x) \mapsto x^{-1} g F(x).$$ We will need a refinement of this isomorphism later (see Lemma \[l:U\_[h,r]{}\]). Define $${\mathscr L}_{h} \colonequals \big({\mathbb W}_h \oplus ({\mathbb W}_{h-1})^{\oplus n_0-1}\big)^{\oplus n'}.$$ Write $t_{\kappa,n} = \operatorname{diag}\{t_1, \ldots, t_n\}$. Viewing any $v \in {\mathscr L}_h$ as a column vector, consider the associated matrix $$\begin{aligned} \label{e:lambda} \lambda(v) &\colonequals \left(v_1 \, \Big| \, v_2 \, \Big| \, v_3 \, \Big| \, \cdots \, \Big| \, v_n\right), \\ \label{e:lambda i} \text{where } v_{[ie_{\kappa,n}+1]_n} &\colonequals \varpi^{-\lfloor i k_0/n_0 \rfloor} \cdot (b\sigma)^i(v) \text{ for $0 \leq i \leq n-1$}.\end{aligned}$$ \[l:description\] We have $$\begin{aligned} X_h &= \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_h \cap F({\mathbb U}_h^-)\} \\ &= \{\lambda(v) \in {\mathbb G}_h : \text{$v \in {\mathscr L}_h$ and $\sigma(\det \lambda(v)) = \det \lambda(v)$}\}. \end{aligned}$$ The first equality holds by \[e:section\]. The second equality is an explicit computation: in the division algebra setting, see [@Lusztig_79 Equation (2.2)], [@Boyarchenko_12 Lemma 4.4], [@Chan_siDL Section 2.1]; in the present setting of arbitrary inner forms of $\operatorname{GL}_n$, see [@CI_ADLV Section 6]. We give an exposition of these works here. By direct computation, ${\mathbb U}_h \cap F({\mathbb U}_h^-)$ is the subgroup of ${\mathbb G}_h$ consisting of unipotent lower-triangular matrices whose off-diagonal entries outside the first column vanish: $${\mathbb U}_h \cap F({\mathbb U}_h^-) = \left\{\left(\begin{smallmatrix} 1 & & & \\ * & 1 & & \\ \vdots & & \ddots & \\ * & & & 1 \end{smallmatrix}\right)\right\}.$$ Suppose that $x \in {\mathbb G}_h$ is such that $x^{-1} F(x) \in {\mathbb U}_h \cap F({\mathbb U}_h^-)$ and let $x_i$ denote the $i$th column of $x$.
Then recalling that $b = b_0^{e_{\kappa,n}} t_{\kappa,n}$ and writing $t_{\kappa,n} = \operatorname{diag}\{t_1, \ldots, t_n\}$, we have $$\begin{aligned} F(x) &= \left( b\sigma(x_1) \, \Big| \, b\sigma(x_2) \, \Big | \, \cdots \, \Big| b\sigma(x_n) \right) b^{-1} \\ &= \left( t_{[1-e_{\kappa,n}]}^{-1}b\sigma(x_{[1-e_{\kappa,n}]_n}) \, \Big| \, t_{[2-e_{\kappa,n}]}^{-1}b\sigma(x_{[2-e_{\kappa,n}]_n}) \, \Big | \, \cdots \, \Big| t_{[n-e_{\kappa,n}]}^{-1}b\sigma(x_{[n-e_{\kappa,n}]_n}) \right).\end{aligned}$$ On the other hand, we have $$x ({\mathbb U}_h \cap F({\mathbb U}_h^-)) = \left(* \,\Big|\, x_2 \,\Big|\, x_3 \,\Big|\, \cdots \,\Big|\, x_n\right).$$ Comparing columns, we see that each $x_i$ is uniquely determined by $x_1$ and that we have $$\begin{aligned} \label{e:x1 determines} x_{[(n-1)e_{\kappa,n}+1]_n} &= t_{[(n-2)e_{\kappa,n}+1]_n}^{-1} b\sigma(x_{[(n-2)e_{\kappa,n}+1]_n}) \\ &= t_{[(n-2)e_{\kappa,n}+1]_n}^{-1}t_{[(n-3)e_{\kappa,n}+1]_n}^{-1} b\sigma(b\sigma(x_{[(n-3)e_{\kappa,n}+1]_n})) \\ &= t_{[(n-2)e_{\kappa,n}+1]_n}^{-1}t_{[(n-3)e_{\kappa,n}+1]_n}^{-1} \cdots t_{1}^{-1} (b\sigma)^{n-1}(x_1).\end{aligned}$$ Using Lemma \[l:ti contribution\], we now see that $x = \lambda(x_1)$, and finally, the condition $\sigma(\det \lambda(x_1)) = \det \lambda(x_1)$ comes from the observation that $x^{-1}F(x)$ must have determinant $1$. \[l:ti contribution\] For $1 \leq i \leq n-1$, $$\prod_{j=0}^{i-1} t_{[je_{\kappa,n}+1]_n} = \varpi^{\lfloor ik_0/n_0 \rfloor}.$$ We prove this by induction on $i$. If $i = 1$, then by definition we have $t_1 = 1$, so this proves the base case. Now assume that the lemma holds for $i$. We would like to prove that it holds for $i+1$. This means we need to prove two assertions: 1. If $\lfloor (i+1)k_0/n_0 \rfloor > \lfloor ik_0/n_0 \rfloor$, then $t_{[ie_{\kappa,n}+1]_n} = \varpi$. 2. If $\lfloor (i+1)k_0/n_0 \rfloor = \lfloor ik_0/n_0 \rfloor$, then $t_{[ie_{\kappa,n}+1]_n} = 1$. The arguments are very similar. For (1): Observe that $\lfloor (i+1)k_0/n_0 \rfloor > \lfloor ik_0/n_0 \rfloor$ if and only if $n_0 > [i e_{\kappa,n}]_{n_0} \geq n_0 - k_0$ since $e_{\kappa,n} \equiv k_0$ mod $n_0$. But this happens if and only if $[i e_{\kappa,n} + 1]_{n_0} > n_0 - k_0$, which means $t_{[i e_{\kappa,n}+1]_n} = \varpi$ by definition. For (2): Observe that $\lfloor (i+1)k_0/n_0 \rfloor = \lfloor ik_0/n_0 \rfloor$ if and only if $[i e_{\kappa,n}]_{n_0} = n_0$ or $[i e_{\kappa,n}]_{n_0} < n_0 - k_0$. But this happens if and only if $[i e_{\kappa,n}+1]_{n_0} \leq n_0 - k_0$, which means that $t_{[i e_{\kappa,n}+1]_n} = 1$ by definition. The Drinfeld stratification of $X_h$ ------------------------------------ For any divisor $r \mid n'$, define $L^{(r)}$ to be the twisted Levi subgroup of $G$ consisting of matrices $(A_{i,j})_{1 \leq i,j \leq n}$ such that $A_{i,j} = 0$ unless $i - j \equiv 0$ modulo $rn_0$. Note that $L^{(r)} \cong \operatorname{Res}_{k_{rn_0}/k}(\operatorname{GL}_{n'/r})$ and that every $k$-rational twisted Levi subgroup of $G$ containing $T$ is conjugate to $L^{(r)}$ for some $r \mid n'$.
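For instance (an illustrative example), if $n = 4$, $n_0 = 2$, and $n' = 2$, there are two such twisted Levi subgroups: $L^{(1)}$ consists of the matrices supported on the positions with $i \equiv j$ modulo $2$, so $L^{(1)} \cong \operatorname{Res}_{k_2/k}(\operatorname{GL}_2)$, while $L^{(2)}$ consists of the diagonal matrices, that is, $L^{(2)} = T \cong \operatorname{Res}_{k_4/k}(\operatorname{GL}_1)$.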
Let ${\mathbb{L}}_h^{(r)}$ denote the subgroup of ${\mathbb G}_h$ associated to $L^{(r)}$ and define $${\mathbb U}_{h,r} \colonequals {\mathbb{L}}_h^{(r)} {\mathbb U}_h^1 \cap {\mathbb U}_h, \qquad {\mathbb U}_{h,r}^- \colonequals {\mathbb{L}}_h^{(r)} {\mathbb U}_h^{-,1} \cap {\mathbb U}_h^-.$$ \[l:U\_[h,r]{}\] The isomorphism of $\overline {\mathbb F}_q$-schemes $$({\mathbb U}_h \cap F{\mathbb U}_h^-) \times ({\mathbb U}_h \cap F^{-1}{\mathbb U}_h) \to {\mathbb U}_h, \qquad (g,x) \mapsto x^{-1} g F(x)$$ restricts to an isomorphism $$({\mathbb U}_{h,r} \cap F{\mathbb U}_{h,r}^-) \times ({\mathbb U}_{h,r} \cap F^{-1} {\mathbb U}_{h,r}) \to {\mathbb U}_{h,r}.$$ This lemma is a refinement of [@CI_ADLV Lemma 7.12]. Recall that $\gamma {\mathbb U}_h \gamma^{-1}$ and $\gamma {\mathbb U}_h^- \gamma^{-1}$ are the subgroups consisting of unipotent lower- and upper-triangular matrices in ${\mathbb G}_h$. Recall also that $F(g) = b_0^{e_{\kappa,n}} t_{\kappa,n} \sigma(g) t_{\kappa,n}^{-1} b_0^{-e_{\kappa,n}}$. Conjugating \[e:section\], which is proven in *op. cit.*, we have $$(\gamma {\mathbb U}_h \gamma^{-1} \cap F_0(\gamma {\mathbb U}_h^- \gamma^{-1})) \times (\gamma {\mathbb U}_h \gamma^{-1} \cap F_0^{-1}(\gamma {\mathbb U}_h \gamma^{-1})) \to \gamma {\mathbb U}_h \gamma^{-1},$$ where $F_0(g) = (b_0 \gamma t_{\kappa,n} \gamma^{-1}) \sigma(g) (b_0 \gamma t_{\kappa,n} \gamma^{-1})^{-1}$. Since $\gamma L^{(r)} \gamma^{-1} = L^{(r)}$, to prove the lemma, it suffices to show that if $(g,x) \in (\gamma {\mathbb U}_h \gamma^{-1} \cap F_0(\gamma {\mathbb U}_h^- \gamma^{-1})) \times (\gamma {\mathbb U}_h \gamma^{-1} \cap F_0^{-1}(\gamma {\mathbb U}_h \gamma^{-1}))$ is such that $A = x^{-1} g F(x) \in \gamma {\mathbb U}_{h,r} \gamma^{-1}$, then $$\label{e:x,g gamma} (g,x) \in (\gamma {\mathbb U}_{h,r} \gamma^{-1} \cap F_0(\gamma{\mathbb U}_{h,r}^- \gamma^{-1})) \times (\gamma {\mathbb U}_{h,r} \gamma^{-1} \cap F_0^{-1}(\gamma {\mathbb U}_{h,r} \gamma^{-1})).$$ Keeping the same notation as in [@CI_ADLV Lemma 7.12], write $$x = \left(\begin{matrix} 1 & 0 & 0 & \cdots & \cdots & 0 \\ b_{21} & 1 & 0 & \cdots & \cdots & 0 \\ b_{31} & b_{32} & 1 & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & 0 & \vdots \\ b_{n-1,1} & b_{n-1,2} & \cdots & b_{n-1,n-2} & 1 & 0 \\ 0 & \cdots & \cdots & 0 & 0 & 1 \end{matrix}\right), \qquad g = \left(\begin{matrix} 1 & 0 & 0 & \cdots & 0 \\ c_1 & 1 & 0 & \cdots & 0 \\ c_2 & 0 & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ c_{n-1} & 0 & \cdots & 0 & 1 \end{matrix}\right).$$ Let $\gamma t_{\kappa,n} \gamma^{-1} = \operatorname{diag}(s_1, s_2, \ldots, s_n)$ so that we have $$F_0(x) = \left(\begin{matrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & \sigma(b_{21}) s_2/s_1 & 1 & 0 & & 0 \\ 0 & \sigma(b_{31}) s_3/s_1 & \sigma(b_{32}) s_3/s_2 & 1 & \ddots &\vdots \\ \vdots & \vdots & \ddots & \ddots & 1 & 0 \\ 0 & \sigma(b_{n-1,1}) s_{n-1}/s_1 & \sigma(b_{n-1,2}) s_{n-1}/s_2 & \cdots & \sigma(b_{n-1,n-2}) s_{n-1}/s_{n-2} & 1 \end{matrix}\right).$$ As in [@CI_ADLV Lemma 7.12], we see that the $(i,j)$th entry of $g F_0(x)$ is $$\label{e:RHS ij} (g F_0(x))_{i,j} = \begin{cases} 1 & \text{if $i = j$,} \\ 0 & \text{if $i < j$,} \\ c_{i-1} & \text{if $i > j = 1$,} \\ \sigma(b_{i-1,j-1}) s_{i-1}/s_{j-1} & \text{if $i > j > 1$.} \end{cases}.$$ We also compute the $(i,j)$th entry of $xA$ when $A = (a_{i,j})_{i,j} \in \gamma{\mathbb U}_h\gamma^{-1}$: $$\label{e:LHS ij} (xA)_{i,j} = \begin{cases} 1 & \text{if $i = j$,} \\ 0 & \text{if $i < j$,} \\ b_{ij} + \sum_{k=j+1}^{i-1}
b_{ik} a_{kj} + a_{ij} & \text{if $j < i \leq n-1$,} \\ a_{nj} & \text{if $j < i = n$.} \end{cases}$$ We now have $n^2$ equations given by \[e:LHS ij\] $=$ \[e:RHS ij\], viewed as equations in the variables $b_{i,j}$ and $c_i$. Let $\overline b_{i,j}$, $\overline c_i$, $\overline a_{i,j}$ denote the images of $b_{i,j}$, $c_i$, $a_{i,j}$ in ${\mathbb W}_1$. In particular, we have the following: $$\begin{aligned} \label{e:i=n} \overline b_{n-1,j-1} = 0 \qquad \Longleftrightarrow \qquad \overline a_{n,j} = 0,\end{aligned}$$ and for $1 < j < i < n$, $$\label{e:i<n} \overline b_{i-1,j-1} = 0 \qquad \Longleftrightarrow \qquad \overline b_{i,j} + \sum_{k=j+1}^{i-1} \overline b_{i,k} \overline a_{k,j} + \overline a_{i,j} = 0.$$ Assume now that $A \in \gamma{\mathbb U}_{h,r}\gamma^{-1} = \gamma({\mathbb{L}}_h^{(r)} {\mathbb U}_h^1 \cap {\mathbb U}_h)\gamma^{-1}$. Then $\overline a_{i,j} = 0$ if $rn_0 \nmid i-j$. From \[e:i=n\] we see that $\overline b_{n-1,j-1} = 0$ if $rn_0 \nmid n-j = (n-1)-(j-1)$. We now proceed by (decreasing) induction on $i$. If $i,j$ are such that $1 < j < i < n$ and $rn_0 \nmid i-j$, then necessarily either $rn_0 \nmid i-k$ or $rn_0 \nmid k-j$, and therefore each term in the sum on the right-hand side of \[e:i<n\] is zero, and so $\overline b_{i-1,j-1} = 0$. We have therefore shown that $x \in \gamma({\mathbb{L}}_h^{(r)} {\mathbb U}_h^1 \cap {\mathbb U}_h)\gamma^{-1} \cap F^{-1}(\gamma {\mathbb U}_h\gamma^{-1})$. In particular, $F(x) \in \gamma{\mathbb U}_h\gamma^{-1}$. Since ${\mathbb{L}}_h^{(r)}$ is $F$-stable, we have that $F(\overline x) \in {\mathbb{L}}_1^{(r)}$ and therefore $F(x) \in \gamma({\mathbb U}_h \cap {\mathbb{L}}_h^{(r)} {\mathbb U}_h^1)\gamma^{-1}$. Hence $x \in \gamma({\mathbb U}_{h,r} \cap F^{-1}{\mathbb U}_{h,r}) \gamma^{-1}$. Now since $\overline A, \overline x \in {\mathbb{L}}_1^{(r)}$, we must have $\overline g \in {\mathbb{L}}_1^{(r)}$. Since $g \in \gamma{\mathbb U}_h\gamma^{-1}$, we must have $g \in \gamma({\mathbb{L}}_h^{(r)} {\mathbb U}_h^1 \cap {\mathbb U}_h)\gamma^{-1} = \gamma{\mathbb U}_{h,r}\gamma^{-1}$, and since $g \in F(\gamma {\mathbb U}_h^- \gamma^{-1})$, we must have $g \in F(\gamma({\mathbb{L}}_h^{(r)} {\mathbb U}_h^{-,1} \cap {\mathbb U}_h^-)\gamma^{-1})$. Hence $g \in \gamma{\mathbb U}_{h,r}\gamma^{-1} \cap F(\gamma{\mathbb U}_{h,r}^-\gamma^{-1})$. This establishes \[e:x,g gamma\] and finishes the proof of the lemma. \[d:drinfeld GLn\] For each divisor $r \mid n'$, we define $$\begin{aligned} S_h^{(r)} &\colonequals \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_{h,r}\}, \\ X_h^{(r)} &\colonequals \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_{h,r}\}/({\mathbb U}_{h,r} \cap F^{-1}{\mathbb U}_{h,r}) \\ &= \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_{h,r} \cap F{\mathbb U}_{h,r}^-\},\end{aligned}$$ where the second equality in $X_h^{(r)}$ holds by Lemma \[l:U\_[h,r]{}\]. Note that $S_h^{(r)}$ is the variety $S_h^{(L)}$ defined in Section \[s:gen\_Drin\_S\] in the special case that $G$ is an inner form of $\operatorname{GL}_n$, the twisted Levi $L$ is $L^{(r)}$, and $U$ is the unipotent radical of the Borel subgroup specified in Section \[s:explicit\].
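As an illustrative remark on the extreme case $r = n'$: here $L^{(n')} = T$, and since every element of ${\mathbb U}_h$ has all diagonal entries equal to $1$ while ${\mathbb T}_h$ consists of diagonal matrices, an element $tu \in {\mathbb T}_h {\mathbb U}_h^1$ can lie in ${\mathbb U}_h$ only if $t = 1$. Hence $${\mathbb U}_{h,n'} = {\mathbb T}_h {\mathbb U}_h^1 \cap {\mathbb U}_h = {\mathbb U}_h^1,$$ so that $S_h^{(n')} = \{x \in {\mathbb G}_h : x^{-1}F(x) \in {\mathbb U}_h^1\}$; cf. the description of the closed stratum in Section \[s:single\_degree\].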
By Lemma \[l:U\_[h,r]{}\], we can change the quotient in the definition of $X_h^{(r)}$ from ${\mathbb U}_{h,r} \cap F^{-1} {\mathbb U}_{h,r}$ to ${\mathbb U}_h \cap F^{-1} {\mathbb U}_h$ so that $$X_{h}^{(r)} = \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_{h,r}\}/({\mathbb U}_h \cap F^{-1} {\mathbb U}_h) \subset X_h.$$ Hence we have the picture: $$\begin{tikzcd} S_h^{(r)} \ar[hook]{r} \ar[two heads]{d} & S_h \ar[two heads]{d} \\ X_h^{(r)} \ar[hook]{r} & X_h \end{tikzcd}$$ The Drinfeld stratification for the Drinfeld upper half-space ------------------------------------------------------------- Consider the twisted Frobenius $b_{{\,{\rm cox}}}\sigma {\colon}\breve k^{\oplus n} \to \breve k^{\oplus n}$. Then $G(k)$ is equal to the subgroup consisting of all elements of $\operatorname{GL}_n(\breve k)$ which commute with $b_{{\,{\rm cox}}}\sigma$. Now consider the subquotient of $\breve k^{\oplus n}$ given by $${\mathscr L}_h \colonequals \big({\mathbb W}_h(\overline {\mathbb F}_q) \oplus (\operatorname{V}{\mathbb W}_{h-1}(\overline {\mathbb F}_q))^{\oplus n_0 - 1}\big)^{\oplus n'} \subset {\mathbb W}_h(\overline {\mathbb F}_q)^{\oplus n}$$ and write ${\mathscr L}= \varprojlim_h {\mathscr L}_h$. The action of $G(k)$ on $\breve k^{\oplus n}$ restricts to an action of $G_{x,0}({\mathcal O}_k)$ on ${\mathscr L}$ which induces an action of ${\mathbb G}_h({\mathbb F}_q)$ on ${\mathscr L}_h$. Now consider the $n'$-dimensional $\overline {\mathbb F}_q$-vector space $V \colonequals {\mathscr L}_1 \subset \overline {\mathbb F}_q^{\oplus n}$. The morphism $\varpi^{-k_0}(b_{{\,{\rm cox}}}\sigma)^{n_0}$ is a Frobenius automorphism of $V$ and defines a ${\mathbb F}_{q^{n_0}}$-rational structure on $V$. Observe that ${\mathbb G}_1({\mathbb F}_q)$ is isomorphic to the subgroup of $\operatorname{GL}(V)$ consisting of elements which commute with $\varpi^{-k_0}(b_{{\,{\rm cox}}}\sigma)^{n_0}$. For any divisor $r \mid n'$ and any ${\mathbb F}_{q^{n_0r}}$-rational subspace $W$ of $V$, consider $$\Omega_{W, q^{n_0r}} \colonequals \{[x] \in {\mathbb P}(V) : \text{$W$ is the smallest ${\mathbb F}_{q^{n_0r}}$-rational subspace of $V$ containing $x$}\}.$$ Note that $\Omega_{W,q^{n_0r}} \subset {\mathbb P}(V)$ is isomorphic to the Drinfeld upper half-space for $W$ with respect to ${\mathbb F}_{q^{n_0r}}$. For any divisor $r \mid n'$, define $${\mathscr{S}}_r \colonequals \bigcup_{W} \Omega_{W, q^{n_0r}},$$ where the union ranges over all ${\mathbb F}_{q^{n_0r}}$-rational subspaces $W$ of dimension $n'/r$ in $V$. The following lemma records some easy facts. We have 1. ${\mathscr{S}}_1 = \Omega_{V, q^{n_0}}$ and ${\mathscr{S}}_{n'} = {\mathbb P}(V)({\mathbb F}_{q^n})$. 2. If $r \mid r' \mid n'$ and $W$ is a ${\mathbb F}_{q^{n_0r}}$-rational subspace of $V$, then $\Omega_{W, q^{n_0r'}} \subseteq \Omega_{W, q^{n_0r}}$. 3. If $r \mid r' \mid n'$, then ${\mathscr{S}}_1 \cap {\mathscr{S}}_{r'} \subseteq {\mathscr{S}}_1 \cap {\mathscr{S}}_r$. Note that ${\mathscr{S}}_1$ is the classical Deligne–Lusztig variety for ${\mathbb G}_1({\mathbb F}_q) \cong \operatorname{GL}_{n'}({\mathbb F}_{q^{n_0}})$ with respect to the nonsplit maximal torus ${\mathbb T}_1({\mathbb F}_q) \cong {\mathbb F}_{q^n}^\times$ [@DeligneL_76 Section 2.2] and the variety $X_h$ when $h=1$ is a ${\mathbb F}_{q^n}^\times$-cover of ${\mathscr{S}}_1$. 
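To fix ideas, here is the smallest nontrivial case (an illustration). Suppose $n' = 2$, so that ${\mathbb P}(V) \cong {\mathbb P}^1$. Then $${\mathscr{S}}_1 = \Omega_{V, q^{n_0}} = {\mathbb P}^1 \smallsetminus {\mathbb P}^1({\mathbb F}_{q^{n_0}}) \qquad \text{and} \qquad {\mathscr{S}}_2 = {\mathbb P}^1({\mathbb F}_{q^{2n_0}}),$$ so ${\mathscr{S}}_1$ is the Drinfeld upper half-line over ${\mathbb F}_{q^{n_0}}$ and ${\mathscr{S}}_1 \cap {\mathscr{S}}_2$ consists of the $q^{2n_0} - q^{n_0}$ points of ${\mathbb P}^1({\mathbb F}_{q^{2n_0}})$ which are not defined over ${\mathbb F}_{q^{n_0}}$.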
Hence for any $h \geq 1$, we have a map $$X_h \to X_1 \to {\mathscr{S}}_1.$$ \[l:drinfeld preimage\] For any divisor $r \mid n'$, the variety $X_h^{(r)}$ is the preimage of ${\mathscr{S}}_1 \cap {\mathscr{S}}_r$ under the composition map $X_h \to X_1 \to {\mathscr{S}}_1$. To prove this, we use the explicit description of $X_h$ coming from Lemma \[l:description\]: $$X_h = \{\lambda(v) \in {\mathbb G}_h : \text{$v \in {\mathscr L}_h$ and $\sigma(\det \lambda(v)) = \det \lambda(v)$}\}.$$ By Definition \[d:drinfeld GLn\], if $v \in {\mathscr L}_h$ is such that $\lambda(v) \in X_h^{(r)}$, then $\lambda(v)^{-1} F(\lambda(v)) \in {\mathbb U}_{h,r} \cap F {\mathbb U}_{h,r}^-$, which is equivalent to $$\label{e:A lin comb} F(\lambda(v)) = \lambda(v) A, \qquad \text{for some $A \in {\mathbb U}_{h,r} \cap F {\mathbb U}_{h,r}^-$}.$$ Note that $A = (a_{i,j})_{1 \leq i,j \leq n}$ has the property that $$\begin{aligned} a_{i,i} &= 1, && \text{for $i = 1, \ldots, n$,} \\ a_{i,1} &\in {\mathbb W}_h, && \text{if $i \equiv 1$ mod $r n_0$}, \\ a_{i,1} &\in V {\mathbb W}_{h-1} \subset {\mathbb W}_h, && \text{if $i \not\equiv 1$ mod $r n_0$}, \\ a_{i,j} &= 0 && \text{otherwise.}\end{aligned}$$ The first column of $F(\lambda(v))$ is the vector $\sigma^n(v)$. Therefore \[e:A lin comb\] implies that $$\sigma^n(v) = \sum_{i=1}^n a_{i,1} \lambda(v)_i = v + \sum_{i=2}^n a_{i,1} \lambda(v)_i,$$ where $\lambda(v)_i$ denotes the $i$th column of $\lambda(v)$. Recall from \[e:lambda i\] and Lemma \[l:ti contribution\] that $\lambda(v)_{[ie_{\kappa,n}+1]_n} = \prod_{j=0}^{i-1} t_{[je_{\kappa,n}+1]}^{-1} \cdot (b\sigma)^i(v)$. If $[i e_{\kappa,n}+1]_n \equiv 1$ modulo $rn_0$, then $i \equiv 0$ modulo $rn_0$. Therefore, if $\mathfrak v$ denotes the image of $v$ in ${\mathscr L}_1$, we have (using the above description of $A$), $$\sigma^n(\mathfrak v) \in \operatorname{span}\{\mathfrak v, \varpi^{-rk_0}(b\sigma)^{rn_0}(\mathfrak v), \varpi^{-2rk_0}(b\sigma)^{2rn_0}(\mathfrak v), \ldots, \varpi^{-(n'-1)rk_0}(b\sigma)^{(n'-1)rn_0}(\mathfrak v)\}.$$ Since $\lambda(v) \in {\mathbb G}_h$, necessarily $\mathfrak v, \varpi^{-rk_0}(b\sigma)^{rn_0}(\mathfrak v), \ldots, \varpi^{-(n'-1)rk_0} (b\sigma)^{(n'-1)rn_0}(\mathfrak v)$ are linearly independent and therefore span an $n'/r$-dimensional subspace of ${\mathscr L}_1$. This exactly means that $\mathfrak v \in {\mathscr{S}}_1 \cap {\mathscr{S}}_r$, so the proof is complete. By Lemma \[l:drinfeld preimage\], we see that for $\operatorname{GL}_n$ and its inner forms, the Drinfeld stratification of $X_h$ is induced by considering intermediate Drinfeld upper half-spaces of smaller dimension embedding in ${\mathbb P}^{n'-1}_{{\mathbb F}_{q^{n_0}}}$. The Drinfeld stratification of $X_h(b,w)$ {#s:drinfeld b,w} ----------------------------------------- In this section, we consider the varieties $X_h(b,w)$ in the special case $$\text{$b = g_0 b_{{{\,{\rm cox}}}} \sigma(g_0)^{-1}$ for some $g_0 \in G_{x,0}({\mathcal O}_{\breve k})$}, \qquad \text{and} \qquad w = b_{{{\,{\rm cox}}}}.$$ For any such $b$, recall from Lemmas \[l:change b\] and \[l:Xbw to Xh\] that $$\label{e:b bcox} X_h = X_h(b_{{{\,{\rm cox}}}}, b_{{{\,{\rm cox}}}}) \cong X_h(b, b_{{{\,{\rm cox}}}}),$$ where the second isomorphism is given by $x \mapsto \overline g_0 x$, where $\overline g_0$ is the image of $g_0$ in ${\mathbb G}_h(\overline {\mathbb F}_q)$. Therefore the Drinfeld stratification $\{X_h^{(r)}\}$ of $X_h$ gives rise to a stratification $\{X_h(b,b_{{\,{\rm cox}}})^{(r)}\}$ for $X_h(b,b_{{{\,{\rm cox}}}})$.
The proof of Lemma \[l:g0 independence\] shows that if $\sigma^n(\overline g_0) = \overline g_0$, then the Drinfeld stratification of $X_h(b, b_{{\,{\rm cox}}})$ does not depend on the choice of $g_0$. Let $b = g_0 b_{{{\,{\rm cox}}}} \sigma(g_0)^{-1} \in G(\breve k)$ for some $g_0 \in G_{x,0}({\mathcal O}_{\breve k})$. To each $v \in {\mathscr L}_h$, define $$\begin{aligned} g_b(v) &\colonequals \left(v_1 \, \Big| \, v_2 \, \Big| \, v_3 \, \Big| \, \cdots \, \Big| \, v_n\right) \\ \text{where } v_i &\colonequals \varpi^{\lfloor (i-1)k_0/n_0 \rfloor} \cdot (b\sigma)^{i-1}(v) \text{ for $1 \leq i \leq n$,}\end{aligned}$$ where we abuse notation by writing $\varpi^{\lfloor (i-1)k_0/n_0 \rfloor} \cdot (b\sigma)^{i-1}$ for the map ${\mathscr L}_h \to {\mathscr L}_h$ which takes $v$ to the image $\varpi^{\lfloor (i-1)k_0/n_0 \rfloor} \cdot (b\sigma)^{i-1}(\widetilde v)$ in the subquotient ${\mathscr L}_h$ of $\breve k^{\oplus n}$, where $\widetilde v$ is any lift of $v$ in ${\mathscr L}\subset \breve k^{\oplus n}$. If $b = g_0 b_{{{\,{\rm cox}}}} \sigma(g_0)^{-1}$ for some $g_0 \in G_{x,0}({\mathcal O}_{\breve k})$, then $$X_h(b,b_{{{\,{\rm cox}}}}) \cong \{v \in {\mathscr L}_h : \text{$\sigma(\det g_b(v)) = \frac{\det b_{{\,{\rm cox}}}}{\det b} \cdot \det g_b(v) \in {\mathbb W}_h^\times$}\}.$$ First note that one can obtain $g_{b_{{\,{\rm cox}}}}(v)$ from $\lambda(v)$ by permuting columns. In particular, $$X_h(b_{{\,{\rm cox}}},b_{{\,{\rm cox}}}) = X_h \cong \{v \in {\mathscr L}_h : \sigma(\det g_{b_{{{\,{\rm cox}}}}}(v)) = \det g_{b_{{{\,{\rm cox}}}}}(v) \in {\mathbb W}_h^\times\}.$$ Since $X_h(b_{{\,{\rm cox}}}, b_{{\,{\rm cox}}}) \cong X_h(b, b_{{\,{\rm cox}}})$ is given by $x \mapsto \overline g_0 x$ where $\overline g_0$ denotes the image of $g_0$ in ${\mathbb G}_h(\overline {\mathbb F}_q)$, we have that $X_h(b,b_{{\,{\rm cox}}})$ is isomorphic to the set of $\overline g_0 \cdot g_{b_{{\,{\rm cox}}}}(v)$ where $v \in {\mathscr L}_h$ satisfies the above criterion. By direct computation, $$\overline g_0 \cdot g_{b_{{\,{\rm cox}}}}(v) = g_b(\overline g_0 \cdot v),$$ and hence if $\sigma(\det g_{b_{{\,{\rm cox}}}}(v)) = \det g_{b_{{\,{\rm cox}}}}(v)$, then $$\begin{aligned} \sigma(\det g_b(\overline g_0 \cdot v)) &= \sigma(\det \overline g_0) \cdot \sigma(\det g_{b_{{\,{\rm cox}}}}(v)) = \sigma(\det \overline g_0) \cdot \det g_{b_{{\,{\rm cox}}}}(v) \\ &= \frac{\sigma(\det \overline g_0)}{\det \overline g_0} \cdot \det g_b(\overline g_0 \cdot v) = \frac{\det b_{{\,{\rm cox}}}}{\det b} \cdot \det g_b(\overline g_0 \cdot v). \qedhere\end{aligned}$$ \[l:g0 independence\] Let $b = g_0 b_{{\,{\rm cox}}}\sigma(g_0)^{-1}$ for some $g_0 \in G_{x,0}({\mathcal O}_{\breve k})$ and assume that the image $\overline g_0 \in {\mathbb G}_h(\overline {\mathbb F}_q)$ of $g_0$ has the property that $\sigma^n(\overline g_0) = \overline g_0$. Let $r \mid n'$ be any divisor. For $v \in {\mathscr L}_h$, let $\mathfrak v$ denote its image in ${\mathscr L}_1$. Then $$X_h(b, b_{{\,{\rm cox}}})^{(r)} \cong \left\{v \in {\mathscr L}_h : \begin{gathered} \sigma(\det g_{b}(v)) = \frac{\det b_{{\,{\rm cox}}}}{\det b} \cdot \det g_{b}(v) \in {\mathbb W}_h^\times \\ \sigma^n(\mathfrak v) \in \operatorname{span}\{\varpi^{-ik_0r} (b\sigma)^{irn_0}(\mathfrak v): 0 \leq i \leq n'-1\}\end{gathered} \right\}.$$ In particular, the Drinfeld stratification of $X_h(b, b_{{\,{\rm cox}}})$ does not depend on the choice of $g_0$.
Recall that $$X_h(b_{{\,{\rm cox}}}, b_{{\,{\rm cox}}})^{(r)} \cong \left\{v \in {\mathscr L}_h : \begin{gathered} \sigma(\det g_{b_{{\,{\rm cox}}}}(v)) = \det g_{b_{{\,{\rm cox}}}}(v) \in {\mathbb W}_h^\times \\ \sigma^n(\mathfrak v) \in \operatorname{span}\{\varpi^{-ik_0r} (b_{{\,{\rm cox}}}\sigma)^{irn_0}(\mathfrak v): 0 \leq i \leq n'-1\}\end{gathered} \right\}.$$ By definition, every element in $X_h(b, b_{{\,{\rm cox}}})^{(r)}$ is of the form $\overline g_0 g_{b_{{\,{\rm cox}}}}(v)$ for some $v \in {\mathscr L}_h$ satisfying the above criteria. Since $\overline g_0 g_{b_{{\,{\rm cox}}}}(v) = g_{b}(\overline g_0 v)$ and since $\sigma^n(\overline g_0) = \overline g_0$, we have $$\overline g_0 \sigma^n(\mathfrak v) \in \operatorname{span}\{\overline g_0 \varpi^{-ik_0r}(b_{{\,{\rm cox}}}\sigma)^{irn_0}(\mathfrak v): 0 \leq i \leq n'-1\}.$$ But now $\overline g_0 \varpi^{-ik_0r}(b_{{\,{\rm cox}}}\sigma)^{irn_0}(\mathfrak v) = \varpi^{-ik_0r}(b \sigma)^{irn_0}(\mathfrak v)$ and therefore the desired conclusion follows. In Appendix \[s:fibers\], we will work directly with a particular $b$ called the *special representative* in [@CI_ADLV] (see Definition \[d:special\] of the present paper). The special representative satisfies the hypotheses of Lemma \[l:g0 independence\]. Torus eigenspaces in the cohomology =================================== We prove an irreducibility result for torus eigenspaces in the alternating sum of the cohomology of $X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1$. Howe factorizations {#s:howe_fact} ------------------- Let ${\mathscr T}_{n,h}$ denote the set of characters $\theta {\colon}{\mathbb W}_h^\times({\mathbb F_{q^n}}) \to \overline {\mathbb Q}_\ell^\times$. Recall that if $h \geq 2$, we have natural surjections $\operatorname{pr}{\colon}{\mathbb W}_h^\times \to {\mathbb W}_{h-1}^\times$ and injections ${\mathbb G}_a \to {\mathbb W}_h^\times$ given by $x \mapsto [1, 0, \ldots, 0, x]$. Furthermore, for any subfield $F \subset L$, the norm map $L^\times \to F^\times$ induces a map $\operatorname{Nm}{\colon}{\mathbb W}_h^\times(k_L) \to {\mathbb W}_h^\times(k_F)$. These maps induce $$\begin{aligned} \operatorname{pr}^* {\colon}{\mathscr T}_{n,h'} &\to {\mathscr T}_{n,h}, && \text{for $h' < h$}, \\ \operatorname{Nm}^* {\colon}{\mathscr T}_{m,h} &\to {\mathscr T}_{n,h}, && \text{for $m \mid n$}.\end{aligned}$$ First consider the setting $h \geq 2$. By pulling back along ${\mathbb G}_a \to {\mathbb W}_h^\times, x \mapsto [1, 0, \ldots, 0, x]$, we may restrict characters of ${\mathbb W}_h^\times({\mathbb F_{q^n}})$ to characters of ${\mathbb F_{q^n}}$. We say that $\theta \in {\mathscr T}_{n,h}$ is *primitive* if $\theta|_{{\mathbb F_{q^n}}}$ has trivial stabilizer in $\operatorname{Gal}({\mathbb F_{q^n}}/{\mathbb F}_q)$. If $h = 1$, then $\theta \in {\mathscr T}_{n,h}$ is a character $\theta {\colon}{\mathbb F}_{q^n}^\times \to \overline {\mathbb Q}_\ell^\times$, and we say it is *primitive* if $\theta$ has trivial stabilizer in $\operatorname{Gal}({\mathbb F_{q^n}}/{\mathbb F}_q)$. For any $h \geq 1$, we write ${\mathscr T}_{n,h}^0 \subset {\mathscr T}_{n,h}$ to denote the subset of primitive characters. We can decompose $\theta \in {\mathscr T}_{n,h}$ into primitive components in the sense of Howe [@Howe_77 Corollary after Lemma 11]. 
A *Howe factorization* of a character $\theta \in {\mathscr T}_{n,h}$ is a decomposition $$\theta = \prod_{i=1}^d \theta_i, \qquad \text{where $\theta_i = \operatorname{pr}^* \operatorname{Nm}^* \theta_i^0$ and $\theta_i^0 \in {\mathscr T}_{m_i, h_i}^0$},$$ such that $m_i < m_{i+1}$, $m_i \mid m_{i+1}$, and $h_i > h_{i+1}$. It is automatic that $m_i \leq n$ and $h \geq h_i$. For any integer $0 \leq t \leq d$, set $\theta_0$ to be the trivial character and define $$\theta_{\geq t} \colonequals \prod_{i=t}^d \theta_i \in {\mathscr T}_{n,h_t}.$$ Observe that the choice of $\theta_i$ in a Howe factorization $\theta = \prod_{i=1}^d \theta_i$ is not unique, but the $m_i$ and $h_i$ only depend on $\theta$. Hence the Howe factorization attaches to each character $\theta \in {\mathscr T}_{n,h}$ a pair of well-defined sequences $$\begin{aligned} &1 \equalscolon m_0 \leq m_1 < m_2 < \cdots < m_d \leq m_{d+1} \colonequals n \\ &h \equalscolon h_0 \geq h_1 > h_2 > \cdots > h_d \geq h_{d+1} \colonequals 1\end{aligned}$$ satisfying the divisibility $m_i \mid m_{i+1}$ for $0 \leq i \leq d$. We give some examples of the sequences associated to characters $\theta \in {\mathscr T}_{n,h}$. 1. If $\theta$ is the trivial character, then $d = 1$ and the associated sequences are $$\{m_0, m_1, m_2\} = \{1, 1, n\}, \qquad \{h_0, h_1, h_2\} = \{h, 1, 1\},$$ where we note that ${\mathscr T}_{1,1} = {\mathscr T}_{1,1}^0$ since any character of ${\mathbb F}_q^\times$ has trivial $\operatorname{Gal}({\mathbb F}_q/{\mathbb F}_q)$-stabilizer. 2. Say $h \geq h'$. We say that $\theta$ is a primitive character of level $h' \geq 2$ if $\theta|_{U_L^{h'}} = 1$ and $\theta|_{U_L^{h'-1}/U_L^{h'}}$ has trivial $\operatorname{Gal}({\mathbb F}_{q^n}/{\mathbb F}_q)$-stabilizer. Then $d = 1$ and the associated sequences are $$\{m_0, m_1, m_2\} = \{1, n, n\}, \qquad \{h_0, h_1, h_2\} = \{h, h', 1\}.$$ In the division algebra setting, this case is studied in [@Chan_DLI; @Chan_DLII]. For arbitrary inner forms of $\operatorname{GL}_n$ over $K$, we considered *minimal admissible* $\theta$, which are exactly the characters $\theta \in {\mathscr T}_{n,h}$ which are either primitive or have $d=2$ with associated sequences $$\{m_0, m_1, m_2, m_3\} = \{1, 1, n, n\}, \qquad \{h_0, h_1, h_2, h_3\} = \{h, h_1, h_2, 1\}.$$ This is a very slight generalization of the primitive case. 3. Say $h \geq 2$. If $\theta|_{U_L^2} = 1$ and the stabilizer of $\theta|_{U_L^1/U_L^2}$ in $\operatorname{Gal}({\mathbb F_{q^n}}/{\mathbb F}_q)$ is $\operatorname{Gal}({\mathbb F_{q^n}}/{\mathbb F}_{q^m})$, then $d = 1$ and the associated sequences are $$\{m_0, m_1, m_2\} = \{1, m, n\}, \qquad \{h_0, h_1, h_2\} = \{h, 2, 1\}.$$ In the division algebra setting, the case $h = 2$ is studied in [@Boyarchenko_12; @BoyarchenkoW_16]. 4. Say $h \geq 1$. If $\theta|_{U_L^1} = 1$ and the stabilizer of $\theta {\colon}{\mathbb F}_{q^n}^\times \to \overline {\mathbb Q}_\ell^\times$ is $\operatorname{Gal}({\mathbb F_{q^n}}/{\mathbb F}_{q^m})$, then $d = 1$ and the associated sequences are $$\{m_0, m_1, m_2\} = \{1, m, n\}, \qquad \{h_0, h_1, h_2\} = \{h, 1, 1\}.$$ This is the so-called “depth zero” case. Irreducibility {#s:irred} -------------- Recall that the intersection $X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1$ has an action by the subgroup ${\mathbb{L}}_h^{(r)}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q) \subset {\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$.
In this section, we study the irreducibility of the virtual ${\mathbb{L}}_h^{(r)}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q)$-representation $H_c^*(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)[\theta]$, where $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$ is arbitrary. We follow a technique of Lusztig which has appeared in the literature in many incarnations, the closest analogues being [@Lusztig_04; @Stasinski_09; @CI_MPDL]. In these works, the strategy is to translate the problem of calculating an inner product between two representations to calculating the cohomology of a third variety $\Sigma$. This is done by first writing $\Sigma = \Sigma' \sqcup \Sigma''$, proving the cohomology of $\Sigma''$ gives the expected outcome, and then putting a lot of work into showing that the cohomology of $\Sigma'$ does not contribute. In the three works cited, one can only prove the vanishing of (certain eigenspaces of) the Euler characteristic of $\Sigma'$ under a strong *regularity* condition on the characters $\theta, \theta'$. The key new idea here is adapted from [@CI_loopGLn Section 3.2], which allows us to relax this regularity assumption by working directly with $\Sigma$ throughout the proof. We give only a sketch of the proof of Theorem \[t:inner\_prod\] here, as the proof of [@CI_loopGLn Theorem 3.1] is very similar. \[t:inner\_prod\] Let $\theta, \theta' {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$ be any two characters. Then $$\Big\langle H_c^*(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)[\theta], H_c^*(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)[\theta'] \Big\rangle_{{\mathbb{L}}_h^{(r)}({\mathbb F}_q) {\mathbb G}_h^1({\mathbb F}_q)} = \#\{w \in W_{{\mathbb{L}}_h^{(r)}}^F : \theta' = \theta \circ \operatorname{Ad}(w)\},$$ where $W_{{\mathbb{L}}_h^{(r)}}^F = N_{{\mathbb{L}}_h^{(r)}({\mathbb F}_q)}({\mathbb T}_h({\mathbb F}_q))/{\mathbb T}_h({\mathbb F}_q)$. Since $W_{{\mathbb{L}}_h^{(r)}}^F \cong \operatorname{Gal}({\mathbb F}_{q^n}/{\mathbb F}_{q^{n_0r}})$, we obtain the following theorem as a direct corollary of Theorem \[t:inner\_prod\]. \[t:irred\] Let $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \cong {\mathbb W}_h^\times({\mathbb F}_{q^n}) \to \overline {\mathbb Q}_\ell^\times$ be any character. Then the virtual ${\mathbb{L}}_h^{(r)}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q)$-representation $H_c^*(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)[\theta]$ is (up to sign) irreducible if and only if $\theta$ has trivial $\operatorname{Gal}({\mathbb F}_{q^n}/{\mathbb F}_{q^{n_0r}})$-stabilizer. In the special case that $r = n'$, we have ${\mathbb{L}}_h^{(n')} = {\mathbb T}_h$ and using Lemma \[l:description\] and Definition \[d:drinfeld GLn\], we have that $S_h \cap {\mathbb T}_h {\mathbb G}_h^1$ is an affine fibration over $$\{x \in {\mathbb T}_h {\mathbb G}_h^1 : x^{-1} F(x) \in {\mathbb U}_h^1 \cap F{\mathbb U}_h^{-,1}\}.$$ and that $$X_h \cap {\mathbb T}_h {\mathbb G}_h^1 = \bigsqcup_{t \in {\mathbb T}_h({\mathbb F}_q)} t \cdot X_h^1, \qquad \text{where $X_h^1 = X_h \cap {\mathbb G}_h^1$.}$$ Here we have $$\label{e:Xh1} X_h^1 = \{x \in {\mathbb G}_h^1 : x^{-1} F(x) \in {\mathbb U}_h^1 \cap F{\mathbb U}_h^{-,1}\}.$$ \[t:irred n’\] Let $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$ be any character. Then $H_c^*(X_h^1, \overline {\mathbb Q}_\ell)[\chi]$ is an irreducible representation of ${\mathbb G}_h^1({\mathbb F}_q)$. 
Moreover, if $\chi,\chi'$ are any two characters of ${\mathbb T}_h^1({\mathbb F}_q)$, then $H_c^*(X_h^1, \overline {\mathbb Q}_\ell)[\chi] \cong H_c^*(X_h^1, \overline {\mathbb Q}_\ell)[\chi']$ if and only if $\chi = \chi'$. Corollary \[t:irred n’\] follows from Corollary \[t:irred\] (by arguing the relationship between the cohomology of $X_h^1$ and the cohomology of $X_h \cap {\mathbb T}_h {\mathbb G}_h^1$), but one can give an alternate proof using [@Chan_siDL Section 6.1], which is based on [@Lusztig_79]. We do this in Section \[s:irred n’\]. Recall that specializing Lemma \[l:geom parabolic induction\] yields that $$H_c^*(X_h^{(r)}, \overline {\mathbb Q}_\ell)[\theta] \cong \operatorname{Ind}_{{\mathbb{L}}_h^{(r)}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q)}^{{\mathbb G}_h({\mathbb F}_q)}\big(H_c^*(X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1, \overline {\mathbb Q}_\ell)[\theta]\big).$$ We note that one needs a separate argument to study the irreducibility of $H_c^*(X_h^{(r)}, \overline {\mathbb Q}_\ell)[\theta]$. In the case that $r = n'$, this is done in [@CI_loopGLn Theorem 4.1(b)]. ### Proof of Theorem \[t:inner\_prod\] Recall that by definition $$S_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1 = \{g \in {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1 : g^{-1} F(g) \in {\mathbb U}_{h,r}\}, \qquad \text{where ${\mathbb U}_{h,r} = {\mathbb{L}}_h^{(r)} {\mathbb U}_h^1 \cap {\mathbb U}_h$.}$$ Consider the variety $$\Sigma^{(r)} = \{(x,x', y) \in F({\mathbb U}_{h,r}) \times F({\mathbb U}_{h,r}) \times {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1 : x F(y) = yx'\}$$ endowed with the ${\mathbb T}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$-action given by $(t,t') {\colon}(x,x',y) \mapsto (txt^{-1}, t' x' t'{}^{-1}, tyt'{}^{-1})$. Then we have an isomorphism $$\begin{aligned} {\mathbb{L}}_h^{(r)}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q) \backslash \big((S_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1) \times (S_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1)\big) &\to \Sigma^{(r)}, \\ (g,g') &\mapsto (g^{-1}F(g), g'{}^{-1} F(g'), g^{-1}g'),\end{aligned}$$ equivariant with respect to ${\mathbb T}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$. To prove Theorem \[t:inner\_prod\], we need to establish $$\label{e:Sigma goal} \sum_i (-1)^i \dim H_c^i(\Sigma^{(r)}, \overline {\mathbb Q}_\ell)_{\theta,\theta'} = \#\{w \in W_{{\mathbb{L}}_h^{(r)}}^F : \theta' = \theta \circ \operatorname{Ad}(w)\}.$$ The Bruhat decomposition of the reductive quotient ${\mathbb G}_1$ lifts to a decomposition ${\mathbb G}_h = \bigsqcup_{w \in W_{{\mathbb G}_h}} {\mathbb G}_{h,w}$, where ${\mathbb G}_{h,w} = {\mathbb U}_h {\mathbb T}_h \dot w {\mathbb{K}}_{h}^1 {\mathbb U}_h$ and ${\mathbb{K}}_{h}^1 = ({\mathbb U}_h^-)^1 \cap \dot w^{-1} {\mathbb U}_h^{-,1} \dot w$ [@CI_MPDL Lemma 8.6]. 
This induces the decomposition $${\mathbb{L}}_h^{(r)} {\mathbb G}_h^1 = \bigsqcup_{w \in W_{{\mathbb{L}}_h^{(r)}}^F} {\mathbb G}_{h,w}^{(r)}, \qquad \text{where ${\mathbb G}_{h,w}^{(r)} = {\mathbb G}_{h,w} \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1$.}$$ and also the locally closed decomposition $$\Sigma^{(r)} = \bigsqcup_{w \in W_{{\mathbb{L}}_h^{(r)}}^F} \Sigma_w^{(r)}, \qquad \text{where $\Sigma_w^{(r)} = \Sigma \cap (F({\mathbb U}_{h,r}) \times F({\mathbb U}_{h,r}) \times {\mathbb G}_{h,w}^{(r)})$.}$$ We will calculate \[e:Sigma goal\] by analyzing the cohomology of $$\begin{aligned} \widehat \Sigma_w^{(r)} = \{(x,x',y_1,\tau,z,y_2) \in F({\mathbb U}_{h,r}) \times F({\mathbb U}_{h,r}) \times {\mathbb U}_{h,r} \times {}&{} {\mathbb T}_h \times {\mathbb{K}}_{h}^1 \times {\mathbb U}_h : \\&x F(y_1 \tau \dot w z y_2) = y_1 \tau \dot w z y_2 x'\}.\end{aligned}$$ Since $\widehat \Sigma_w^{(r)} \to \Sigma_w^{(r)}, (x,x',y_1,\tau,z,y_2) \mapsto (x,x',y_1 \tau z y_2)$ is a locally trivial fibration, showing \[e:Sigma goal\] is equivalent to showing $$\label{e:Sigma hat goal} \sum_i (-1)^i \dim H_c^i(\widehat \Sigma_w^{(r)}, \overline {\mathbb Q}_\ell)_{\theta,\theta'} = \begin{cases} 1 & \text{if $w \in W_{{\mathbb{L}}_h^{(r)}}^F$ and $\theta' = \theta \circ \operatorname{Ad}(w)$,} \\ 0 & \text{otherwise.} \end{cases}$$ As in [@Lusztig_04 1.9], we can simplify the formulation of $\widehat \Sigma_w$ by replacing $x$ by $x F(y_1)$ and replacing $x'$ by $x' F(y_2)^{-1}$. We then obtain $$\widehat \Sigma_w^{(r)} = \{(x,y_1, \tau,z,y_2) \in F{\mathbb U}_{h,r} \times {\mathbb U}_{h,r} \times {\mathbb T}_h \times {\mathbb{K}}_{h}^1 \times {\mathbb U}_{h,r} : x F(\tau \dot w z) \in y_1 \tau \dot w z y_2 F {\mathbb U}_{h,r}\}.$$ \[l:empty criterion\] Assume that there exists some $2 \leq i \leq n$ which satisfies the string of inequalities $[\gamma \dot w \gamma^{-1}(i)] > [\gamma \dot w \gamma^{-1}(i-1)+1] > 1$. Then $\widehat \Sigma_w = \varnothing.$ By the same argument as in [@CI_loopGLn Lemma 3.4], we may assume $h=1$ and come to the statement that $\widehat \Sigma_w = \varnothing$ if there does not exist $(x,y_{12}, y_{21}, \tau) \in F{\mathbb U}_{1,r} \times ({\mathbb U}_{1,r} \cap F{\mathbb U}_{1,r}^-) \times ({\mathbb U}_{1,r} \cap F{\mathbb U}_{1,r}^-) \times {\mathbb T}_1$ such that $$\dot w^{-1} \tau y_{12} x F(\dot w) \in y_{21} F({\mathbb U}_1 \cap {\mathbb{L}}_1^{(r)}).$$ Therefore to prove the lemma, it is enough to analyze the intersection $$\big[\dot w^{-1} ({\mathbb U}_{1,r} \cap F{\mathbb U}_{1,r}^-) \cdot F{\mathbb U}_{1,r} F(\dot w)\big] \cap \big[({\mathbb U}_{1,r} \cap F{\mathbb U}_{1,r}^-) \cdot F({\mathbb U}_1 \cap {\mathbb{L}}_1^{(r)})\big].$$ By construction (see \[e:gamma\] and \[e:U\], and write $F_0(g) = (b_0 \gamma t_{\kappa,n} \gamma^{-1}) \sigma(g) (b_0 \gamma t_{\kappa,n} \gamma^{-1})^{-1}$), we have $$\begin{aligned} \dot w^{-1} &({\mathbb T}_1 \cap ({\mathbb U}_{1,r} \cap F{\mathbb U}_{1,r}^-) \cdot F{\mathbb U}_{1,r}) F(\dot w) \cap (({\mathbb U}_{1,r} \cap F{\mathbb U}_{1,r}^-) \cdot F{\mathbb U}_{1,r}) \\ &= \gamma^{-1} (\gamma \dot w^{-1} \gamma^{-1})({\mathbb T}_1 \cdot ({\mathbb U}_{{\rm low},1,r} \cap F_0{\mathbb U}_{{\rm up},1,r}) \cdot F_0{\mathbb U}_{{\rm low},1,r}) F_0(\gamma \dot w^{-1} \gamma^{-1}) \gamma \\ &\qquad\qquad\qquad\cap \gamma^{-1}(({\mathbb U}_{{\rm low},1,r} \cap F_0 {\mathbb U}_{{\rm up},1,r}) \cdot F_0 {\mathbb U}_{{\rm low},1,r})\gamma.\end{aligned}$$ Now the desired result holds by [@CI_loopGLn Lemma 3.5].
The rest of the proof now proceeds exactly as in [@CI_loopGLn Section 3.3, 3.4], which we summarize now. By [@CI_loopGLn Lemma 3.5], if $1 \neq w \in W_{{\mathbb{L}}_h^{(r)}}$ is such that $\widehat \Sigma_w \neq \varnothing$, then ${\mathbb U}_h \cap \dot w^{-1} {\mathbb U}_h \dot w$ is centralized by a subtorus of ${\mathbb T}_h$ which properly contains the center of ${\mathbb G}_h$. In particular, the group $$H_w = \{(t,t') \in {\mathbb T}_h \times {\mathbb T}_h : \text{$\dot w^{-1} t^{-1} F(t) \dot w = t'{}^{-1} F(t')$ centralizes ${\mathbb{K}}_{h} = {\mathbb U}_h \cap \dot w^{-1} {\mathbb U}_h \dot w$}\}$$ has the property that its image under the projections $\pi_1, \pi_2 {\colon}{\mathbb T}_h \times {\mathbb T}_h \to {\mathbb T}_1 \times {\mathbb T}_1 \to {\mathbb T}_1$ contains a rank-$1$ regular[^1] torus. Crucially, $H_w$ acts on $\widehat\Sigma_w^{(r)}$ via $$(t,t') {\colon}(x,y_1, \tau, z, y_2) \mapsto (F(t) x F(t)^{-1}, F(t) y_1 F(t)^{-1}, t \tau \dot w t'{}^{-1} \dot w^{-1}, t' z t'{}^{-1}, F(t') y_2 F(t')^{-1}),$$ and this action extends the action of ${\mathbb T}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$. Then $H_c^*(\widehat \Sigma_w, \overline {\mathbb Q}_\ell) = H_c^*(\widehat \Sigma_w^{H_{w,{\rm red}}^0}, \overline {\mathbb Q}_\ell)$ and using [@CI_loopGLn Lemma 3.6], we can calculate: $$\widehat \Sigma_w^{H_{w,{\rm red}}^0} = \begin{cases} ({\mathbb T}_h \dot w)^F & \text{if $F(\dot w) = \dot w$,} \\ \varnothing & \text{otherwise.} \end{cases}$$ Now \[e:Sigma hat goal\] holds for all $w \neq 1$. To obtain \[e:Sigma hat goal\] for $w = 1$, we may apply [@CI_loopGLn Section 3.4] directly. We have now finished the proof of Theorem \[t:inner\_prod\]. ### Proof of Corollary \[t:irred n’\] {#s:irred n'} Consider $$\Sigma^1 = \{(x,x',y) \in ({\mathbb U}_h^1 \cap F{\mathbb U}_h^{-,1}) \times ({\mathbb U}_h^1 \cap F{\mathbb U}_h^{-,1}) \times {\mathbb G}_h^1 : x F(y) = yx'\}.$$ Then we have an isomorphism $${\mathbb G}_h({\mathbb F}_q) \backslash \big((X_h \cap {\mathbb T}_h {\mathbb G}_h^1) \times (X_h \cap {\mathbb T}_h {\mathbb G}_h^1)\big) \to \Sigma^1, \qquad (g,g') \mapsto (g^{-1}F(g), g'{}^{-1}F(g'), g^{-1} g').$$ Since ${\mathbb G}_h^1$ has an Iwahori factorization, any $y \in {\mathbb G}_h^1$ can be written uniquely in the form $$\begin{aligned} y &= y_1' y_2' y_1'' y_2'', & y_1' &\in {\mathbb U}_h^1 \cap F^{-1}({\mathbb U}_h^1), & y_2' &\in {\mathbb U}_h^1 \cap F^{-1}({\mathbb U}_h^{-,1}), \\ & & y_1'' &\in {\mathbb T}_h \cdot ({\mathbb U}_h^{-,1} \cap F^{-1} {\mathbb U}_h^{-,1}), & y_2'' &\in {\mathbb U}_h^{-,1} \cap F^{-1} {\mathbb U}_h^1.\end{aligned}$$ Then our defining equation becomes $$x F(y_1' y_2' y_1'' y_2'') = y_1' y_2' y_1'' y_2'' x'.$$ By \[e:section\], every element of ${\mathbb U}_h$ can be written uniquely in the form $y_1'{}^{-1} x F(y_1')$. We also have $F(y_2''), x' \in {\mathbb U}_h^1 \cap F {\mathbb U}_h^{-,1}$ and we can replace $x'$ by $x' F(y_2'')^{-1}$.
Therefore $\Sigma^1$ is the set of tuples $(x', y_2', y_1'', y_2'') \in ({\mathbb U}_h^1 \cap F{\mathbb U}_h^{-,1}) \times ({\mathbb U}_h^1 \cap F^{-1}{\mathbb U}_h^{-,1}) \times ({\mathbb T}_h \cdot ({\mathbb U}_h^{-,1} \cap F^{-1} {\mathbb U}_h^{-,1})) \times ({\mathbb U}_h^{-,1} \cap F^{-1}{\mathbb U}_h^1)$ which satisfy $$y_1'' y_2'' x' \in y_2'{}^{-1} {\mathbb U}_h F(y_2') F(y_1'') = {\mathbb U}_h F(y_2') F(y_1'').$$ Now consider the subgroup $$H \colonequals \{(t,t') \in {\mathbb T}_h \times {\mathbb T}_h : \text{$t^{-1} F(t) = t'{}^{-1}F(t')$ centralizes ${\mathbb T}_h \cdot ({\mathbb U}_h^{-,1} \cap F^{-1} {\mathbb U}_h^{-,1})$}\}.$$ It is a straightforward check that for any $(t,t') \in H$, the map $$(x', y_2', y_1'', y_2'') \mapsto (F(t')^{-1} x' F(t'), t^{-1} y_2' t, t^{-1} y_1'' t', F(t')^{-1} y_2'' F(t'))$$ defines an action of $H$ on $\Sigma^1$. By explicit calculation, one can check that $H$ contains an algebraic torus ${\mathcal T}$ over $\overline {\mathbb F}_q$ and that the fixed points of $\Sigma^1$ under ${\mathcal T}$ are equal to ${\mathbb T}_h^1({\mathbb F}_q)$. We therefore have $$\dim H_c^*(\Sigma^1, \overline {\mathbb Q}_\ell)_{\chi^{-1}, \chi'} = \begin{cases} 1 & \text{if $\chi = \chi'$,} \\ 0 & \text{otherwise,} \end{cases}$$ and this completes the proof. Very regular elements --------------------- Recall that we say that an element $g \in {\mathbb T}_h({\mathbb F}_q) \cong {\mathbb W}_h({\mathbb F}_{q^n})^\times$ is *very regular* if its image in ${\mathbb F}_{q^n}^\times$ has trivial $\operatorname{Gal}({\mathbb F_{q^n}}/{\mathbb F}_q)$-stabilizer. \[t:very\_reg\] Let $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$ be any character. If $g \in {\mathbb T}_h({\mathbb F}_q)\subset {\mathbb{L}}_h^{(r)}({\mathbb F}_q) {\mathbb G}_h^1({\mathbb F}_q)$ is a very regular element, then $$\operatorname{Tr}(g ; H_c^*(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)[\theta]) = \sum_{\gamma \in \operatorname{Gal}(L/k)[n'/r]} \theta^\gamma(g),$$ where $\operatorname{Gal}(L/k)[n'/r]$ is the unique order-$n'/r$ subgroup of $\operatorname{Gal}(L/k)$. Let $g \in {\mathbb T}_h({\mathbb F}_q)$ be a very regular element and let $t \in {\mathbb T}_h({\mathbb F}_q)$ be any element. Since the action of $(g,t)$ on $X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1$ is a finite-order automorphism of a separated, finite-type scheme over ${\mathbb F}_{q^n}$, by the Deligne–Lusztig fixed point formula, $$\operatorname{Tr}\left((g,t)^*; H_c^*(X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1)[\theta]\right) = \operatorname{Tr}\left((g_u, t_u)^*; H_c^*((X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1)^{(g_s, t_s)})[\theta]\right),$$ where $g = g_s g_u$ and $t = t_s t_u$ are decompositions such that $g_s, t_s$ are powers of $g,t$ of prime-to-$p$ order and $g_u, t_u$ are powers of $g,t$ of $p$-power order. Recall from Section \[s:explicit\] that every element $x$ of $X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1$ is a matrix that is uniquely determined by its first column $(x_1, x_2, \ldots, x_n)$.
Furthermore, we have an isomorphism $${\mathbb W}_h({\mathbb F}_{q^n})^\times \to {\mathbb T}_h({\mathbb F}_q), \qquad t \mapsto \operatorname{diag}(t, \sigma^l(t), \sigma^{2l}(t), \ldots, \sigma^{(n-1)l}(t)).$$ Under this identification, for $g,t \in {\mathbb T}_h({\mathbb F}_q)$, the element $gxt \in X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1$ corresponds to the vector $(gt x_1, \sigma^l(g)t x_2, \sigma^{2l}(g)t x_3, \ldots, \sigma^{(n-1)l}(g) t x_n).$ In particular, we see that if $x \in (X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)^{(g,t)}$, then (for any $i = 1, \ldots, n$) $x_i \neq 0$ implies $t = \sigma^{(i-1)l}(g)^{-1}$. Using the assumption that $g$ is very regular and therefore $g_s$ has trivial $\operatorname{Gal}(L/k)$-stabilizer, this implies that $(X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1)^{(g,t)}$ exactly consists of elements corresponding to vectors with a single nonzero entry $x_i$. Now, if $i \not\equiv 1$ modulo $n_0$, then the corresponding $x$ cannot lie in $X_h$ as then $\det(x) \notin {\mathbb W}_h(\overline {\mathbb F}_q)^\times$. On the other hand, if $i \equiv 1$ modulo $n_0$ and $i \not\equiv 1$ modulo $n_0r$, then the corresponding $x$ cannot lie in ${\mathbb{L}}_h^{(r)}{\mathbb G}_h^1$. If $x \in X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1$ corresponds to $(0, \ldots, 0, x_i, 0, \ldots, 0)$ for some $i \equiv 1$ modulo $n_0r$, then $x_i$ can be any element of ${\mathbb W}_h^\times({\mathbb F}_{q^n})$. Hence: $$(X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1)^{(g_s,t_s)} = \begin{cases} b_0^i{\mathbb T}_h({\mathbb F}_q) & \text{if $t = \sigma^{(i-1)l}(g)^{-1}$ for some $i \equiv 1$ mod $n_0r$,} \\ \varnothing & \text{otherwise.} \end{cases}$$ Furthermore, for $g_u, t_u \in {\mathbb T}_h({\mathbb F}_q)$ and $b_0^i x \in (X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)^{(g_s,t_s)},$ $$g_u \cdot b_0^i x \cdot t_u = b_0^i (b_0^{-i} g_u b_0^i) x t_u = b_0^i (\sigma^{(i-1)l}(g_u) x t_u).$$ We are now ready to put all the above together. We have $$\begin{aligned} \operatorname{Tr}(g ; {}&{}H_c^*(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1)[\theta])\\ &= \frac{1}{\#{\mathbb T}_h({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h({\mathbb F}_q)} \theta(t)^{-1} \operatorname{Tr}((g,t) ; H_c^*(X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1)) \\ &= \frac{1}{\#{\mathbb T}_h({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h({\mathbb F}_q)} \theta(t)^{-1} \operatorname{Tr}((g_u,t_u) ; H_c^*((X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1)^{(g_s, t_s)})) \\ &= \frac{1}{\#{\mathbb T}_h({\mathbb F}_q)} \sum_{\substack{1 \leq i \leq n \\ i \equiv 1 \!\!\!\!\! \pmod{n_0r}}} \theta(\sigma^{(i-1)l}(g_s)) \sum_{t_u \in {\mathbb T}_h^1({\mathbb F}_q)} \theta(t_u)^{-1} \operatorname{Tr}((g_u, t_u) ; H_c^*(b_0^i{\mathbb T}_h({\mathbb F}_q))) \\ &= \frac{1}{\#{\mathbb T}_h({\mathbb F}_q)} \sum_{\substack{1 \leq i \leq n \\ i \equiv 1 \!\!\!\!\! \pmod{n_0r}}} \theta(\sigma^{(i-1)l}(g_s)) \sum_{t_u \in {\mathbb T}_h^1({\mathbb F}_q)} \theta(t_u)^{-1} \sum_{\theta' {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell} \theta'(\sigma^{(i-1)l}(g_u)) \theta'(t_u) \\ &= \sum_{\substack{1 \leq i \leq n \\ i \equiv 1 \!\!\!\!\! \pmod{n_0r}}} \theta(\sigma^{(i-1)l}(g_s)) \theta(\sigma^{(i-1)l}(g_u)) = \sum_{\gamma \in \operatorname{Gal}(L/k)[n'/r]} \theta^\gamma(g). 
\qedhere\end{aligned}$$ The closed stratum is a maximal variety {#s:single_degree} ======================================= Recall that $X_h^{(r)}$ is the closure of the $r$th Drinfeld stratum and that the unique closed Drinfeld stratum is the $n'$th Drinfeld stratum $$X_h^{(n')} \colonequals \{x \in {\mathbb G}_h : x^{-1} F(x) \in {\mathbb U}_h^1\}.$$ Recall that $X_h^{(n')}$ is a finite disjoint union of copies of $X_h^1 \colonequals X_h^{(n')} \cap {\mathbb G}_h^1$: $$X_h^{(n')} = \bigsqcup_{g \in {\mathbb G}_1({\mathbb F}_q)} [g] \cdot X_h^1,$$ where $[g]$ denotes a coset representative in ${\mathbb G}_h({\mathbb F}_q)$ for $g \in {\mathbb G}_1({\mathbb F}_q) = {\mathbb G}_h({\mathbb F}_q)/{\mathbb G}_h^1({\mathbb F}_q)$. For any character $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$, we have an isomorphism of ${\mathbb G}_h({\mathbb F}_q)$-representations $$H_c^i(X_h^{(n')}, \overline {\mathbb Q}_\ell)[\theta] \cong \operatorname{Ind}_{{\mathbb T}_h({\mathbb F}_q) {\mathbb G}_h^1({\mathbb F}_q)}^{{\mathbb G}_h({\mathbb F}_q)}\left(H_c^i(X_h^{(n')} \cap {\mathbb T}_h {\mathbb G}_h^1, \overline {\mathbb Q}_\ell)[\theta]\right), \qquad \text{for all $i \geq 0$}.$$ Let $\chi \colonequals \theta|_{{\mathbb T}_h^1({\mathbb F}_q)}$. As ${\mathbb G}_h^1({\mathbb F}_q)$-representations, $$H_c^i\left(X_h^{(n')} \cap {\mathbb T}_h {\mathbb G}_h^1, \overline {\mathbb Q}_\ell\right)[\theta] \cong H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi], \qquad \text{for all $i \geq 0$.}$$ The subvariety ${X_h^1}\subset X_h$ is stable under the action of $\Gamma_h \colonequals \{(\alpha,\alpha^{-1}) : \alpha \in {\mathbb T}_h({\mathbb F}_q)\} \cdot ({\mathbb G}_h^1({\mathbb F}_q) \times {\mathbb T}_h^1({\mathbb F}_q))$, where the product is viewed as a product of subgroups of ${\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$. Observe that $\Gamma_h \cong {\mathbb F}_{q^n}^\times \ltimes ({\mathbb G}_h^1({\mathbb F}_q) \times {\mathbb T}_h^1({\mathbb F}_q))$ and note that $\Gamma_h \cdot (\{1\} \times {\mathbb T}_h({\mathbb F}_q)) = {\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)$. Therefore $$\operatorname{Ind}_{\Gamma_h}^{{\mathbb G}_h({\mathbb F}_q) \times {\mathbb T}_h({\mathbb F}_q)}(H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]) \cong \bigoplus_{\theta'} H_c^i(X_h \cap {\mathbb T}_h {\mathbb G}_h^1)[\theta'],$$ where $\theta'$ ranges over all characters of ${\mathbb T}_h({\mathbb F}_q)$ which restrict to $\chi$ on ${\mathbb T}_h^1({\mathbb F}_q)$. The action of $(\zeta, g, t) \in {\mathbb F}_{q^n}^\times \ltimes ({\mathbb G}_h^1({\mathbb F}_q) \times {\mathbb T}_h^1({\mathbb F}_q)) \cong \Gamma_h$ on $x \in {X_h^1}$ is given by $$(\zeta, g, t) * x = \zeta (g x t) \zeta^{-1},$$ where we view $\zeta \in {\mathbb F}_{q^n}^\times$ as an element of ${\mathbb W}_h({\mathbb F}_{q^n})^\times \cong {\mathbb T}_h({\mathbb F}_q)$. The nonvanishing cohomological degree ------------------------------------- Recall from Section \[s:howe\_fact\] that any character $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$ has a Howe factorization.
For any Howe factorization $\theta = \prod_{i=1}^d \theta_i$ of $\theta$, define a Howe factorization for $\chi \colonequals \theta|_{{\mathbb T}_h^1({\mathbb F}_q)}$ by $$\chi = \prod_{i=1}^{d'} \chi_i, \qquad \text{where $\chi_i \colonequals \theta_i|_{{\mathbb T}_h^1({\mathbb F}_q)}$ and $d' \colonequals \begin{cases} d & \text{if $h_d \geq 2$,} \\ d-1 & \text{if $h_d = 1$.} \end{cases}$}$$ As in Section \[s:howe\_fact\], although the characters $\chi_i$ are not uniquely determined, we have two well-defined sequences of integers $$\begin{aligned} 1 &\equalscolon m_0 \leq m_1 < m_2 < \cdots < m_{d'} \leq m_{d'+1} \leq m_{d+1} \colonequals n \\ h &\equalscolon h_0 \geq h_1 > h_2 > \cdots > h_{d'} > h_{d'+1} = h_{d+1} \colonequals 1\end{aligned}$$ satisfying the divisibility $m_i \mid m_{i+1}$ for $0 \leq i \leq d$. We state the main result of this section. \[t:single\_degree\] Let $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \cong {\mathbb W}_h^1({\mathbb F}_{q^n}) \to \overline {\mathbb Q}_\ell^\times$ be any character. Then $$H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = \begin{cases} \text{irreducible ${\mathbb G}_h^1({\mathbb F}_q)$-representation} & \text{if $i = r_\chi$,} \\ 0 & \text{if $i \neq r_\chi$,} \end{cases}$$ where $$\begin{aligned} r_\chi &= 2(n'-1) + 2 e_\chi + f_\chi \\ e_\chi &= \Big(\frac{n}{m_{d'}} - 1\Big)(h_{d'}-1) - \Big(\frac{n}{\operatorname{lcm}(m_{d'},n_0)} - 1\Big) - (h_0 - h_{d'}) + \sum_{t=0}^{d'-1}\frac{n}{m_t}(h_t - h_{t+1}) \\ f_\chi &= \Big(n-\frac{n}{m_{d'}}\Big) - \Big(n' - \frac{n}{\operatorname{lcm}(m_{d'},n_0)}\Big) + \sum_{t=0}^{d'-1} \Big(\frac{n}{m_t} - \frac{n}{m_{t+1}}\Big)h_{t+1}\end{aligned}$$ Moreover, $\operatorname{Fr}_{q^n}$ acts on $H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)$ as multiplication by $(-1)^{r_\chi} q^{n r_\chi/2}$. The assertion about the action of $\operatorname{Fr}_{q^n}$ on $H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is equivalent to saying that ${X_h^1}$ is a *maximal variety* in the sense of Boyarchenko–Weinstein [@BoyarchenkoW_16]; that is, $\#{X_h^1}({\mathbb F}_{q^n})$ attains its Weil–Deligne bound $$\#{X_h^1}({\mathbb F}_{q^n}) = \sum_{i \geq 0} (-1)^i \operatorname{Tr}(\operatorname{Fr}_{q^n}; H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)) \leq \sum_{i \geq 0} q^{ni/2} \dim H_c^i({X_h^1}, \overline {\mathbb Q}_\ell).$$ For easy reference later, we record the following special case of Theorem \[t:single\_degree\]. \[c:prounip\_degree\] Let $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \cong {\mathbb W}_h^1({\mathbb F}_{q^n}) \to \overline {\mathbb Q}_\ell^\times$ be any character with trivial $\operatorname{Gal}(L/k)$-stabilizer. Then $$H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = \begin{cases} \text{irreducible} & \text{if $i = r_\chi$,} \\ 0 & \text{if $i \neq r_\chi$,} \end{cases}$$ where $$r_\chi = n(h-h_1) + h(n-2) + h_{d'} - (n-n') + \sum_{t=1}^{d'-1} \frac{n}{m_t}(h_t - h_{t+1}).$$ The assumption that $\chi$ has trivial $\operatorname{Gal}(L/k)$-stabilizer is equivalent to the assumption that $m_{d'} = n$.
We see then that the formula for $r_\chi$ given in Theorem \[t:single\_degree\] simplifies as follows: $$\begin{aligned} r_\chi &= 2(n'-1) + \sum_{t=0}^{d'-1} 2\Big(\frac{n}{m_t} - 1\Big)(h_t - h_{t+1}) \\ &\qquad + \sum_{t=0}^{d'-1}\Big(\Big(\frac{n}{m_t} - \frac{n}{m_{t+1}}\Big)(h_{t+1} - 1) - \Big(\frac{n}{\operatorname{lcm}(m_t, n_0)} - \frac{n}{\operatorname{lcm}(m_{t+1}, n_0)}\Big)\Big) \\ &= 2(n'-1) - 2(h_0 - h_{d'}) - \Big(\frac{n}{m_0} - \frac{n}{m_{d'}}\Big) - \Big(\frac{n}{\operatorname{lcm}(m_0,n_0)} - \frac{n}{\operatorname{lcm}(m_{d'}, n_0)}\Big)\\ &\qquad + \frac{n}{m_0}(2 h_0 - h_1) - \frac{n}{m_{d'}}(h_{d'}) + \sum_{t=1}^{d'-1} \frac{n}{m_t}(h_t - h_{t+1}).\end{aligned}$$ Using the fact that $h_0 = h$ and $m_0 = 1$ by construction, the above expression simplifies to the one given in the statement of the corollary.

Ramified Witt vectors
---------------------

We give a brief summary of ramified Witt vectors, following [@Chan_siDL Section 3.1]. In this section, we assume $k$ has characteristic $0$. We first define a “simplified version” of the ramified Witt ring ${\mathbb W}$. For any ${\mathbb F}_q$-algebra $A$, let $W(A)$ be the set $A^{\mathbb{N}}$ endowed with the following coordinatewise addition and multiplication rule: $$\begin{aligned} [a_i]_{i \geq 0} +_W [b_i]_{i \geq 0} &= [a_i + b_i]_{i \geq 0}, \\ [a_i]_{i \geq 0} *_W [b_i]_{i \geq 0} &= \left[\textstyle \sum\limits_{j = 0}^i a_j^{q^{i-j}} b_{i-j}^{q^j}\right]_{i \geq 0}.\end{aligned}$$ Concretely, the first few coordinates of $[c_i]_{i \geq 0} \colonequals [a_i]_{i \geq 0} *_W [b_i]_{i \geq 0}$ are $c_0 = a_0 b_0$, $c_1 = a_0^q b_1 + a_1 b_0^q$, and $c_2 = a_0^{q^2} b_2 + a_1^q b_1^q + a_2 b_0^{q^2}$. It is a straightforward check that $W$ is a commutative ring scheme over ${\mathbb F}_q$. It comes with Frobenius and Verschiebung morphisms $\varphi$ and $V$. The relationship between the ring scheme $W$ and the ring scheme ${\mathbb W}$ of ramified Witt vectors is captured by the following lemma. The key point here is the notion of “major contribution” and “minor contribution”; this will appear in Lemma \[l:det\_contr\] and (implicitly) in Proposition \[p:induct extra\].

\[l:simple Witt n\] Let $A$ be an ${\mathbb F}_q$-algebra.

1. For any $[a_1], \ldots, [a_n] \in A^{\mathbb{N}}$ where $[a_j] = [a_{j,i}]_{i \geq 0}$, $$\prod_{\substack{1 \leq j \leq n \\ \text{w.r.t.\ ${\mathbb W}$}}} [a_j] = \left(\prod_{\substack{1 \leq j \leq n \\ \text{w.r.t.\ $W$}}} [a_j]\right) +_W [c],$$ where $[c] = [c_i]_{i \geq 0}$ for some $c_i \in A[a_{1,i_1}^{e_1} \cdots a_{n,i_n}^{e_n} : i_1 + \cdots + i_n < i, \, e_1, \ldots, e_n \in {\mathbb Z}_{\geq 0}].$

2. For any $[a_1], \ldots, [a_n] \in A^{\mathbb{N}}$ where $[a_j] = [a_{j,i}]_{i \geq 0}$, $$\sum_{\substack{1 \leq j \leq n \\ \text{w.r.t.\ ${\mathbb W}$}}} [a_j] = \left(\sum_{\substack{1 \leq j \leq n \\ \text{w.r.t.\ $W$}}} [a_j]\right) +_W [c],$$ where $[c] = [c_i]_{i \geq 0}$ for some $c_i \in A[a_{1,j}, \ldots, a_{n,j} : j < i].$

We call the portion coming from $W$ the “major contribution” and $[c]$ the “minor contribution.”

Normed indexing sets {#s:indexing}
--------------------

The group ${\mathbb G}_h^1$ is an affine space of dimension $n^2(h-1)$. To prove Theorem \[t:single\_degree\], we will need to coordinatize ${\mathbb G}_h^1$, and we do this here by defining an indexing set ${\mathcal A}^+$ of triples $(i,j,l)$. Our strategy for approaching Theorem \[t:single\_degree\] is to perform an inductive calculation based on a Howe factorization of the character $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$.
In this section, we will also define a filtration of ${\mathcal A}^+$ corresponding to the two sequences $\{m_i\}, \{h_i\}$ associated with $\chi$. The algebraic group ${\mathbb G}_h^1$ can be described very explicitly: it consists of matrices $(A_{i,j})_{1 \leq i,j \leq n}$ where $$A_{i,j} = \begin{cases} [1,A_{(i,j,1)}, A_{(i,j,2)}, \ldots, A_{(i,j,h-1)}] \in {\mathbb W}_h^1 & \text{if $i = j$,} \\ [A_{(i,j,0)}, A_{(i,j,1)}, \ldots, A_{(i,j,h-2)}] \in {\mathbb W}_{h-1} & \text{if $[i]_{n_0} > [j]_{n_0}$}, \\ [0,A_{(i,j,1)}, A_{(i,j,2)}, \ldots, A_{(i,j,h-1)}] \in {\mathbb W}_h & \text{if $[i]_{n_0} \leq [j]_{n_0}$ and $i \neq j$.} \end{cases}$$ Here, we recall that for $x \in {\mathbb Z}$, we write $[x]_{n_0}$ to denote the unique representative of $x {\mathbb Z}/n_0 {\mathbb Z}$ in the set of coset representatives $\{1, \ldots, n_0\}$. We have a well-defined determinant map $$\det {\colon}{\mathbb G}_h^1 \to {\mathbb W}_h^1.$$ In the way described above, ${\mathbb G}_h^1$ can be coordinatized by the indexing set $${\mathcal A}^+ \colonequals \left\{(i,j,l) \in {\mathbb Z}^{\oplus 3} : \begin{gathered} 1 \leq i, j \leq n \\ \text{$0 \leq l \leq h-2$ if $[i]_{n_0} > [j]_{n_0}$} \\ \text{$1 \leq l \leq h-1$ if $[i]_{n_0} \leq [j]_{n_0}$} \end{gathered}\right\}.$$ We also define: $$\begin{aligned} {\mathcal A}&\colonequals \{(i,j,l) \in {\mathcal A}^+ : i \neq j\}, \\ {\mathcal A}^- &\colonequals \{(i,j,l) \in {\mathcal A}: j = 1\}.\end{aligned}$$ The indexing set ${\mathcal A}$ corresponds to the elements of ${\mathbb G}_h^1$ with $1$’s along the diagonal, and ${\mathcal A}^-$ remembers only the first column of elements of ${\mathbb G}_h^1$ with $(1,1)$-entry $1$. Define a norm on ${\mathcal A}^+$: $$\begin{aligned} {\mathcal A}^+ &\to {\mathbb R}_{\geq 0}, \\ (i,j,l) &\mapsto |(i,j,l)| \colonequals i - j + nl.\end{aligned}$$ For $\lambda = (i,j,l) \in {\mathcal A}^+$, define $$\lambda^\vee \colonequals (j,i,h-1-l).$$ The following seemingly innocuous lemma is in some sense the key reason that the indexing sets above allow us to carry over the calculations in [@Chan_siDL Section 5] from $n'=1$ setting to the present general $n'$ setting with very few modifications. \[l:det\_contr\] Following the conventions as set up above, write $A = (A_{i,j})_{1 \leq i,j \leq n} \in {\mathbb G}_h^1$, where $$A_{i,j} = \begin{cases} [1,A_{(i,j,1)}, \ldots, A_{(i,j,h-1)}] \in {\mathbb W}_h^1 & \text{if $i = j$,} \\ [A_{(i,j,0)}, \ldots, A_{(i,j,h-2)}] \in {\mathbb W}_{h-1} & \text{if $[i]_{n_0} > [j]_{n_0}$}, \\ [0, A_{(i,j,1)}, \ldots, A_{(i,j,h-1)}] \in {\mathbb W}_h & \text{if $[i]_{n_0} \leq [j]_{n_0}$ and $i \neq j$}. \end{cases}$$ Assume that for $\lambda_1, \lambda_2 \in {\mathcal A}^+$, the variables $A_{\lambda_1}$ and $A_{\lambda_2}$ appear in the same monomial in $\det(A) \in {\mathbb W}_{h'}$ for some $h' \leq h$. 1. Then $|\lambda_1| + |\lambda_2| \leq n(h'-1)$. 2. If $|\lambda_1| + |\lambda_2| = n(h'-1)$, then $\lambda_2 = \lambda_1^\vee$, where ${}^\vee$ is taken relative to $h'$. By definition, $$\det(A) = \sum_{\gamma \in S_n} \prod_{1 \leq i \leq n} A_{i, \gamma(i)} \in {\mathbb W}_{h'}(\overline {\mathbb F}_q).$$ Let $l \leq h'-1$. If $K$ has characteristic $p$, then the contributions to the $\varpi^l$-coefficient coming from $\gamma \in S_n$ are of the form $$\prod_{i=1}^n A_{(i,\gamma(i), l_i)},$$ where $(l_1, \ldots, l_n)$ is a partition of $l$. 
Then $$\label{e:equal sum} \sum_{i=1}^n |(i,\gamma(i), l_i)| = \sum_{i=1}^n \big(i - \gamma(i) + nl_i\big) = \sum_{i=1}^n nl_i = nl \leq n(h'-1).$$ If $K$ has characteristic $0$, then the major contributions to the $\varpi^l$-coefficient coming from $\gamma$ are of the form $$\prod_{i=1}^n A_{(i,\gamma(i), l_i)}^{e_i},$$ where the $e_i$ are some nonnegative integers and where $(l_1, \ldots, l_n)$ is a partition of $l$. Hence $$\label{e:mixed sum} \sum_{i=1}^n |(i,\gamma(i), l_i)| = nl \leq n(h'-1).$$ The minor contributions to the $\varpi^l$-coefficient coming from $\gamma$ are polynomials in $\prod_{i=1}^n A_{(i,\gamma(i), l_i)}^{e_i'}$ where $l_1 + \cdots + l_n < l$ and the $e_i'$ are some nonnegative integers. Hence $\sum_{i=1}^n |(i,\gamma(i), l_i)| < n(h'-1)$. Suppose now that $\lambda_1 = (i_1, j_1, l_1), \lambda_2 = (i_2, j_2, l_2) \in {\mathcal A}^+$ are such that $A_{\lambda_1}$ and $A_{\lambda_2}$ contribute to the same monomial in $\det(A) \in {\mathbb W}_{h'}^1$. Then there exists some $\gamma \in S_n$ such that $\gamma(i_1) = j_1$ and $\gamma(i_2) = j_2$, and by Equations \eqref{e:equal sum} and \eqref{e:mixed sum}, $$|\lambda_1| + |\lambda_2| \leq n(h'-1).$$ Observe that if $K$ has characteristic $0$ and $\lambda_1$ and $\lambda_2$ occur in a minor contribution, then $|\lambda_1| + |\lambda_2| < n(h'-1)$. This proves (a), and furthermore, we see that if $|\lambda_1| + |\lambda_2| = n(h'-1)$, then the simultaneous contribution of $A_{\lambda_1}$ and $A_{\lambda_2}$ comes from a major contribution. But now (b) follows: since the image of ${\mathbb G}_h^1$ under the determinant is ${\mathbb W}_h^1$, if $|\lambda_1| + |\lambda_2| = n(h'-1)$, then necessarily the contribution of $\lambda_1$ and $\lambda_2$ to the $(h'-1)$th coordinate of the determinant must come from a transposition.

Given two sequences of integers $$\begin{aligned} 1 &\equalscolon m_0 \leq m_1 < m_2 < \cdots < m_{d'} \leq m_{d'+1} \leq m_{d+1} \colonequals n \\ h &\equalscolon h_0 \geq h_1 > h_2 > \cdots > h_{d'} > h_{d'+1} = h_{d+1} \colonequals 1\end{aligned}$$ satisfying $m_i \mid m_{i+1}$ for $0 \leq i \leq d$, we can define the following subsets of ${\mathcal A}$ for $0 \leq s,t \leq d$: $$\begin{aligned} {\mathcal A}_{s,t} &\colonequals \{(i,j,l) \in {\mathcal A}: i \equiv j \!\!\!\! \pmod{m_s}, \, i \not\equiv j \!\!\!\! \pmod{m_{s+1}}, \, l \leq h_t - 1\}, \\ {\mathcal A}_{s,t}^- &\colonequals {\mathcal A}_{s,t} \cap {\mathcal A}^-.\end{aligned}$$ We will need to understand which $\lambda \in {\mathcal A}$ are such that $x_\lambda$ contributes nontrivially to the determinant. We denote the set of all such $\lambda$ by ${\mathcal A}^{\min}$.
We may describe this explicitly: $$\begin{aligned} {\mathcal A}^{\min} &= \{\lambda \in {\mathcal A}: \lambda^\vee \in {\mathcal A}\} \label{e:index min} \\ \nonumber &= \left\{(i,j,l) \in {\mathcal A}: \begin{gathered} \text{$0 \leq l \leq h - 2$ if $[i]_{n_0} > [j]_{n_0}$} \\ \text{$1 \leq l \leq h - 1$ if $[i]_{n_0} < [j]_{n_0}$} \\ \text{$1 \leq l \leq h - 2$ if $[i]_{n_0} = [j]_{n_0}$} \end{gathered}\right\}.\end{aligned}$$ For $0 \leq s,t \leq r$, by considering ${}^\vee$ relative to $h_t$, we may similarly define $$\begin{aligned} {\mathcal A}_{s,t}^{\min} &\colonequals \{\lambda \in {\mathcal A}_{s,t} : \lambda^\vee \in {\mathcal A}_{s,t}\} \\ &= \left\{ (i,j,l) \in {\mathcal A}_{s,t} : \begin{gathered} \text{$0 \leq l \leq h_t - 2$ if $[i]_{n_0} > [j]_{n_0}$} \\ \text{$1 \leq l \leq h_t - 1$ if $[i]_{n_0} < [j]_{n_0}$} \\ \text{$1 \leq l \leq h_t - 2$ if $[i]_{n_0} = [j]_{n_0}$} \end{gathered} \right\}.\end{aligned}$$ Define ${\mathcal A}_{s,t}^{-,\min} \colonequals {\mathcal A}^- \cap {\mathcal A}_{s,t}^{\min} = {\mathcal A}_{s,t}^- \cap {\mathcal A}_{s,t}^{\min}$. Define the following decomposition of ${\mathcal A}_{s,t}^{-,\min}$: $$\begin{aligned} {\mathcal I}_{s,t} &\colonequals \{(i,1,l) \in {\mathcal A}_{s,t}^{-,\min} : |(i,1,l)| > n(h_t - 1)/2\}, \\ {\mathcal J}_{s,t} &\colonequals \{(i,1,l) \in {\mathcal A}_{s,t}^{-,\min} : |(i,1,l)| \leq n(h_t - 1)/2\}.\end{aligned}$$ For any real number $\nu$, define $$\begin{aligned} {\mathcal A}_{\geq \nu, t}^{\min} \colonequals \bigsqcup_{s = \lceil \nu \rceil}^r {\mathcal A}_{s,t}^{\min}, \qquad {\mathcal A}_{\geq \nu, t}^{-,\min} = {\mathcal A}^- \cap {\mathcal A}_{\geq \nu, t}^{\min},\end{aligned}$$ and observe that for $0 \leq s \leq r$ an integer, $${\mathcal A}_{\geq s,t}^{\min} = \left\{(i,j,l) \in {\mathcal A}: \begin{gathered} \text{$j \equiv i \!\!\!\! \pmod{m_s}$} \\ \text{$0 \leq l \leq h_t - 2$ if $[i]_{n_0} > [j]_{n_0}$} \\ \text{$1 \leq l \leq h_t - 1$ if $[i]_{n_0} < [j]_{n_0}$} \\ \text{$1 \leq l \leq h_t - 2$ if $[i]_{n_0} = [j]_{n_0}$} \end{gathered}\right\}.$$

\[l:IJ bij\] There is an order-reversing injection ${\mathcal I}_{s,t} \to {\mathcal J}_{s,t}$ that is a bijection if and only if $\#{\mathcal A}_{s,t}^{-,\min}$ is even. Explicitly, it is given by $${\mathcal I}_{s,t} \hookrightarrow {\mathcal J}_{s,t}, \qquad (i,1, l) \mapsto ([n-i+2]_n, 1, h_t - 2 -l).$$ Note that $\#{\mathcal A}_{s,t}^{-,\min}$ is even unless $n$ and $h_t$ are both even.

If $(i,1,l) \in {\mathcal A}_{s,t}^{-,\min}$, then by definition $i \equiv 1$ modulo $m_s$ and $i \not \equiv 1$ modulo $m_{s+1}$. Thus $[n-i+2]_n \equiv 1$ modulo $m_s$ and $[n-i+2]_n \not \equiv 1$ modulo $m_{s+1}$, which shows that $(i,1,l) \in {\mathcal A}_{s,t}^{-,\min}$ implies $([n-i+2]_n,1,l) \in {\mathcal A}_{s,t}^{-,\min}$. Since $i \geq 2$ by assumption, we have $i + [n-i+2]_n = n+2$ and $$|(i,1,l)| + |([n-i+2]_n,1,h_t-2-l)| = n(h_t-1).$$ Hence if $(i,1,l) \in {\mathcal I}_{s,t}$, then $([n-i+2]_n,1,h_t-2-l) \in {\mathcal J}_{s,t}$. It is clear that the map is a bijection if and only if ${\mathcal J}_{s,t}$ does not contain an element of norm $n(h_t - 1)/2$. Such an element must necessarily be of the form $((n+2)/2,1,(h_t-2)/2)$, which is integral if and only if $n$ and $h_t$ are both even.
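To illustrate the combinatorics of Lemma \[l:IJ bij\] in a small, purely illustrative case (the parameters below are chosen only for the sake of the example), suppose $n = 4$ and $n_0 = 2$, and consider indices $s,t$ with $m_s = 1$, $m_{s+1} = 2$, and $h_t = 3$. Then $${\mathcal A}_{s,t}^{-,\min} = \{(2,1,0),\ (2,1,1),\ (4,1,0),\ (4,1,1)\},$$ with norms $1$, $5$, $3$, $7$ respectively. The threshold $n(h_t-1)/2 = 4$ gives ${\mathcal I}_{s,t} = \{(2,1,1), (4,1,1)\}$ and ${\mathcal J}_{s,t} = \{(2,1,0), (4,1,0)\}$, and the map of the lemma sends $(2,1,1) \mapsto (4,1,0)$ and $(4,1,1) \mapsto (2,1,0)$, an order-reversing bijection, as predicted since $\#{\mathcal A}_{s,t}^{-,\min} = 4$ is even.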
The cohomology of ${X_h^1}$ --------------------------- The purpose of this section is to establish the following result: \[t:hom\] For any character $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$, $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\left(\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi), H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)\right) = \begin{cases} \overline {\mathbb Q}_\ell^{\oplus q^{nf_\chi/2}} \otimes ((-q^{n/2})^{r_\chi})^{\deg} & \text{if $i = r_\chi$,} \\ 0 & \text{otherwise.} \end{cases}$$ Moreover, $\operatorname{Fr}_{q^n}$ acts on $H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)$ by multiplication by the scalar $(-1)^i q^{ni/2}$. This is a technical calculation which follows the strategy developed in [@Chan_siDL] (in particular, see Sections 4 and 5 of *op. cit.*). We first rephrase space of homomorphisms in the statement of Theorem \[t:hom\] in terms of the cohomology of a related variety. Every coset of ${\mathbb G}_h^1/{\mathbb T}_h^1$ has a unique coset representative $g$ whose diagonal entries are identically $1$. Over ${\mathbb F}_q$, we may identify ${\mathbb G}_h^1/{\mathbb T}_h^1$ with the affine space ${\mathbb A}[{\mathcal A}]$ (the affine space of dimension $\#{\mathcal A}$ with coordinates indexed by the set ${\mathcal A}$ of Section \[s:indexing\]). Then the quotient morphism ${\mathbb G}_h^1 \to {\mathbb G}_h^1/{\mathbb T}_h^1$ has a section given by $$s {\colon}{\mathbb G}_h^1/{\mathbb T}_h^1 \to {\mathbb G}_h^1, \qquad (x_{(i,j,l)})_{(i,j,l) \in {\mathcal A}} \mapsto (x_{i,j})_{i,j=1, \ldots, n},$$ where $$x_{i,j} = \begin{cases} 1 \in {\mathbb W}_h^1 & \text{if $i = j$,} \\ [x_{(i,j,0)}, x_{(i,j,1)}, \ldots, x_{(i,j,h-2)}] \in {\mathbb W}_{h-1} & \text{if $[i]_{n_0} > [j]_{n_0}$}, \\ [0,x_{(i,j,1)}, x_{(i,j,2)}, \ldots, x_{(i,j,h-1)}] \in {\mathbb W}_h & \text{if $[i]_{n_0} \leq [j]_{n_0}$ and $i \neq j$.} \end{cases}$$ As in [@Chan_siDL Section 5.1.1], there exists a closed ${\mathbb F}_{q^n}$-subscheme ${Y_h^1}$ of ${\mathbb G}_h^1$ such that $X_h = L_q^{-1}({Y_h^1})$ which satisfies the condition that $\operatorname{Fr}_q^i({Y_h^1}) \cap \operatorname{Fr}_q^j({Y_h^1}) = \{1\}$ for all $i \neq j$. We are therefore in a setting where we can invoke [@Chan_siDL Proposition 4.1.1]. Define $$\beta {\colon}({\mathbb G}_h^1/{\mathbb T}_h^1) \times {\mathbb T}_h^1 \to {\mathbb G}_h^1, \qquad (x,g) \mapsto s(\operatorname{Fr}_q(x)) \cdot g \cdot s(x).$$ The affine ${\mathbb F}_{q^n}$-scheme $\beta^{-1}({Y_h^1}) \subset ({\mathbb G}_h^1/{\mathbb T}_h^1) \times {\mathbb T}_h^1$ comes with two maps: $$\operatorname{pr}_1 {\colon}\beta^{-1}({Y_h^1}) \to {\mathbb G}_h^1/{\mathbb T}_h^1 = {\mathbb A}[{\mathcal A}], \qquad \operatorname{pr}_2 {\colon}\beta^{-1}({Y_h^1}) \to {\mathbb T}_h^1.$$ Recall from [@Chan_siDL Lemma 4.1.2] that since the Lang morphism $L_q$ is surjective, $$\label{e:beta} (x,g) \in \beta^{-1}({Y_h^1}) \qquad \Longleftrightarrow \qquad s(x) \cdot y \in X_h,$$ where $y \in {\mathbb T}_h^1$ is any element such that $L_q(y) = g$. \[p:coh\_beta\] For any character $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \cong {\mathbb W}_h^1({\mathbb F_{q^n}}) \to \overline {\mathbb Q}_\ell^\times$, let ${\mathcal L}_\chi$ denote the corresponding $\overline {\mathbb Q}_\ell$-local system on ${\mathbb W}_h^1$. 
For $i \geq 0$, we have $\operatorname{Fr}_{q^n}$-compatible isomorphisms $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\Big(\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi), H_c^i(X_h, \overline {\mathbb Q}_\ell)\Big) \cong H_c^i({\mathbb A}[{\mathcal A}^-], P^* {\mathcal L}_\chi),$$ where $P {\colon}{\mathbb A}[{\mathcal A}^-] \to {\mathbb W}_h^1$ is the morphism $(x_{(i,1,l)})_{(i,1,l) \in {\mathcal A}^-} \mapsto L_q(\det(g_b^{\operatorname{red}}(1,x_2,\ldots, x_n)))^{-1}$ for $x_i \colonequals [x_{(i,1,0)}, x_{(i,1,1)}, \ldots, x_{(i,1,h-1)}]$.

By [@Chan_siDL Proposition 4.1.1], $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\Big(\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi), H_c^i(X_h, \overline {\mathbb Q}_\ell)\Big) \cong H_c^i(\beta^{-1}({Y_h^1}), \operatorname{pr}_2^* {\mathcal F}_\chi),$$ where ${\mathcal F}_\chi$ is the rank-$1$ local system on ${\mathbb T}_h^1$ corresponding to $\chi$. By the same proof as [@Chan_siDL Lemma 5.1.1], $\beta^{-1}({Y_h^1})$ is the graph of the morphism $P_0 {\colon}{\mathbb A}[{\mathcal A}] \to {\mathbb W}_h^1$ given by $x \mapsto L_q(\det(s(x)))^{-1}$. Furthermore, as morphisms on $\beta^{-1}({Y_h^1})$, we have $\operatorname{pr}_2 = i \circ P_0 \circ \operatorname{pr}_1$, where $i {\colon}{\mathbb W}_h^1 \to {\mathbb T}_h^1, x \mapsto \operatorname{diag}(x,1, \ldots, 1)$. Therefore, as sheaves on $\operatorname{pr}_1(\beta^{-1}({Y_h^1}))$, we have $\operatorname{pr}_2^* {\mathcal F}_\chi = P_0^* i^* {\mathcal F}_\chi = P_0^* {\mathcal L}_\chi$, so $$H_c^i(\beta^{-1}({Y_h^1}), \operatorname{pr}_2^* {\mathcal F}_\chi) = H_c^i(\operatorname{pr}_1(\beta^{-1}({Y_h^1})), P_0^* {\mathcal L}_\chi).$$ Next we claim that the projection ${\mathbb A}[{\mathcal A}] \to {\mathbb A}[{\mathcal A}^-]$ induces an isomorphism $\operatorname{pr}_1(\beta^{-1}({Y_h^1})) \to {\mathbb A}[{\mathcal A}^-]$. Injectivity is clear: using \eqref{e:beta}, we know that $x \in \operatorname{pr}_1(\beta^{-1}({Y_h^1}))$ if and only if $s(x) \cdot y \in {X_h^1}$ for some $y \in {\mathbb T}_h^1$. Since $s(x) \cdot y$ is uniquely determined by its first column, $s(x)$ is uniquely determined by its first column, which is precisely the projection of $x$ to ${\mathbb A}[{\mathcal A}^-]$. To see surjectivity, we need to show that for any $x \in {\mathbb A}[{\mathcal A}^-](\overline {\mathbb F}_q)$, there exists a $y \in {\mathbb T}_h^1(\overline {\mathbb F}_q)$ such that $g_b^{\operatorname{red}}(x) \cdot y \in {X_h^1}$. Pick any $y = \operatorname{diag}(y_1, \sigma(y_1), \ldots, \sigma^{n-1}(y_1)) \in {\mathbb T}_h^1(\overline {\mathbb F}_q)$ such that $\det(y) = \det(g_b^{\operatorname{red}}(x))^{-1}$. Then $g_b^{\operatorname{red}}(x) \cdot y \in X_h$ since $g_b^{\operatorname{red}}(x) \cdot y = g_b^{\operatorname{red}}(xy_1)$ and $\det(g_b^{\operatorname{red}}(x) \cdot y) = 1 \in {\mathbb W}_h^1({\mathbb F}_q)$. Under the isomorphism $\operatorname{pr}_1(\beta^{-1}({Y_h^1})) \cong {\mathbb A}[{\mathcal A}^-]$, the sheaf $P_0^* {\mathcal L}_\chi$ is identified with $P^* {\mathcal L}_\chi$, and the proposition now follows.

Note that the last paragraph of the above proof is a simpler and more conceptual proof of [@Chan_siDL Lemma 5.1.6].
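We record a small dimension count that may help orient the reader; it follows directly from the definitions in Section \[s:indexing\]. Every pair $(i,1)$ with $2 \leq i \leq n$ contributes exactly $h-1$ admissible values of $l$, so $$\#{\mathcal A}^- = (n-1)(h-1).$$ Thus Proposition \[p:coh\_beta\] identifies the Hom-spaces appearing in Theorem \[t:hom\] with the compactly supported cohomology of an affine space of dimension $(n-1)(h-1)$ with coefficients in the rank-one local system $P^*{\mathcal L}_\chi$; in particular, these Hom-spaces vanish for $i > 2(n-1)(h-1)$, so that $r_\chi \leq 2(n-1)(h-1)$.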
To calculate $H_c^i({\mathbb A}[{\mathcal A}^-], P^* {\mathcal L}_\chi)$, we will use an inductive argument on affine fibrations that relies on iteratively applying the next two propositions: \[p:induct factor\] For $0 \leq t \leq d'$, we have $\operatorname{Fr}_{q^n}$-compatible isomorphisms $$H_c^i({\mathbb A}[{\mathcal A}_{\geq t,t}^{-,\min}], P^* {\mathcal L}_{\chi_{\geq t}}) \cong H_c^i({\mathbb A}[{\mathcal A}_{\geq t,t+1}^{-,\min}], P^* {\mathcal L}_{\chi_{\geq t+1}})[2 e_t] \otimes ((-q^{n/2})^{2e_t})^{\deg},$$ where $e_t = \#({\mathcal A}_{\geq t,t}^{-,\min} \smallsetminus {\mathcal A}_{\geq t,t+1}^{-,\min})$. The proof is the same as the proof of [@Chan_siDL Proposition 5.3.1]. We give a sketch here. By definition, $\chi_{\geq t} = \chi_t \cdot \chi_{\geq t+1}$ and $\chi_t$ factors through the norm map ${\mathbb W}_{h_t}^1({\mathbb F}_{q^n}) \to {\mathbb W}_{h_t}^1({\mathbb F}_{q^{m_t}})$. Let $\operatorname{pr}{\colon}{\mathbb W}_{h_t}^1 \to {\mathbb W}_{h_{t+1}}^1$. Since $P {\colon}{\mathbb A}[{\mathcal A}_{\geq t,t}^{-,\min}] \to {\mathbb W}_{h_t}^1$ factors through $L_{q^{m_t}}$, this implies that $$P^* {\mathcal L}_{\chi_{\geq t}} = P^* {\mathcal L}_{\chi_t} \otimes P^* \operatorname{pr}^* {\mathcal L}_{\chi_{\geq t+1}} = \overline {\mathbb Q}_\ell \boxtimes P^* {\mathcal L}_{\chi_{\geq t+1}},$$ where $\overline {\mathbb Q}_\ell$ is the constant sheaf on ${\mathbb A}[{\mathcal A}_{\geq t,t}^{-,\min} \smallsetminus {\mathcal A}_{\geq t,t+1}^{-,\min}]$ and $P^* {\mathcal L}_{\chi_{\geq t+1}}$ is the pullback along $P {\colon}{\mathbb A}[{\mathcal A}_{\geq t,t+1}^{-,\min}] \to {\mathbb W}_{h_{t+1}}^1$. The conclusion then follows from the Künneth formula. \[p:induct extra\] For $0 \leq t \leq d'-1$, we have $\operatorname{Fr}_{q^n}$-compatible isomorphisms $$H_c^i({\mathbb A}[{\mathcal A}_{\geq t,t+1}^{-,\min}], P^* {\mathcal L}_{\chi_{\geq t+1}}) \cong H_c^i({\mathbb A}[{\mathcal A}_{\geq t+1,t+1}^{-,\min}], P^* {\mathcal L}_{\chi_{\geq t+1}})^{\oplus q^{nf_t/2}}[f_t] \otimes ((-q^{n/2})^{f_t})^{\deg},$$ where $f_t = \#({\mathcal A}_{\geq t,t+1}^{-,\min} \smallsetminus {\mathcal A}_{\geq t+1,t+1}^{-,\min}) = \#{\mathcal A}_{t,t+1}^{-,\min}$. By replacing [@Chan_siDL Lemmas 3.2.3, 3.2.6] with Lemmas \[l:det\_contr\], \[l:IJ bij\], the proof of [@Chan_siDL Proposition 5.3.2] applies. (The proof is quite technical; simpler incarnations of this idea have appeared in [@Boyarchenko_12], [@Chan_DLI], [@Chan_DLII].) By Proposition \[p:coh\_beta\], we need to calculate $H_c^i({\mathbb A}[{\mathcal A}^-], P^* {\mathcal L}_\chi)$. 
Since $P({\mathbb A}[{\mathcal A}^- \smallsetminus {\mathcal A}^{-,\min}]) = \{1\} \in {\mathbb W}_h^1$ and $\#({\mathcal A}^- \smallsetminus {\mathcal A}^{-,\min}) = n'-1$, we see that $$H_c^i({\mathbb A}[{\mathcal A}^-], P^* {\mathcal L}_\chi) = H_c^i({\mathbb A}[{\mathcal A}^{-,\min}], P^* {\mathcal L}_\chi)[2(n'-1)] \otimes ((-q^{n/2})^{2(n'-1)})^{\deg}.$$ Using Propositions \[p:induct factor\] and \[p:induct extra\] iteratively, we have $$\begin{aligned} H_c^i&({\mathbb A}[{\mathcal A}^{-,\min}], P^* {\mathcal L}_\chi) \\ &= H_c^i({\mathbb A}[{\mathcal A}_{\geq 0,0}^{-,\min}], P^* {\mathcal L}_{\chi_{\geq 0}}) && \text{(by def)} \\ &\cong H_c^i({\mathbb A}[{\mathcal A}_{\geq 0,1}^{-,\min}], P^* {\mathcal L}_{\geq 1})[2e_0] \otimes \Big((-q^{n/2})^{2e_0}\Big)^{\deg} && \text{(Prop \ref{p:induct factor})} \\ &\cong H_c^i({\mathbb A}[{\mathcal A}_{\geq 1,1}^{-,\min}], P^* {\mathcal L}_{\geq 1})^{\oplus q^{nf_0/2}}[f_0 + 2 e_0] \otimes \Big((-q^{n/2})^{f_0 + 2e_0}\Big)^{\deg} && \text{(Prop \ref{p:induct extra})} \\ &\cong H_c^i({\mathbb A}[{\mathcal A}_{\geq 1,2}^{-,\min}], P^* {\mathcal L}_{\geq 2})^{\oplus q^{nf_0/2}}[f_0 + 2(e_0 + e_1)] \otimes \Big((-q^{n/2})^{f_0 + 2(e_0 + e_1)}\Big)^{\deg} && \text{(Prop \ref{p:induct factor})}\end{aligned}$$ and so forth until $$\cong H_c^i({\mathbb A}[{\mathcal A}_{\geq d',d'+1}^{-,\min}], P^* {\mathcal L}_{\chi_{\geq d'+1}})^{\oplus q^{nf_\chi/2}}[f_\chi + 2 e_\chi] \otimes \Big((-q^{n/2})^{f_\chi + 2 e_\chi}\Big),$$ where $$f_\chi \colonequals f_0 + f_1 + \cdots + f_{d'-1}, \qquad e_\chi \colonequals e_0 + e_1 + \cdots + e_{d'}.$$ Since ${\mathcal A}_{\geq d',d'+1} = \varnothing$, now we have shown $$\label{e:min to pt} H_c^i({\mathbb A}[{\mathcal A}^{-,\min}], P^* {\mathcal L}_\chi) \cong H_c^i(*, \overline {\mathbb Q}_\ell)^{\oplus q^{n f_\chi/2}}[f_\chi + 2 e_\chi] \otimes \Big((-q^{n/2})^{f_\chi + 2 e_\chi}\Big)^{\deg}.$$ Set $r_\chi \colonequals 2(n'-1) + f_\chi + 2 e_\chi$. By Proposition \[p:coh\_beta\], we now have $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\Big(\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi), H_c^i(X_h, \overline {\mathbb Q}_\ell)\Big) \cong \begin{cases} \overline {\mathbb Q}_\ell^{q^{nf_\chi/2}} & \text{if $i = r_\chi$,} \\ 0 & \text{otherwise.} \end{cases}$$ Moreover, since $\operatorname{Fr}_{q^n}$ acts trivially on $H_c^i({\mathbb A}[{\mathcal A}_{\geq d',d'+1}^{-,\min}], P^* {\mathcal L}_{\chi_{\geq d'+1}}) = H^0(*, \overline {\mathbb Q}_\ell)$, then $\operatorname{Fr}_{q^n}$ acts by multiplication by $(-1)^{r_\chi} q^{nr_\chi/2}$ on the above space of homomorphisms. To finish the proof of Theorem \[t:hom\], we need only calculate $e_\chi$, $f_\chi$, $r_\chi$. Unwinding the definitions of indexing sets given in Section \[s:indexing\], we have, for $0 \leq t \leq d'$, $$\begin{aligned} {\mathcal A}_{\geq t,t}^{-,\min} &= \left\{ (i,1,l) \in {\mathbb Z}^{\oplus 3} : \begin{gathered} \text{$2 \leq i \leq n$, $i \equiv 1 \!\!\!\! \pmod{m_t}$} \\ \text{$0 \leq l \leq h_t - 2$ if $[i]_{n_0} \neq 1$} \\ \text{$1 \leq l \leq h_t - 2$ if $[i]_{n_0} = 1$} \end{gathered} \right\}, \\ {\mathcal A}_{\geq t, t+1}^{-,\min} &= \left\{ (i,1,l) \in {\mathbb Z}^{\oplus 3} : \begin{gathered} \text{$2 \leq i \leq n$, $i \equiv 1 \!\!\!\! 
\pmod{m_t}$} \\ \text{$0 \leq l \leq h_{t+1} - 2$ if $[i]_{n_0} \neq 1$} \\ \text{$1 \leq l \leq h_{t+1} - 2$ if $[i]_{n_0} = 1$} \end{gathered} \right\}.\end{aligned}$$ Therefore, we have $$\begin{aligned} e_t &= \Big(\frac{n}{m_t} - 1\Big)(h_t - h_{t+1}) \qquad \text{if $0 \leq t \leq d'-1$}, \\ e_{d'} &= \Big(\frac{n}{m_{d'}} - 1\Big)(h_{d'} - 1) - \Big(\frac{n}{\operatorname{lcm}(m_{d'},n_0)} - 1\Big).\end{aligned}$$ For $0 \leq t \leq d'-1$, we have $${\mathcal A}_{t,t+1}^{-,\min} = \left\{ (i,1,l) \in {\mathbb Z}^{\oplus 3} : \begin{gathered} \text{$2 \leq i \leq n$, $i \equiv 1 \!\!\!\! \pmod{m_{t}}$, $i \not\equiv 1 \!\!\!\! \pmod{m_{t+1}}$} \\ \text{$0 \leq l \leq h_{t+1} - 2$ if $[i]_{n_0} \neq 1$} \\ \text{$1 \leq l \leq h_{t+1} - 2$ if $[i]_{n_0} = 1$} \end{gathered} \right\}$$ so that $$f_t = \Big(\frac{n}{m_t} - \frac{n}{m_{t+1}}\Big)(h_{t+1}-1) - \Big(\frac{n}{\operatorname{lcm}(m_t,n_0)} - \frac{n}{\operatorname{lcm}(m_{t+1},n_0)}\Big). \qedhere$$

The nonvanishing cohomological degree
-------------------------------------

In this section, we use the results of the preceding sections to finish the proof of Theorem \[t:single\_degree\]. Observe that from Theorem \[t:hom\] together with Corollary \[t:irred\], we have the following:

\[c:s\_chi\] Let $\pi$ be an irreducible constituent of $H_c^r({X_h^1}, \overline {\mathbb Q}_\ell)$ for some $r$. Then $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\left(\pi, H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)\right) = 0 \qquad \text{for all $i \neq r$.}$$ In particular, for any $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$, there exists a positive integer $s_\chi$ such that $$H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = \begin{cases} \text{irreducible} & \text{if $i = s_\chi$,} \\ 0 & \text{if $i \neq s_\chi$.} \end{cases}$$

This is the same as the proof of [@Chan_siDL Corollary 5.1.3]. The irreducible ${\mathbb G}_h^1({\mathbb F}_q)$-representation $\pi \subset H_c^r({X_h^1}, \overline {\mathbb Q}_\ell)$ is a summand of $\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi')$ for some $\chi'$. Hence $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\left(\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi'), H_c^r({X_h^1}, \overline {\mathbb Q}_\ell)\right) \neq 0.$$ Theorem \[t:hom\] implies that $r = r_{\chi'}$ and that there are no ${\mathbb G}_h^1({\mathbb F}_q)$-equivariant homomorphisms from $\pi$ to $H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)$ for $i \neq r_{\chi'}$. This proves the first assertion. To see the second assertion, first recall from Corollary \[t:irred\] that $H_c^*({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is (up to sign) an irreducible ${\mathbb G}_h^1({\mathbb F}_q)$-representation. Therefore, we may apply the above argument to $H_c^*({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ and we see that if $H_c^*({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is a summand of $\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi')$, then $$H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = \begin{cases} \text{irreducible} & \text{if $i = r_{\chi'}$,} \\ 0 & \text{otherwise.} \end{cases}$$ Since the number $r_{\chi'}$ only depends on $\chi$, the final assertion of the corollary holds by taking $s_\chi = r_{\chi'}$.
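As a sanity check on the formulas for $e_t$, $f_t$, and $r_\chi$ obtained in the proof of Theorem \[t:hom\] above (the data below are chosen purely for illustration and need not arise from the Howe factorization of a specific character), suppose $n = 4$, $n_0 = 2$ (so $n' = 2$), $h = 3$, $d' = 1$, $(m_0, m_1) = (1,2)$, and $(h_0, h_1, h_2) = (3,3,1)$. The formulas give $$e_0 = 3 \cdot (3-3) = 0, \qquad e_1 = (2-1)(3-1) - (2-1) = 1, \qquad f_0 = (4-2)(3-1) - (2-2) = 4,$$ and indeed $f_0 = \#{\mathcal A}_{0,1}^{-,\min}$ agrees with the four-element set enumerated in the example following Lemma \[l:IJ bij\]. Consequently $e_\chi = 1$, $f_\chi = 4$, and $r_\chi = 2(n'-1) + 2e_\chi + f_\chi = 8$; the dimension formula of Corollary \[c:dimension\] below then gives $\dim H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = q^{(n^2-n)(h-1) - nr_\chi/2} = q^{8}$.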
We see now that the upshot of Theorem \[t:hom\] is that we already know that $H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is concentrated in a single degree $s_\chi$. However, it would be much more satisfying, for computational, conceptual, and ideological reasons, if we could pinpoint this nonvanishing cohomological degree. Taking a hint from the proof of Corollary \[c:s\_chi\], one strategy to prove that $s_\chi = r_\chi$ is to prove that $H_c^{s_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is a summand of $\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi)$. This is our next result.

\[t:r=s\] For any $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$, $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\left(\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi), H_c^{s_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]\right) \neq 0.$$ In particular, $s_\chi = r_\chi$.

The proof of Theorem \[t:r=s\] is essentially the same as that of [@Chan_siDL Theorem 6.2.4]. By Frobenius reciprocity, it is enough to show $$\label{e:T_hom} \operatorname{Hom}_{{\mathbb T}_h^1({\mathbb F}_q)}\left(\chi, H_c^{s_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]\right) \neq 0.$$ We will sometimes write ${\mathbb T}_h^1 = {\mathbb T}_{h,n,q}^1$, ${\mathbb G}_h^1 = {\mathbb G}_{h,n,q}^1$, ${X_h^1}= X_{h,n,q}^1$, $g_b = g_b^{n,q}$, and $s_\chi = s_\chi^{n,q}$ to emphasize the dependence on $n,q$. It is clear that once \eqref{e:T_hom} is established, then by Theorem \[t:hom\], it follows that $s_\chi = r_\chi$. For notational convenience, we write $H_c^i(X)$ to mean $H_c^i(X, \overline {\mathbb Q}_\ell)$. We first establish a few lemmas.

\[l:0+ reg\] For any $\zeta \in {\mathbb F}_{q^n}^\times$ with trivial $\operatorname{Gal}({\mathbb F}_{q^n}/{\mathbb F}_q)$-stabilizer and any $g \in {\mathbb T}_h^1({\mathbb F}_q)$, $$\operatorname{Tr}\Big((\zeta, 1, g) ; H_c^{s_\chi}({X_h^1})[\chi]\Big) = (-1)^{s_\chi} \chi(g).$$

Recall that the action of $(\zeta, 1, 1) \in \Gamma_h$ is given by conjugation. Observe that if $x \in ({X_h^1})^{(\zeta, 1, 1)}$, then $x = g_b(v_1, 0, \ldots, 0)$. Furthermore, this forces $v_1 \in {\mathbb W}_h^1({\mathbb F}_{q^n})$. Therefore $({X_h^1})^{(\zeta, 1, 1)} = {\mathbb T}_h^1({\mathbb F}_q)$. By the Deligne–Lusztig fixed point formula, $$\begin{aligned} \operatorname{Tr}\Big((\zeta, g, 1)^* ; H_c^*({X_h^1})[\chi]\Big) &= \frac{1}{\#{\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t)^{-1} \operatorname{Tr}\Big((\zeta, g, t)^* ; H_c^*({X_h^1})\Big) \\ &= \frac{1}{\#{\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t)^{-1} \operatorname{Tr}\Big((1, g, t)^* ; H_c^*(({X_h^1})^{(\zeta, 1, 1)}) \Big) \\ &= \frac{1}{\#{\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t)^{-1} \operatorname{Tr}\Big((1,g,t)^* ; H_c^*({\mathbb T}_h^1({\mathbb F}_q))\Big) \\ &= \frac{1}{\#{\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t)^{-1} \sum_{\chi' {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times} \chi'(g) \chi'(t) = \chi(g).\end{aligned}$$ The conclusion of the lemma now follows from Corollary \[c:s\_chi\].

\[l:induction\] Let $p_0$ be a prime dividing $n$.
For any $\zeta \in {\mathbb F}_{q^{p_0}}^\times \smallsetminus {\mathbb F}_q^\times$ and any $g \in {\mathbb T}_h^1({\mathbb F}_q)$, $$(-1)^{s_\chi^{n,q}} \operatorname{Tr}\Big((\zeta, 1, g); H_c^{s_\chi^{n,q}}(X_{h,n,q}^1)[\chi]\Big) = (-1)^{s_\chi^{n/p_0, q^{p_0}}}\operatorname{Tr}\Big((1,1,g) ; H_c^{s_\chi^{n/p_0,q^{p_0}}}(X_{h,n/p_0, q^{p_0}}^1)[\chi]\Big).$$ Recall that the action of $(\zeta, 1, 1) \in \Gamma_h$ is given by conjugation. Observe that if $x \in ({X_h^1})^{(\zeta, 1, 1)}$, then $x = g_b(v_1, \ldots, v_n)$ where $v_i = 0$ for all $i \not\equiv 1$ modulo $p_0$. The map $$\begin{aligned} f {\colon}(X_{h,n,q}^1)^{(\zeta, 1, 1)} &\to X_{h,n/p_0,q^{p_0}}^1 \\ g_b^{n,q}(v_1, v_2, \ldots, v_n) &\mapsto g_b^{n/p_0, q^{p_0}}(v_1, v_{p_0+1}, v_{2p_0+1}, \ldots, v_{n-p_0+1})\end{aligned}$$ defines an isomorphism equivariant under the action of ${\mathbb T}_{h,n,q}^1({\mathbb F}_q) \times {\mathbb T}_{h,n,q}^1({\mathbb F}_q) \cong {\mathbb T}_{h,n/p_0,q^{p_0}}^1({\mathbb F}_q) \times {\mathbb T}_{h,n/p_0,q^{p_0}}^1({\mathbb F}_q)$. (Note that the determinant condition on the image can be seen by observing that the rows and columns of $x \colonequals g_b^{n,q}(v_1, \ldots, v_n)$ can be rearranged so that the matrix becomes block-diagonal of the form $\operatorname{diag}(f(x), \sigma^l(f(x)), \ldots, \sigma^{[l(p_0-1)]_n}(f(x)))$. Hence the determinant of $x$ is fixed by $\sigma$ if and only if the determinant of $f(x)$ is fixed by $\sigma^{p_0}$.) By the Deligne–Lusztig fixed-point formula, $$\operatorname{Tr}\Big((\zeta, g, t)^*; H_c^*(X_{h,n,q}^1)\Big) = \operatorname{Tr}\Big((1,g, t)^* ; H_c^*(X_{h,n,q}^1)^{(\zeta, 1, 1)}\Big),$$ so that $$\begin{aligned} \operatorname{Tr}\Big((\zeta, g, 1)^* ; H_c^*(X_{h,n,q}^1)[\chi]\Big) &= \frac{1}{\#{\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t)^{-1} \operatorname{Tr}\Big((\zeta, g, t)^* ; H_c^*(X_{h,n,q}^1)\Big) \\ &= \frac{1}{\# {\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t)^{-1} \operatorname{Tr}\Big((1, g, t)^* ; H_c^*((X_{h,n,q}^1)^{(\zeta, 1, 1)})[\chi]\Big) \\ &= \frac{1}{\#{\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t)^{-1} \operatorname{Tr}\Big((1, g, t)^* ; H_c^*(X_{h,n/p_0,q^{p_0}}^1)\Big) \\ &= \operatorname{Tr}\Big((1, g,1)^* ; H_c^*(X_{h,n/p_0,q^{p_0}}^1)[\chi]\Big).\end{aligned}$$ The conclusion of the lemma now holds by Corollary \[c:s\_chi\]. \[l:chi tilde\] Let $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$. Assume that we are in one of the following cases: 1. $n > 1$ is odd and $p_0$ is a prime divisor of $n$. 2. $n > 1$ is even and $p_0 = 2$. Fix a $\zeta \in {\mathbb F}_{q^{p_0}}^\times$ such that $\langle \zeta \rangle = {\mathbb F}_{q^{p_0}}^\times$ and consider the extension of $\chi$ defined by $$\widetilde \chi {\colon}{\mathbb F}_{q^{p_0}}^\times \times {\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times, \qquad (\zeta^i, g) \mapsto \begin{cases} \chi(g) & \text{if $q$ is even,} \\ ((-1)^{s_\chi^{n,q}+s_\chi^{n/p_0,q^{p_0}}})^i \cdot \chi(g) & \text{if $q$ is odd.} \end{cases}$$ Then $$\sum_{x \in {\mathbb F}_{q^{p_0}}^\times \smallsetminus {\mathbb F}_q} \widetilde \chi(x, 1)^{-1} \neq 0.$$ This is the same proof as [@Chan_siDL Lemma 6.2.6]. The proof is exactly as in [@Chan_siDL Theorem 6.2.4]. We give a sketch here. 
Since $X_{h,1,q}^1 = {\mathbb T}_h^1({\mathbb F}_q)$, for any $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$ we have $$H_c^{s_\chi^{1,q}}(X_{h,1,q}^1)[\chi] = H_c^0({\mathbb T}_h^1({\mathbb F}_q))[\chi] = \chi,$$ so \eqref{e:T_hom} holds for $n = 1$ and $q$ arbitrary. We induct on the number of prime divisors of $n$: assume that for a fixed integer $l \geq 0$, \eqref{e:T_hom} holds for any $n = \prod_{i=1}^l p_i$ and arbitrary $q$, where the $p_i$ are (possibly non-distinct) primes. We will show that \eqref{e:T_hom} holds for any $n = \prod_{i=0}^l p_i$ and arbitrary $q$. If $n$ is even, let $p_0 = 2$; otherwise, $p_0$ can be taken to be any prime divisor of $n$. Let $\widetilde \chi$ be as in Lemma \[l:chi tilde\]. Then $$\begin{aligned} &\sum_{(x,g) \in {\mathbb F}_{q^{p_0}}^\times \times {\mathbb T}_h^1({\mathbb F}_q)} \widetilde \chi(x,g)^{-1} \operatorname{Tr}\Big((x,1,g) ; H_c^{s_\chi^{n,q}}(X_{h,n,q}^1)[\chi]\Big) \\ &= \#({\mathbb F}_q^\times \times {\mathbb T}_h^1({\mathbb F}_q)) \cdot \dim \operatorname{Hom}_{{\mathbb F}_q^\times \times {\mathbb T}_h^1({\mathbb F}_q)}\Big(\widetilde \chi, H_c^{s_\chi^{n,q}}(X_{h,n,q}^1)[\chi]\Big) \\ &\quad+ \sum_{\substack{(x,g) \in {\mathbb F}_{q^{p_0}}^\times \times {\mathbb T}_h^1({\mathbb F}_q) \\ x \in {\mathbb F}_{q^{p_0}}^\times \smallsetminus {\mathbb F}_q^\times}} \widetilde \chi(x,g)^{-1} \cdot (-1)^{s_\chi^{n,q}+s_\chi^{n/p_0,q^{p_0}}} \cdot \operatorname{Tr}\Big((1,1,g); H_c^{s_\chi^{n/p_0,q^{p_0}}}(X_{h,n/p_0,q^{p_0}}^1)[\chi]\Big).\end{aligned}$$ By the inductive hypothesis together with Lemma \[l:chi tilde\], the second summand is a nonzero number, and hence necessarily either the left-hand side is positive or the first summand is positive. In either case, \eqref{e:T_hom} must hold.

For the reader’s benefit, we summarize the discussion of this section to prove Theorem \[t:single\_degree\]. By Corollary \[t:irred\], we know that $H_c^*({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is (up to sign) an irreducible ${\mathbb G}_h^1({\mathbb F}_q)$-representation. By Theorem \[t:hom\], for any character $\chi'$, $$\operatorname{Hom}_{{\mathbb G}_h^1({\mathbb F}_q)}\Big(\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi'), H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)\Big) \neq 0 \qquad \Longleftrightarrow \qquad i = r_{\chi'}.$$ As explained in Corollary \[c:s\_chi\], this implies that if $H_c^*({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is a summand of $\operatorname{Ind}_{{\mathbb T}_h^1({\mathbb F}_q)}^{{\mathbb G}_h^1({\mathbb F}_q)}(\chi')$ for some $\chi'$, then $$H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] \neq 0 \qquad \Longleftrightarrow \qquad i = r_{\chi'} \equalscolon s_\chi.$$ By Theorem \[t:r=s\], we see that in fact we can take $\chi' = \chi$, and therefore the nonvanishing cohomological degree of $H_c^i({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ is in fact $i = r_\chi$. The final assertion about the action of $\operatorname{Fr}_{q^n}$ on $H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = (-1)^{r_\chi} H_c^*({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ now follows from Theorem \[t:hom\].

Dimension formula
-----------------

We use Theorem \[t:single\_degree\] to give an explicit dimension formula for the ${\mathbb G}_h^1({\mathbb F}_q)$-representation $H_c^*({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$.
\[c:dimension\] If $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \cong {\mathbb W}_h^1({\mathbb F}_{q^n}) \to \overline {\mathbb Q}_\ell^\times$ is any character, then $$\dim H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = q^{(n^2 - n)(h-1) - nr_\chi/2}.$$ In particular, if $\chi$ has trivial $\operatorname{Gal}(L/k)$-stabilizer, then $$\log_q(\dim H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]) = \frac{n}{2}\textstyle\Big(n(h_1-1)-(h_{d'}-1)-(n'-1) - \sum\limits_{t=1}^{d'-1} \frac{n}{m_t}(h_t - h_{t+1})\Big).$$ By applying [@Boyarchenko_12 Lemma 2.12] to calculate the character of $H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$ at the identity, we have $$\dim H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = \frac{(-1)^{r_\chi}}{\lambda \cdot \#{\mathbb T}_h^1({\mathbb F}_q)} \sum_{t \in {\mathbb T}_h^1({\mathbb F}_q)} \chi(t) \cdot \#S_{1, t},$$ where $S_{1,t} = \{x \in {X_h^1}(\overline {\mathbb F}_q) : \sigma(\operatorname{Fr}_{q^n}(x)) = x \cdot t\}$ and $\lambda$ is the scalar by which $\operatorname{Fr}_{q^n}$ acts on $H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi]$. Suppose that $x \in S_{1,t}$. Then by the same argument as [@CI_ADLV Lemma 9.3], $\det(b\sigma(g_b(x))) = t \cdot \det(b) \det(g_b(x))$, which then forces $t = 1$. By construction, $S_{1,1} = {\mathbb G}_h^1({\mathbb F}_q)$, so therefore $$\dim H_c^{r_\chi}({X_h^1}, \overline {\mathbb Q}_\ell)[\chi] = \frac{\#{\mathbb G}_h^1({\mathbb F}_q)}{q^{nr_\chi/2} \cdot \#{\mathbb T}_h^1({\mathbb F}_q)} = q^{(n^2-n)(h-1) - nr_\chi/2},$$ where we also use the fact that $\lambda = (-1)^{r_\chi} q^{nr_\chi/2}$ from Theorem \[t:single\_degree\]. The assertion in the case that $\chi$ has trivial $\operatorname{Gal}(L/k)$-stabilizer follows from Corollary \[c:prounip\_degree\]. Conjectures =========== Concentration in a single degree -------------------------------- Recall that from Corollary \[t:irred\], we know that if $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \cong {\mathbb W}_h^\times({\mathbb F}_{q^n}) \to \overline {\mathbb Q}_\ell^\times$ is a character with trivial $\operatorname{Gal}({\mathbb F}_{q^n}/{\mathbb F}_{q^{n_0r}})$-stabilizer, then the alternating sum $H_c^*(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1, \overline {\mathbb Q}_\ell)[\theta]$ is (up to sign) an irreducible ${\mathbb{L}}_h^{(r)}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q)$-representation. We conjecture that in fact these cohomology groups should be concentrated in a single degree. \[c:single\_degree\] Let $r \mid n'$ and let $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \cong {\mathbb W}_h^\times({\mathbb F}_{q^n}) \to \overline {\mathbb Q}_\ell^\times$ be a character with trivial $\operatorname{Gal}({\mathbb F}_{q^n}/{\mathbb F}_{q^{n_0r}})$-stabilizer. Then there exists an integer $i_{\theta,r}$ such that $$H_c^i(X_h \cap {\mathbb{L}}_h^{(r)}{\mathbb G}_h^1, \overline {\mathbb Q}_\ell)[\theta] \neq 0 \qquad \Longleftrightarrow \qquad i = i_{\theta,r}.$$ In this paper, we proved this conjecture in the case $r = n'$ and in fact pinpointed the nonvanishing cohomological degree $i_{\theta,n'}$ (Theorem \[t:single\_degree\]). We expect that a similar formula for $i_{\theta,r}$ should be obtainable, where the methods in this paper can be used to reduce the determination of $i_{\theta,r}$ to a “depth-zero” setting. 
The hypotheses of Conjecture \[c:single\_degree\] should be equivalent to saying that the consequent depth-zero input comes from the $\theta_0$-isotypic part of the cohomology of a classical Deligne–Lusztig variety (of dimension $r-1$) for the twisted Levi ${\mathbb{L}}_{1,r}$ in ${\mathbb G}_1$, where $\theta_0$ is a character of ${\mathbb T}_1({\mathbb F}_q) \cong {\mathbb F}_{q^n}^\times$ in general position. Relation to loop Deligne–Lusztig varieties ------------------------------------------ The varieties $X_h$ are closely related to a conjectural construction of Deligne–Lusztig varieties for $p$-adic groups initiated by Lusztig [@Lusztig_79]. We call these sets *loop Deligne–Lusztig varieties*, although the algebro-geometric structure is still unknown in general. In [@CI_ADLV], we studied this question for a certain class of these sets attached to inner forms of $\operatorname{GL}_n$. We prove (see also [@CI_loopGLn Proposition 2.6]) that the fpqc-sheafification $X$ of the presheaf on category $\operatorname{Perf}_{\overline {\mathbb F}_q}$ of perfect ${\mathbb F}_q$-schemes $$X \colon R \mapsto \{x \in LG(R) : x^{-1} F(x) \in LU(R)\}/L(U \cap F^{-1}U)$$ is representable by a perfect $\overline {\mathbb F}_q$-scheme and that $X$ is the perfection of $$\bigsqcup_{g \in G(k)/G_{x,0}({\mathcal O}_k)} g \cdot \varprojlim_h X_h.$$ We see that an intermediate step to understanding the cohomology of loop Deligne–Lusztig is to calculate the cohomology of $X_h$. However, for various reasons, it is often easier to calculate the cohomology of the Drinfeld stratification. For example, in [@CI_loopGLn], to prove cuspidality of $H_*(X, \overline {\mathbb Q}_\ell)[\theta]$ for a broad class of characters $\theta {\colon}T(k) \to \overline {\mathbb Q}_\ell^\times$, we calculate the formal degree of this representation, which we achieve by calculating the dimension of $H_c^*(X_h^{(n')}, \overline {\mathbb Q}_\ell)[\theta]$ from the Frobenius eigenvalues (see Corollary \[c:dimension\]). In this setting, we can prove a comparison formula between the cohomology of $X_h^{(n')}$ and the cohomology of $X_h$ (see Section \[s:p&gt;n closed\]). We conjecture the following comparison theorem between the cohomology of $X_h$ and its Drinfeld stratification. In Section \[s:evidence\], we present evidence supporting the truth of this conjecture. \[c:Xh\] Let $r \mid n'$ and let $\theta {\colon}{\mathbb T}_h({\mathbb F}_q) \cong {\mathbb W}_h^\times({\mathbb F}_{q^n}) \to \overline {\mathbb Q}_\ell^\times$ be a character with trivial $\operatorname{Gal}(L/k)$-stabilizer. Let $\chi \colonequals \theta|_{{\mathbb W}_h^1({\mathbb F}_{q^n})}$ and assume that the stabilizer of $\chi$ in $\operatorname{Gal}(L/k)$ is equal to the unique index-$n_0r$ subgroup. 
Then we have an isomorphism of virtual ${\mathbb G}_h({\mathbb F}_q)$-representations $$H_c^*(X_h, \overline {\mathbb Q}_\ell)[\theta] \cong H_c^*(X_h^{(r)}, \overline {\mathbb Q}_\ell)[\theta].$$ Combining Conjectures \[c:single\_degree\] and \[c:Xh\] with Corollary \[t:irred\], the above conjecture asserts that as elements of the Grothendieck group of ${\mathbb G}_h({\mathbb F}_q)$, $$\begin{aligned} H_c^*(X_h, \overline {\mathbb Q}_\ell)[\theta] &= (-1)^{i_{\theta,r}} H_c^{i_{\theta,r}}(X_h^{(r)}, \overline {\mathbb Q}_\ell)[\theta] \\ &= (-1)^{i_{\theta,r}} \operatorname{Ind}_{{\mathbb{L}}_h^{(r)}({\mathbb F}_q){\mathbb G}_h^1({\mathbb F}_q)}^{{\mathbb G}_h({\mathbb F}_q)}\Big(H_c^{i_{\theta,r}}(X_h \cap {\mathbb{L}}_h^{(r)} {\mathbb G}_h^1, \overline {\mathbb Q}_\ell)[\theta]\Big).\end{aligned}$$ ### Evidence {#s:evidence} At present, we can prove Conjecture \[c:Xh\] in some special cases. We discuss these various cases, their context, and the ideas involved in the proof. #### The most degenerate setting of Conjecture \[c:Xh\] is when $G$ is a division algebra over $k$. Then $n' = 1$ and so the closed Drinfeld stratum $X_h^{(n')} = X_h^{(1)}$ is the only Drinfeld stratum. Additionally, we have that $X_h^{(n')}$ is a disjoint union of $\#{\mathbb G}_h({\mathbb F}_q)$ copies of $X_h^1 \colonequals X_h \cap {\mathbb G}_h^1$. In [@Chan_siDL], all the technical calculations happen at the level of $X_h^1$ (though in different notation in *op. cit.*), and using the new methods developed there, one knows nearly everything about the representations $H_c^i(X_h^1, \overline {\mathbb Q}_\ell)[\chi]$ for arbitrary characters $\chi {\colon}{\mathbb T}_h^1({\mathbb F}_q) \to \overline {\mathbb Q}_\ell^\times$. However, the expected generalization of these techniques extend not to $H_c^i(X_h, \overline {\mathbb Q}_\ell)[\chi]$, but to $H_c^i(X_h^{(r)}, \overline {\mathbb Q}_\ell)[\chi]$—hence one is really forced to work on the stratum in order to approach $X_h$ (at least with the current state of technology). #### {#s:p>n closed} Now let $G$ be any inner form of $\operatorname{GL}_n$ (as it has been this entire paper, outside Section \[s:drinfeld\]). We are close to establishing Conjecture \[c:Xh\] when $\chi = \theta|_{{\mathbb W}_h^1({\mathbb F}_{q^n})}$ has trivial $\operatorname{Gal}(L/k)$-stabilizer. In this case, Conjecture \[c:Xh\] says that $H_c^*(X_h, \overline {\mathbb Q}_\ell)[\theta] \cong H_c^*(X_h^{(n')}, \overline {\mathbb Q}_\ell)[\theta]$ as virtual ${\mathbb G}_h({\mathbb F}_q)$-representations. In [@CI_loopGLn Theorem 4.1], we prove this isomorphism holds under the additional assumption that $p > n$. The idea here is to use a highly nontrivial generalization of a method of Lusztig to calculate the inner product $\big\langle H_c^*(X_h, \overline {\mathbb Q}_\ell)[\theta], H_c^*(X_h^{(n')}, \overline {\mathbb Q}_\ell)[\theta]\big\rangle$ in the space of conjugation-invariant functions on ${\mathbb G}_h({\mathbb F}_q)$. #### {#section-1} In Appendix \[s:fibers\], we present a possible geometric approach to Conjecture \[c:Xh\] which has its roots in the $\operatorname{GL}_2$ setting of the proof of [@Ivanov_15_ADLV_GL2_unram Theorem 3.5]. The idea is to study the fibers of the natural projection[^2] $\pi {\colon}X_h \to X_{h-1}$. We can show that the behavior of $\pi^{-1}(x)$ depends *only* on the location of $x$ relative to the Drinfeld stratification of $X_h$: If $r$ is the smallest divisor of $n'$ such that $x \in X_h^{(r)}$ (i.e. 
$x$ is in the $r$th Drinfeld stratum $X_{h,r}$ of $X_h$), then there exists a morphism $$\pi^{-1}(x) \to \bigsqcup_{{\mathbb W}_h^{h-1}({\mathbb F}_{q^{n_0 r}})} {\mathbb A}^{n-1}$$ which is a composition of isomorphisms and purely inseparable morphisms. Moreover, the action of $\ker({\mathbb W}_h^{h-1}({\mathbb F}_{q^n}) \to {\mathbb W}_h^{h-1}({\mathbb F}_{q^{n_0r}}))$ on $\pi^{-1}(x)$ fixes the set of connected components. The crucial point here is that the fibers of the natural map $$X_{h,r}/\ker({\mathbb W}_h^{h-1}({\mathbb F}_{q^n}) \to {\mathbb W}_h^{h-1}({\mathbb F}_{q^{n/(n_0r)}})) \to X_{h-1,r}$$ are again isomorphic to $\bigsqcup_{{\mathbb W}_h^{h-1}({\mathbb F}_{q^{n_0r}})}{\mathbb A}^{n-1}$ and therefore $\ker({\mathbb W}_h^{h-1}({\mathbb F}_{q^n}) \to {\mathbb W}_h^{h-1}({\mathbb F}_{q^{n/(n_0r)}}))$ acts trivially on the cohomology of $X_{h,r}$: $$H_c^*(X_{h,r}, \overline {\mathbb Q}_\ell) \cong H_c^*(X_{h,r}, \overline {\mathbb Q}_\ell)^{\ker({\mathbb W}_h^{h-1}({\mathbb F}_{q^n}) \to {\mathbb W}_h^{h-1}({\mathbb F}_{q^{n/(n_0r)}}))}.$$ Using open/closed decompositions of $X_h$ via Drinfeld strata, we have that if $\theta$ is trivial on $\ker({\mathbb W}_h^{h-1}({\mathbb F}_{q^n}) \to {\mathbb W}_h^{h-1}({\mathbb F}_{q^{n/(n_0r)}}))$, then $$H_c^*(X_h, \overline {\mathbb Q}_\ell)[\theta] \cong H_c^*(X_h^{(r)}, \overline {\mathbb Q}_\ell)[\theta]$$ as virtual ${\mathbb G}_h({\mathbb F}_q)$-representations. It seems reasonable to guess that if one can generalize Appendix \[s:fibers\] to study the fibers of $X_h \to X_1$, then one could establish Conjecture \[c:Xh\] using a similar reasoning as above. The geometry of the fibers of projection maps {#s:fibers} ============================================= In this section, we study the fibers of the projection maps $X_h \to X_{h-1}$. This is a technical computation which we perform by first using the isomorphism $X_h \cong X_h(b,b_{{\,{\rm cox}}})$ for a particular choice of $b$ which we call the *special representative*. This is the first time in this paper that we see the convenience of having the alternative presentations of $X_h$ discussed in Sections \[s:Xhbw\] and \[s:drinfeld b,w\]. The special representative -------------------------- We first recall the content of Section \[s:drinfeld b,w\] in the context of a particular representative of the $\sigma$-conjugacy class corresponding to the fixed integer $\kappa$. \[d:special\] The *special representative* $b_{{\,{\rm sp}}}$ attached to $\kappa$ is the block-diagonal matrix of size $n \times n$ with $(n_0 \times n_0)$-blocks of the form $\left(\begin{matrix} 0 & \varpi \\ 1_{n_0-1} & 0 \end{matrix}\right)^\kappa$. By [@CI_ADLV Lemma 5.6], there exists a $g_0 \in G_{x,0}({\mathcal O}_{\breve k})$ such that $b_{{\,{\rm sp}}}= g_0 b_{{\,{\rm cox}}}\sigma(g_0)^{-1}$. Observe further that since $b_{{\,{\rm sp}}}, b_{{\,{\rm cox}}}$ are $\sigma$-fixed and $b_{{\,{\rm sp}}}^n = b_{{\,{\rm cox}}}^n = \varpi^{kn}$, $$\sigma^n(g_0) = g_0.$$ Therefore $b_{{\,{\rm sp}}}$ satisfies the conditions of Lemma \[l:g0 independence\]. 
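As a small observation on Definition \[d:special\] (immediate from the shape of the block, and recorded only as an illustration), the $n_0 \times n_0$ block appearing there satisfies $$\left(\begin{matrix} 0 & \varpi \\ 1_{n_0-1} & 0 \end{matrix}\right)^{n_0} = \varpi \cdot 1_{n_0},$$ so $b_{{\,{\rm sp}}}$ becomes a scalar matrix after raising it to a suitable power; in particular $b_{{\,{\rm sp}}}^n$ is scalar, consistent with the relation $b_{{\,{\rm sp}}}^n = b_{{\,{\rm cox}}}^n$ used above.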
Recall from Section \[s:drinfeld b,w\] that we have $$\label{e:Xh} X_h \cong X_h(b_{{\,{\rm sp}}},b_{{\,{\rm cox}}}) \cong \{v \in {\mathscr L}_h : \sigma(\det g_{b_{{\,{\rm sp}}}}(v)) = (-1)^{n-1} \det g_{b_{{\,{\rm sp}}}}(v) \in {\mathbb W}_h^\times\},$$ where $$\begin{aligned} {\mathscr L}_h &= ({\mathbb W}_h \oplus (V {\mathbb W}_{h-1})^{\oplus n_0 - 1})^{\oplus n'} \subset {\mathbb W}_h^{\oplus n} \\ g_{b_{{\,{\rm sp}}}}(v) &= \big( v_1 \, \big| \, v_2 \, \big| \, v_3 \, \big| \, \cdots \, \big| \, v_n\big) \\ \text{where } v_i &= \varpi^{\lfloor (i-1)k_0/n_0 \rfloor} \cdot (b_{{\,{\rm sp}}}\sigma)^{i-1}(v) \text{ for $1 \leq i \leq n$.} \end{aligned}$$ In this section, we will work with $$\label{e:Xh+} X_h^+ \colonequals \{v \in {\mathscr L}_h^+ : \sigma(\det g_{b_{{\,{\rm sp}}}}(v)) = \det g_{b_{{\,{\rm sp}}}}(v) \in {\mathbb W}_h^\times\}$$ where ${\mathscr L}_h^+$ is now the subquotient of ${\mathbb W}_{h+1}^{\oplus n}$ $${\mathscr L}_h^+ \colonequals ({\mathbb W}_h \oplus (V {\mathbb W}_h)^{\oplus n_0 -1})^{\oplus n'},$$ and $g_{b_{{\,{\rm sp}}}}(v)$ is defined as before. Note that \eqref{e:Xh+} differs from \eqref{e:Xh} in that the former takes place in $G_{x,0}/G_{x,(h-1)+}$ and the latter takes place in $G_{x,0}/G_{x,h}$. A straightforward computation shows that the defining equation of $X_h^+$ does not depend on the quotient ${\mathscr L}_h^+/{\mathscr L}_h = {\mathbb A}^{n-n'}$. Observe that $\det g_{b_{{\,{\rm sp}}}}(\zeta v) = \operatorname{Nm}(\zeta) \cdot \det g_{b_{{\,{\rm sp}}}}(v)$ where $\operatorname{Nm}(\zeta) = \zeta \cdot \sigma(\zeta) \cdot \sigma^2(\zeta) \cdots \sigma^{n-1}(\zeta)$. Picking any $\zeta$ such that $\sigma(\operatorname{Nm}(\zeta)) = (-1)^{n-1} \operatorname{Nm}(\zeta)$ allows us to undo the $(-1)^{n-1}$ factor in the defining equation in \eqref{e:Xh}. In particular, this means $$H_c^{i+2(n-n')}(X_h^+, \overline {\mathbb Q}_\ell) = H_c^{i}(X_h, \overline {\mathbb Q}_\ell), \qquad \text{for all $i \geq 0$.}$$ For each divisor $r \mid n'$, we define the $r$th Drinfeld stratum $X_{h,r}^+$ of $X_h^+$ to be the preimage of $X_{h,r}$ under the natural surjection $X_h^+ \to X_h$.

Fibers of $X_{h,r}^+ \to X_{h-1,r}^+$
-------------------------------------

For notational convenience, we write $b = b_{{\,{\rm sp}}}.$ We may identify ${\mathscr L}_h^+ = {\mathbb A}^{nh}$ with coordinates $x = \{x_{i,j}\}_{1 \leq i \leq n, \, 0 \leq j \leq h-1}$ which we typically write as $x = (\widetilde x, x_{1,h-1}, x_{2,h-1}, \ldots, x_{n,h-1}) \in {\mathscr L}_{h-1}^+ \times {\mathbb A}^n$; here, an element $v = (v_1, \ldots, v_n) \in {\mathscr L}_h^+$ is such that $v_i = [x_{i,0}, x_{i,1}, \ldots, x_{i,h-1}]$ if $i \equiv 1 \pmod{n_0}$ and $v_i = [0, x_{i,0}, x_{i,1}, \ldots, x_{i,h-1}]$ if $i \not\equiv 1 \pmod{n_0}$. In this section, fix a divisor $r \mid n'$. From the definitions, $X_{h,r}^+$ can be viewed as the subvariety of $X_{h-1,r}^+ \times {\mathbb A}^n$ cut out by the equation $$0 = P_0(x)^q - P_0(x),$$ where $P_0$ is the coefficient of $\varpi^{h-1}$ in the expression $\det g_b^{\operatorname{red}}(v)$. Let $c$ denote the polynomial consisting of the terms of $P_0(x)$ which only depend on $\widetilde x$.
An explicit calculation shows that there exists a polynomial $P_1$ in $x$ such that $$\label{e:c} P_0(x) = c(\widetilde x) + \sum_{i=0}^{n_0-1} P_1(x)^{q^i}.$$ Therefore $X_{h,r}^+$ is the subvariety of $X_{h-1,r}^+ \times {\mathbb A}^n$ cut out by $$P_1(x)^{q^{n_0}} - P_1(x) = c(\widetilde x) - c(\widetilde x)^q.$$ One can calculate $P_1$ explicitly (see [@CI_ADLV Proposition 7.5]): \[lm:polynomial\_P\_describing\_the\_fiber\_arbitrary\_kappa\] Explicitly, the polynomial $P_1$ is $$P_1(x) = \sum_{1 \leq i,j \leq n^{\prime}} m_{ji} x_{1 + n_0(i-1),h-1}^{q^{(j-1)n_0}},$$ where $m \colonequals (m_{ji})_{j,i}$ is the adjoint matrix of $\overline{g_b}(\bar{x})$ and $\bar x$ denotes the image of $x$ in $\overline{V} = {\mathscr L}_0/{\mathscr L}_0^{(1)}$. Explicitly, $m \cdot \overline{g_b}(\bar{x}) = \det\overline{g_b}(\bar{x})$ and the $(j,i)$th entry of $m$ is $(-1)^{i+j}$ times the determinant of the $(n^{\prime}-1)\times (n^{\prime}-1)$ matrix obtained from $\overline{g_b}(\bar{x})$ by deleting the $i$th row and $j$th column. The main result of this section is: \[p:Mr\] There exists an $X_{h-1,r}^+$-morphism $$M_r {\colon}X_{h-1,r}^+ \times {\mathbb A}^n \to X_{h-1,r}^+ \times {\mathbb A}^n$$ (the left ${\mathbb A}^n$ in terms of the coordinates $\{x_{i,h-1}\}_{i=1}^n$ and the right ${\mathbb A}^n$ in terms of new coordinates $\{z_i\}_{i=1}^n$) satisfying the following properties: 1. $M_r$ is a composition of $X_{h-1,r}^+$-isomorphisms and purely inseparable $X_{h-1,r}^+$-morphisms. 2. $M_r(X_{h,r}^+)$ is the closed subscheme defined by the equation $$z_1^{q^{n_0 r}} - z_1 = c(\widetilde x) - c(\widetilde x)^q,$$ where $c$ is as in . 3. $M_r$ is ${\mathbb W}_h^{h-1}({\mathbb F}_{q^n})$-equivariant after equipping the left $X_{h-1,r}^+ \times {\mathbb A}^n$ with the ${\mathbb W}_h^{h-1}({\mathbb F}_{q^n})$-action $$1 + \varpi^{h-1} a {\colon}x_{i,h-1} \mapsto x_{i,h-1} + x_{i,0} a, \qquad \text{for all $1 \leq i \leq n$,}$$ and the right $X_{h-1,r}^+ \times {\mathbb A}^n$ with the ${\mathbb W}_h^{h-1}({\mathbb F}_{q^n})$-action $$1 + \varpi^{h-1} a {\colon}z_i \mapsto \begin{cases} z_1 + \operatorname{Tr}_{{\mathbb F}_{q^n}/{\mathbb F}_{q^{n_0r}}}(a) & \text{if $i = 1$,} \\ z_2 + a & \text{if $r \neq n'$ and $i = 2$,} \\ z_i & \text{otherwise.} \end{cases}$$ In the rest of this section we prove Proposition \[p:Mr\]. To simplify the notation we will first establish the proposition in the case $\kappa = 0$ (i.e. $G = \operatorname{GL}_n$), and at the end generalize it to all $\kappa$. The first part of the proof of Proposition \[p:Mr\] is given by the lemma below. Before stating it, we establish some notation. For an ordered basis ${\mathscr{B}}$ of $V$ and $v \in V$, let $v_{{\mathscr{B}}}$ denote the coordinate vector of $v$ in the basis ${\mathscr{B}}$. For two ordered bases ${\mathscr{B}}, {\mathscr{C}}= \{c_i\}_{i=1}^n$ of $V$, let $M_{{\mathscr{B}},{\mathscr{C}}}$ denote the base change matrix between them, that is, the $i$th column vector of $M_{{\mathscr{B}},{\mathscr{C}}}$ is $c_{i, {\mathscr{B}}}$. It is clear that - $M_{{\mathscr{C}},{\mathscr{B}}} = M_{{\mathscr{B}},{\mathscr{C}}}^{-1}$, - for any $v\in V$, $M_{{\mathscr{B}},{\mathscr{C}}} v_{{\mathscr{C}}} = v_{{\mathscr{B}}}$, - for a third ordered basis ${\mathscr{D}}$ of $V$, one has $M_{{\mathscr{B}},{\mathscr{C}}} M_{{\mathscr{C}},{\mathscr{D}}} = M_{{\mathscr{B}},{\mathscr{D}}}$. 
For a linear map $f \colon V \rightarrow V$, let $M_{{\mathscr{B}},{\mathscr{C}}}(f)$ denote the matrix representation of $f$; that is, $M_{{\mathscr{B}},{\mathscr{C}}}(f)\cdot v_{{\mathscr{C}}} = f(v)_{{\mathscr{B}}}$. In $V$ we have the two ordered bases: $$\begin{aligned} {\mathscr{E}}&:= \text{ the standard basis of $V$, arising from the basis $\{e_i\}$ of the lattice ${\mathscr L}_0$}, \\ {\mathscr{B}}_x &:= \{ \sigma_b^{i-1}(x) \}_{i=1}^n, \text{attached to the given $x \in X_0^+$.}\end{aligned}$$ We identify $V$ with $\overline{\mathbb F}_q^n$ via the standard basis ${\mathscr{E}}$ and write $v = v_{{\mathscr{E}}}$ for all $v \in V$. \[lm:linear\_change\_of\_variables\_fibers\] Assume $\kappa = 0$. There exists an $X_{h-1,r}^+$-isomorphism $X_{h-1,r}^+ \times {\mathbb A}^n \stackrel{\sim}{\rightarrow} X_{h-1, r}^+ \times {\mathbb A}^n$ given by a linear change of variables $x_{i,h-1} \rightsquigarrow x_{i,h-1}^{\prime}$, such that $P_1$ in the new coordinates $x_{i,h-1}^{\prime}$ takes the form $$P_1 = x_{1,h-1}^{\prime} + x_{1,h-1}^{\prime,q} + \dots + x_{1,h-1}^{\prime, q^{n-1}} + \sum_{j=0}^s \sum_{\lambda = i_j + 1}^{i_{j+1}} x_{s+2-j,h-1}^{\prime, q^{\lambda}},$$ and the action of $1 + \varpi^{h-1} a \in W_h^{h-1}({\mathbb F}_{q^n})$ on the coordinates $x_{i,h-1}^{\prime}$ is given by $$\label{eq:Wh_action_on_intermediate_coordinates} x_{i,h-1}^{\prime} \mapsto \begin{cases} x_{1,h-1}^{\prime} + a & \text{if $i=1$,} \\ x_{i,h-1} & \text{if $i \geq 2$.} \end{cases}$$ We have to find a morphism $C := (c_{ij}) \colon X_{h-1,r}^+ \rightarrow \operatorname{GL}(V) = \operatorname{GL}_{n,{\mathbb F}_q}$ (this identification uses the standard basis ${\mathscr{E}}$ of $V$) such that the corresponding linear change of coordinates $$\label{eq:general_coordinate_change} x_{i,h-1} = c_{i,1} x_{1,h-1}^{\prime} + c_{i,2} x_{2,h-1}^{\prime} + \dots + c_{i,n} x_{n,h-1}^{\prime}, \text{ for all $1 \leq i \leq n$}.$$ brings $P_1$ to the requested form. Moreover, it suffices to do this fiber-wise by first determining $C(\tilde x)$ for any point $\tilde x \in X_{h-1,r}^+$ and then seeing that $\tilde x \mapsto C(\tilde x)$ is in fact an algebraic morphism. Fix $\tilde{x} \in X_{h-1,r}^+$ with image $x \in X_1^+$, and write $C$ instead of $C(\tilde{x})$ to simplify notation. Let $C_i$ denote the $i$th column of $C$. Our coordinate change replaces $P_1$ by the polynomial (after dividing by the irrelevant non-zero constant $\det g_b(x) \in \mathbb{F}_q^{\times}$) $$\begin{aligned} \label{eq:poly_P_basechanged_in_general} P_1 &= x_{1,h-1}^{\prime} (m_1 \cdot C_1) + x_{1,h-1}^{\prime,q} (m_2 \cdot \sigma_b(C_1)) + x_{1,h-1}^{\prime, q^2} (m_3 \cdot \sigma_b^2(C_1)) + \dots + x_{1,h-1}^{\prime, q^{n-1}} (m_n \cdot \sigma_b^{n-1}(C_1)) \nonumber \\ &+ x_{2,h-1}^{\prime} (m_1 \cdot C_2) + x_{2,h-1}^{\prime,q} (m_2 \cdot \sigma_b(C_2)) + x_{2,h-1}^{\prime,q^2} (m_3 \cdot \sigma_b^2(C_2)) + \dots + x_{2,h-1}^{\prime,q^{n-1}} (m_n \cdot \sigma_b^{n-1}(C_2)) \nonumber \\ &+ \cdots + \nonumber \\ &+ x_{n,h-1}^{\prime} (m_1 \cdot C_n) + x_{n,h-1}^{\prime,q} (m_2 \cdot \sigma_b(C_n)) + x_{n,h-1}^{\prime,q^2} (m_3 \cdot \sigma_b^2(C_n)) + \dots + x_{n,h-1}^{\prime,q^{n-1}} (m_n \cdot \sigma_b^{n-1}(C_n)) \nonumber \\\end{aligned}$$ in the indeterminates $\{x_{i,h-1}^{\prime}\}_{i=1}^n$. Here, we write $m_i$ to mean the $i$th row of the matrix $m$ (adjoint to $g_b(x)$) from Lemma \[lm:polynomial\_P\_describing\_the\_fiber\_arbitrary\_kappa\]. 
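The base-change conventions fixed before the lemma ($M_{{\mathscr{C}},{\mathscr{B}}} = M_{{\mathscr{B}},{\mathscr{C}}}^{-1}$, $M_{{\mathscr{B}},{\mathscr{C}}} v_{{\mathscr{C}}} = v_{{\mathscr{B}}}$, the composition rule, and $M_{{\mathscr{B}},{\mathscr{C}}}(f)\cdot v_{{\mathscr{C}}} = f(v)_{{\mathscr{B}}}$) can be checked numerically in a few lines. The sketch below is ours and plays no role in the proof; the bases are generic random real matrices whose columns are the basis vectors in standard coordinates.

```python
# Numerical check of the base-change conventions: v_B = B^{-1} v, M_{B,C} = B^{-1} C,
# and M_{B,C}(f) = B^{-1} F C for a linear map with standard matrix F.
import numpy as np

rng = np.random.default_rng(0)
n = 4
B, C, D = (rng.normal(size=(n, n)) for _ in range(3))   # generic (hence invertible) bases
F = rng.normal(size=(n, n))                              # a linear map f in standard coordinates
v = rng.normal(size=n)

coords = lambda basis, vec: np.linalg.solve(basis, vec)  # vec written in the given basis
M = lambda P, Q: np.linalg.solve(P, Q)                   # base-change matrix M_{P,Q} = P^{-1} Q

assert np.allclose(M(C, B), np.linalg.inv(M(B, C)))                      # M_{C,B} = M_{B,C}^{-1}
assert np.allclose(M(B, C) @ coords(C, v), coords(B, v))                 # M_{B,C} v_C = v_B
assert np.allclose(M(B, C) @ M(C, D), M(B, D))                           # composition rule
assert np.allclose(np.linalg.solve(B, F @ C) @ coords(C, v), coords(B, F @ v))  # M_{B,C}(f) v_C = f(v)_B
print("base-change conventions verified")
```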
For $z \in V$, we put $$\label{eq:def_m_ast} m\ast z = \sum_{i=1}^{n} (m_i \cdot (b\sigma)^{i-1}(z)) e_i.$$ The intermediate goal is to describe the map $m \ast \colon V \rightarrow V$ in terms of a coordinate matrix. Of course, $m\ast$ is not linear, but its composition with the projection on the $i$th component (corresponding to the $i$th standard basis vector) is $\sigma^{i-1}$-linear. Thus we instead will describe the linear map $(m\ast)^{\prime} \colon V \rightarrow V$, which is the composition of $m\ast$ and the map $\sum_i v_i e_i \mapsto \sum_i \sigma^{-(i-1)}(v_i) e_i$. This is done by the following lemma. \[lm:explicit\_construction\_of\_M\_BxEmast\] Assume $\kappa = 0$. We have $$M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime}) = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & \cdots & 0 & \sigma^{-1}(y_1) \\ \vdots & \vdots & \vdots & \text{\reflectbox{$\ddots$}} & \sigma^{-2}(y_2) & \ast \\ 1 & 0 & 0 & \text{\reflectbox{$\ddots$}} & * & \vdots \\ 1 & 0 & \sigma^{-(n-2)}(y_{n-2}) & \text{\reflectbox{$\ddots$}} & \vdots & \ast \\ 1 & \sigma^{-(n-1)}(y_{n-1}) & * & \cdots & * & \ast \end{pmatrix}$$ where the $y_i$’s are defined by the equation $$(b\sigma)^n(\mathfrak v) = v + \sum_{i=1}^{n-1} y_i (b\sigma)^i(\mathfrak v).$$ More precisely, if $\mu_{i,j}$ denotes the $(i,j)$th entry of $\det(g_b(\bar{x}))^{-1} M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime})$, then for $1 \leq i,j \leq n$ we have $$\mu_{i,j} = \begin{cases} 1 & \text{if $j = 1$} \\ 0 & \text{if $i+j \leq n+1$ and $j > 1$} \\ \sigma^{-(i-1)}(y_{i-1}) & \text{if $i + j = n+2$} \\ \mu_{i-1,j} + \sigma^{-(i-1)}(y_{i-1}) \sigma^{n-(i-1)}(\mu_{n,j+i-(n+1)}) & \text{if $i+j \geq n+3$ and $i \geq 3$}. \end{cases}$$ In particular, if $i+j \geq n+3$ and $y_{i-1} = 0$, then $\mu_{i,j} = \mu_{i-1,j}$. Let $z = \sum_{i=1}^n z_i (b\sigma)^{i-1}(x)$ be a generic element of $V$, written in ${\mathscr{B}}_x$-coordinates, that is $z_{{\mathscr{B}}_x}$ is the $n$-tuple $(z_i)_{i=1}^n$. The $(i,j)$th entry of $M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime})$ is equal to $\sigma^{-(i-1)}$ applied to the coefficient of $\sigma^{i-1}(z_j)$ in the $i$th entry of $(b\sigma)^{i-1}(z)_{{\mathscr{B}}_x}$ ($=$ the $i$th entry of $m\ast z$). The coordinate matrix of the $\sigma$-linear operator $b\sigma \colon V \rightarrow V$ in the basis ${\mathscr{B}}_x$, $$M_{{\mathscr{B}}_x,{\mathscr{B}}_x}(b\sigma) = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & y_1 \\ 0 & 1 & \ddots & \vdots & y_2 \\ \vdots & \ddots & \ddots & 0 & \vdots \\ 0 & \cdots & 0 & 1 & y_{n-1} \end{pmatrix}.$$ That is, for any $z \in V$, $$\label{eq:sigma_lin_coord_change_for_a_vector} b\sigma(z)_{{\mathscr{B}}_x} = M_{{\mathscr{B}}_x,{\mathscr{B}}_x}(b\sigma) \cdot \sigma(z_{{\mathscr{B}}_x}),$$ where the last $\sigma$ is applied entry-wise. Explicitly, the first entry of $b\sigma(z)_{{\mathscr{B}}_x}$ is $\sigma(z_n)$, and for $2 \leq i \leq n$ the $i$th entry of $b\sigma(z)_{{\mathscr{B}}_x}$ is $\sigma(z_{i-1}) + y_{i-1}\sigma(z_n)$. This allows to iteratively compute $b\sigma^i(z)$ for all $i$, which we do to finish the proof. First, we see that $z_1$ can occur in the $n$th (i.e. last) entry of $(b\sigma)^{\lambda - 1}(z)_{{\mathscr{B}}_x}$ only if $\lambda \geq n$; hence its contribution to the $i$th entry of $(b\sigma)^{i-1}(z)_{{\mathscr{B}}_x}$ for $i \leq n$ is simply $\sigma^{i-1}(z_1)$. This shows that the first column of $M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime})$ consists of $1$’s. Assume now $j \geq 2$. 
Then there is a smallest $i_0$ (if any) such that $z_j$ occurs in the $i_0$th entry of $(b\sigma)^{i_0-1}(z)_{{\mathscr{B}}_x}$. Note that as $j\geq 2$, one has $i_0 \geq 2$. Then $z_j$ must have occurred in the $n$th entry of $(b\sigma)^{i_0 - 2}(z)_{{\mathscr{B}}_x}$. As $z_j$ occurs in $z_{{\mathscr{B}}_x}$ in exactly the $j$th entry, and it takes $(n-j)$ applications of $b\sigma$ to move it to the $n$th entry, we must have $i_0 - 2 \geq n - j$. This shows that the $(i,j)$th entry of $M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime})$ is $0$, unless $i \geq n+2-j$. The same consideration shows that if $i = n+2-j$, then $\sigma^{i-1}(z_j)$ has the coefficient $y_{i-1}$ in $\sigma_b^{i - 1}(z)_{{\mathscr{B}}_x}$. This gives the entries of $M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime})$ on the diagonal $i = n+2-j$. It remains to compute the entries below it, so assume $i > n + 2 - j$. Again, by the characterization of the entries of $M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime})$ in the beginning of the proof and by the explicit description of how $\sigma_b$ acts (in the ${\mathscr{B}}_x$-coordinates), it is clear that the $(i,j)$th entry of $M_{{\mathscr{E}}, {\mathscr{B}}_x}((m\ast)^{\prime})$ is just the sum of the $(i-1,j)$th entry and $\sigma^{-(i-1)}(y_{i-1})\sigma^{n-(i-1)}((n,j-1)\text{th entry})$. This finishes the proof of Lemma \[lm:explicit\_construction\_of\_M\_BxEmast\]. Now we continue the proof of Lemma \[lm:linear\_change\_of\_variables\_fibers\]. Let ${\mathscr{C}}$ denote the ordered basis of $V$ consisting of columns $C_1, C_2, \dots, C_n$ of $C$. We have $M_{{\mathscr{B}}_x,{\mathscr{C}}} = (\det g_b(x))^{-1} m \cdot C$. In particular, to give the invertible matrix $C$ it is equivalent to give the invertible matrix $M_{{\mathscr{B}}_x,{\mathscr{C}}}$. But the $i$th column of $M_{{\mathscr{B}}_x,{\mathscr{C}}}$ is the coordinate vector of $C_i$ in the basis ${\mathscr{B}}_x$, i.e., what we denoted $C_{i,{\mathscr{B}}_x}$. We now show that one can find an invertible $M_{{\mathscr{B}}_x,{\mathscr{C}}}$, such that for its columns $C_{i,{\mathscr{B}}_x}$ we have $$\begin{aligned} \nonumber m \ast C_{1,{\mathscr{B}}_x} &= \textstyle\sum\limits_{\lambda=1}^n e_{\lambda} &\\ \label{eq:what_we_want_in_the_image_of_m_ast} m \ast C_{s+2 - j,{\mathscr{B}}_x} &= \textstyle\sum\limits_{\lambda = i_j + 1}^{i_{j+1}} e_{\lambda} \quad &\text{for $s \geq j \geq 0$,} \\ \nonumber m \ast C_{j^{\prime},{\mathscr{B}}_x} &= 0 \quad &\text{if $j^{\prime}>s+2$}.\end{aligned}$$ Taking into account equation \[eq:poly\_P\_basechanged\_in\_general\] and the definition of $m \ast$ in \[eq:def\_m\_ast\], this (plus the fact that $x \mapsto M_{{\mathscr{B}}_x,{\mathscr{C}}}$ will in fact be an algebraic morphism) finishes the proof of Lemma \[lm:linear\_change\_of\_variables\_fibers\], except for the claim regarding the $W_h^{h-1}({\mathbb F}_{q^n})$-action. To find $M_{{\mathscr{B}}_x,{\mathscr{C}}}$ satisfying \[eq:what\_we\_want\_in\_the\_image\_of\_m\_ast\], first observe that by Lemma \[lm:explicit\_construction\_of\_M\_BxEmast\], there is some invertible matrix $S$ depending on $\tilde{x} \in X_{h-1,r}^+$ (in fact, only on its image $x \in X_1^+$), such that $M_{{\mathscr{E}},{\mathscr{B}}_x}((m\ast)^{\prime}) \cdot S$ has the following form: its first column consists of $1$’s; its $i$th column is $0$, unless $i = n + 1 - i_j$ for some $s \geq j \geq 0$; for $s \geq j \geq 0$, the $\lambda$th entry of its $(n + 1 - i_j)$th column is $1$ if $i_j +1 \leq \lambda \leq i_{j+1}$ (we put $i_{s+1} := n$ here) and zero otherwise.
(To show this, use the general shape of $M_{{\mathscr{E}},{\mathscr{B}}_x}((m\ast)^{\prime})$ provided by Lemma \[lm:explicit\_construction\_of\_M\_BxEmast\], and then consecutively apply row operations to it and use the last statement of Lemma \[lm:explicit\_construction\_of\_M\_BxEmast\]). Moreover, it is also clear from Lemma \[lm:explicit\_construction\_of\_M\_BxEmast\] that $S$ will be upper triangular with the upper left entry $= 1$. Secondly, let $T$ be a matrix such that: the first row has $1$ in the first position and zeros otherwise; all except for the first entry of the first column are $0$; for $s \geq j \geq 0$, the $(n + 1 - i_j)$th row has $1$ in the $(s + 2 - j)$th position and $0$’s otherwise; the remaining rows can be chosen arbitrarily. Obviously, $T$ can be chosen to be a permutation matrix with entries only $0$ or $1$, and in particular invertible and independent of $x$. Finally, put $M_{{\mathscr{B}}_x,{\mathscr{C}}} := S \cdot T$. Explicitly, the columns of the matrix $$\label{eq:some_auxil_matrix_A} M_{{\mathscr{E}},{\mathscr{B}}_x}((m\ast)^{\prime}) \cdot M_{{\mathscr{B}}_x,{\mathscr{C}}} = (M_{{\mathscr{E}},{\mathscr{B}}_x}((m\ast)^{\prime}) \cdot S) \cdot T$$ are as follows: the first column consists of $1$’s; for $s \geq j \geq 0$, the $\lambda$th entry of the $(s + 2 - j)$th column is $1$ if $i_j +1 \leq \lambda \leq i_{j+1}$, and zero otherwise; all other columns consist of $0$’s. On the other hand, the $j$th column of $M_{{\mathscr{E}},{\mathscr{B}}_x}((m\ast)^{\prime}) \cdot M_{{\mathscr{B}}_x,{\mathscr{C}}}$ is precisely $m \ast C_{j,{\mathscr{B}}_x}$ (up to the unessential $\sigma^{-\ast}$-twist in each entry). This justifies \[eq:what\_we\_want\_in\_the\_image\_of\_m\_ast\]. \[loc:action\] The action of $1 + \varpi^{h-1} a \in W_h^{h-1}({\mathbb F}_{q^n})$ on the coordinates $x_{i,h-1}$ is given by $(x_{i,h-1})_{i=1}^n \mapsto (x_{i,h-1} + ax_{i,0})_{i=1}^n$. We determine the action of $1 + \varpi^{h-1} a$ in the coordinates $x^{\prime}_{i,h-1}$. Indeed, let $C^{-1} = (d_{i,j})_{1\leq i,j\leq n}$. Then $1 +\varpi^{h-1} a$ acts on $x_{i,h-1}^{\prime}$ by $$\begin{aligned} x_{i,h-1}^{\prime} = \sum_{j=1}^n d_{i,j}x_{j,h-1} \mapsto \sum_{j=1}^n d_{i,j} (x_{j,h-1} + ax_{j,0}) = x_{i,h-1}^{\prime} + a\sum_{j=1}^n d_{i,j} x_{j,0}.\end{aligned}$$ Organizing the $x_{i,h-1}$ for $1 \leq i \leq n$ in one (column) vector, we can rewrite this as $$1 +\varpi^{h-1} a \colon (x_{i,h-1}^{\prime})_{i=1}^n \mapsto (x_{i,h-1}^{\prime})_{i=1}^n + a C^{-1} \cdot x.$$ We determine $C^{-1} \cdot x$. As $M_{{\mathscr{B}}_x,{\mathscr{C}}} = \det(g_b(x))^{-1} m C = g_b(x)^{-1}C$ (as $\det(g_b(x))^{-1} m = g_b(x)^{-1}$), we have $C^{-1} = M_{{\mathscr{B}}_x,{\mathscr{C}}}^{-1}g_b(x)^{-1}$. But $x$ is the first column of $g_b(x)$, thus $$C^{-1} \cdot x = M_{{\mathscr{B}}_x,{\mathscr{C}}}^{-1}g_b(x)^{-1}\cdot x = M_{{\mathscr{B}}_x,{\mathscr{C}}}^{-1} \cdot (1, 0, \ldots, 0)^{\intercal},$$ so $C^{-1} \cdot x$ is the first column of $M_{{\mathscr{B}}_x,{\mathscr{C}}}^{-1} = (ST)^{-1} = T^{-1} S^{-1}$. But $S$ is upper triangular with upper left entry $=1$, so the first column of $M_{{\mathscr{B}}_x,{\mathscr{C}}}^{-1}$ is the first column of $T^{-1}$, which is $(1, 0, \ldots, 0)^{\intercal}$. This finishes the proof of the lemma. The second part of the proof is given by the following lemma. \[lm:non\_linear\_changes\_of\_variables\] Assume $\kappa = 0$.
There exists an $X_{h-1,r}^+$-morphism $X_{h-1,r}^+ \times {\mathbb A}^n \to X_{h-1,r}^+ \times {\mathbb A}^n$ such that if $\{z_i\}$ denotes the coordinates on the target ${\mathbb A}^n$, then the image of $X_{h,r}^+$ in $X_{h-1,r}^+ \times {\mathbb A}^n$ and the action of ${\mathbb W}_h^{h-1}({\mathbb F}_{q^n})$ on the $z_i$ are given by Proposition \[p:Mr\](ii),(iii). Moreover, such a morphism is given by the composition of the change-of-variables $x_{i,h-1} \rightsquigarrow x_{i,h-1}^{\prime}$ and purely inseparable morphisms of the form $x_{i,h-1}' \mapsto x_{i,h-1}^{\prime,q^{-j}}$ for appropriate $i,j$. If $r = n$, this is literally Lemma \[lm:linear\_change\_of\_variables\_fibers\]. Assume $r<n$. First, for $s \geq j \geq 0$, replace $x_{s+2-j}^{\prime}$ by $x_{s+2-j}^{\prime, q^{i_j + 1}}$. Then, by applying a series of iterated changes of variables of the form $x_c^{\prime} =: x_c^{\prime} + x_d^{\prime, q^{\lambda}}$ for appropriate $2 \leq c,d \leq s+2$ and $\lambda$ (essentially following the Euclidean algorithm to find the gcd of the integers $(i_{j+1} - i_j)$ (this gcd is equal to $r$)), we transform $P_1$ from Lemma \[lm:linear\_change\_of\_variables\_fibers\] to the form $$P_1 = \sum_{i=0}^{n-1} x_{1,h-1}^{\prime, q^i} + \sum_{i=0}^{r-1} x_{2,h-1}^{\prime,q^i}.$$ As these operations do not involve $x_{1,h-1}^{\prime}$, the formulas \[eq:Wh\_action\_on\_intermediate\_coordinates\] remain true. Now make the change of variables given by $z_1 := x_{2,h-1}^{\prime} + \sum_{j=0}^{\frac{n}{r} - 1} x_{1,h-1}^{\prime,q^{rj}}$ and $z_2 := x_{1,h-1}^{\prime}$. In these coordinates, $P_1 = \sum_{i=0}^{r-1} z_1^{q^i}$ and the action is as claimed. We are now ready to complete the proof of Proposition \[p:Mr\]. Combining Lemmas \[lm:linear\_change\_of\_variables\_fibers\] and \[lm:non\_linear\_changes\_of\_variables\] we obtain Proposition \[p:Mr\] in the case $\kappa = 0$. Now let $\kappa$ be arbitrary. It is clear that the proof of Lemma \[lm:linear\_change\_of\_variables\_fibers\] can be applied to this more general situation. One then obtains the same statement, with the only difference being that now our change of variables does not affect the variables $x_{i,h-1}$ for $i \not\equiv 1 \mod n_0$ (these are exactly the variables which do not show up in $P_1$). That is, the right-hand side $X_{h-1,r}^+ \times {\mathbb A}^n$ will have the coordinates $\{x_{i,h-1}^{\prime} \colon i\equiv 1 \mod n_0, 1\leq i\leq n \} \cup \{x_{i,h-1} \colon i \not\equiv 1 \mod n_0, 1\leq i\leq n \}$ and the polynomial defining $X_{h,r}^+$ as a relative $X_{h-1,r}^+$ hypersurface in $X_{h-1,r}^+ \times {\mathbb A}^n$ is $$P_1 = x_{1,h-1}^{\prime} + x_{1,h-1}^{\prime,q^{n_0}} + \dots + x_{1,h-1}^{\prime, q^{n_0(n^{\prime}-1)}} + \sum_{j=0}^s \sum_{\lambda = i_j + 1}^{i_{j+1}} x_{s+2-j,h-1}^{\prime, q^{n_0\lambda}},$$ and the ${\mathbb W}_h^{h-1}({\mathbb F}_{q^n})$-action is given by $$1 + \varpi^{h-1} a \colon x_{i,h-1}^{\prime} \mapsto \begin{cases} x_{1,h-1}^{\prime} + a & \text{if $i=1$} \\ x_{i,h-1}^{\prime} & \text{if $i \equiv 1 \mod n_0$ and $i > 1$} \\ x_{i,h-1} + x_{i,0} a & \text{if $i \not\equiv 1 \mod n_0$.} \end{cases}$$ We now apply the change of variables replacing $x_{i,h-1}$ by $x_{i,h-1}' \colonequals x_{i,h-1} - x_{i,0} x_{1,h-1}'$ for all $i \not\equiv 1 \mod n_0$. This exactly gives us Lemma \[lm:linear\_change\_of\_variables\_fibers\] for arbitrary $\kappa$ (the only difference being the $q^{n_0}$-powers occurring in $P_1$).
Now Lemma \[lm:non\_linear\_changes\_of\_variables\] can be applied as in the case $\kappa = 0$, and this finishes the proof of Proposition \[p:Mr\]. [^1]: We mean here that this torus is not contained in $\ker(\alpha)$ for any root $\alpha$ of ${\mathbb T}_1$ in the reductive group ${\mathbb G}_1$. See [@CI_loopGLn Lemma 3.7]. [^2]: When $G = \operatorname{GL}_n$, this is literally what we do in Appendix \[s:fibers\]. When $G$ is a nonsplit inner form of $\operatorname{GL}_n$, in order to get a shape analogous to the split case, we work with an auxiliary scheme which is an affine fibration over $X_h$.
--- author: - | Antony Valentini\ Augustus College --- [Subquantum Information and Computation]{}[^1] Antony Valentini[^2] *Theoretical Physics Group, Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2BZ, England.[^3]* *Center for Gravitational Physics and Geometry, Department of Physics, The Pennsylvania State University, University Park, PA 16802, USA.* *Augustus College, 14 Augustus Road, London SW19 6LN, England.[^4]* It is argued that immense physical resources – for nonlocal communication, espionage, and exponentially-fast computation – are hidden from us by quantum noise, and that this noise is not fundamental but merely a property of an equilibrium state in which the universe happens to be at the present time. It is suggested that ‘non-quantum’ or nonequilibrium matter might exist today in the form of relic particles from the early universe. We describe how such matter could be detected and put to practical use. Nonequilibrium matter could be used to send instantaneous signals, to violate the uncertainty principle, to distinguish non-orthogonal quantum states without disturbing them, to eavesdrop on quantum key distribution, and to outpace quantum computation (solving $NP$-complete problems in polynomial time). Introduction and Motivation =========================== In quantum theory the Born probability rule is regarded as a fundamental law of Nature: a system with wavefunction $\psi$ has an associated probability distribution $\rho=|\psi|^{2}$. However, there are reasons to believe that this distribution is not fundamental, but merely corresponds to a special ‘equilibrium’ state, analogous to thermal equilibrium \[1–7\]. For there seems to be a ‘conspiracy’ in the known laws of physics: long-distance quantum correlations suggest that our universe is fundamentally nonlocal, and yet the nonlocality cannot be used for practical instantaneous signalling. This apparent conspiracy may be explained if one supposes that signal-locality is merely a property of the special state $\rho=|\psi|^{2}$, in which nonlocality happens ** to be hidden by quantum noise; while for a general distribution $\rho\neq|\psi|^{2}$, nonlocality would be directly visible. While $\rho=|\psi|^{2}$ to high accuracy now (for all systems probed so far), perhaps $\rho\neq|\psi|^{2}$ in the early universe, the relaxation $\rho\rightarrow|\psi|^{2}$ having taken place soon after the big bang. Thus our experience happens to be restricted to an equilibrium state $\rho =|\psi|^{2}$ in which locality and uncertainty *appear* to be fundamental. A heuristic analogy may be drawn with physics in a universe that has reached a state of thermal ‘heat death’, in which all systems have the same temperature \[2\]. In such a universe there is a universal probability distribution given by the Boltzmann rule $\rho=e^{-E/kT}/Z$, analogous to our universal Born rule $\rho=|\psi|^{2}$; all systems are subject to a universal thermal noise, analogous to our universal uncertainty noise; and it is impossible to convert thermal energy into useful work, just as it is impossible in our universe to convert quantum nonlocality into a useful instantaneous signal. A precise model of this scenario is obtained in deterministic hidden-variables theories such as the pilot-wave theory of de Broglie and Bohm \[1–14\]. 
These nonlocal theories allow one to discuss the properties of hypothetical nonequilibrium distributions $\rho\neq|\psi|^{2}$, for which it may be shown that there are instantaneous signals at the statistical level \[2, 15, 16\]. Thus in these theories it may be asserted that quantum theory is just the theory of a special state $\rho=|\psi|^{2}$ in which nonlocality happens to be hidden by statistical noise. And in pilot-wave theory at least, the relaxation $\rho\rightarrow|\psi|^{2}$ may be accounted for by an *H*-theorem \[1, 5\], much as in classical statistical mechanics, so that $\rho=|\psi|^{2}$ is indeed merely an equilibrium state.[^5] Here we shall work with the pilot-wave model. The details of that model may or may not be correct: but it has qualitative features, such as nonlocality, that are known to be properties of all hidden-variables theories; and it is helpful to work with a specific, well-defined theory. In this model, a system with wavefunction $\psi(x,t)$ has a configuration $x(t)$ whose velocity is determined by $\dot{x}(t)=j(x,t)/|\psi(x,t)|^{2}$, where $j$ is the quantum probability current. Quantum theory is recovered if one assumes that an ensemble of systems with wavefunction $\psi_{0}(x)$ begins with a ‘quantum equilibrium’ distribution of configurations $\rho_{0}(x)=|\psi_{0}(x)|^{2}$ at $t=0$ (guaranteeing $\rho(x,t)=|\psi(x,t)|^{2}$ for all $t$). In effect, the Born rule is assumed as an initial condition. But the theory also allows one to consider arbitrary ‘nonequilibrium’ initial distributions $\rho_{0}(x)\neq|\psi_{0}(x)|^{2}$, which violate quantum theory \[1–7\], and whose evolution is given by the continuity equation$$\frac{\partial\rho(x,t)}{\partial t}+\nabla\cdot(\dot{x}(t)\rho(x,t))=0$$ (the same equation that is satisfied by $|\psi(x,t)|^{2}$). Our working hypothesis, then, is that $\rho=|\psi|^{2}$ is an equilibrium state, analogous to thermal equilibrium in classical mechanics. This state has special properties – in particular locality and uncertainty – which are not fundamental. It then becomes clear that a lot of new physics must be hidden behind quantum noise, physics that is unavailable to us only because we happen to be trapped in an equilibrium state. This new physics might be accessible if the universe began in nonequilibrium $\rho\neq|\psi|^{2}$. First, in theories of cosmological inflation, early corrections to quantum fluctuations would change the spectrum of primordial density perturbations imprinted on the cosmic microwave background \[6, 7\]. Second, relic cosmological particles that decoupled at sufficiently early times might still be in quantum nonequilibrium today, violating quantum mechanics \[3–7\]. The second possibility is particularly relevant here. If relic nonequilibrium matter from the early universe was discovered, what could we do with it? Thermal and chemical nonequilibrium have myriad technological applications; we expect that quantum nonequilibrium would be just as useful. Detection and Use of Quantum Nonequilibrium =========================================== First we need to consider how a nonequilibrium distribution $\rho\neq |\psi|^{2}$ might be deduced by statistical analysis of a random sample of relic matter \[7\]. Consider the unrealistic but simple example of a large number $N$ of Hydrogen atoms in the ground state $\psi_{100}(r)$. Assume they make up a cloud of gas somewhere in space. 
Because the phase of $\psi_{100}$ has zero gradient, the de Broglie-Bohm velocity field vanishes, and pilot-wave theory predicts that each electron is at rest relative to its nucleus. We then have a static distribution $\rho(r)$, which may or may not equal the quantum equilibrium distribution $\rho_{eq}(r)=\left| \psi_{100}(r)\right| ^{2}\propto e^{-2r/a_{0}}$. To test this, one could draw a random sample of $N'$ atoms ($N' \ll N$) and measure the electron positions. The sample $r_{1},\;r_{2},\;r_{3},.....,\;r_{N'}$ may be used to make statistical inferences about the parent distribution $\rho(r)$. In particular, one may estimate the likelihood that $\rho (r)=\rho_{eq}(r)$. Should one deduce that, almost certainly, the cloud as a whole has a nonequilibrium distribution $\rho(r)\neq\rho_{eq}(r)$, the rest of the cloud may then be used as a resource for new physics. For example, one could test $\rho(r)$ via the sample mean $\bar{r}$. If $\rho(r)$ has mean $\mu$ and variance $\sigma^{2}$, the central limit theorem tells us that for large $N'$ the random variable $\bar{r}$ has an approximately normal distribution with mean $\mu$ and variance $\sigma^{2}/N'$. We can then calculate the probability that $\bar{r}$ differs from $\mu$, and we can test the hypothesis that $\rho(r)=\rho_{eq}(r)$ with $\mu=\mu _{eq}=\frac{3}{2}a_{0}$. A standard technique is to compare the probability $P(\bar{r}|\rho_{eq})$ of obtaining $\bar{r}$ from a distribution $\rho_{eq}$ with the probability $P(\bar{r}|\rho_{noneq})$ of obtaining $\bar{r}$ from some nonequilibrium distribution $\rho_{noneq}$. One usually refers to $P(\bar{r}|\rho_{eq})$ and $P(\bar{r}|\rho_{noneq})$ as the ‘likelihoods’ of $\rho_{eq}$ and $\rho_{noneq}$ respectively, given the sample mean $\bar{r}$. If $P(\bar{r}|\rho_{eq})\ll P(\bar{r}|\rho_{noneq})$, one concludes that nonequilibrium is much more likely. Similarly, using standard techniques such as the chi-square test, one may deduce the most likely form of the parent distribution $\rho(r)$, which almost certainly applies to the rest of the cloud.[^6] In what follows, then, we assume that at $t=0$ we have a large number of particles with the same known wavefunction $\psi_{0}(x)$, and with positions $x$ that have a *known* nonequilibrium distribution $\rho_{0}(x)\neq\left| \psi_{0}(x)\right| ^{2}$. Instantaneous Signalling ======================== The most obvious application of such ‘non-quantum’ matter would be for instantaneous signalling across space \[7\]. Suppose we take pairs of nonequilibrium particles and prepare each pair in an entangled state $\psi(x_{A},x_{B},t_{0})$ at time $t_{0}$ (by briefly switching on an interaction). Given the details of the preparation, we may use the Schrödinger equation to calculate the evolution of the wavefunction of each pair, from $\psi(x_{A},x_{B},0)=\psi_{0}(x_{A})\psi_{0}(x_{B})$ at $t=0$ to $\psi(x_{A},x_{B},t_{0})$ at $t=t_{0}$. We then know the de Broglie-Bohm velocity field throughout $(0,t_{0})$, and so we may use the continuity equation to calculate the evolution of the joint distribution for the pairs from $\rho(x_{A},x_{B},0)=\rho_{0}(x_{A})\rho_{0}(x_{B})$ at $t=0$ to $\rho(x_{A},x_{B},t_{0})\neq\left| \psi(x_{A},x_{B},t_{0})\right| ^{2}$ at $t=t_{0}$.[^7] We then have the situation discussed in detail elsewhere \[2\].
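The sampling test sketched above can be written out in a few lines of Python. In the sketch below (ours, not from the original), the equilibrium law is the radial ground-state density $4 r^{2} e^{-2r/a_{0}}/a_{0}^{3}$ — a Gamma distribution with mean $\frac{3}{2}a_{0}$, matching $\mu_{eq}$ above once the $r^{2}$ measure is included — while the nonequilibrium cloud is a purely hypothetical Gamma law with a larger mean; only standard statistics (a CLT $z$-test on the sample mean and a log-likelihood comparison) are used.

```python
# Toy hypothesis test on a sample of electron radii drawn from a (hypothetical)
# nonequilibrium cloud, compared against the equilibrium hydrogen ground-state law.
import numpy as np
from scipy import stats

a0 = 1.0
Np = 2000                                    # sample size N' << N
eq = stats.gamma(a=3, scale=a0 / 2)          # equilibrium radial law, mean 3*a0/2, var 3*a0**2/4
noneq = stats.gamma(a=4, scale=a0 / 2)       # hypothetical nonequilibrium law, mean 2*a0

rng = np.random.default_rng(1)
r = noneq.rvs(size=Np, random_state=rng)     # the measured electron radii

# CLT test of H0: <r> = 3*a0/2, using the equilibrium variance.
z = (r.mean() - eq.mean()) / np.sqrt(eq.var() / Np)
print(f"sample mean = {r.mean():.3f}, z-statistic under equilibrium = {z:.1f}")

# Likelihood comparison P(sample | rho_noneq) vs P(sample | rho_eq), in log form.
log_like_ratio = noneq.logpdf(r).sum() - eq.logpdf(r).sum()
print(f"log[P(data|noneq)/P(data|eq)] = {log_like_ratio:.1f}  (large positive favours nonequilibrium)")
```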
The marginal distribution $\rho_{A}(x_{A},t_{0})\equiv\int dx_{B}\,\rho(x_{A},x_{B},t_{0})$ at $A$ is known, and its subsequent evolution will depend instantaneously on perturbations applied at $B$, however remote $B$ may be from $A$. Thus instantaneous signals may be sent from $B$ to $A$. It might be thought that superluminal signals would necessarily lead to causal paradoxes. However, it could well be that at the nonlocal hidden-variable level there is a preferred slicing of spacetime, labelled by a time parameter that defines a fundamental causal sequence \[3, 7, 17, 18\].[^8] Subquantum Measurement ====================== Let us now consider how our nonequilibrium particles could be used to perform novel measurements on ordinary, equilibrium systems \[7\]. Assume once again that we have an ensemble of what we shall now call ‘apparatus’ particles with known wavefunction $g_{0}(y)$ and known *nonequilibrium* distribution $\pi_{0}(y)\neq\left| g_{0}(y)\right| ^{2}$. (The position $y$ may be regarded as a ‘pointer’ position.) And let us now use them to measure the positions of ordinary ‘system’ particles with known wavefunction $\psi_{0}(x)$ and known *equilibrium* distribution $\rho_{0}(x)=\left| \psi_{0}(x)\right| ^{2}$. We shall see that, if the apparatus distribution $\pi_{0}(y)$ were arbitrarily narrow, we could measure the system position $x_{0}$ without disturbing the system wavefunction $\psi_{0}(x)$, to arbitrary accuracy, in complete violation of the uncertainty principle. We shall illustrate the idea with an exactly-solvable model. At $t=0$, we take a system particle and an apparatus particle and switch on an interaction between them described by the Hamiltonian $\hat{H}=a\hat{x}\hat{p}_{y}$, where $a$ is a coupling constant and $p_{y}$ is the momentum canonically conjugate to $y$. (This is the standard interaction Hamiltonian for an ideal quantum measurement of $x$ using the pointer $y$.) For simplicity, we neglect the Hamiltonians of $x$ and $y$ themselves.[^9] For $t>0$ the joint wavefunction $\Psi(x,y,t)$ satisfies the Schrödinger equation$$\frac{\partial\Psi(x,y,t)}{\partial t}=-ax\frac{\partial\Psi(x,y,t)}{\partial y}$$ while $\left| \Psi(x,y,t)\right| ^{2}$ obeys the continuity equation$$\frac{\partial\left| \Psi(x,y,t)\right| ^{2}}{\partial t}+ax\frac {\partial\left| \Psi(x,y,t)\right| ^{2}}{\partial y}=0$$ The hidden-variable velocity fields $\dot{x}$ and $\dot{y}$ must satisfy$$\frac{\partial\left| \Psi(x,y,t)\right| ^{2}}{\partial t}+\frac {\partial\left( \left| \Psi(x,y,t)\right| ^{2}\dot{x}\right) }{\partial x}+\frac{\partial\left( \left| \Psi(x,y,t)\right| ^{2}\dot{y}\right) }{\partial y}=0$$ from which we deduce the (non-standard) guidance equations[^10] $\dot{x}=0,\;\dot{y}=ax$ and the de Broglie-Bohm trajectories $x(t)=x_{0},\;y(t)=y_{0}+ax_{0}t$. Now the initial product wavefunction $\Psi_{0}(x,y)=\psi_{0}(x)g_{0}(y)$ evolves into the entangled wavefunction $\Psi(x,y,t)=\psi_{0}(x)g_{0}(y-axt)$. In the limit $at\rightarrow0$, we have $\Psi(x,y,t)\approx\psi_{0}(x)g_{0}(y)$ and the system wavefunction $\psi_{0}(x)$ is undisturbed. Yet, no matter how small $at$ may be, at the hidden-variable level the ‘pointer’ position $y(t)=y_{0}+ax_{0}t$ contains information about the value of $x_{0}$ (and of $x(t)=x_{0}$). And this ‘subquantum’ information about $x$ will be visible to us if the initial pointer distribution $\pi_{0}(y)$ is sufficiently narrow. 
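A minimal numerical illustration of this pointer model (ours, not from the paper; it assumes a Gaussian $\left|\psi_{0}\right|^{2}$ of unit width and a top-hat nonequilibrium $\pi_{0}(y)$ of width $w$) shows how a standard readout of $y$ pins down $x_{0}$ to within $w/2at$:

```python
# Toy ensemble for the exactly-solvable model: x0 ~ |psi0|^2, y0 ~ narrow pi0,
# and the exact trajectories give a pointer readout y = y0 + a*x0*t.
import numpy as np

rng = np.random.default_rng(2)
ensemble, w, at = 100_000, 1e-3, 1e-2          # ensemble size, pointer width w, coupling a*t

x0 = rng.normal(0.0, 1.0, ensemble)            # equilibrium system positions ~ |psi0(x)|^2
y0 = rng.uniform(-w / 2, w / 2, ensemble)      # nonequilibrium pointer positions ~ pi0(y)
y = y0 + at * x0                               # pointer readout after the weak interaction

x_inferred = y / at                            # estimate of the hidden position x0
err = np.abs(x_inferred - x0)
print(f"max |x_inferred - x0| = {err.max():.4f}   (bound w/(2 a t) = {w / (2 * at):.4f})")
```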
For consider an ensemble of similar experiments, where $x$ and $y$ have the initial joint distribution $P_{0}(x,y)=\left| \psi_{0}(x)\right| ^{2}\pi _{0}(y)$ (equilibrium for $x$ and nonequilibrium for $y$). The continuity equation$$\frac{\partial P(x,y,t)}{\partial t}+ax\frac{\partial P(x,y,t)}{\partial y}=0$$ implies that at later times $P(x,y,t)=\left| \psi_{0}(x)\right| ^{2}\pi _{0}(y-axt)$. If $\pi_{0}(y)$ is localised – say $\pi_{0}(y)\approx0$ for $\left| y\right| >w/2$ – then from a standard measurement of $y$ we may deduce that $x$ lies in the interval $(y/at-w/2at,\;y/at+w/2at)$ (so that $P(x,y,t)\neq0$), where the error margin $w/2at\rightarrow0$ as the width $w\rightarrow0$. Thus, if the nonequilibrium ‘apparatus’ distribution $\pi_{0}(y)$ has an arbitrarily small width $w$, then to arbitrary accuracy we may measure the position $x$ of each equilibrium particle without disturbing the wavefunction $\psi_{0}(x)$.[^11] We have for simplicity considered an exactly-solvable system with an unusual total Hamiltonian. Similar conclusions hold for more standard systems: if the interaction between $x$ and $y$ is sufficiently weak, then while $\psi_{0}(x)$ is hardly disturbed, at the hidden-variable level $y$ generally contains information about $x$ – information that is visible if $y$ has a sufficiently narrow distribution. Generalising, if $w$ is arbitrarily small, then by a sequence of such measurements, it is clear that for a system particle with arbitrary wavefunction $\psi(x,t)$ we can determine the hidden trajectory $x(t)$ without disturbing $\psi(x,t)$, to arbitrary accuracy. Distinguishing Non-Orthogonal Quantum States ============================================ In quantum mechanics, non-orthogonal states $\left| \psi_{1}\right\rangle $, $\left| \psi_{2}\right\rangle $ (with $\langle\psi_{1}|\psi_{2}\rangle\neq0$) cannot be distinguished without disturbing them \[19\]. This theorem breaks down in the presence of nonequilibrium matter \[7\]. For example, if $\left| \psi_{1}\right\rangle $, $\left| \psi_{2}\right\rangle $ are distinct initial states of a single spinless particle, then in de Broglie-Bohm theory the velocity fields $j_{1}(x,t)/\left| \psi_{1}(x,t)\right| ^{2}$, $j_{2}(x,t)/\left| \psi_{2}(x,t)\right| ^{2}$ generated by the wavefunctions $\psi_{1}(x,t)$, $\psi_{2}(x,t)$ will in general be different, even if $\langle\psi_{1}|\psi_{2}\rangle=\int dx\,\,\psi_{1}^{\ast}(x,0)\psi_{2}(x,0)\neq0$. The hidden-variable trajectories $x_{1}(t)$ and $x_{2}(t)$ – associated with $\psi_{1}(x,t)$ and $\psi_{2}(x,t)$ respectively – will generally differ if $\psi_{1}(x,0)\neq\psi_{2}(x,0)$ (even if $x_{1}(0)=x_{2}(0)$). Thus, a subquantum measurement of the particle trajectory (even over a short time) would generally enable us to distinguish the quantum states $\left| \psi _{1}\right\rangle $ and $\left| \psi_{2}\right\rangle $ without disturbing them, to arbitrary accuracy. Eavesdropping on Quantum Key Distribution ========================================= Alice and Bob want to share a secret sequence of bits that will be used as a key for cryptography. During distribution of the key between them, they must be able to detect any eavesdropping by Eve. Three protocols for quantum key distribution – BB84 \[20\], B92 \[21\], and E91 (or EPR) \[22\] – are known to be secure against classical or quantum attacks (that is, against eavesdropping based on classical or quantum physics) \[23\]. But these protocols are *not* secure against a ‘subquantum’ attack \[7\]. 
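The non-orthogonal-state example of the previous section can also be made concrete numerically. In the sketch below (ours; the choice of states and of units $\hbar=m=\omega=1$ is purely illustrative), $\left|\psi_{1}\right\rangle$ is the harmonic-oscillator ground state and $\left|\psi_{2}\right\rangle$ an equal superposition of the two lowest eigenstates, so $\langle\psi_{1}|\psi_{2}\rangle = 1/\sqrt{2} \neq 0$; the de Broglie-Bohm velocity field vanishes identically for $\psi_{1}$ but not for $\psi_{2}$, so monitoring the hidden trajectory $x(t)$ separates the two states without disturbing them.

```python
# Bohmian trajectory under psi2 = (phi0 e^{-iE0 t} + phi1 e^{-iE1 t})/sqrt(2), compared
# with psi1 = phi0 e^{-iE0 t} (a stationary state, whose trajectory does not move).
import numpy as np

def v_superposition(x, t):
    # velocity Im(d_x psi2 / psi2); the common real factor pi^{-1/4} e^{-x^2/2}/sqrt(2) cancels.
    c0, c1 = np.exp(-0.5j * t), np.exp(-1.5j * t)       # E0 = 1/2, E1 = 3/2
    psi = c0 + np.sqrt(2.0) * x * c1
    dpsi = -x * psi + np.sqrt(2.0) * c1
    return (dpsi / psi).imag

x0, dt, T = 0.3, 1e-3, 2.0
x = x0
for step in range(int(T / dt)):
    x += v_superposition(x, step * dt) * dt             # Euler step along the guidance equation

print(f"x(T) under psi1 = {x0:.3f},  x(T) under psi2 = {x:.3f}  (same initial position)")
```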
Both BB84 and B92 rely on the impossibility of distinguishing non-orthogonal quantum states without disturbing them. In BB84 Alice sends Bob a random sequence of spin-1/2 states $\left| +z\right\rangle ,\;\left| -z\right\rangle ,\;\left| +x\right\rangle ,\;\left| -x\right\rangle $, while in B92 she sends a random sequence of arbitrary non-orthogonal states $\left| u_{0}\right\rangle ,\;\left| u_{1}\right\rangle $ (the states being subjected to appropriate random measurements by Bob). In each case the sequence is chosen by Alice. But if Eve possesses non-quantum matter with an arbitrarily narrow nonequilibrium distribution, she may identify the states sent by Alice without disturbing them, to arbitrary accuracy, and so read the supposedly secret key. (For B92, $\left| u_{0}\right\rangle ,$ $\left| u_{1}\right\rangle $ could be states of a spinless particle with wavefunctions $\psi_{0}(x,t),$ $\psi_{1}(x,t)$, which Eve may distinguish by monitoring the hidden-variable trajectories. Similarly for BB84 – though for spin-1/2 states one must consider pilot-wave theory for two-component wavefunctions \[7, 10\].) E91 is particularly interesting for it relies on the completeness of quantum theory – that is, on the assumption that there are no hidden ‘elements of reality’. Pairs of spin-1/2 particles in the singlet state are shared by Alice and Bob, who perform spin measurements along random axes. For coincident axes the same bit sequence is generated at each wing, by apparently random quantum outcomes. ‘The eavesdropper cannot elicit any information from the particles while in transit ..... because there is no information encoded there’ \[22\]. But our Eve has access to information outside the domain of quantum theory. She can measure the particle positions while in transit, without disturbing the wavefunction, and so *predict* the outcomes of spin measurements at the two wings (for the publicly-announced axes).[^12] Thus Eve is able to predict the key shared by Alice and Bob. Outpacing Quantum Computation ============================= Quantum theory allows parallel Turing-type computations to occur in different branches of the state vector for a single computer \[24\]. However, owing to the effective collapse that occurs under measurement, an experimenter is able to access only one result; the outputs of the other computations are lost. Of course, by clever use of entanglement and interference, one can make quantum computation remarkably efficient for certain special problems. But in general, what at first sight seems to be a massive increase in computational power is not, in fact, realised in practice. All the results of a parallel quantum computation could be read, however, if we had access to nonequilibrium matter with a very narrow distribution \[3, 7\]. For each result could be encoded in an integer $n$, and stored as an energy eigenvalue $E_{n}$ for a single spinless particle (a component of the computer). At the end of the computation the particle wavefunction will be a superposition$$\psi(x,t)=\sum_{n\in S}\phi_{n}(x)e^{-iE_{n}t}$$ of $N$ energy eigenfunctions $\phi_{n}(x)$, where $S$ is an *unknown* set of $N$ quantum numbers. (We assume a Hamiltonian $\hat{H}=\hat{p}^{2}/2+V(\hat{x})$, where the mass $m=1$ and $\hbar=1$.) In standard quantum theory an energy measurement for the particle yields just one value $E_{n}$. 
To find out what other eigenvalues are present, one would have to run the whole computation many times – to produce an ensemble of copies of the same wavefunction – and repeat the energy measurement for each. And so one may as well just run many computations on a single classical computer, one after the other. But the hidden-variable particle trajectory $x(t)$ – determined by $\dot {x}(t)=j/\left| \psi\right| ^{2}$ or $\dot{x}=\operatorname*{Im}\left( \nabla\psi/\psi\right) $ – contains information about all the modes in the superposition (provided the $\phi_{n}(x)$ overlap in space). If we had a sample of nonequilibrium matter with a very narrow distribution, we could use it to measure $x(t)$ without disturbing $\psi(x,t)$. We could then read the set $S$ of quantum numbers: having measured the values of $x(t),\;\dot{x}(t)$ at $N$ times $t=t_{1},\;t_{2},\;....,\;t_{N}$, the equation $\dot {x}=\operatorname*{Im}\left( \nabla\psi/\psi\right) $ may be solved for the $N$ quantum numbers $n$.[^13] Thus we could read the results of all $N$ arbitrarily long parallel computations (at the price of solving $N$ simultaneous equations), even though the computer has been run only once. By combining subquantum measurements with quantum algorithms, we could solve $NP$-complete problems in polynomial time, and so outpace all known quantum (or classical) algorithms \[7\]. To see this, consider the computational enhancement noted by Abrams and Lloyd in nonlinear quantum mechanics \[25\]. Let a quantum (equilibrium) computer begin with $n+1$ qubits in the state $\left| 00\;.....\;0\right\rangle $ and apply the Hadamard gate $H$ (which maps $\left| 0\right\rangle \rightarrow (\left| 0\right\rangle +\left| 1\right\rangle )/\sqrt{2}$, $\left| 1\right\rangle \rightarrow(\left| 0\right\rangle -\left| 1\right\rangle )/\sqrt{2}$) to each of the first $n$ qubits to produce $(1/\sqrt{2^{n}})\sum_{x}\left| x,\;0\right\rangle $, where the $n$-bit ‘input’ $x$ ranges from $00\;.....\;0$ to $11\;.....\;1$ (or from $0$ to $2^{n}-1$). Then use an ‘oracle’ or ‘black box’ to calculate – in parallel – a function $f(x)=0$ or $1$, whose value is stored in the last qubit, producing $(1/\sqrt{2^{n}})\sum_{x}\left| x,\;f(x)\right\rangle $. Applying $H$ again to each of the first $n$ qubits produces a state containing the term $(1/2^{n})\sum _{x}\left| 00\;.....\;0,\;f(x)\right\rangle $. If upon quantum measurement of the first $n$ qubits we obtain $\left| 00\;.....\;0\right\rangle $, the total effective state becomes $\left| 00\;.....\;0\right\rangle \otimes\left| \psi\right\rangle $ where $\left| \psi\right\rangle \propto\left| 0\right\rangle (2^{n}-s)/2^{n}+\left| 1\right\rangle s/2^{n}$ and $s$ is the number of inputs $x$ such that $f(x)=1$ (the total number of inputs being $2^{n}$).[^14] As Abrams and Lloyd point out, we could solve $NP$-complete problems if we could distinguish between $s=0$ and $s>0$ for the state $\left| \psi\right\rangle $ of the last qubit. This could be accomplished by nonlinear evolution, in which non-orthogonal states evolve into (distinguishable) orthogonal ones \[25\]. But equally, non-orthogonal qubits could be distinguished using our nonequilibrium matter. Here, the de Broglie-Bohm trajectory $x(t)$ of an equilibrium particle guided by $\psi(x,t)=\langle x|\psi(t)\rangle$ will in general be sensitive to the value of $s$, which may therefore be read by a subquantum measurement of $x(t)$ \[7\]. 
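The counting at the heart of the Abrams–Lloyd step above is easy to check with a tiny statevector simulation (ours; the oracle $f$ and the choice $n=3$ are arbitrary illustrative assumptions): after the second Hadamard layer, the component with the first $n$ qubits in $\left| 00\;.....\;0\right\rangle$ carries amplitudes $(2^{n}-s)/2^{n}$ and $s/2^{n}$ on the last qubit.

```python
# Statevector check of the amplitudes (2^n - s)/2^n and s/2^n on the post-selected branch.
import numpy as np
from functools import reduce

n = 3
f = lambda x: int(x in (1, 6))                   # hypothetical Boolean oracle; here s = 2

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
Hn = reduce(np.kron, [H] * n + [I])              # Hadamards on the first n qubits only

dim = 2 ** (n + 1)
Uf = np.zeros((dim, dim))                        # oracle U_f |x, b> = |x, b XOR f(x)>
for x in range(2 ** n):
    for b in (0, 1):
        Uf[2 * x + (b ^ f(x)), 2 * x + b] = 1.0

state = np.zeros(dim)
state[0] = 1.0                                   # |00...0, 0>
state = Hn @ Uf @ Hn @ state

s = sum(f(x) for x in range(2 ** n))
print(f"amplitudes on |0...0>|0>, |0...0>|1>: {state[0]:.4f}, {state[1]:.4f}")
print(f"expected (2^n - s)/2^n, s/2^n:       {(2**n - s) / 2**n}, {s / 2**n}")
```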
Conclusion ========== We have argued that immense physical resources are hidden from us by quantum noise, and that we will be unable to access those resources only for as long as we are trapped in the ‘quantum heat death’ – a state in which all systems are subject to the noise associated with the Born probability distribution $\rho=|\psi|^{2}$. It is clear that hidden-variables theories offer a radically different perspective on quantum information theory. In such theories, a huge amount of ‘subquantum information’ is hidden from us simply because we happen to live in a time and place where the hidden variables have a certain ‘equilibrium’ distribution. As we have mentioned, nonequilibrium instantaneous signals occur not only in pilot-wave theory but in *any* deterministic hidden-variables theory \[15, 16\]. And in pilot-wave theory at least, we have shown that the security of quantum cryptography depends on our being trapped in quantum equilibrium; and, that nonequilibrium would unleash computational resources far more powerful than those of quantum computers. Some might prefer to regard this work as showing how the principles of quantum information theory depend on a particular axiom of quantum theory – the Born rule $\rho=|\psi|^{2}$. (One might also consider the role of the axiom of linear evolution \[25, 26\].) But if one takes hidden-variables theories seriously as physical theories of Nature, one can hardly escape the conclusion that we just happen to be confined to a particular state in which our powers are limited by an all-pervading statistical noise. It then seems important to search for violations $\rho\neq|\psi|^{2}$ of the Born rule \[3–7\]. **Acknowledgement.** This work was supported by the Jesse Phillips Foundation. **REFERENCES** \[1\] A. Valentini, Phys. Lett. A **156**, 5 (1991). \[2\] A. Valentini, Phys. Lett. A **158**, 1 (1991). \[3\] A. Valentini, PhD thesis, International School for Advanced Studies, Trieste, Italy (1992). \[4\] A. Valentini, in *Bohmian Mechanics and Quantum Theory: an Appraisal*, eds. J. T. Cushing *et al.* (Kluwer, Dordrecht, 1996). \[5\] A. Valentini, in *Chance in Physics: Foundations and Perspectives*, eds. J. Bricmont *et al*. (Springer, Berlin, 2001) \[quant-ph/0104067\]. \[6\] A. Valentini, Int. J. Mod. Phys. A (forthcoming). \[7\] A. Valentini, *Pilot-Wave Theory of Physics and Cosmology* (Cambridge University Press, Cambridge, forthcoming). \[8\] L. de Broglie, in *Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique*, ed. J. Bordet (Gauthier-Villars, Paris, 1928). \[English translation: G. Bacciagaluppi and A. Valentini, *Electrons and Photons: The Proceedings of the Fifth Solvay Congress* (Cambridge University Press, Cambridge, forthcoming).\] \[9\] D. Bohm, Phys. Rev. **85**, 166; 180 (1952). \[10\] J. S. Bell, *Speakable and Unspeakable in Quantum Mechanics* (Cambridge University Press, Cambridge, 1987). \[11\] P. Holland, *The Quantum Theory of Motion: an Account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics* (Cambridge University Press, Cambridge, 1993). \[12\] D. Bohm and B. J. Hiley, *The Undivided Universe: an Ontological Interpretation of Quantum Theory* (Routledge, London, 1993). \[13\] J. T. Cushing, *Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony* (University of Chicago Press, Chicago, 1994). \[14\] *Bohmian Mechanics and Quantum Theory: an Appraisal*, eds. J. T. Cushing *et al.* (Kluwer, Dordrecht, 1996). \[15\] A. Valentini, Phys. Lett. 
A (in press) \[quant-ph/0106098\]. \[16\] A. Valentini, in *Modality, Probability, and Bell’s Theorems*, eds. T. Placek and J. Butterfield (Kluwer, Dordrecht, 2002) \[quant-ph/0112151\]. \[17\] D. Bohm and B. J. Hiley, Found. Phys. **14**, 255 (1984). \[18\] A. Valentini, Phys. Lett. A **228**, 215 (1997). \[19\] M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, 2000). \[20\] C. H. Bennett and G. Brassard, in *Proceedings of IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India* (IEEE, New York, 1984). \[21\] C. H. Bennett, Phys. Rev. Lett. **68**, 3121 (1992). \[22\] A. Ekert, Phys. Rev. Lett. **67**, 661 (1991). \[23\] N. Gisin *et al*., Rev. Mod. Phys. (forthcoming) \[quant-ph/0101098\]. \[24\] D. Deutsch, Proc. Roy. Soc. London A **400**, 975 (1985). \[25\] D. S. Abrams and S. Lloyd, Phys. Rev. Lett. **81**, 3992 (1998). \[26\] A. Valentini, Phys. Rev. A **42**, 639 (1990). [^1]: To appear in: *Proceedings of the Second Winter Institute on Foundations of Quantum Theory and Quantum Optics: Quantum Information Processing*, ed. R. Ghosh (Indian Academy of Science, Bangalore, 2002). [^2]: email: a.valentini@ic.ac.uk [^3]: Corresponding address. [^4]: Permanent address. [^5]: Other authors tend to consider pilot-wave theory in equilibrium alone. This is like considering classical mechanics only in thermal equilibrium. [^6]: The same reasoning applies if the parent distribution is time-dependent: if the sampling is done at time $t_{0}$, and statistical analysis favours a distribution $\rho(r,t_{0})$ at $t_{0}$, then the most likely distribution at later times may be calculated by integrating the continuity equation. [^7]: If the velocity field does not vary too rapidly in configuration space and the time interval $(0,t_{0})$ is not inordinately long, relaxation to equilibrium will not be significant. [^8]: Instantaneous signals would define (operationally) an absolute simultaneity; ‘backwards-in-time’ effects generated by Lorentz transformations would be fictitious, moving clocks being incorrectly synchronised if one assumes isotropy of the speed of light in all frames \[3, 7, 17\]. [^9]: This might be justified by assuming $a$ to be relatively large; or, one can just accept the above Hamiltonian as a simple illustrative model. [^10]: For standard Hamiltonians, $\dot{x}=j/\left| \psi\right| ^{2}$ usually reads $\dot{x}=(\hbar/m)\operatorname*{Im}(\nabla\psi/\psi)$. Here the velocity field is unusual because the Hamiltonian is. [^11]: For finite $w<\Delta$, where $\Delta$ is the width of $\left| g_{0}(y)\right| ^{2}$, we may make probabilistic statements about the value of $x$ that convey more information than quantum theory allows; while if $w>\Delta$, the measurements will be less accurate than those of quantum theory \[7\]. [^12]: In Bell’s pilot-wave theory of spin-1/2 \[10\], particle positions within the wavepacket determine the outcomes of Stern-Gerlach measurements. [^13]: We are assuming that $\phi_{n}(x),\;E_{n}$ are known functions of $x,\;n$ – obtained by solving the eigenvalue problem $\hat{H}\phi_{n}(x)=E_{n}\phi_{n}(x)$. The $N$ pairs of values $x(t_{i}),\;\dot{x}(t_{i})$ might be obtained by subquantum measurements of $x(t)$ at $2N$ times $t=t_{1},\;t_{1}+\epsilon,\;t_{2},\;t_{2}+\epsilon,\;....,\;t_{N},$ $t_{N}+\epsilon$, with $\epsilon$ very small. [^14]: The quantum equilibrium probability of obtaining $\left| 00\;.....\;0\right\rangle $ is at least 1/4 \[25\].
--- abstract: 'Quantum dynamical time-evolution of bosonic fields is shown to be equivalent to a stochastic trajectory in space-time, corresponding to samples of a statistical mechanical steady-state in a higher dimensional quasi-time. This is proved using the Q-function of quantum theory with time-symmetric diffusion, that is equivalent to a forward-backward stochastic process in both the directions of time. The resulting probability distribution has a positive, time-symmetric action principle and path integral, whose solution corresponds to a classical field equilibrating in an additional dimension. Comparisons are made to stochastic quantization and other higher dimensional physics proposals. Time-symmetric action principles for quantum fields are also related to electrodynamical absorber theory, which is known to be capable of violating a Bell inequality. We give numerical methods and examples of solutions to the resulting stochastic partial differential equations in a higher time-dimension, giving agreement with exact solutions for soluble boson field quantum dynamics. This approach may lead to useful computational techniques for quantum field theory, as the action principle is real, as well as to ontological models of physical reality.' author: - 'Peter D. Drummond' bibliography: - 'Qbridgefull.bib' - 'RMP\_Draft\_references.bib' title: Time evolution with symmetric stochastic action --- Introduction ============ The role that time plays in quantum mechanics is a deep puzzle in physics. Quantum measurement appears to preferentially choose a particular time direction via the projection postulate. This, combined with the Copenhagen interpretation that only macroscopic measurements are real, has led to many quantum paradoxes. Here, we derive a time-symmetric, stochastic quantum action principle to help resolve these issues, extending Dirac’s idea [@dirac1938pam] of future-time boundary conditions to the quantum domain. In this approach, quantum field dynamics is shown to be equivalent to a time-symmetric stochastic equilibration in the quasi-time of a higher dimensional space, with a genuine probability. There are useful computational consequences: an action principle with a real exponent has no phase problem. The theory uses the Q-function of quantum mechanics [@Husimi1940; @Hillery_Review_1984_DistributionFunctions; @drummond2014quantum], which is the expectation value of a coherent state projector. It is a well-defined and positive distribution for any bosonic quantum density matrix, and can be generalized to include fermions [@FermiQ]. The corresponding dynamical equation is of Fokker-Planck form, with a zero trace, non-positive-definite diffusion. This leads to an action principle for diffusion in positive and negative time-directions simultaneously, equivalent to a forward-backwards stochastic process. The result is time-reversible and ** non-dissipative, explaining how quantum evolution can be inherently random yet time-symmetric. Using stochastic bridge theory [@schrodinger1931uber; @hairer2007analysis; @drummond2017forward], the Q-function time-evolution is shown to correspond to the steady-state of a diffusion equation in an extra dimension. Thus, stochastic equilibration of a classical field in five dimensions gives quantum dynamics in four dimensional space-time. This shows that classical fields in higher dimensions can behave quantum-mechanically, including all the relevant real-time dynamics. 
No imaginary-time propagation is required, and the statistical description is completely probabilistic. A treatment of measurement theory is given elsewhere, showing that with this approach, a projection postulate is not essential, as only gain is needed to understand measurement [@reid2017interpreting; @drummond2019q]. For the fields used here to be equivalent to quantum fields, they must propagate stochastically in a negative as well as a positive time-direction. Time symmetric evolution was proposed by Tetrode in classical electrodynamics [@tetrode1922causal]. Dirac used the approach to obtain an elegant theory of classical radiation reaction [@dirac1938pam], which was extended by Feynman and Wheeler [@wheeler1945interaction]. Time-reversible methods are also studied in quantum physics [@aharonov1964time; @cramer1980generalized; @pegg1982time; @pegg2002quantum], the philosophy of science [@price2008toy], and used to explain Bell violations [@argaman2010bell]. Here, we use this general approach to analyze *interacting* fields, thus giving time-symmetric quantum physics a strong theoretical foundation. By comparison, the Fenyes-Nelson approach to stochastic quantum evolution [@fenyes1952wahrscheinlichkeitstheoretische; @nelson1966derivation], does not have a constructive interpretation [@grabert1979quantum]. The approach of stochastic quantization [@parisi1981perturbation] uses imaginary time. Such methods have the drawback that analytic continuation to real time dynamics can be intractable [@silver1990maximum; @feldbrugge2017lorentzian]. The mathematical technique used here combines the Wiener-Stratonovic stochastic path integral [@wiener1930generalized; @stratonovich1971probability], with Schrödinger’s [@schrodinger1931uber] idea of a stochastic bridge in statistical mechanics, as generalized by later workers. The resulting classical equilibration is exactly equivalent to quantum dynamics. All quantum effects are retained in this approach, including Bell violations [@drummond2019q]. This is not unexpected, because quantum absorber theory, with similar time-reversed propagation, also has Bell violations[@pegg1982time]. The focus of this paper is to understand quantum dynamics and measurement using stochastic methods. This is important both for fundamental applications to quantum measurement theory [@drummond2019q]. In addition, stochastic methods scale well in computation involving large systems. This may therefore help to compute exponentially complex many-body dynamics. The Kaluza-Klein theory of electromagnetism [@kaluza1921unitatsproblem; @klein1926quantentheorie; @overduin1997kaluza], string theory [@horowitz2005spacetime; @mohaupt2003introduction] as well as the Randall-Sundrum [@randall1999alternative] and Gogberashvili [@Gogberashvili2000] approach to the hierarchy problem all use extra space-time dimensions. In the present theory the extra dimension is time-like and non-compact. Although it is not necessary to take this literally, one could ask at which coordinate in the fifth dimension is our universe? This is answerable in anthropometric terms. Just as in ‘flatland’ [@abbott2006flatland], the location of observers defines the extra coordinate. It is not impossible to generalize this approach to Riemannian metrics. The Q-function is probabilistic and defined in real time. Yet it does $not$ have a traditional stochastic interpretation, since unitary evolution can generate diffusion terms that are not positive-definite [@zambrini2003non]. 
An earlier method of treating this was to double the phase-space dimension to give a positive diffusion [@Drummond_Gardiner_PositivePRep]. This is usually applied to normal ordering [@Glauber_1963_P-Rep], but the corresponding distribution is non-unique, and is most useful for damped systems [@Drummond:1999_QDBEC] or short times [@Carter:1987; @Deuar:2007_BECCollisions; @Corney_2008]. With undamped systems, doubling phase-space gives sampling errors that increase with time [@Deuar2006a; @Deuar2006b]. Rather than using this earlier approach, here a positive diffusion is obtained through equilibration in an extra space-time dimension. Quantum dynamical problems arise in many fields, from many-body theory to cosmology. The utility of the path integral derived here is that it is real, not imaginary [@feynman2010quantum]. Other methods exist for quantum dynamics. These include mean field theory, perturbation theory, variational approaches [@cosme2016center], standard phase-space methods [@Hillery_Review_1984_DistributionFunctions] and the density matrix renormalization group [@White:1992]. Each has its own drawbacks, however. The time-symmetric techniques given here use a different approach, as well as providing a model for a quantum ontology. To demonstrate these results, we introduce a general number conserving quartic bosonic quantum field Hamiltonian. The corresponding Q-function dynamics satisfies a Fokker-Planck equation with zero trace diffusion. This leads directly to a time-symmetric action principle. The corresponding probabilistic path integral has a solution obtained through diffusion in a higher dimension. Elementary examples and numerical solutions are obtained. We compare results with exactly soluble cases. The content of this paper is as follows. Section \[sec:Q-functions\] summarizes properties of Q-functions, and proves that they have a traceless diffusion for number conserving bosonic quantum field theories. Section \[sec:Bidirectional-stochastic-bridges\] derives the action principle. Section \[sec:Extra-dimensions\] treats extra dimensions, and shows how the classical limit is regained. Section \[sec:Quadratic-Hamiltonian-Examples\] gives examples and numerical results. Finally, section \[sec:Summary\] summarizes the paper. Q-functions\[sec:Q-functions\] ============================== Phase-space representations in quantum mechanics allow efficient treatment of large systems via probabilistic sampling [@drummond2016quantum]. These methods are very general. They are related to coherent states [@Glauber_1963_P-Rep] and Lie group theory [@Perelomov_1972_Coherent_states_LieG], which introduces a continuous set of parameters in quantum mechanics. Results for bosonic fields are summarized in this section. The Q-function method [@Husimi1940] can also be used for spins [@Arecchi_SUN; @Drummond1984PhysLetts] and fermions [@FermiQ] as well as for bosons, with modifications. These cases are not treated in detail here, for length reasons. General definition of a Q-function ---------------------------------- A general abstract definition of a Q-function [@FermiQ] is: $$Q(\bm{\lambda},t)=Tr\left\{ \hat{\Lambda}\left(\bm{\lambda}\right)\hat{\rho}\left(t\right)\right\} \,,$$ where $\hat{\rho}\left(t\right)$ is the quantum density matrix, $\hat{\Lambda}\left(\bm{\lambda}\right)$ is a positive-definite operator basis, and $\bm{\lambda}$ is a point in the phase-space. 
This must give an expansion of the Hilbert space identity operator $\hat{I}$, so that, given an integration measure $d\bm{\lambda}$, $\hat{I}=\int\hat{\Lambda}\left(\bm{\lambda}\right)d\bm{\lambda}\,.$ The basis is not orthogonal, and it is generally essential to employ non-orthogonal bases and Lie groups in order to obtain differential and integral identities. Provided $Tr\left[\hat{\rho}\left(t\right)\right]=1$, the Q-function is positive and normalized to unity: $$\int d\bm{\lambda}Q\left(\bm{\lambda}\right)=1\,.\label{Q-normalization}$$ It therefore satisfies the requirements of probability. Quantum expectations $\left\langle \hat{O}\right\rangle _{Q}$ of ordered observables $\hat{O}$ are identical to classical probabilistic averages $\left\langle O\right\rangle _{C}$ - including corrections for operator re-ordering if necessary - so that: $$\left\langle \hat{O}\right\rangle _{Q}=\left\langle O\right\rangle _{C}\equiv\int d\bm{\lambda}Q\left(\bm{\lambda}\right)O(\bm{\lambda})\,.$$ Here, $\left\langle \right\rangle _{Q}$ indicates a quantum expectation value, $\left\langle \right\rangle _{C}$ is a classical phase-space probabilistic average, and time-arguments are implicit. The basis function $\hat{\Lambda}$ does not project the eigenstates of a Hermitian operator, and therefore the quantum dynamical equations differ from those for orthogonal eigenstates. The examples treated here use the Q-function for a complex $N-$component bosonic field $\hat{\bm{\psi}}\left(r\right)$. This is defined with an $n_{d}$-dimensional space-time coordinate $r$, where $r=\left(r^{1},\ldots r^{n_{d}}\right)=\left(\bm{r},t\right)$. Quantum fields are expanded using $M$ annihilation and creation operators $\hat{a}_{i},\hat{a}_{i}^{\dagger}$ for $M/N$ spatial modes. These describe excitations localized on a spatial lattice, or single-particle eigenmodes. The indices $i$ include the $N$ internal degrees of freedom like spin quantum numbers and/or different particle species. For bosonic fields, $\hat{\Lambda}$ is proportional to a coherent state projector [@Glauber_1963_P-Rep], $$\hat{\Lambda}\left(\bm{\alpha}\right)\equiv\left|\bm{\alpha}\right\rangle _{c}\left\langle \bm{\alpha}\right|_{c}/\pi^{M}.$$ The state $|\bm{\alpha}\rangle_{c}$ is a normalized Bargmann-Glauber [@Bargmann:1961; @Glauber_1963_P-Rep] coherent state with $\hat{a}_{i}|\bm{\alpha}\rangle_{c}=\alpha_{i}|\bm{\alpha}\rangle_{c}$ and $\hat{\bm{\psi}}\left(\bm{x}\right)|\bm{\alpha}\rangle_{c}=\bm{\psi}\left(\bm{x}\right)|\bm{\alpha}\rangle_{c}$, where $\bm{\alpha}$ is an $NM$-dimensional complex vector of coherent field mode amplitudes and $\bm{\psi}\left(x\right)$ is the corresponding coherent field. The Q-function for mode amplitudes, $Q^{\alpha}\left(\bm{\alpha}\right)$, is the expectation value of $\hat{\Lambda}\left(\bm{\alpha}\right)$. On Fourier transforming to position space, $Q\left[\bm{\psi}\right]$ in field space is a functional of the complex field amplitudes $\bm{\psi}\left(x\right)$. Results can be calculated in either field or mode notation. Either approach is equivalent in terms of the resulting dynamics. From now on we focus on the mode expansion method, although there are equivalent formulations using functional integrals [@Steel1998; @opanchuk_2013]. Observables ----------- The transition probability or expectation of any observable $\hat{\sigma}$ is obtained by expanding $\hat{\rho}$ in a generalized P-representation, $P\left(\alpha,\beta\right)$. 
This always exists [@Drummond1980], so that for any quantum density matrix $\hat{\rho}$, $$\hat{\rho}=\int P\left(\bm{\alpha},\bm{\beta}\right)\hat{\Lambda}_{p}\left(\bm{\alpha},\bm{\beta}\right)d\bm{\alpha}d\bm{\beta}\,.\label{eq:Positive-P-expansion}$$ Here $\hat{\Lambda}_{p}\left(\bm{\alpha},\bm{\beta}\right)$ is an off-diagonal coherent projector, $$\hat{\Lambda}_{p}\left(\bm{\alpha},\bm{\beta}\right)=\frac{\left|\bm{\alpha}\right\rangle _{c}\left\langle \bm{\beta}\right|_{c}}{\left\langle \bm{\beta}\right.\left|\bm{\alpha}\right\rangle _{c}}\,,$$ and $d\bm{\alpha}$, $d\bm{\beta}$ are each $M$ dimensional complex integration measures, so that if $\bm{\alpha}=\bm{\alpha}_{x}+i\bm{\alpha}_{y}$, then $d\bm{\alpha}=d^{M}\bm{\alpha}_{x}d^{M}\bm{\alpha}_{y}$. The existence proof [@Drummond1980] shows that there is a canonical probability distribution $P\left(\bm{\alpha},\bm{\beta}\right)$ given by: $$P\left(\bm{\alpha},\bm{\beta}\right)=\left(\frac{1}{4\pi}\right)^{M}\exp\left[-\frac{\left|\bm{\alpha}-\bm{\beta}\right|^{2}}{4}\right]Q\left(\frac{\bm{\alpha}+\bm{\beta}}{2}\right)\,.\label{eq:canonical-expansion}$$ We now show that this leads to a general operator correspondence function for $\hat{\sigma}$ in the form of: $$\left\langle \hat{\sigma}\right\rangle _{Q}\equiv\text{\ensuremath{\int}d\ensuremath{\bm{\alpha}Q^{\alpha}\left(\bm{\alpha}\right)O_{\sigma}\left(\bm{\alpha}\right)}}=\left\langle O_{\sigma}\right\rangle _{C}.$$ To prove this, we use the expansion of $\hat{\rho}$ in Eq (\[eq:Positive-P-expansion\]), which gives that: $$\left\langle \hat{\sigma}\right\rangle _{Q}\equiv\int P\left(\bm{\beta},\bm{\gamma}\right)Tr\left[\hat{\sigma}\hat{\Lambda}_{p}\left(\bm{\beta},\bm{\gamma}\right)\right]d\bm{\beta}d\bm{\gamma}\,.$$ Expanding this using the canonical expansion, Eq (\[eq:canonical-expansion\]), the c-number function corresponding to $\hat{\sigma}$ is therefore $O_{\sigma}\left(\bm{\alpha}\right)$, where on defining $\bm{\alpha}=\left(\bm{\beta}+\bm{\gamma}\right)/2$, $\bm{\Delta}=\left(\bm{\beta}-\bm{\gamma}\right)/2$: $$O_{\sigma}\left(\bm{\alpha}\right)=\frac{1}{\pi^{M}}\int e^{-\left|\Delta\right|^{2}}Tr\left[\hat{\sigma}\hat{\Lambda}_{p}\left(\bm{\alpha}+\bm{\Delta},\bm{\alpha}-\bm{\Delta}\right)\right]d\bm{\Delta}\,.$$ As a simple example, particle numbers in the bosonic case are given by introducing the equivalent c-number function $n\left(\alpha\right)\equiv\left|\alpha\right|^{2}-1$, so that the quantum and classical averages agree: $$\left\langle \hat{n}\right\rangle _{Q}=\left\langle n\left(\alpha\right)\right\rangle _{C}=\left\langle \left|\alpha\right|^{2}-1\right\rangle _{C}.\label{eq:number_variable}$$ This is a special case of the more general identity given above. As another example, when expanded in mode operators, an $n-th$ order anti-normally ordered moment is: $$\begin{aligned} \left\langle \hat{a}_{i1}\ldots\hat{a}_{i_{n}}^{\dagger}\right\rangle _{Q} & =\left\langle \alpha_{i_{1}}\ldots\alpha_{i_{n}}^{*}\right\rangle _{C}\nonumber \\\end{aligned}$$ The operator moments can be of any order. Similar techniques are available for fermions [@FermiQ] and spins [@Arecchi_SUN; @Drummond1984PhysLetts], so this approach is not restricted to bosonic fields. As first emphasized in Dirac’s early review paper [@Dirac_RevModPhys_1945], one can calculate any observable average from a classical looking distribution, provided the observable is re-expressed in terms of a suitable operator ordering. 
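As a concrete check of the number identity above (our own illustration, not part of the original analysis), the following Python sketch samples the Q-function of a single-mode coherent state $\left|\alpha_{0}\right\rangle _{c}$, which is the Gaussian $Q^{\alpha}\left(\alpha\right)=\pi^{-1}\exp\left(-\left|\alpha-\alpha_{0}\right|^{2}\right)$ derived in the next subsection, and verifies numerically that the classical average $\left\langle \left|\alpha\right|^{2}-1\right\rangle _{C}$ reproduces the quantum mean particle number $\left|\alpha_{0}\right|^{2}$. The value of $\alpha_{0}$ and the sample size are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of <n>_Q = <|alpha|^2 - 1>_C for a coherent state |alpha_0>,
# whose Q-function is the Gaussian exp(-|alpha - alpha_0|^2)/pi.
rng = np.random.default_rng(0)
alpha_0 = 1.3 - 0.7j
samples = 10**6

# each quadrature of alpha is Gaussian with variance 1/2 about alpha_0
alpha = alpha_0 + (rng.normal(scale=np.sqrt(0.5), size=samples)
                   + 1j * rng.normal(scale=np.sqrt(0.5), size=samples))

n_classical = np.mean(np.abs(alpha) ** 2) - 1.0   # <|alpha|^2 - 1>_C
print(n_classical, abs(alpha_0) ** 2)             # both are close to |alpha_0|^2
```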
In this case, it is the anti-normal ordering of ladder operators that is utilized, and the resulting distribution is always positive. Exact results and identities ---------------------------- Exact analytic solutions for the Q-function are known for a number of special cases, including all gaussian states. For a noninteracting multi-mode vacuum state, $\left|\Psi\right\rangle =\left|\bm{0}\right\rangle $, and more generally for a coherent state $\left|\Psi\right\rangle =\left|\bm{\alpha}_{0}\right\rangle _{c}$, where $\bm{\alpha}_{0}\equiv\bm{0}$ in the vacuum state, one obtains $$Q^{\alpha}\left(\bm{\alpha}\right)=\frac{1}{\pi^{M}}\exp\left(-\left|\bm{\alpha}-\bm{\alpha}_{0}\right|^{2}\right)\,.$$ This has a well known interpretation [@arthurs1965bstj; @Leonhardt:1993]. If one makes a simultaneous measurement of two orthogonal quadratures, which is possible using a beam-splitter, then $Q\left(\bm{\alpha}\right)$ is the probability of a simultaneous measurement of quadratures $\alpha_{x}$ and $\alpha_{y}$, where $\alpha=\alpha_{x}+i\alpha_{y}$. This is also the result of an amplified measurement [@leonhardt1993simultaneous]. Similarly, any number state $\left|\Psi\right\rangle =\left|\bm{n}\right\rangle $ has a simple representation as: $$\begin{aligned} Q^{\alpha}\left(\bm{\alpha}\right) & =\left|\left\langle \bm{n}\right|\left.\bm{\alpha}\right\rangle _{c}\right|^{2}\nonumber \\ & =\exp\left(-\left|\bm{\alpha}\right|^{2}\right)\prod_{m}\frac{\left|\alpha_{m}\right|^{2n_{m}}}{n_{m}!}.\end{aligned}$$ A free-particle thermal state with mean particle number $\bm{n}^{th}$ is given by: $$Q^{\alpha}\left(\bm{\alpha}\right)=\prod_{m}\frac{1}{\pi\left(1+n_{m}^{th}\right)}\exp\left(-\left|\alpha_{m}\right|^{2}/\left(1+n_{m}^{th}\right)\right).$$ There are several mathematical properties that make this expansion a very interesting approach to quantum dynamics. We first introduce a shorthand notation for differential operators, $\partial_{n}\equiv\partial/\partial\alpha_{n}$. There are the following operator correspondences [@Glauber_1963_P-Rep; @Cahill_Glauber_OrderedExpansion_1969; @Cahill_Galuber_1969_Density_operators; @drummond2014quantum] written in terms of mode creation and annihilation operators: $$\begin{aligned} \hat{a}_{n}^{\dagger}\hat{\Lambda} & = & \left(\partial_{n}+\alpha_{n}^{*}\right)\hat{\Lambda}\nonumber \\ \hat{a}_{n}\hat{\Lambda} & = & \alpha_{n}\hat{\Lambda}\nonumber \\ \hat{\Lambda}\hat{a}_{n} & = & \left(\partial_{n}^{*}+\alpha_{n}\right)\hat{\Lambda}\nonumber \\ \hat{\Lambda}\hat{a}_{n}^{\dagger} & = & \alpha_{n}^{*}\hat{\Lambda}.\label{eq:identities}\end{aligned}$$ Q-function evolution equations are obtained by using the operator identities to change Hilbert space operators acting on $\hat{\rho}$ to differential operators acting $\hat{\Lambda}$, and hence on $Q^{\alpha}$. There is a product rule for operator identities. The full set of $2M$ mode operators are written with a superscript as $\hat{a}^{\mu}$ for $\mu=1,\ldots2M$, where $\hat{a}^{j}=\hat{a}_{j}$, $\hat{a}^{j+M}=\hat{a}_{j}^{\dagger}$ , and $\mathcal{L}^{\mu}$ denotes the corresponding differential term. 
The general identities can be written: $$\begin{aligned} \hat{a}^{\mu}\hat{\Lambda} & =\mathcal{L}^{\mu}\hat{\Lambda}\nonumber \\ \hat{\Lambda}\hat{a}^{\mu} & =\bar{\mathcal{L}}^{\mu}\hat{\Lambda}.\end{aligned}$$ To obtain operator product identities, one uses the fact that the mode operators commute with the c-number terms, so that the operator closest to the kernel $\hat{\Lambda}$ always generates a differential term that is furthest from $\hat{\Lambda}$: $$\begin{aligned} \hat{a}^{\mu}\hat{a}^{\nu}\hat{\Lambda} & =\mathcal{L}^{\nu}\left[\hat{a}^{\mu}\hat{\Lambda}\right]=\mathcal{L}^{\nu}\mathcal{L}^{\mu}\hat{\Lambda}\nonumber \\ \hat{\Lambda}\hat{a}^{\mu}\hat{a}^{\nu} & =\bar{\mathcal{L}}^{\mu}\left[\hat{\Lambda}\hat{a}^{\nu}\right]=\bar{\mathcal{L}}^{\mu}\bar{\mathcal{L}}^{\nu}\hat{\Lambda}.\label{eq:product-identities}\end{aligned}$$ Quantum field dynamics ---------------------- To understand time-evolution, we consider an arbitrary time-dependent multi-mode Hamiltonian with quartic, cubic and quadratic terms and a generalized number conservation law, typical of many common quantum field theories. This generic quartic Hamiltonian is expressed by expanding all fields with mode operators. Here we choose to use an antinormally ordered form, for simplicity in applying the operator identities, which therefore gives us that: $$\hat{H}\left(t\right)=\sum_{ijkl=0}^{M}\hat{H}_{ijkl}\left(t\right)\equiv\frac{\hbar}{2}\sum_{ijkl=0}^{M}g_{ijkl}\left(t\right)\hat{a}_{i}\hat{a}_{j}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{\dagger}.\label{eq:General hermitian}$$ For notational convenience, to combine all the terms in one sum, we include summations over $i=0,\ldots M$, and define $\hat{a}_{0}=1$. While formally quartic, this includes linear, quadratic and cubic terms as well, through the terms that include $\hat{a}_{0}$. This is the most general quartic number-preserving Hamiltonian, which has no more than quadratic terms in either creation or annihilation operators. The time argument of $g_{ijkl}\left(t\right)$ is to be understood even when it is not written explicitly. Without loss of generality, we assume a permutation symmetry with $$g_{ijkl}=g_{jikl}=g_{ijlk}.$$ As $\hat{H}$ must be hermitian, $$g_{ijkl}=g_{lkji}^{*}.\label{eq:hermiticity}$$ We assume a momentum cutoff, so that any renormalization required is carried out through the use of cut-off dependent coupling constants. Cubic terms of form $\hat{a}_{0}\hat{a}_{j}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{\dagger}=\hat{a}_{j}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{\dagger}$ are included, and these describe parametric couplings that have a generalized type of number conservation. 
From the Schrödinger equation, $$i\hbar\frac{d\hat{\rho}}{dt}=\left[\hat{H},\hat{\rho}\right],$$ so the dynamical evolution of the Q-function for unitary evolution is given by: $$\begin{aligned} \frac{dQ^{\alpha}}{dt} & =\frac{i}{\hbar}Tr\left\{ \left[\hat{H},\hat{\Lambda}\left(\bm{\alpha}\right)\right]\hat{\rho}\right\} .\end{aligned}$$ After implementing the mappings given above, one obtains: $$\frac{dQ^{\alpha}}{dt}=\mathcal{L}^{\alpha}Q^{\alpha}.$$ On defining $\mathcal{L}_{H}\hat{\Lambda}\left(\bm{\alpha}\right)=\hat{H}\hat{\Lambda}\left(\bm{\alpha}\right)$ as the mappings of operators on the left, and $\bar{\mathcal{L}}_{H}\hat{\Lambda}\left(\bm{\alpha}\right)=\hat{\Lambda}\left(\bm{\alpha}\right)\hat{H}$ for operators on the right, the identities of Eq (\[eq:identities\]) and (\[eq:product-identities\]) give: $$\begin{aligned} \mathcal{L}_{H} & =\frac{\hbar}{2}\sum_{ijkl=0}^{M}g_{ijkl}\left(\partial_{k}+\alpha_{k}^{*}\right)\left(\partial_{l}+\alpha_{l}^{*}\right)\alpha_{j}\alpha_{i}\nonumber \\ \bar{\mathcal{L}}_{H} & =\frac{\hbar}{2}\sum_{ijkl=0}^{M}g_{ijkl}\left(\partial_{i}^{*}+\alpha_{i}\right)\left(\partial_{j}^{*}+\alpha_{j}\right)\alpha_{k}^{*}\alpha_{l}^{*}.\label{eq:Full-identities}\end{aligned}$$ Here $\alpha_{0}\equiv1$, $\partial_{0}\equiv0$, so that all cases are included. This gives the general differential equation, $$\begin{aligned} \frac{dQ^{\alpha}}{dt} & =\frac{i}{\hbar}\left[\mathcal{L}_{H}-\bar{\mathcal{L}}_{H}\right]Q^{\alpha}=\mathcal{L}^{\alpha}Q^{\alpha}.\end{aligned}$$ Similar results hold for the functional approach, but the mode expansion approach is used here for its greater simplicity. The main thrust of the present paper is to treat unitary systems, as described by the equations above. One can include decoherence and reservoirs by adding them to the Hamiltonian. Although these can also be treated with a master equation, any reservoirs can simply be included in the dynamical equations. Next, define an extended vector $\alpha^{\mu}$, and corresponding derivatives $\partial_{\mu}$ for $\mu=1,\ldots2M$, where $\alpha^{j}=\alpha_{j}$, $\alpha^{j+M}=\alpha_{j}^{*}$, $\partial_{j+M}=\partial_{j}^{*}$, which includes amplitudes and conjugates. Using an implicit Einstein summation convention over $\mu,\nu=1,\ldots2M$, and noting that constant terms cancel: $$\frac{dQ^{\alpha}}{dt}=\left[-\partial_{\mu}^{\alpha}A_{\alpha}^{\mu}\left(\bm{\alpha}\right)+\frac{1}{2}\partial_{\mu}^{\alpha}\partial_{\nu}^{\alpha}D_{\alpha}^{\mu\nu}\left(\bm{\alpha}\right)\right]Q^{\alpha}.$$ Using Eq (\[eq:hermiticity\]) and (\[eq:identities\]), the diffusion term for $1\le k,l\le M$ is: $$\begin{aligned} D_{\alpha}^{kl}\left(\bm{\alpha},t\right) & =i\sum_{i,j=0}^{M}g_{ijkl}\left(t\right)\alpha_{i}\alpha_{j}\,\,.\label{eq:GeneralDiffusionTerm}\end{aligned}$$ Letting $\mu',\nu'\equiv\mu-M,\nu-M$, one sees that $D_{\alpha}^{\mu\nu}=D_{\alpha}^{\mu'\nu'*}$, and for unitary evolution there are no cross-terms $D_{\alpha}^{\mu\nu'}$. Similarly, using permutation symmetry, the drift term is: $$\begin{aligned} A_{\alpha}^{k}\left(\bm{\alpha},t\right) & =-i\sum_{i,j,l=0}^{M}g_{ijkl}\left(t\right)\alpha_{i}\alpha_{j}\alpha_{l}^{*}\,\,\,[1\le k\le M],\label{eq:GeneralDriftTerm}\end{aligned}$$ and the conjugate drift for $\mu>M$ is $A_{\alpha}^{\mu}=A_{\alpha}^{\mu'*}.$ Generally, the second-order coefficient $D_{\alpha}^{\mu\nu}\left(\bm{\alpha}\right)$ depends on the phase-space location $\bm{\alpha}$. 
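To make the mode-indexed formulas above concrete, the following Python sketch evaluates Eq (\[eq:GeneralDriftTerm\]) and Eq (\[eq:GeneralDiffusionTerm\]) for a single-mode example. The choice of couplings (a detuning $\omega$ and a Kerr-type nonlinearity $\chi$, entered through $g_{ijkl}$ with the index-zero convention $\hat{a}_{0}=1$) is an illustrative assumption, not taken from the text.

```python
import numpy as np

# Evaluate the drift A^k of Eq. (GeneralDriftTerm) and the diffusion D^kl of
# Eq. (GeneralDiffusionTerm) for a single-mode example (M = 1, indices 0 and 1,
# with the convention alpha_0 = 1).  The couplings chosen below (a detuning
# omega and a Kerr-type nonlinearity chi) are illustrative assumptions.

M = 1
omega, chi = 2.0, 0.3
g = np.zeros((M + 1,) * 4, dtype=complex)

# hbar*omega* a a^dagger: four permutation-symmetric entries of omega/2
for i, j, k, l in [(0, 1, 1, 0), (1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 0, 1)]:
    g[i, j, k, l] = omega / 2
g[1, 1, 1, 1] = chi        # (hbar*chi/2) a a a^dagger a^dagger, Kerr-type term

# check g_ijkl = g_jikl = g_ijlk and hermiticity g_ijkl = g_lkji^*
assert np.allclose(g, g.transpose(1, 0, 2, 3))
assert np.allclose(g, g.transpose(0, 1, 3, 2))
assert np.allclose(g, np.conj(g.transpose(3, 2, 1, 0)))

alpha = np.array([1.0, 0.8 + 0.5j])    # [alpha_0, alpha_1] with alpha_0 = 1
a = alpha[1]

# A^1 = -i sum_{ijl} g_{ij1l} alpha_i alpha_j alpha_l^*
A1 = -1j * np.einsum('ijl,i,j,l->', g[:, :, 1, :], alpha, alpha, alpha.conj())
# D^{11} = i sum_{ij} g_{ij11} alpha_i alpha_j
D11 = 1j * np.einsum('ij,i,j->', g[:, :, 1, 1], alpha, alpha)

print(np.allclose(A1, -1j * (omega * a + chi * abs(a) ** 2 * a)))   # True
print(np.allclose(D11, 1j * chi * a ** 2))   # True: D^{11} depends on alpha
```

The final check makes the point of the preceding sentence explicit: for this interacting example the diagonal diffusion $D_{\alpha}^{11}=i\chi\alpha^{2}$ varies over phase-space.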
In cases of purely quadratic Hamiltonians, the diffusion is either zero or constant in phase-space. Traceless diffusion and time-reversibility ------------------------------------------ For unitary quantum evolution, the diffusion matrix is divided into two parts, one positive definite and one negative definite, corresponding to diffusion in the forward and backward time directions respectively. To prove this, we first show that the corresponding Q-function time-evolution follows a Fokker-Planck equation with a traceless diffusion matrix. That is, the equation has an equal weight of positive and negative diagonal diffusion terms. This was previously demonstrated in studies of a different type of Hamiltonian, for thermalization in Q-function dynamical equations for spin systems [@altland2012quantum]. The result is also true for Bose and Fermi quantum fields, and is generic to second-order unitary Q-function evolution equations. In this paper it is proved for Bose fields only. The proof in the Fermi case will be given elsewhere. To map Hilbert space time-evolution to phase-space time evolution, and to prove that the resulting diffusion term is traceless, the operator identities are utilized. Using the results of Eq (\[eq:GeneralDiffusionTerm\]), terms with non-zero $i$ and $k$ indices generate second-order derivative terms which combine to give the diffusion matrix. The diagonal second order terms of interest are obtained when two derivatives both act on the same mode. If all the Hamiltonian terms have $k=l=0$ the diffusion is constant, but otherwise it depends on the phase-space position $\bm{\alpha}$. The $k$-th diagonal diffusion term in complex variables $\partial_{k}$ comes from identities involving $\hat{H}_{ijkl}\hat{\Lambda}$ with $0<k=l\le M$, which therefore is: $$\begin{aligned} D_{\alpha}^{kk} & =i\sum_{i,j=0}^{M}g_{ijkk}\alpha_{i}\alpha_{j}\nonumber \\ & =e^{-2i\eta(\alpha)}\left|D_{\alpha}^{kk}\right|.\end{aligned}$$ The phase term $\eta(\alpha)$ depends on the coupling and amplitudes. This diagonal term is accompanied by the hermitian conjugate term derived from the reverse ordering, of form $\hat{\Lambda}\hat{H}_{ijkl}$, so that $\mathcal{L}^{\alpha}$ is real overall. These conjugate terms have derivatives $\partial_{j}^{*}$, which give, on defining $k'=k+M$ for $k\le M$, $$D_{\alpha}^{k'k'}=e^{2i\eta(\alpha)}\left|D_{\alpha}^{kk}\right|.$$ This allows the introduction of real quadrature variables $X_{j}$, defined such that for $\mu=j\le M$: $$\alpha_{j}e^{i\eta(\alpha)}=X_{j}+iY_{j}.\label{eq:real-quadratures}$$ Hence, the derivative terms become: $$\frac{\partial}{\partial\alpha_{j}}=\frac{e^{i\eta(\alpha)}}{2}\left[\frac{\partial}{\partial X_{j}}-i\frac{\partial}{\partial Y_{j}}\right].$$ Defining $X_{j+M}=Y_{j}$ gives an extended $2M$ dimensional real vector, which is written with a superscript as $X^{\mu}$. After making this transformation, the diagonal diffusion term in real variables is: $$D^{kk}=-D^{k'k'}=\frac{1}{2}\left|D_{\alpha}^{kk}\right|.$$ Here $k'=k+M$, and as a result, on summing the diagonal terms, the diffusion matrix with these variables is traceless, i.e., $Tr\left[\bm{D}\right]=0.$ This is different to classical Fokker-Planck theory, where diffusion matrices are positive-definite [@Gardiner1997; @Risken1996]. The Q-function diffusion matrix is not positive-definite, yet the distribution remains positive, from its construction as a positive-definite observable. 
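As a minimal worked instance of this construction (our illustration, for the single-mode Kerr-type coupling $g_{1111}=\chi>0$ used in the sketch above), the diagonal diffusion is $$D_{\alpha}^{11}=i\chi\alpha^{2}=e^{-2i\eta\left(\alpha\right)}\chi\left|\alpha\right|^{2},\qquad\eta\left(\alpha\right)=-\arg\alpha-\pi/4,$$ and in the rotated quadratures $X_{1}+iY_{1}=\alpha e^{i\eta\left(\alpha\right)}$ the real-variable diffusion becomes $$D^{11}=-D^{1'1'}=\frac{1}{2}\chi\left|\alpha\right|^{2},\qquad D^{11}+D^{1'1'}=0,$$ so the positive and negative diagonal entries cancel exactly, as claimed.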
Given this analysis, the traceless property applies to a general class of quadratic, cubic and quartic Hamiltonians. There can also be variables with zero diffusion, which are deterministic and are also traceless. Traceless diffusion is preserved under both rescaling and orthogonal rotations, $\bm{\phi}=\bm{O}\bm{X}$, of the real quadrature coordinates. Since the diffusion matrix of a real Fokker-Planck equation is real and symmetric, it can always be transformed into a diagonal form $D^{\mu}\left(\bm{\phi}\right)$ in the new variables $\bm{\phi}$, using orthogonal rotations. As a result, the transformed phase-space variables can be classified into two groups, having either positive or negative diffusion, with the equation: $$\frac{dQ\left(\bm{\phi}\right)}{dt}=\partial_{\mu}\left[-A^{\mu}\left(\bm{\phi}\right)+\frac{1}{2}\partial_{\mu}D^{\mu}\left(\bm{\phi}\right)\right]Q\left(\bm{\phi}\right).\label{eq:quadrature-Q-fn}$$ An orthogonal rotation is chosen so that it results in a traceless diagonal diffusion with $D^{\mu}\ge0$ for $\mu\le M$ and $D^{\mu}\le0$ for $\mu>M$. The $2M$-dimensional phase-space coordinate is written with a superscript as $\phi^{\mu}$ for $\mu=1,\ldots2M$, and repeated greek indices indicate implicit summation over $1,\ldots2M$. This generates a characteristic structure which is universal for unitary evolution with all Hamiltonians of this form. The phase-space vector $\bm{\phi}$ is subdivided into two complementary pairs so that $\bm{\phi}=\left(\bm{x},\bm{y}\right)$, where the $\bm{x}$ variables have a positive semi-definite diffusion, and the $\bm{y}$ variables have a negative semi-definite diffusion. This universal structure is **not** the positive-definite diffusion matrix found in classical diffusion processes. One can obtain a positive-definite diffusion in phase-space via an alternative approach of doubling the phase-space dimension, which is especially useful for open systems in the positive P-function representation [@Chaturvedi:1977; @Drummond_Gardiner_PositivePRep]. This is also applicable to unitary evolution [@Carter:1987; @Deuar:2007_BECCollisions], but gives sampling errors that increase with time [@Gilchrist1997-positive; @Carusotto:2001; @Deuar2002]. Such methods have successfully treated soliton quantum squeezing in photonics [@Corney_2008; @Corney:2006_ManyBodyQD], and quantum soliton dissociation [@ng2019nonlocal] in BEC dynamics [@yurovsky2017a]. Yet on long time-scales sampling errors increase, because the distribution becomes less compact. The method used here is to obtain an algorithm for the Q-function, rather than the P-function. Since the Q-function is unique, sampling-error growth in time is minimized. However, a different approach to simulation is necessary, via positive-definite diffusion in a higher space-time dimension, *without* changing the phase-space itself, as explained below. Density-density coupling ------------------------ If the Hamiltonian has only quadratic terms, the diffusion terms are either zero or constant in phase-space, as pointed out above. We now treat an alternative approach that leads to constant diffusion for the most common form of *nonlinear* coupling, namely density-density coupling. The result is a different definition of the phase-space variable $\bm{\phi}$, which has constant diffusion independent of $\bm{\phi}$, as well as being traceless and diagonal. This type of physics is found in the Bose-Hubbard model, and many other bosonic quantum field theories [@Drummond_1987_JOptSocAmB; @zee2010quantum]. 
On a lattice, consider a quartic Hamiltonian of form: $$\hat{H}=\hbar\sum_{ij}^{M}\left[\omega_{ij}\hat{a}_{i}^{\dagger}\hat{a}_{j}+\frac{1}{2}g_{ij}\hat{a}_{i}^{\dagger}\hat{a}_{i}\hat{a}_{j}^{\dagger}\hat{a}_{j}\right]\,.$$ Using the identities of Eq (\[eq:identities\]) again, the second-order derivative terms in $\bm{\alpha}$ are: $$\begin{aligned} \mathcal{L}_{1}^{\alpha} & =\frac{ig_{ij}}{2}\frac{\partial}{\partial\alpha_{j}}\frac{\partial}{\partial\alpha_{i}}\alpha_{j}\alpha_{i}\,.\end{aligned}$$ In this case one may define a mapping, $\theta_{j}=\lambda\ln(\alpha_{j})$, where $\lambda$ is a scaling factor, so that in the new variables the diffusion matrix $D_{ij}^{\theta}$ is constant, where: $$D_{ij}^{\theta}=i\lambda^{2}g_{ij}.$$ This transformation simplifies the derivation of the Fokker-Planck path integral. Path integrals for space-dependent diffusion exist [@graham1977path], but are more complex. For the analysis given here, it is simpler to transform to the case of constant diffusion, although it is possible to obtain a path integral without doing this. If the diffusion is constant, as in quadratic Hamiltonians, this step is unnecessary. ![Transformations used in the phase-space. The original complex mode amplitudes $\bm{\alpha}$ are transformed first to constant-diffusion mode amplitudes $\bm{\theta}=\lambda\ln\left(\bm{\alpha}\right)$, then mapped with a time-dependent mapping to real quadrature amplitudes $\bm{\phi}=\bm{T}\bm{\theta}$.\[fig:Transformations-used-in\]](Fig1){width="0.75\columnwidth"} We now check that the traceless property persists after making this variable change to logarithmic variables. The Q-function is mapped to a set of constant diffusion, complex phase-space variables $\bm{\theta}$, as shown in Fig (\[fig:Transformations-used-in\]), which satisfy an equation of form: $$\frac{\partial}{\partial t}Q^{\theta}=\mathcal{L}^{\theta}Q^{\theta}\,.$$ To prove the traceless property, a second mapping is made to a real quadrature vector, $\bm{\phi}=\left[\phi_{1},\ldots\phi_{2M}\right]$, described by the linear transformation $\bm{\phi}=\bm{T}\bm{\theta}.$ In this constant diffusion space, there are diagonal second derivative terms together with conjugate terms such that $D_{jj}^{\theta}=e^{-2i\eta_{j}}\left|D_{jj}^{\theta}\right|.$ The corresponding real variables are defined in this case as: $$\theta_{j}e^{i\eta_{j}}=x_{j}+iy_{j},$$ where $\bm{\phi}=\left(\bm{x},\bm{y}\right)$, such that the new diagonal diffusion term in $\phi$ is $$D_{jj}=-D_{j'j'}=\frac{1}{2}\left|D_{jj}^{\theta}\right|,$$ where $j'=j+M$. This clearly results in traceless diffusion. For quadratic cases with no logarithmic transformation, one may simply define $\left(\bm{x},\bm{y}\right)$ as in the notation of the previous subsection. For a one-mode case, the mapping transformation matrix to real variables is $$\bm{T}\left(t\right)=\left[\begin{array}{cc} e^{i\eta_{j}} & e^{-i\eta_{j}}\\ -ie^{i\eta_{j}} & ie^{-i\eta_{j}} \end{array}\right].$$ The result is a transformed Q-function, $Q=Q^{\theta}\left|\delta\bm{\theta}/\delta\bm{\phi}\right|$, which evolves according to a real differential equation. 
Introducing $\partial_{\mu}\equiv\partial/\partial\phi^{\mu}$, for $\mu=1,\ldots2M$, the time-evolution equation with a diagonal, constant diffusion is: $$\begin{aligned} \frac{\partial Q}{\partial t} & =\partial_{\mu}\left[-A^{\mu}(t)+\frac{1}{2}\partial_{\mu}D^{\mu}\right]Q\left(\bm{\phi}\right)\label{eq:Transformed evolution operator}\end{aligned}$$ The transformed diffusion matrix is traceless as previously, so that: $$\sum D^{\mu}=0\,.$$ This means that the diagonal diffusion matrix can be subdivided into positive and negative constant diffusion parts. The phase-space probability $Q$ is positive, yet since the overall diffusion term $\bm{D}$ is *not* positive definite, this is not a stochastic process of the usual forward-time type [@Gardiner_Book_SDE]. The form of Eq (\[eq:Transformed evolution operator\]) means that probability is conserved at all times provided boundary terms vanish, i.e., $$\int d\bm{\phi}Q\left(\bm{\phi},t\right)=1\,.$$ The Q-function has very unusual properties. It is probabilistic, and obeys a generalized Fokker-Planck equation. Yet it describes a *reversible* process. Positive distribution functions in statistical mechanics commonly follow an equation which is *irreversible*, owing to couplings to an external reservoir. Despite this, the Q-function is a phase-space distribution that is positive, and has probabilistic sampling. This implies that it can be treated in a similar way to a classical probability distribution, with some modifications. Time-symmetric stochastic action\[sec:Bidirectional-stochastic-bridges\] ======================================================================== The previous section showed that the Q-function for unitary evolution can be transformed to have a real differential equation with a traceless diffusion term. The consequence is that it has a time-symmetric diffusion which is not positive-definite. As a result, forward-time sampling via a stochastic differential equation in time is not possible. In the present section we obtain local propagators for the traceless Fokker-Planck equation. These will be used to derive an action principle and path integral. The result leads to stochastic evolution in space-time with equations and boundary conditions that have time-reversal symmetry. Time-symmetric Green’s functions -------------------------------- The universal property of traceless Q-function diffusion means that the real phase space of $\bm{\phi}$ is generally divisible into two $M$-dimensional sub-vectors, so that $\bm{\phi}=\left[\bm{x},\bm{y}\right]$. The $\bm{x}$ fields will be called positive-time fields, with indices in the set $T_{+}$, while fields $\bm{y}$ are called negative-time fields, with indices in the set $T_{-}$. The fields $\bm{x}$ are those with $D^{\mu}\ge0$, while fields $\bm{y}$ have $D^{\mu}\le0$. These have the physical interpretation of complementary variables. Throughout this paper, we use the general notation of $P\left(X_{out}\left|X_{in}\right.\right)$ to indicate the conditional probability density of event(s) $X_{out}$ given the occurrence of event(s) $X_{in}$. This does not imply any particular time-ordering of the events. For a time-symmetric diffusion process, $X_{out}$ is termed an output event(s) to distinguish it from the input event(s) $X_{in}$, even if (some) output events may occur earlier in time than (some) input events. One usually solves for $Q\left(\bm{\phi},t\right)$ at a later time $t>t_{0}$, given an initial distribution $Q_{0}\left(\bm{\phi},t_{0}\right)$. 
However, the lack of a positive-definite diffusion in $\bm{y}$ means that one cannot use Green’s functions to propagate $Q$ forward in time, without requiring singular functions. Instead, we first consider how to solve for $Q\left(\bm{\phi},t\right)$ given a joint *input* distribution $P\left(\bm{x}_{0},\bm{y}_{f}\right)$, over an initial value of $\bm{x}=\bm{x}_{0}$ at $t_{0}$ and a final value of $\bm{y}=\bm{y}_{f}$ at $t_{f}.$ The time-symmetric input boundary value will be labeled $\tilde{\phi}=\left(\bm{x}_{0},\bm{y}_{f}\right)$, while the complementary pair $\left(\bm{x}_{f},\bm{y}_{0}\right)$ is an output of the process. As explained above, the terms *input* and *output* are used to indicate the causality, not the time-ordering of events. For compactness, we omit the time coordinate when there is no ambiguity, so that $P\left(\tilde{\phi}\right)\equiv P\left(\bm{x}_{0},\bm{y}_{f}\right)\equiv P\left(\bm{x}_{0},t_{0};\bm{y}_{f},t_{f}\right)$. ![Quantum fields propagating in phase space, from time $t_{0}$ to $t_{f}$. The $\bm{x}$ components propagate in the positive time direction, while the $\bm{y}$ components propagate in the negative time direction. Conditional initial or final boundary values are indicated by the purple arrows. Here (a) is the overall time-symmetric probability $P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{f}\right.\right)$ with joint boundaries $\bm{x}_{0},\bm{y}_{f}$, while (b) represents a conditional *initial* boundary value $Q\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$ , and (c) a conditional *final* boundary value $Q\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)$. \[fig:Quantum-fields-propagating-full\]](Fig2a "fig:"){width="1\columnwidth"}\ ![Quantum fields propagating in phase space, from time $t_{0}$ to $t_{f}$. The $\bm{x}$ components propagate in the positive time direction, while the $\bm{y}$ components propagate in the negative time direction. Conditional initial or final boundary values are indicated by the purple arrows. Here (a) is the overall time-symmetric probability $P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{f}\right.\right)$ with joint boundaries $\bm{x}_{0},\bm{y}_{f}$, while (b) represents a conditional *initial* boundary value $Q\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$ , and (c) a conditional *final* boundary value $Q\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)$. \[fig:Quantum-fields-propagating-full\]](Fig2b "fig:"){width="1\columnwidth"}\ ![Quantum fields propagating in phase space, from time $t_{0}$ to $t_{f}$. The $\bm{x}$ components propagate in the positive time direction, while the $\bm{y}$ components propagate in the negative time direction. Conditional initial or final boundary values are indicated by the purple arrows. Here (a) is the overall time-symmetric probability $P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{f}\right.\right)$ with joint boundaries $\bm{x}_{0},\bm{y}_{f}$, while (b) represents a conditional *initial* boundary value $Q\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$ , and (c) a conditional *final* boundary value $Q\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)$. 
\[fig:Quantum-fields-propagating-full\]](Fig2c "fig:"){width="0.9\columnwidth"} The time-symmetric input-output probability density for a final $\bm{x}$ quadrature value $\bm{x}_{f}$ and an initial $\bm{y}$ quadrature value $\bm{y}_{0}$, given that the initial $\bm{x}$ boundary values are $\bm{x}_{0}$ and the final $\bm{y}$ boundary values are $\bm{y}_{f},$ is $P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{f}\right.\right)=P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\phi}\right.\right)$. This is defined as: $$P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\phi}\right.\right)=\frac{P\left(\bm{\phi}_{0},\bm{\phi}_{f}\right)}{P\left(\tilde{\phi}\right)}.$$ To solve for $Q\left(\bm{\phi},t\right)$, the central quantity is a time-symmetric Green’s function for paths $\bm{\phi}(t)$ where $t_{0}\leq t\leq t_{f}$, and whose positive-time components *begin* at $\bm{x}_{0}=\bm{x}(t_{0})$, while the negative time components *end* at $\bm{y}_{f}=\bm{y}(t_{f})$. We abbreviate this as $P\left(\bm{\phi}\left(t\right)\left|\tilde{\phi}\right.\right)$. This is a function of $\bm{\phi}$ that satisfies the generalized Fokker-Planck equation (\[eq:Transformed evolution operator\]), with initial and final marginal conditions: $$\begin{aligned} \int P\left(\bm{\phi}\left(t_{0}\right)\left|\tilde{\phi}\right.\right)d\bm{y} & =\delta\left(\bm{x}-\bm{x}_{0}\right)\nonumber \\ \int P\left(\bm{\phi}\left(t_{f}\right)\left|\tilde{\phi}\right.\right)d\bm{x} & =\delta\left(\bm{y}-\bm{y}_{f}\right)\,.\label{eq:marginal delta}\end{aligned}$$ The form of Eq (\[eq:Transformed evolution operator\]) means that, using partial integration, probabilities are conserved both for $Q\left(\bm{\phi},t\right)$ and for the Green’s functions, provided boundary terms vanish. We show below how the time-symmetric Green’s function, $P\left(\bm{\phi}\left(t\right)\left|\tilde{\phi}\right.\right)$, is related to the input-output probability density, $P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\phi}\right.\right)$, through the time-symmetric action and path integral. Conditional boundary values --------------------------- $Q\left(\bm{\phi},t\right)$ can also be obtained from a conditional plus a marginal boundary value, which is often more useful. To explain this, we denote marginal distributions at the same time as $Q_{x}\left(\bm{x},t\right)$so that for $\bm{x}$, $$Q_{x}\left(\bm{x},t\right)=\int Q\left(\bm{\phi},t\right)d\bm{y}\,,\label{eq:marginal_x-1}$$ and for $\bm{y}$: $$Q_{y}\left(\bm{y},t\right)=\int Q\left(\bm{\phi},t\right)d\bm{x}\,.\label{marginal_y-1}$$ In special cases, the boundary values for $Q\left(\bm{\phi},t\right)$ are independent of each other. This implies that one can write the joint probability of $\bm{x}_{0}$ in the past and and $\bm{y}_{f}$ in the future, as a product of two independent marginal distributions, so that $$P\left(\tilde{\phi}\right)=Q_{x}\left(\bm{x}_{0},t_{0}\right)Q_{y}\left(\bm{y}_{f},t_{f}\right).$$ Generally, this is **not** the case. However, even if the initial and final input values *are* correlated, there are other ways to specify the boundary values that are often more useful. One may have knowledge of an initial marginal distribution $Q\left(\bm{x}_{0},t_{0}\right)$ and final conditional probability $Q\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)$ at the same time, or else the inverse pair, the final marginal $Q\left(\bm{y}_{f},t_{f}\right)$ together with an initial conditional distribution $Q\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$. 
In either case one of the pair of input variables is conditioned on the output of the complementary output variable at the *same* time, which is typically a well-known and accessible quantity. We require that the conditional distribution $Q\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$ is specified independently of future events, or alternatively, that $Q\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)$ is independent of earlier events. An obvious additional requirement is that the total Q-function must follow Bayes theorem, so that: $$\begin{aligned} Q\left(\bm{\phi}_{0},t_{0}\right) & =Q\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)Q\left(\bm{y}_{0},t_{0}\right)\nonumber \\ Q\left(\bm{\phi}_{f},t_{f}\right) & =Q\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)Q\left(\bm{x}_{f},t_{f}\right).\end{aligned}$$ In the case that one wishes to specify the initial conditional distribution, future time marginal boundary values $Q\left(\bm{y}_{f},t_{f}\right)$ are required. This input marginal must be chosen so that one obtains the known initial marginal distribution $Q\left(\bm{y}_{0},t_{0}\right)$ as the output distribution. Explanations of how one may compute such future time boundary values algorithmically is given later. The three cases of boundary value distributions treated here are shown graphically in Fig (\[fig:Quantum-fields-propagating-full\]). The general quantities here are $P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\phi}\right.\right)$ and the associated path probabilities. If an initial or final conditional probability is imposed, this reduces to $P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{y}_{f}\right.\right)$ or $P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{x}_{0}\right.\right)$, where: $$\begin{aligned} P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{y}_{f}\right.\right) & =\int P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\phi}\right.\right)Q\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)d\bm{x}_{0}\nonumber \\ P\left(\bm{x}_{f},\bm{y}_{0}\left|\bm{x}_{0}\right.\right) & =\int P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\phi}\right.\right)Q\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)d\bm{y}_{f}.\label{eq:Conditional-solution}\end{aligned}$$ In the following, the properties of these transition probabilities are explored. General results --------------- We now prove three general results. ### Normalization Like the Q-function, time-symmetric Green’s functions are normalized. This is because the Q-function dynamical equation conserves probability, and clearly from Eq (\[eq:marginal delta\]) the $P\left(\bm{\phi}\left(t\right)\left|\bm{x}_{0},\bm{y}_{f}\right.\right)$ functions are normalized both initially and finally. As a result, for all times, $$\int P\left(\bm{\phi}\left(t\right)\left|\tilde{\phi}\right.\right)d\bm{\phi}=1.$$ ### Solution theorem If a symmetric Green’s function exists for arbitrary $\bm{x}_{0},\bm{y}_{f}$, then the solution for $Q\left(\bm{\phi},t\right)$ can be obtained by integration over $\bm{x}_{0}$ and $\bm{y}_{f}$, so that: $$Q\left(\bm{\phi},t\right)=\int d\bm{x}_{0}d\bm{y}_{f}P\left(\bm{\phi}\left(t\right)\left|\tilde{\phi}\right.\right)P\left(\bm{x}_{0},\bm{y}_{f}\right)\,.\label{Q-solution}$$ To prove this, note that $Q\left(\bm{\phi},t\right)$ as defined above must satisfy the Q-function differential equation, (\[eq:Transformed evolution operator\]), since $P\left(\bm{\phi}\left(t\right)\left|\tilde{\phi}\right.\right)$ does. 
One can verify through direct integration, together with the fact that $P\left(\bm{x}_{0},\bm{y}_{f}\right)$ is normalized to unity from Eq (\[Q-normalization\]), that this solution for $Q$ also satisfies the required marginal probability conditions, (\[eq:marginal\_x\]) and (\[marginal\_y\]). Similarly, one has for conditional boundaries that: $$\begin{aligned} Q\left(\bm{\phi},t\right) & =\int d\bm{y}_{f}P\left(\bm{\phi}\left(t\right)\left|\bm{y}_{f}\right.\right)Q_{y}\left(\bm{y}_{f}\right)\nonumber \\ & =\int d\bm{x}_{0}P\left(\bm{\phi}\left(t\right)\left|\bm{x}_{0}\right.\right)Q_{x}\left(\bm{x}_{0}\right)\,,\end{aligned}$$ where $P\left(\bm{\phi}\left(t\right)\left|\bm{y}_{f}\right.\right)$ and $P\left(\bm{\phi}\left(t\right)\left|\bm{x}_{0}\right.\right)$ are defined as in the conditional transition probabilities of Eq (\[eq:Conditional-solution\]). ### Factorization theorem Given a time-evolution equation of form (\[eq:Transformed evolution operator\]), then in the limit of a short time interval $\Delta t=t_{f}-t_{0}$, the time-symmetric Green’s function factorizes into a product of forward time and backward Green’s functions. In greater detail, this factorization property of the time-symmetric propagator is as follows: Defining short time propagators $P\left(\bm{x}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)$ and $P\left(\bm{y}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)$ for the $\bm{x}$ and $\bm{y}$ fields, then $P\left(\bm{\phi}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)$ factorizes over short time intervals as $$\lim_{\Delta t\rightarrow0}P\left(\bm{\phi}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)=P\left(\bm{x}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)P\left(\bm{y}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)\,.\label{eq:factorization}$$ We now prove this and obtain the explicit form of $P\left(\bm{x}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)$ and $P\left(\bm{y}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right)$ from the generalized Fokker-Planck equation. The relevant time-evolution equation is (\[eq:Transformed evolution operator\]). The diagonal diffusion means that the Fokker-Planck equation has forward and backward parts, so the differential operator can be written $$\mathcal{L}=\mathcal{L}_{+}-\mathcal{L}_{-}.$$ Here, $\mathcal{L}_{+}$ ($\mathcal{L}_{-}$) only includes derivatives of $\bm{x}$, ($\bm{y})$ respectively, together with drift terms that are functions of $\bm{\phi}$. Each is a positive-definite Fokker-Planck operator. 
On defining $d^{\mu}=\left|D^{\mu}\right|$, these differential operators are: $$\begin{aligned} \mathcal{L}_{\pm} & =\sum_{\mu\in T_{\pm}}\left\{ \mp\partial_{\mu}A^{\mu}\left(\bm{\phi}\right)+\frac{1}{2}\partial_{\mu}^{2}d^{\mu}\left(\bm{\phi}\right)\right\} .\end{aligned}$$ For a short time interval $\Delta t=t_{f}-t_{0}$, if $A^{\mu}\left(\bm{\phi}\right)$ is differentiable and smooth, the drift and diffusion terms can be approximated by their initial values and times at $\tilde{\phi}$ [@Risken1996; @drummond2014quantum; @drummond2017forward], so that $$\begin{aligned} \bm{A}\left(\bm{\phi},t\right) & \rightarrow\tilde{\bm{A}}\equiv\left(\bm{A}_{x}\left(\tilde{\bm{\phi}},t_{0}\right),\bm{A}_{y}\left(\tilde{\bm{\phi}},t_{f}\right)\right)\nonumber \\ \bm{d}\left(\bm{\phi},t\right) & \rightarrow\tilde{\bm{d}}\equiv\left(\bm{d}_{x}\left(\tilde{\bm{\phi}},t_{0}\right),\bm{d}_{y}\left(\tilde{\bm{\phi}},t_{f}\right)\right)\,.\end{aligned}$$ The local differential equation then has the form: $$\dot{f}\left(\bm{x},\bm{y}\right)=\left[\mathcal{L}_{+}\left(\bm{x}\right)-\mathcal{L}_{-}\left(\bm{y}\right)\right]f\left(\bm{x},\bm{y}\right).$$ Provided that boundary requirements are satisfied, this is solved by setting $f\left(\bm{x},\bm{y}\right)=f_{x}\left(\bm{x}\right)f_{y}\left(\bm{y}\right)$, where: $$\begin{aligned} \dot{f}_{x}\left(\bm{x}\right) & =\mathcal{L}_{+}\left(\bm{x}\right)f_{x}\left(\bm{x}\right)\nonumber \\ \dot{f}_{y}\left(\bm{y}\right) & =-\mathcal{L}_{-}\left(\bm{y}\right)f_{y}\left(\bm{y}\right).\end{aligned}$$ To satisfy the boundary conditions we must impose the initial condition on $\bm{x}$ that $f_{x}\left(\bm{x},t_{0}\right)=\delta\left(\bm{x}-\bm{x}_{0}\right)$, and the final condition on $\bm{y}$ that $f_{y}\left(\bm{y},t_{f}\right)=\delta\left(\bm{y}-\bm{y}_{f}\right)$, while noting that the form of $\mathcal{L}_{\pm}$ will maintain normalization over the interval. This can be verified more rigorously by expanding $A^{\mu}\left(\bm{\phi}\right)$ in a Taylor series to first order in $\Delta\bm{\phi}=\bm{\phi}-\tilde{\bm{\phi}}$, then solving and taking the limit of $\Delta t\rightarrow0$. Each of the differential operators corresponds to a weighted diffusion time-evolution. In the limit of a short time interval, provided each drift term is evaluated at its initial value in $\bm{x}$ ($\bm{y}$) respectively, the time-symmetric Green’s function factorizes into a product of forward time and backward time terms, where: $$\begin{aligned} P\left(\bm{x}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right) & =\mathcal{N}_{+}\exp\left\{ \sum_{\mu\in T_{+}}\left[\frac{-[\Delta x^{\mu}-\Delta t_{+}\tilde{A}^{\mu}]^{2}}{2\Delta t_{+}\tilde{d}^{\mu}}\right]\right\} \,\nonumber \\ P\left(\bm{y}\left(t\right)\left|\tilde{\bm{\phi}}\right.\right) & =\mathcal{N}_{-}\exp\left\{ \sum_{\mu\in T_{-}}\left[\frac{-[\Delta y^{\mu}+\Delta t_{-}\tilde{A}^{\mu}]^{2}}{2\Delta t_{-}\tilde{d}^{\mu}}\right]\right\} \,,\label{eq:short-time-propagator}\end{aligned}$$ with $\Delta\bm{x}=\bm{x}-\bm{x}_{0}$, $\Delta t_{+}=t-t_{0}$, $\Delta\bm{y}=\bm{y}-\bm{y}_{f}$, $\Delta t_{-}=t_{f}-t$. The time interval $\Delta t_{+}$ is in the forward time direction, measured from the start of the interval, while $\Delta t_{-}$ is a time interval in the backward time direction, measured from the end of the interval. 
The normalization term is the standard one for a normalized solution to the diffusion equation [@graham1977path; @Risken1996]: $$\mathcal{N}_{\pm}=\mathcal{N}_{\pm}\left(\Delta t_{\pm}\right)=\prod_{\mu\in T_{\pm}}\frac{1}{\sqrt{2\pi\Delta t_{\pm}\tilde{d}^{\mu}}}.$$ ![Quantum fields propagating over multiple time-intervals in phase space, with probability density $P\left(\bm{x}_{4},\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{4}\right.\right)$. The interactions can lead to correlations. Nonlinear coupling is indicated by the green arrows.\[fig:Quantum-fields-propagating\]](Fig3b){width="1\columnwidth"} Discrete trajectories --------------------- Consider a phase-space trajectory discretized for times $t_{k}=t_{0}+k\Delta t$, with $k=1,\ldots n$. We wish to construct a trajectory probability $P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)$, where $\tilde{\bm{\phi}}\equiv\left(\bm{x}_{0},\bm{y}_{f}\right)$ defines the two constrained end-points as previously, while $\left[\bm{\phi}\right]=\left[\bm{y}_{0},\bm{\phi}_{2},\dots\bm{x}_{f}\right]$ and $\bm{\phi}_{k}\equiv\left(\bm{x}_{k},\bm{y}_{k}\right)$. Here, $P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)$ is the probability of a transition from $\bm{x}_{0}\rightarrow\bm{x}_{f}$, and $\bm{y}_{f}\rightarrow\bm{y}_{0}$, with intermediate points $\bm{\phi}_{k}$, and a final point $\bm{\phi}_{f}\equiv\bm{\phi}_{n}.$ For $n=1$, we have an initial and final constraint, as in the Green’s function boundary conditions of Eq (\[eq:marginal delta\]). One can obtain the respective probabilities of transition from $\bm{x}_{0}\rightarrow\bm{x}_{1}$ and $\bm{y}_{1}\rightarrow\bm{y}_{0}$, over a short time interval, using the results of the last section. Since the Q-function is a probability, these transition probabilities are the marginal probabilities $Q_{x}\left(\bm{x}_{1},t_{1}\right)$ and $Q_{y}\left(\bm{y}_{0},t_{0}\right)$. On integrating the factorized Green’s function solution over the conjugate variables, one obtains: $$\begin{aligned} Q_{x}\left(\bm{x}_{1},t_{1}\right) & =P\left(\bm{x}_{1}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)\nonumber \\ Q_{y}\left(\bm{y}_{0},t_{0}\right) & =P\left(\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)\,.\end{aligned}$$ The joint probability of transitions $\bm{x}_{0}\rightarrow\bm{x}_{1}$ and $\bm{y}_{1}\rightarrow\bm{y}_{0}$ both occurring, since they are independent events on a short time interval, is: $$P\left(\bm{x}_{1},\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)=P\left(\bm{x}_{1}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)P\left(\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)\,.$$ If the initial conditional probability, $P\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$, is known, and is independent of future events, then: $$\begin{aligned} P\left(\bm{x}_{1}\left|\bm{y}_{1}\right.\right) & =P\left(\bm{x}_{1},\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)P\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)\,\nonumber \\ & =P\left(\bm{x}_{1}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)P\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)P\left(\bm{y}_{0}\left|\bm{x}_{0},\bm{y}_{1}\right.\right)\,.\end{aligned}$$ These probabilities can be extended to multiple events, as shown in Fig (\[fig:Quantum-fields-propagating\]) for $n=4$. 
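Before the extension to multiple steps, the single-interval case can be illustrated numerically. The sketch below (our own toy example) draws the output pair $\left(\bm{x}_{1},\bm{y}_{0}\right)$ from the factorized Gaussian propagators of Eq (\[eq:short-time-propagator\]), given the input pair $\left(\bm{x}_{0},\bm{y}_{1}\right)$; the linear cross-coupled drift and all parameter values are illustrative assumptions.

```python
import numpy as np

# One short-time step of the factorized propagator: given the inputs x_0 (initial)
# and y_1 (final), the outputs x_1 and y_0 are independent Gaussians.
# The cross-coupled linear drift and all parameters are illustrative assumptions.

rng = np.random.default_rng(2)
dt, d = 0.01, 0.5                      # time step and constant diffusion
gam, c = 1.0, 0.8                      # toy drift parameters
x0, y1 = 1.0, -0.5                     # input pair (x_0, y_f) for n = 1
n_samples = 200_000

A_x = -gam * x0 + c * y1               # forward drift, evaluated at (x_0, y_1)
A_y = -gam * y1 + c * x0               # backward drift, evaluated at (x_0, y_1)

# x_1 ~ N(x_0 + dt*A_x, d*dt) and y_0 ~ N(y_1 - dt*A_y, d*dt)
x1 = x0 + dt * A_x + np.sqrt(d * dt) * rng.normal(size=n_samples)
y0 = y1 - dt * A_y + np.sqrt(d * dt) * rng.normal(size=n_samples)

print(x1.mean(), x0 + dt * A_x)        # sample mean vs drifted input
print(y0.mean(), y1 - dt * A_y)
print(np.cov(x1, y0)[0, 1])            # ~0: the two outputs are independent
```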
Following the chain rule of probability for conditional events, the probability of transition from $\bm{x}_{0}\rightarrow\bm{x}_{1}\rightarrow\bm{x}_{2}$ and $\bm{y}_{2}\rightarrow\bm{y}_{1}\rightarrow\bm{y}_{0}$ is therefore: $$\begin{aligned} P\left(\bm{y}_{0},\bm{\phi}_{1},\bm{x}_{2}\left|\bm{x}_{0},\bm{y}_{2}\right.\right) & =P\left(\bm{x}_{2}\left|\bm{x}_{1},\bm{y}_{2}\right.\right)P\left(\bm{y}_{0},\bm{\phi}_{1}\left|\bm{x}_{0},\bm{y}_{2}\right.\right)\nonumber \\ & \times P\left(\bm{y}_{1}\left|\bm{x}_{1},\bm{y}_{2}\right.\right).\end{aligned}$$ This shows that the probability for a final $\bm{x}_{2}$ and initial $\bm{y}_{0}$ is conditioned on both $\bm{x}_{1}$ and $\bm{y}_{2}$. The result for the whole trajectory is obtained by extending the argument given above recursively, since the probability for a final $\bm{x}_{k}$ and initial $\bm{y}_{0}$ is conditioned on both $\bm{x}_{k-1}$ and $\bm{y}_{k}$, so that $$\begin{aligned} P\left(\left[\bm{x}_{0},\bm{\phi}_{1}\ldots\bm{x}_{k}\right]\left|\tilde{\bm{\phi}}\right.\right) & =P\left(\bm{x}_{k}\left|\bm{x}_{k-1},\bm{y}_{k}\right.\right)P\left(\bm{y}_{k-1}\left|\bm{x}_{k-1},\bm{y}_{k}\right.\right)\nonumber \\ & \times P\left(\left[\bm{\phi}_{0},\ldots\bm{\phi}_{k-1}\right]\left|\bm{x}_{0},\bm{y}_{k-1}\right.\right).\end{aligned}$$ Applying the argument $n$ times, and defining $\tilde{\bm{\phi}}_{k}\equiv\left(\bm{x}_{k},\bm{y}_{k+1}\right)$, this implies that: $$P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)=\lim_{\Delta t\rightarrow0}\prod_{k=0}^{n-1}P\left(\bm{x}_{k+1}\left|\tilde{\bm{\phi}}_{k}\right.\right)P\left(\bm{y}_{k}\left|\tilde{\bm{\phi}}_{k}\right.\right).$$ This probability can be written as a general time-symmetric stochastic action, namely: $$P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)=\lim_{\Delta t\rightarrow0}\mathcal{N}^{n}\left(\Delta t\right)\exp\left(-A\left[\bm{\phi}\right]\right),$$ where $\mathcal{N}\left(\Delta t\right)=\mathcal{N}_{+}\left(\Delta t\right)\mathcal{N}_{-}\left(\Delta t\right)$, and the action is given by a sum over all the propagation weight terms: $$A\left[\bm{\phi}\right]=\sum_{k=0}^{n-1}L\left(\tilde{\bm{\phi}}_{k},\dot{\bm{\phi}}_{k}\right)\Delta t.$$ Here the time-symmetric Lagrangian is defined in the generalized Ito sense [@Gardiner_Book_SDE], such that the drift and diffusion terms are evaluated at the start of every step, depending on the time propagation direction for the variable in question: $$L\left(\tilde{\bm{\phi}}_{k},\dot{\bm{\phi}}_{k}\right)=\frac{1}{2\tilde{d}_{k}^{\mu}}\left[\dot{\phi}_{k}^{\mu}-\tilde{A}_{k}^{\mu}\right]^{2}.$$ To explain this limiting procedure more precisely, we use the following definitions to take the limits: $$\begin{aligned} \dot{\bm{\phi}}_{k} & \equiv\frac{\bm{\phi}\left(t_{k+1}\right)-\bm{\phi}\left(t_{k}\right)}{\Delta t}\nonumber \\ \tilde{\bm{A}}_{k} & \equiv\bm{A}\left(\tilde{\bm{\phi}}_{k},t_{k}\right)\nonumber \\ \tilde{\bm{d}}_{k} & \equiv\bm{d}\left(\tilde{\bm{\phi}}_{k},t_{k}\right).\end{aligned}$$ To understand the physical meaning, we refer back to Fig (\[fig:Quantum-fields-propagating\]), which shows four discrete segments of propagation. At the $k$-th step, for $t=t_{k}$, the value of $\bm{x}_{k}$ is constrained, while the value of $\bm{y}_{k}$ is only known probabilistically. 
Similarly, at $t=t_{k+1}$, the value of $\bm{y}_{k+1}$ is constrained, but the value of $\bm{x}_{k+1}$ is only known probabilistically, with some probability calculated from the Green’s function. As a result, the normalization of the probability is that: $$\int d\left[\bm{\phi}\right]P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)=1,\label{eq:path-normalization}$$ where the normalization has an integration measure over all coordinates except the two fixed values of $\bm{x}_{0}$ and $\bm{y}_{f}$: $$d\left[\bm{\phi}\right]\equiv d\bm{y}_{0}d\bm{x}_{f}\prod_{k=1}^{n-1}d\bm{\phi}_{k}.$$ We wish to obtain a total Green’s function, $P\left(\bm{\phi}(t)\left|\tilde{\bm{\phi}}\right.\right)$, from the trajectory probability. In the limit of $\Delta t\rightarrow0$, and $n\rightarrow\infty$ , we take $t=t_{K}$, i.e, the target time for the Q-function can be taken as corresponding to one of the $n$ discrete times. This is always possible to any desired accuracy, in the relevant limit. The time-symmetric Green’s function over a finite interval is therefore constructed as: $$P\left(\bm{\phi}(t)\left|\tilde{\bm{\phi}}\right.\right)=\lim_{\Delta t\rightarrow0}\int\prod_{k=1}^{n-1}d\bm{\phi}_{k}P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)\delta\left(\bm{\phi}-\bm{\phi}_{K}\right),$$ where $K=(t-t_{0})/\Delta t$. This is a general result, and holds even if the diffusion is a function of the phase-space coordinate. In many cases, $P\left(\tilde{\bm{\phi}}\right)$ is not known, as it involves a joint probability at two different times. However, as previously described, one may have knowledge of a conditional probability at one time, such as $Q_{x}\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$ at the initial time, or else $Q_{y}\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)$ at the final time. This can be computed in the standard way from the joint probability if $Q\left(\bm{\phi},t\right)$ is known either initially or finally. In such cases, provided the initial (final) conditional is independent of future (past) events, one can obtain from the conditional probability product rule, that: $$\begin{aligned} P\left(\left[\bm{\phi}\right]\left|\bm{y}_{f}\right.\right) & =P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)Q_{x}\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)\\ P\left(\left[\bm{\phi}\right]\left|\bm{x}_{0}\right.\right) & =P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)Q_{y}\left(\bm{y}_{f}\left|\bm{x}_{f}\right.\right)\end{aligned}$$ Central difference action principle ----------------------------------- The time-evolution equation as a forward diffusion problem for $\bm{x}$ is a well-defined initial-value problem in the positive time coordinate $t_{+}=t$ with a specified distribution at $t_{+}=t_{0}.$ Similarly, the backwards diffusion problem for $\bm{y}$ is a well-defined initial-value problem in the negative time coordinate $t_{-}=t_{0}+t_{1}-t_{+}$, with a specified distribution at $t_{-}=t_{0}$. Over every small time-interval, there is a positive-definite Fokker-Planck differential operator in each time-direction. Each of these differential operators acts on the variables with a diffusion in the chosen time direction. Such diffusion equations have well-known path-integral solutions, from work by Wiener [@wiener1930generalized] and deWitt [@dewitt1957dynamical]. The important difference is that this action is time-symmetric, propagating in both time directions with complementary variables. 
However, there is a subtlety here. In the previous subsection, the coefficients are evaluated at the start of each interval, at $\tilde{\bm{\phi}}_{k}=\left(\bm{x}_{k},\bm{y}_{k+1}\right)$. Stratonovich [@Stratonovich1956; @stratonovich1971probability], Graham [@graham1977path] and others have shown that there are other action formulae evaluated at the *center* of each interval, which are covariant under phase-space frame transformations. While these can be applied in all cases, the derivation is simplest if $d^{\mu}=d^{\mu}\left(t\right)$ is independent of $\bm{\phi}$, which is the case that is derived here. A more complex result holds for general diffusion, involving the curvature tensor in phase-space with a metric equal to the diffusion matrix [@graham1977path], which is outside the scope of this article. As described above, each of the differential operators corresponds to diffusive time-evolution in one of the two time directions. For the case of constant diffusion there is a central-difference Green’s function in the limit of a short time interval, $\Delta t=\Delta t_{+}=\Delta t_{-}$: $$\begin{aligned} P\left(\bm{x}_{k+1}\left|\tilde{\bm{\phi}}_{k}\right.\right) & =\mathcal{N}_{+}\left(\Delta t\right)\exp\left(-L_{x}(\bm{\phi},\dot{\bm{\phi}})\Delta t\right)\,,\label{eq:path-integral-1}\\ P\left(\bm{y}_{k}\left|\tilde{\bm{\phi}}_{k}\right.\right) & =\mathcal{N}_{-}\left(\Delta t\right)\exp\left(-L_{y}(\bm{\phi},\dot{\bm{\phi}})\Delta t\right)\,.\nonumber \end{aligned}$$ The functions $L_{x,y}$ are the Fokker-Planck Lagrangians in the relevant time direction. These are defined as in Eq (\[eq:short-time-propagator\]), except with the drift term $A^{\mu}$ evaluated at the midpoint of the time-step, so that: $$\tilde{A}_{k}^{\mu}\rightarrow A^{\mu}\left(\left(\bm{\phi}_{k}+\bm{\phi}_{k+1}\right)/2,\left(t_{k}+t_{k+1}\right)/2\right).$$ This adds a correction to the normalization, depending on the divergence of the drift $\bm{A}$, resulting in an additional exponential weighting term [@stratonovich1971probability; @graham1977path]. With the identification that $\partial\phi^{\mu}/\partial t_{\pm}\equiv\lim_{dt\rightarrow0}\left[\phi^{\mu}\left(t\pm\Delta t\right)-\phi^{\mu}\left(t\right)\right]/\Delta t$, and taking the limit of $\Delta t\rightarrow0$, the two central Lagrangians can be written as: $$\begin{aligned} L_{x}(\bm{\phi},\dot{\bm{\phi}}) & =\frac{1}{2}\sum_{\mu\in T_{+}}\left[\frac{1}{d^{\mu}}\left(\frac{\partial\phi^{\mu}}{\partial t}-A^{\mu}\right)^{2}+\partial_{\mu}A^{\mu}\right]\,\nonumber \\ L_{y}(\bm{\phi},\dot{\bm{\phi}}) & =\frac{1}{2}\sum_{\mu\in T_{-}}\left[\frac{1}{d^{\mu}}\left(\frac{\partial\phi^{\mu}}{\partial t_{-}}+A^{\mu}\right)^{2}-\partial_{\mu}A^{\mu}\right]\,.\end{aligned}$$ Because $A^{\mu}\equiv A^{\mu}\left(\bm{x},\bm{y}\right)$, there is a physical behavior that does not occur in standard diffusion. The drift in $\bm{x}$ can depend on the field $\bm{y}$ in the backward time direction, and vice-versa, as shown in Fig (\[fig:Quantum-fields-propagating\]). It is this cross-coupling that leads to nontrivial quantum dynamics. This is not important for short times, since each infinitesimal propagator is defined relative to an initial overall $\bm{\phi}$, but it does modify the structure of long-time propagation. The general, time-symmetric stochastic action with central differences, when the diffusion is not constant, is also obtainable. 
However, this involves Riemannian curvature terms in phase-space, and is outside the scope of the present article. Time-symmetric action principle ------------------------------- This method will now be employed to transform interacting quantum field theory into another form, focusing on the constant diffusion case for simplicity. The probability density for quantum time-evolution is given by a path integral over a real Lagrangian, where in each small time interval the propagators factorize. Due to time-reversal symmetry of the propagator, these equations can be solved using path integrals over both the propagators. The path then no longer has to be over an infinitesimal distance in time, and the *total* propagators will not factorize. This is a type of stochastic bridge [@schrodinger1931uber; @hairer2007analysis; @drummond2017forward], which acts in two time directions simultaneously. Hence, to write the bridge in a unified form, with time integration in the positive time direction only, we define a combined, central difference Lagrangian as: $$L_{c}=L_{x}(\bm{\phi},\dot{\bm{\phi}})+L_{y}(\bm{\phi},-\dot{\bm{\phi}})\,,$$ so that the action integral can be written in the positive time direction for $t_{0}<t<t_{1}$, with a total Lagrangian of: $$\begin{aligned} L_{c} & =\sum_{\mu}\frac{1}{2d^{\mu}}\left(\dot{\phi}^{\mu}-A^{\mu}\left(\bm{\phi},t\right)\right)^{2}-V\left(\bm{\phi},t\right).\label{eq:Total Lagrangian}\end{aligned}$$ Here the total potential, $V$, includes contributions of opposite sign from the positive and negative fields, so that: $$V\left(\bm{\phi},t\right)=\frac{1}{2}\sum_{\pm}\left[\mp\sum_{\mu\in T_{\pm}}\partial_{\mu}A^{\mu}\left(\bm{\phi},t\right)\right].\label{eq:Jacobian correction term-2}$$ This defines the total probability for an $n$-step open stochastic bidirectional bridge, with constant diffusion, central difference evaluation of the action, and fixed intermediate points $$P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)=\mathcal{N}^{n}\left(\Delta t\right)e^{-\int_{t_{0}}^{t_{f}}L_{c}(\bm{\phi},\dot{\bm{\phi}})dt}.\label{eq:probability_solution}$$ On integrating over the intermediate points, with drift terms defined at the center of each step in phase-space, this can be written in a notation analogous to a quantum-mechanical transition amplitude in a Feynman path integral. One obtains the transition probability: $$\begin{aligned} P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\bm{\phi}}\right.\right)= & \int\mathcal{D}\bm{\phi}e^{-\int_{t_{0}}^{t_{f}}L(\bm{\phi},\dot{\bm{\phi}})dt},\label{eq:total probability}\end{aligned}$$ where $t_{n}=t_{0}+n\Delta t$, and here we only integrate over the intermediate phase-space points: $$\mathcal{D}\bm{\phi}=\mathcal{N}^{n}\left(\Delta t\right)\prod_{k=1}^{n-1}d\bm{\phi}_{k}\,.$$ The paths $\bm{\phi}\left(t\right)$ are defined so that $\bm{x}\left(t_{0}\right)=\bm{x}_{0}$ and $\bm{y}\left(t_{n}\right)=\bm{y}_{f}$ are both constrained, at the initial and final times respectively. The definition of $P\left(\bm{x}_{f},\bm{y}_{0}\left|\tilde{\bm{\phi}}\right.\right)$ is the probability of both arriving at $\bm{x}_{f}$ and starting from $\bm{y}_{0}$, given that the initial value of $\bm{x}$ is $\bm{x}_{0}$, and the final value of $\bm{y}$ is $\bm{y}_{f}$. Although formally similar, this propagator has a different meaning to the quantum transition amplitude, because, as it is a probability, it is always positive valued. 
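To make the discretized weight in Eq (\[eq:probability\_solution\]) concrete, the following minimal sketch (Python; the function and variable names are illustrative and not part of any established code) evaluates the central-difference action $\int L_{c}\,dt$ for a sampled path on a uniform grid, assuming constant diffusion and user-supplied callables for the drift and its divergence.

```python
import numpy as np

def central_difference_action(phi, A, div_A, d, dt, forward_mask):
    """Discretized time-symmetric action S = sum_k L_c(phi_{k+1/2}) * dt.

    phi           : array (n+1, m), sampled path for m phase-space components
    A(phi_mid)    : drift vector, returns an array of shape (m,)
    div_A(phi_mid): array (m,) of the partial derivatives d_mu A^mu (no sum)
    d             : constant diffusion coefficient
    dt            : real-time step
    forward_mask  : boolean array (m,), True for components in T_+, False for T_-
    """
    S = 0.0
    for k in range(len(phi) - 1):
        phi_mid = 0.5 * (phi[k] + phi[k + 1])   # central-difference evaluation
        phi_dot = (phi[k + 1] - phi[k]) / dt    # forward-time derivative
        drift = A(phi_mid)
        kinetic = np.sum((phi_dot - drift) ** 2) / (2.0 * d)
        dA = div_A(phi_mid)
        # the potential V has opposite signs for the forward and backward sets
        V = 0.5 * (-np.sum(dA[forward_mask]) + np.sum(dA[~forward_mask]))
        S += (kinetic - V) * dt
    return S
```

The unnormalized weight of the path is then $e^{-S}$; for constant diffusion the normalization factors $\mathcal{N}\left(\Delta t\right)$ are path-independent and can be ignored when comparing paths.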
Extra dimensions\[sec:Extra-dimensions\] ======================================== Many techniques exist for evaluating real path-integrals, both numerical and analytic. There is a formal analogy between the form given above and the expression for a Euclidean path integral of a polymer, or a charged particle in a magnetic field. Here we obtain an extra-dimensional technique for the probabilistic evaluation of the path integral, due to its simplicity and interesting physical interpretation. Although other methods exist, they are not investigated in this paper in the interests of brevity. Equilibration in higher dimensions ---------------------------------- Direct solutions using stochastic equations in real space-time are not feasible, but there are other methods. To make use of the real path-integral, one needs to probabilistically sample the entire space-time path, since each part of the path depends in general on other parts. To achieve this, we add an additional ’virtual’ time dimension, $\tau$. This is used in the related statistical problem of stochastic bridges, for computing a stochastic trajectory that is constrained by a future boundary condition [@schrodinger1931uber; @maier1992transition; @hairer2007analysis; @majumdar2015effective; @drummond2017forward]. This extra-dimensional distribution, $G\left([\bm{\phi}],\tau\right)$, is defined so that the probability tends asymptotically for large $\tau$ to the required solution: $$\lim_{\tau\rightarrow\infty}G\left([\bm{\phi}],\tau\right)=P\left(\left[\bm{\phi}\right]\left|\tilde{\bm{\phi}}\right.\right)\,.$$ The solution is such that $\bm{\phi}\left(t\right)$ is constrained so that $\bm{x}\left(t_{0}\right)=\bm{x}_{0}$ , and $\bm{y}\left(t_{f}\right)=\bm{y}_{f}$, where $\bm{x}_{0},\bm{y}_{f}$ are randomly distributed as $P\left(\bm{x}_{0},\bm{y}_{f}\right)$ for the case of an unconditional boundary. For the case of an initial conditional boundary condition, one has that, given a known output value $\bm{y}_{0}$, the distribution of $\bm{x}_{0}$ has a distribution that depends on the output value of $\bm{y}_{0}$: $$Q_{x}\left(\bm{x}_{0}\right)=Q_{x}\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)\,.$$ It has been shown in work on stochastic bridges [@hairer2007analysis] that sampling using a stochastic partial differential equation (SPDE) can be applied to cases where one of the boundary conditions is free. To define an SPDE the other boundary condition on $\bm{x}$ is specified so that $\dot{\bm{x}}\left(t_{f}\right)=\bm{A}_{x}\left(\bm{\phi}\left(t_{f}\right)\right)$, with a boundary condition for $\dot{\bm{y}}$ so that $\dot{\bm{y}}\left(t_{0}\right)=\bm{A}_{y}\left(\bm{\phi}\left(t_{0}\right)\right)$. This is consistent with the open boundary conditions of the path integral in real time, since in the limit of $\Delta t\rightarrow0$ , the path integral weight implies that one must have $\dot{\bm{x}}\left(t_{f}\right)=\bm{A}_{x}\left(\bm{\phi}\left(t_{f}\right)\right)+O\left(\sqrt{\Delta t}\right)$ and $\dot{\bm{y}}\left(t_{0}\right)=\bm{A}_{y}\left(\bm{\phi}\left(t_{0}\right)\right)+O\left(\sqrt{\Delta t}\right)$, for almost all paths. The effect of the additional constraint vanishes as $\Delta t\rightarrow0$, as it contributes a negligible change to the entire path integral. This condition is necessary in order to have a well-defined partial differential equation in higher dimensions. Extra-dimensional equilibration is not used for conventional SDE sampling, as direct evolution is more efficient. 
However, we will show that SPDE sampling is applicable to time-symmetric propagation, where direct sampling is not possible. In this section, a simplification is made by rescaling the variables to make the diffusion $d^{\mu}\left(t\right)$ independent of time and index, i.e., $d^{\mu}\left(t\right)=d$. We also assume that there is no explicit time-dependence in the Hamiltonian. The general solution is given in the Appendix. The SPDE is obtained as follows. First, suppose that $G\left([\bm{\phi}],\tau\right)$ satisfies a functional partial differential equation of the form: $$\frac{\partial G}{\partial\tau}=\int_{t_{0}}^{t_{1}}dt\sum_{\mu}\frac{\delta}{\delta\phi^{\mu}(t)}\left[-\mathcal{A}^{\mu}\left(\bm{\phi},t\right)+d\frac{\delta}{\delta\phi^{\mu}(t)}\right]G\,.\label{eq:functional FPE}$$ In order that the asymptotic result agrees with the desired expression (\[eq:probability\_solution\]) for $G$, it follows from functional differentiation of Eq (\[eq:total probability\]) that one must define $\bm{\mathcal{A}}[\bm{\phi}]$ so that: $$\mathcal{A}^{\mu}\left(\bm{\phi},t\right)=-d\frac{\delta}{\delta\phi^{\mu}(t)}\int_{t_{0}}^{t_{f}}L_{c}(\bm{\phi},\dot{\bm{\phi}})dt.$$ This is a variational calculus problem, with one boundary fixed and the other free. Variations vanish at the time boundaries where $\bm{\phi}$ is fixed. At the free boundaries, $\dot{\bm{\phi}}=\bm{A}$, as explained above. In either case the boundary contributions vanish, provided that: $$\Delta\phi^{\nu}\frac{\partial L}{\partial\dot{\phi}^{\mu}}=\frac{\Delta\phi^{\nu}}{d}\left(\dot{\phi}^{\mu}-A^{\mu}\right)=0\,.$$ As a result, there are two types of natural boundary terms that allow partial integration to obtain Euler-Lagrange equations. Either one can set $\Delta\phi^{\mu}=0$ to give a fixed Dirichlet boundary term, or else one can set $\dot{\phi}^{\mu}=A^{\mu}$, to give an open Neumann boundary term. This allows one to obtain Euler-Lagrange type equations with an extra-dimensional drift defined as: $$\begin{aligned} \mathcal{A}^{\mu}\left(\bm{\phi},t\right) & =d\left[\frac{d}{dt}\frac{\partial L}{\partial\dot{\phi}^{\mu}}-\frac{\partial L}{\partial\phi^{\mu}}\right]\nonumber \\ & =d\left[\frac{d}{dt}v^{\mu}+v^{\nu}\partial_{\mu}A^{\nu}+\partial_{\mu}V\right]\,,\end{aligned}$$ where $v^{\mu}=\left[\dot{\phi}^{\mu}-A^{\mu}\right]/d$. The functional Fokker-Planck equation given above is then equivalent to a stochastic partial differential equation (SPDE): $$\frac{\partial\bm{\phi}}{\partial\tau}=\bm{\mathcal{A}}[\bm{\phi}]+\bm{\zeta}\left(t,\tau\right)\,,\label{eq:SPDE}$$ where the stochastic term $\bm{\zeta}$ is a real delta-correlated Gaussian noise such that $\left\langle \zeta^{\mu}\left(t,\tau\right)\zeta^{\nu}\left(t',\tau'\right)\right\rangle =2d\delta^{\mu\nu}\delta\left(t-t'\right)\delta\left(\tau-\tau'\right)$.
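As a concrete illustration of Eq (\[eq:SPDE\]), the sketch below (Python; the grid sizes and variable names are illustrative assumptions, not taken from the text) integrates the extra-dimensional equation for the simplest drift-free case $\bm{A}=0$, for which the extra-dimensional drift reduces to $\ddot{\bm{\phi}}$, with a Dirichlet boundary at $t_{0}$ and an open boundary at $t_{f}$.

```python
import numpy as np

# Drift-free case A = 0: the extra-dimensional drift reduces to the second
# time derivative, so the SPDE becomes  d phi/d tau = phi_tt + zeta, with
# <zeta(t,tau) zeta(t',tau')> = 2 d delta(t-t') delta(tau-tau').
d = 1.0                       # diffusion coefficient (illustrative)
nt, dt = 51, 0.02             # real-time grid, so t_f = 1
dtau, ntau = 1.0e-4, 50000    # virtual-time step and step count (tau_max = 5)
rng = np.random.default_rng(1)

x = np.zeros(nt)              # one forward-propagating component
x[0] = 0.0                    # Dirichlet boundary: x(t_0) = x_0

for _ in range(ntau):
    # open (Neumann) boundary at t_f: x_dot(t_f) = A = 0, imposed via a ghost point
    x_ext = np.concatenate(([x[0]], x, [x[-1]]))
    lap = (x_ext[2:] - 2.0 * x_ext[1:-1] + x_ext[:-2]) / dt**2
    noise = rng.normal(0.0, np.sqrt(2.0 * d * dtau / dt), size=nt)
    x[1:] = x[1:] + lap[1:] * dtau + noise[1:]   # x[0] stays fixed

# Averaged over many independent runs, <x(t)^2> equilibrates to d*t for this
# pure-diffusion case, as in the convergence test of the numerical-methods section.
```

This sketch only illustrates the structure of the update in $\tau$; convergence can be accelerated with the methods referenced below.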
Coefficients ------------ Introducing first and second derivatives, $\dot{\bm{\phi}}\equiv\partial\bm{\phi}/\partial t$ and $\ddot{\bm{\phi}}\equiv\partial^{2}\bm{\phi}/\partial t^{2}$, there is an expansion for the higher-dimensional drift term $\bm{\mathcal{A}}$ in terms of the field time-derivatives: $$\bm{\mathcal{A}}=\ddot{\bm{\phi}}+\bm{c}\dot{\bm{\phi}}+\bm{a}\,.$$ Here, $\bm{c}$ is a circulation matrix that only exists when the usual potential conditions on the drift are not satisfied [@drummond2017forward], while $\bm{a}$ is a pure drift without derivatives: $$\begin{aligned} c^{\mu\nu} & =\partial_{\mu}A^{\nu}-\partial_{\nu}A^{\mu}\,.\\ a^{\mu} & =\partial_{\mu}U\,.\nonumber \end{aligned}$$ The function $U$ is an effective potential, which acts to generate an effective force on the trajectories: $$U=dV-\frac{1}{2}\sum_{\nu}\left(A^{\nu}\right)^{2}.$$ The final stochastic partial differential equation that $\bm{\phi}$ must satisfy is then: $$\frac{\partial\bm{\phi}}{\partial\tau}=\ddot{\bm{\phi}}+\bm{C}\dot{\bm{\phi}}+\bm{a}+\bm{\zeta}\left(t,\tau\right)\,.\label{eq:finalPSDE}$$ The final result is a classical field equation in an extra space-time dimension with an additional noise term. It has a steady-state that is equivalent to a full quantum evolution equation, and is identical to classical evolution in real time in the zero-noise, classical limit, as shown in the next subsection. The equations can be treated with standard techniques for stochastic partial differential equations [@Werner:1997], except that the equations have $n_{d}+1$ dimensions in a manifold with $n_{d}$ space-time dimensions. The simplest case, for a single mode, has $n_{d}+1=2$ dimensions. In computational implementations, one can speed up convergence to the steady-state using Monte-Carlo acceleration [@besag1994comments]. Classical limit --------------- The classical limit is for $d\rightarrow0$. In this limit the higher-dimensional equations are noise-free and diffusive. Including a circulation term in case potential conditions are not satisfied, one has: $$\frac{\partial\phi^{\mu}}{\partial\tau}=\ddot{\phi}^{\mu}+\left[\partial_{\mu}A^{\nu}-\partial_{\nu}A^{\mu}\right]\dot{\phi}^{\nu}-A^{\nu}\partial_{\mu}A^{\nu}\,.$$ Substituting the classical trajectory solution, $\dot{\phi}^{\nu}=A^{\nu}$, one sees immediately that for classical trajectories, $$\frac{\partial\phi^{\mu}}{\partial\tau}=\ddot{\phi}^{\mu}-A^{\nu}\partial_{\nu}A^{\mu}.$$ However, on this trajectory, the second derivative term simplifies to: $$\ddot{\phi}^{\mu}=\frac{d}{dt}A^{\mu}=\dot{\phi}^{\nu}\partial_{\nu}A^{\mu},$$ and therefore one obtains: $$\frac{\partial\phi^{\mu}}{\partial\tau}=\mathcal{A}^{\mu}\left(\bm{\phi}\right)=0\,.\label{eq:classical_limit}$$ This extra-dimensional equation therefore has an exact steady state solution corresponding to the integrated classical field evolution in real time, namely: $$\begin{aligned} \bm{x}(t) & =\bm{x}(t_{0})+\int_{t_{0}}^{t}\bm{A}_{x}(t')dt'\nonumber \\ \bm{y}(t) & =\bm{y}(t_{f})-\int_{t}^{t_{f}}\bm{A}_{y}(t')dt'\,.\end{aligned}$$ Both the initial and final boundary term equations are satisfied provided one chooses $\bm{x}\left(t_{0}\right)=\bm{x}_{0}$ and $\bm{y}\left(t_{f}\right)=\bm{y}_{f}$, if these are compatible, that is, if the dynamical equations have a solution. 
If one uses these equations to solve for $\bm{y}(t_{0})$, the solution can be rewritten in a more conventional form of a classical solution with initial conditions: $$\bm{\phi}\left(t\right)=\bm{\phi}\left(t_{0}\right)+\int_{t_{0}}^{t}\bm{A}\left(t'\right)dt'.$$ The importance of imposing future-time boundary conditions in classical field problems like radiation-reaction has long been recognized in electrodynamics, including work by Dirac [@dirac1938pam], as well as Wheeler and Feynman [@wheeler1945interaction]. In such theories various field components typically require future-time restrictions on their dynamics. Hence the fact that such future-time boundaries arise in the classical limit found here should not be very surprising. Dirac [@dirac1938pam] described his result, which effectively gives a future boundary condition on electron acceleration, as “*the most beautiful feature of the theory*”. He explains: “*We now have a striking departure from the usual ideas of mechanics. We must obtain solutions of our equations of motion for which the initial position and velocity of the electron are prescribed, together with its final acceleration, instead of solutions with all the initial conditions prescribed.*” If Dirac’s type of dynamical restriction is compared with the classical limit obtained here, there are clear similarities. His approach gave a dynamical condition required to derive the correct time evolution, using a restriction on the *future* boundaries of the radiation field. It is a striking feature of the present approach that Dirac’s idea of a future boundary condition arises naturally from the classical limit of our equations. Time-symmetric stochastic differential equation ----------------------------------------------- The path integrals correspond to a functional integral over stochastic paths. Hence the trajectories can be written in an alternative, intuitive form after probabilistic sampling, as a time-symmetric stochastic differential equation [@ma1994solving], with: $$\begin{aligned} \bm{x}(t) & =\bm{x}(\bm{y}_{0},t_{0})+\int_{t_{0}}^{t}\bm{A}_{x}(t')dt'+\int_{t_{0}}^{t}d\bm{w}_{x}\nonumber \\ \bm{y}(t) & =\bm{y}(t_{f})-\int_{t}^{t_{f}}\bm{A}_{y}(t')dt'-\int_{t}^{t_{f}}d\bm{w}_{y}\,.\end{aligned}$$ Here, for an initial conditional boundary, $\bm{x}(\bm{y}_{0},t_{0})$ is a random function that depends on $\bm{y}_{0}$, and $\bm{y}(t_{f})$ is a random input function for $\bm{y}_{f}$. The two fields are propagated in the positive and negative time directions respectively, sometimes called “forward-backward” equations, while the noise terms $d\bm{w}$ are correlated over short times so that, over a small interval $dt$: $$\left\langle dw_{\mu}\left(t\right)dw_{\nu}\left(t\right)\right\rangle =d^{\mu}\delta_{\mu\nu}dt\,.$$ The compelling feature of these equations is that they unify two important properties: time reversibility and randomness. These types of equations also occur in stochastic control theory, and have an extensive mathematical literature proving their existence and other properties [@ma1994solving]. However, while they provide an insight into the structure of the stochastic equations, they cannot be readily solved using conventional algorithms for stochastic differential equations. This can be recognized by attempting to write the equations as forward time stochastic differential equations.
We define $\bar{\bm{y}}\left(t\right)$ as a time-reversed copy of $\bm{y}\left(t\right)$, i.e., let $t_{-}=t_{0}+t_{1}-t$, and $$\bar{\bm{y}}\left(t\right)=\text{\ensuremath{\bm{y}\left(t_{-}\right)}}\,.$$ The stochastic differential equation that results, treating each argument as the same time $t$, is: $$\begin{aligned} d\bm{x} & =\bm{A}_{x}\left(\bm{x}\left(t\right),\bar{\bm{y}}\left(t_{-}\right),t\right)dt+d\bm{w}_{x}\nonumber \\ d\bar{\bm{y}} & =-\bm{A}_{y}\left(\bm{x}\left(t_{-}\right),\bar{\bm{y}}\left(t\right),t_{-}\right)dt-d\bm{w}_{y}\,.\end{aligned}$$ Here, $\bm{x}\left(t_{0}\right)=\bm{x}(\bm{y}_{0},t_{0})$ and $\bar{\bm{y}}\left(t_{0}\right)=\bm{y}_{f}$ are now both “initial” conditions, but the $y$ coordinate is replaced by $\bar{y}$ instead. In other words, we can regard the stochastic differential equations as having a forward-time stochastic propagation if the drift terms include complementary fields defined at *different* times. However, non-locality in time prevents one from using standard, local-time algorithms for solving even these conventional looking equations as ordinary stochastic differential equations. This behavior is not surprising, physically. If these fields had local drift terms, they would be causal, local theories that satisfy Bell’s theorem, and do not correspond to quantum theory. It is possible that analytic equations like this can be used to develop a stochastic perturbation theory [@Chaturvedi1999] for quantum fields. Since forward-backward stochastic equations occur in other areas as well, such techniques may have wider applicability. Numerical methods ----------------- A variety of numerical techniques can be used to implement path integrals with a time-symmetric action. In this paper we solve the equivalent higher-dimensional partial stochastic differential equation with a finite difference implementation. This permits Neumann, Dirichlet and other boundary conditions to be imposed. We also explain strategies for dealing with future time boundaries, which is the most obvious practical issue with this approach. ### SPDE integration First, we demonstrate convergence of the higher dimensional method, using a central difference implicit method that iterates to obtain convergence at each step, including an iteration of the boundary conditions. The method is similar to a central difference method described elsewhere [@drummond1991computer; @Werner:1997]. A simple finite difference implementation of the Laplacian is used to implement non-periodic time boundaries. ![Example of SPDE solution with an extra dimension. The component $x$ propagates in the positive time direction as a random Wiener process. The expected variance for $\tau\rightarrow\infty$ is $\left\langle x^{2}\right\rangle =1+t$, with $x\left(t,0\right)=0$, $x\left(0,\tau\right)=v$, and $\left\langle v^{2}\right\rangle =1$. Fluctuations are sampling errors due to a finite number of $10000$ trajectories. Variance error bars due to sampling errors were estimated as $\pm2.5\%$, in good agreement with the difference between exact and simulated variance. A semi-implicit finite difference method [@drummond1991computer; @werner1997robust] was used to integrate the equations, with step-sizes of $\Delta\tau=0.0002$ and $\Delta t=0.03$. Errors from the finite step-size in $\tau$ were negligible. 
\[fig:Quantum-diffusion\]](Fig4){width="1\columnwidth"} In order to demonstrate convergence, Fig (\[fig:Quantum-diffusion\]) gives the computed numerical variance in an exactly soluble example of a stochastic differential equation with no drift term. We treat one variable and $\bm{C}=\bm{a}=0$, using a public-domain SPDE solver [@kiesewetter2016xspde] with a random Gaussian initial condition of $x(t=0)=v$ where $\left\langle v^{2}\right\rangle =1$, so that: $$x(t)=v+\int_{t_{0}}^{t}dw_{x}\,.$$ This is a case of pure diffusion, where one expects the final equilibrium solution as $\tau\rightarrow\infty$ to be $\left\langle x^{2}\right\rangle =1+t$. From Eq (\[eq:finalPSDE\]), the corresponding higher-dimensional stochastic process has boundary conditions of $x(t=0)=v$ and $\dot{x}(t=t_{f})=0$, while satisfying a stochastic partial differential equation: $$\frac{\partial x}{\partial\tau}=\ddot{x}+\zeta\left(t,\tau\right)\,.$$ From the numerical results in Fig (\[fig:Quantum-diffusion\]), the expected variance is reached uniformly in real time $t$ after pseudo-time $\tau\sim$ 2.5, to an excellent approximation, reaching $\left\langle x^{2}\right\rangle =1.95\pm0.05$ at $t=t_{f}=1$ and $\tau=5$. For the examples given here, our focus is on accuracy, not numerical efficiency. The purpose of these examples is simply to demonstrate how this approach works. Checks were made to quantitatively estimate sampling error and step-size error in $\tau$. Substantial improvements in efficiency appear possible. It should be feasible to combine Ritz-Galerkin [@matthies2005galerkin], spectral [@Werner:1997], or other methods [@keese2003review] with boundary iteration. The MALA technique for accelerated convergence is also applicable [@besag1994comments]. Propagation with known end-points --------------------------------- The techniques described above can be used to calculate probabilities of a given path amongst all the possible quantum paths in phase-space. This requires the solution of a higher-dimensional PSDE. Typically it is assumed *a priori*, that one already knows both the initial conditional $x$-distribution, $P\left(\bm{x}_{0}\left|\bm{y}_{0}\right.\right)$, and can compute the final marginal $\bm{y}$ distribution, $Q_{y}\left(\bm{y}_{f}\right)$. Yet, how does one calculate the marginal distribution of a Q-function for $\hat{\rho}_{f}$ at a future time? While there are many possible approaches, here we outline some ways to achieve this within the stochastic framework. There is insufficient space to describe all these approaches in detail here. [Ground states]{} : To obtain a ground state or stationary state of finite entropy for $\hat{H}=\hat{H}_{1}+\hat{H}_{2}$, one may proceed by adiabatic passage as in some experiments [@luothomas2007measurement]. A state $\hat{\rho}_{0}$ which is stationary for $\hat{H}_{1}$ is constructed. This could be the non-interacting ground state. The full Hamiltonian is defined as $\hat{H}\left(t\right)=\hat{H}_{1}+\lambda(t)\hat{H}_{2}$. Here $\lambda(t)$ is varied so that $\lambda(0)=\lambda(2T)=0$, and $\lambda(T)=1$. In the limit of slow passage, the dynamical path has known end-points $\hat{\rho}_{f}=\hat{\rho}_{0}$, so that the future marginal distribution is known. The state at $t=T$ is approximately stationary. [Transitional paths]{} : If both initial and final distributions are known, samples of all intermediate paths for $t_{0}\le t\le t_{f}$ can be calculated. 
This provides a means to understand quantum dynamical processes and the paradoxes of measurement theory, via the probability distribution of the trajectories that are sampled while reaching a known final quantum state. This is relevant to quantum ontology [@drummond2019q]. [Dynamical solutions]{} : To obtain a true dynamical solution, a known state $\hat{\rho}_{0}$ at $t_{0}$ must be evolved to an unknown state $\hat{\rho}_{f}$ at a time $t_{f}$. This requires Metropolis or similar Monte-Carlo sampling, by using the dynamical equations as a means to generate samples of $Q(\bm{y}_{0})$. The algorithm involves an initial estimated $\bm{y}_{f}$. A stochastic process then generates a distribution for $\bm{y}_{f}$, and hence the known distribution for the marginal $\bm{y}_{0}$. Details of this procedure will be treated elsewhere. [Canonical ensembles]{} : If a many-body state is known to be in a canonical ensemble at thermal equilibrium, then it is generally assumed that $\hat{\rho}=\exp\left(-\beta\left(\hat{H}-\mu\hat{N}\right)\right)$ where $\beta=1/k_{B}T$, $T$ is the temperature, and $\mu$ is the chemical potential. This can be handled through an ’imaginary time’ calculation, such that $d\hat{\rho}/d\beta=-\frac{1}{2}\left[\hat{H},\hat{\rho}\right]_{+}$, which involves an anti-commutator rather than a commutator. The operator equation can be turned into a phase-space equation and treated in a similar way to the dynamical case, with additional potential terms [@drummond2004canonical]. [Transitional ensembles]{} : If a canonical ensemble is known at two different values of both $\beta$ and $\mu$, then the stochastic techniques defined above can be used to define a transition path and evaluate transitional ensemble properties at intermediate $\beta$ and $\mu$ values. [Conditional inference]{} : One or more future outputs $\bm{y}_{f}$ may be the macroscopic result from an amplifier. If one measures this in the future, then the dynamics can be conditioned on the value of $\bm{y}_{f}$, which may be used to infer information about a state in the past. \[sec:Quadratic-Hamiltonian-Examples\] Examples =============================================== Hamiltonians in quantum field theory of the type analyzed here usually have quadratic and quartic terms. In this section we consider several examples, with details in single-mode cases. Let the general Hamiltonian have the form $\widehat{H}=\widehat{H}_{0}+\widehat{H}_{S}+\widehat{H}_{I}$. Here $\widehat{H}_{0}$ is a free field term, $\widehat{H}_{S}$ describes quadrature squeezing, found in Hawking radiation or parametric down-conversion, and $\widehat{H}_{I}$ is a quartic nonlinear particle scattering interaction. Each of these cases will be treated separately below for simplicity, but they can be combined if required.
Free-field case --------------- After discretizing on a momentum lattice, and using the Einstein summation convention, the free-field Hamiltonian can be written in normally-ordered form as $$\widehat{H}=\hbar\omega_{ij}\hat{a}_{i}^{\dagger}\hat{a}_{j}\,.$$ The corresponding Q-function equations are: $$\dot{Q}^{\alpha}=-i\omega_{ij}\left[\frac{\partial}{\partial\alpha_{j}^{\ast}}\alpha_{i}^{\ast}-\frac{\partial}{\partial\alpha_{i}}\alpha_{j}\right]Q^{\alpha}\,.$$ Hence, the coherent amplitude evolution equations are: $$\frac{d\alpha_{i}}{dt}=-i\omega_{ij}\alpha_{j}.\label{linear_evolution}$$ The simplest case is a single-mode simple harmonic oscillator Hamiltonian, such that: $\widehat{H}=\hbar\omega\hat{a}^{\dagger}\hat{a}\,.$ This corresponds to a characteristic equation of $\dot{\alpha}=-i\omega\alpha.$ The expectation value of the coherent amplitude in the Q-function has the equation: $$\frac{\partial}{\partial t}\left\langle \alpha\right\rangle _{Q}=-i\omega\left\langle \alpha\right\rangle _{Q},$$ which is identical to the corresponding Heisenberg equation expectation value. There is no diffusive behavior or noise for these terms, and as a result the Q-function has an exactly soluble, deterministic quantum dynamics. The evolution is noise-free, with no need to make the transformations outlined above, since from (\[eq:classical\_limit\]), the steady-state in extra dimensions is given by solving(\[linear\_evolution\]). There is no difference here between classical and quantum dynamics, as pointed out by Schrödinger [@Schrodinger_CS]. Squeezed state evolution ------------------------ Next, we consider quadratic interaction terms that are mapped to second-order derivatives in the Q-function. These cause squeezed state generation and include quantum noise. They contain dynamics that leads to a model for quantum measurement as well as quantum paradoxes, including EPR and Bell inequality violations. Following the notation of Eq (\[eq:General hermitian\]), the general squeezing interaction term is $\widehat{H}_{S}=\hbar\sum_{ij=0}^{M}\left[g_{ij00}\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}+g_{00ij}\hat{a}_{i}\hat{a}_{j}\right]/2$. Such quadrature squeezing interactions are found in many areas of physics [@Drummond2004_book]. They illustrate how the Q-function equation behaves in the simplest nontrivial case where there is a diffusion term that is not positive-definite. We will investigate this in some detail, with numerical examples. This case illustrates very clearly how complementary variance changes are related to complementary time propagation directions. Physically, these terms arise from parametric interactions, and lead to the dynamics that cause quantum entanglement. They are widespread, occurring in systems ranging from quantum optics to black holes, via Hawking radiation. The simplest case is a single-mode quantum squeezed state with $$\widehat{H}=\frac{i\hbar}{2}\left[\hat{a}^{\dagger2}-h.c.\right]\,.$$ ### Q-function dynamics We can calculate directly how the Q-function evolves in time. Applying the correspondence rules as previously, one obtains a Fokker-Planck type equation, now with second-order terms. 
Combining these terms into one equation gives: $$\begin{aligned} \frac{dQ^{\alpha}}{dt} & =-\left[\frac{\partial}{\partial\alpha}\alpha^{\ast}+\frac{1}{2}\frac{\partial^{2}}{\partial\alpha^{2}}+h.c.\right]Q^{\alpha}\,.\end{aligned}$$ Hence, on using the real quadrature definitions of Eq (\[eq:real-quadratures\]) with $e^{i\eta}=i$, and making a variable change so that $i\alpha=\left(x+iy\right)/2$, we obtain $$\frac{dQ}{dt}=\left[\partial_{x}x-\partial_{y}y+\partial_{x}^{2}-\partial_{y}^{2}\right]Q\,.$$ This demonstrates the typical behavior of unitary Q-function equations. The diffusion matrix is traceless and equally divided into positive and negative definite parts. In this case the $X_{+}$ quadrature decays, but has positive diffusion, while the $X_{-}$ quadrature shows growth and amplification, but has negative diffusion in the forward time direction. The amplified quadrature, which corresponds to the measured signal of a parametric amplifier, has a negative diffusion and therefore is constrained by a future time boundary condition. If initially factorizable, the $Q$-function solutions remain factorized as a product $Q=Q_{+}Q_{-}$. Then, if $t_{-}=t_{1}+t_{2}-t$, the time-evolution is diffusive, with an identical structure in each of two different time directions: $$\begin{aligned} \frac{dQ_{-}}{dt_{-}} & =\partial_{y}\left[-y+\partial_{y}\right]Q_{-}\nonumber \\ \frac{dQ_{+}}{dt} & =\partial_{x}\left[-x+\partial_{x}\right]Q_{+}\,.\end{aligned}$$ The corresponding forward-backwards SDE is uncoupled, with decay and stochastic noise occurring in each time direction: $$\begin{aligned} x(t) & =x(t_{0})-\int_{t_{0}}^{t}x(t')dt'+\int_{t_{0}}^{t}dw_{x}\nonumber \\ y(t) & =y(t_{f})-\int_{t}^{t_{f}}y(t')dt'-\int_{t}^{t_{f}}dw_{y}\,,\end{aligned}$$ where $\left\langle dw_{\mu}dw_{\nu}\right\rangle =2\delta_{\mu\nu}dt$. From these equations one can calculate immediately that: $$\begin{aligned} \frac{d}{dt}\left\langle x^{2}\right\rangle & =2\left(1-\left\langle x^{2}\right\rangle \right)\nonumber \\ \frac{d}{dt_{-}}\left\langle y^{2}\right\rangle & =2\left(1-\left\langle y^{2}\right\rangle \right).\end{aligned}$$ This equation for the variance time-evolution implies that the variance is *reduced* in each quadrature’s intrinsic diffusion direction, for an initial vacuum state, with the solution in forward time given by: $$\begin{aligned} \left\langle x^{2}\left(t\right)\right\rangle & =1+e^{-2t}\nonumber \\ \left\langle y^{2}\left(t\right)\right\rangle & =1+e^{2t}.\label{eq:Squeezed-Q-function-solution}\end{aligned}$$ Therefore, the variance reduction occurs in the forward time direction for $x$, giving rise to quadrature squeezing, and in the backward time direction for $y$, corresponding to gain in the forward time direction. However, neither anti-normally ordered variance is reduced below one. This is the minimum possible, corresponding to zero variance in the unordered operator case. With this choice of units, the diffusion coefficient is $d=2$, so the overall Lagrangian is: $$L_{c}=\frac{1}{4}\left[\left(\dot{x}+x\right)^{2}+\left(\dot{y}-y\right)^{2}\right]-1.$$ The net effect of the stochastic processes in opposite time directions is that growth in the uncertainty of one quadrature in one time direction is cancelled by the reduction in uncertainty of the other quadrature in the opposite time direction.
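The decaying quadrature obeys an ordinary forward-time Ornstein-Uhlenbeck process, so its variance can be checked by direct sampling. A minimal sketch (Python; the trajectory counts and step sizes are illustrative assumptions) is:

```python
import numpy as np

# Monte-Carlo check of the forward-time solution <x^2(t)> = 1 + exp(-2t),
# using the uncoupled SDE  dx = -x dt + dw_x  with  <dw_x^2> = 2 dt,
# and initial samples drawn from the vacuum Q-function variance <x^2(0)> = 2.
rng = np.random.default_rng(0)
ntraj, nsteps, dt = 100000, 200, 0.005       # t_f = 1 (illustrative grid)
x = rng.normal(0.0, np.sqrt(2.0), ntraj)     # initial vacuum Q-function samples

for _ in range(nsteps):
    dw = rng.normal(0.0, np.sqrt(2.0 * dt), ntraj)
    x += -x * dt + dw

print(np.mean(x**2))   # ~ 1 + exp(-2) = 1.135, up to sampling and step-size error
```

The growing quadrature $y$ cannot be sampled this way, since its noise acts towards the past; its forward-time growth mirrors the decay of $x$, which is the cancellation just described.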
This behavior is shown in Figs (\[fig:Quantum-unsqueeze\_vs\_tau\]) to (\[fig:Quantum-squeeze\_vs\_t\]), which illustrate numerical solutions of the forward-backward equations using the techniques of the previous section. ![Variance of SPDE solution with an extra dimension. The unsqueezed quadrature $y$ propagates in the negative time direction, with boundaries fixed in the future. The extra-dimensional stochastic partial differential equation is solved out to $\tau=5$ , with a future time Dirichlet boundary at $t=1$ and a past time Robin boundary at $t=0$. Fluctuations are sampling errors due to a finite number of $1600$ stochastic trajectories. Other details as in Fig (\[fig:Quantum-diffusion\]). \[fig:Quantum-unsqueeze\_vs\_tau\]](Fig5){width="1\columnwidth"} ![Example of SPDE solution with an extra dimension. The unsqueezed quadrature variance $y$ propagates in the negative time direction, with results obtained at virtual time $\tau=5$. The expected variance for $\tau\rightarrow\infty$ is $\left\langle y^{2}\left(t\right)\right\rangle =1+e^{2t}$, and is shown as the dotted line. Fluctuations are sampling errors due to a finite number of $1600$ stochastic trajectories. The two solid lines are plus and minus one standard deviations from the mean. Other details as in Fig (\[fig:Quantum-diffusion\]). \[fig:Quantum-unsqueeze\_vs\_t\]](Fig6){width="1\columnwidth"} These solutions use 1600 trajectories, and hence include sampling error. Three dimensional graphs show equilibration in the extra dimension. Two dimensional graphs show results near equilibrium at $\tau=5$, with plots of variance in $X_{\pm}$ vs time. ![Variance of SPDE solution with an extra dimension. The squeezed quadrature $x$ propagates in the positive time direction. The stochastic partial differential equation is solved out to $\tau=5$ , with a past time Dirichlet boundary at $t=0$ and a future time Robin boundary at $t=1$. Fluctuations are sampling errors due to a finite number of $1600$ stochastic trajectories. Other details as in Fig (\[fig:Quantum-diffusion\]). \[fig:Quantum-squeeze\_vs\_tau\]](Fig7){width="1\columnwidth"} ![Example of SPDE solution with an extra dimension. The squeezed quadrature variance $x$, propagates in the positive time direction, with results obtained at virtual time $\tau=5$. The expected variance for $\tau\rightarrow\infty$ is $\left\langle x^{2}\left(t\right)\right\rangle =1+e^{-2t}$, shown as the dotted line. Fluctuations are sampling errors due to a finite number of $1600$ stochastic trajectories. The two solid lines are plus and minus one standard deviations from the mean. Other details as in Fig (\[fig:Quantum-diffusion\]). \[fig:Quantum-squeeze\_vs\_t\]](Fig8){width="1\columnwidth"} ### Comparison to operator equations Defining quadrature operators $\hat{Y}=\hat{a}+\hat{a}^{\dagger}$ and $\hat{X}=i\left(\hat{a}-\hat{a}^{\dagger}\right)$, this physical system has the well-known behavior that variances change exponentially in time [@Walls1983a], in a complementary way. 
Given an initial vacuum state in which $\left\langle \hat{X}^{2}\left(0\right)\right\rangle =\left\langle \hat{Y}^{2}\left(0\right)\right\rangle =1$, the Heisenberg equation solutions for the variances are $$\begin{aligned} \left\langle \hat{X}^{2}\left(t\right)\right\rangle & =e^{-2t}\\ \left\langle \hat{Y}^{2}\left(t\right)\right\rangle & =e^{2t}.\end{aligned}$$ Hence, the $\hat{X}$ quadrature is squeezed, developing a variance below the vacuum fluctuation level, and the $\hat{Y}$ quadrature is unsqueezed, developing a large variance. This maintains the Heisenberg uncertainty product, which is invariant. Once operator ordering is taken into account, this gives an identical solution to the Q-function solution in Eq (\[eq:Squeezed-Q-function-solution\]), because the operator correspondences are for anti-normal ordering. If we use $\{\}$ to denote this, then: $$\begin{aligned} \left\langle \left\{ \hat{X}^{2}\left(t\right)\right\} \right\rangle & =1+e^{-2t}\\ \left\langle \left\{ \hat{Y}^{2}\left(t\right)\right\} \right\rangle & =1+e^{2t}.\end{aligned}$$ In both cases there is a reduction in variance in the direction of positive diffusion. If there is an initial vacuum state, then quadrature squeezing occurs in $X$ in the forward time direction, with a variance reduced below the vacuum level. Backward time squeezing occurs in $Y$, which also has forward-time gain. ### Higher dimensional stochastic equation In the matrix notation used elsewhere, this means that we have $d=2$, and: $$\bm{A}=\left[\begin{array}{c} -x\\ y \end{array}\right],$$ with $\bm{c}=0$, so that the quantum dynamics occurs as the steady-state of the higher dimensional equation: $$\frac{\partial\bm{\phi}}{\partial\tau}=\ddot{\bm{\phi}}-\bm{\phi}+\bm{\zeta}\left(t,\tau\right)\,,$$ where $\left\langle \zeta^{\mu}\left(t,\tau\right)\zeta^{\nu}\left(t',\tau'\right)\right\rangle =4\delta^{\mu\nu}\delta\left(\tau-\tau'\right)\delta\left(t-t'\right)$, with boundary values such that: $$\begin{aligned} x\left(t_{0}\right) & =x_{0}\nonumber \\ y\left(t_{f}\right) & =y_{f}\nonumber \\ \dot{x}\left(t_{f}\right) & =-x\left(t_{f}\right)\nonumber \\ \dot{y}\left(t_{0}\right) & =y\left(t_{0}\right)\,.\end{aligned}$$ These boundary conditions are known as mixed boundary conditions. They are partly Dirichlet (specified value), and partly Robin (specified linear combination of value and derivative). Numerical solutions for the squeezed $x$ equations are given in Figs (\[fig:Quantum-squeeze\_vs\_tau\]) and (\[fig:Quantum-squeeze\_vs\_t\]), while those for the unsqueezed $y$ equations are given in Figs (\[fig:Quantum-unsqueeze\_vs\_tau\]) and (\[fig:Quantum-unsqueeze\_vs\_t\]). The effects of sampling error are seen through the two solid lines, giving one standard deviation variations from the mean. Exact results are included via the dashed lines. Quartic Hamiltonian example --------------------------- While general quantum field Hamiltonians are certainly possible, here we treat the single-mode case to clearly illustrate the form of the relevant diffusion equation. This includes the most significant issues. Following the notation of Eq (\[eq:General hermitian\]), the single-mode nonlinear interaction term is: $$\widehat{H}_{I}=\hbar\frac{g}{2}\left(\hat{a}^{\dagger}\hat{a}\right)^{2}.$$ This single-mode problem can be solved using other methods, making it a useful benchmark [@drummond2014quantum].
However, when generalized to a nonlinear scalar field theory by including multiple modes and linear couplings, these analytic methods are no longer applicable. The addition of extra modes and linear couplings does not significantly change the arguments used here. We study this case in order to understand the effect of cross-couplings between the forward and backward time-evolution. ### Fokker-Planck equation in complex phase-space From the Q-function identities of (\[eq:identities\]), after re-ordering the differential operators, and taking $g=1$ for simplicity, one obtains: $$\begin{aligned} \frac{\partial Q^{\alpha}}{\partial t} & = & i\left\{ \alpha\frac{\partial}{\partial\alpha}n\left(\alpha\right)+\frac{1}{2}\left(\alpha\frac{\partial}{\partial\alpha}\right)^{2}-h.c.\right\} Q^{\alpha}\,.\end{aligned}$$ This demonstrates how the ordering identities apply. Damping and detuning terms are not included. For the quartic Hamiltonian, zeroth order and fourth order derivative terms cancel. This equation is known from earlier work in quantum optics [@Milburn:1986]. As a simple check, one can integrate the Fokker-Planck equation in phase-space to obtain moments, hence showing that: $$\frac{\partial}{\partial t}\left\langle \alpha\right\rangle _{Q}=-i\left\langle \alpha\left[\alpha\alpha^{\ast}-\frac{3}{2}\right]\right\rangle _{Q}.$$ Since the $Q$-function averages correspond to anti-normal ordering, one recovers the same expectation value dynamics as for the Heisenberg equations, which are: $$\frac{\partial}{\partial t}\left\langle a\right\rangle =-i\left\langle a^{\dagger}a^{2}\right\rangle =-i\left\langle \hat{a}\left[\hat{a}\hat{a}^{\dagger}-\frac{3}{2}\right]\right\rangle \,.$$ ### Fokker-Planck equation with constant diffusion We introduce a change of variable to a complex phase $\theta$, with a scaling factor of $\sqrt{i/2}$ to simplify the resulting algebra, so that: $$\alpha=e^{\sqrt{i/2}\theta}\,.$$ The result of changing variables in the distribution is that in $\theta$ coordinates the distribution is modified by the Jacobian of the transformation, so that: $$\begin{aligned} Q^{\theta} & = & \left\langle e^{\sqrt{i/2}\theta}\left|\hat{\rho}\right|e^{\sqrt{i/2}\theta}\right\rangle \left|\frac{\partial\alpha}{\partial\theta}\right|\nonumber \\ & = & Q^{\alpha}\left|\frac{\partial\alpha}{\partial\theta}\right|=\frac{1}{2}Q^{\alpha}\alpha\alpha^{*}.\end{aligned}$$ One also must take account of the chain rule for derivatives when changing variables, which means that $$\frac{\partial}{\partial\alpha}=\frac{\partial\theta}{\partial\alpha}\frac{\partial}{\partial\theta}=\frac{1}{\sqrt{i/2}\alpha}\frac{\partial}{\partial\theta}\,.$$ To transform to phase coordinates with constant diffusion, the Fokker-Planck equation is first transformed into a form that includes the effects of the Jacobian, followed by a variable change to the new variables. The combined effect of this is that the equation for $Q$, after the variable change, is given by: $$\frac{\partial Q^{\theta}}{\partial t}=\left[\left(1+i\right)\frac{\partial}{\partial\theta}n+\frac{\partial^{2}}{\partial\theta^{2}}+h.c.\right]Q^{\theta},$$ where we have defined a number variable equivalent to the particle number as in (\[eq:number\_variable\]), so that $n\equiv\alpha\alpha^{*}-1$. Transformation to real coordinates ---------------------------------- As proved in previous sections, in this equation the diffusion term is not positive definite.
Accordingly, just as with the squeezing Hamiltonian, there is no equivalent forward time stochastic process. To show this, let $\theta=x+iy$, and $n=\alpha\alpha^{\ast}-1=\exp\left(x-y\right)-1$, so that: $$\begin{aligned} \frac{\partial^{2}}{\partial\theta^{2}}+\frac{\partial^{2}}{\partial\theta^{*2}} & = & \frac{1}{2}\left[\frac{\partial^{2}}{\partial x^{2}}-\frac{\partial^{2}}{\partial y^{2}}\right].\end{aligned}$$ As expected from the traceless diffusion property, the equation has a simultaneous positive diffusion in one real coordinate, and negative diffusion in the other, giving the result that: $$\frac{\partial Q}{\partial t}=\left[\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)n+\frac{1}{2}\left(\frac{\partial^{2}}{\partial x^{2}}-\frac{\partial^{2}}{\partial y^{2}}\right)\right]Q\,.$$ This means that the drift term is $$A_{\pm}=-n\,,$$ and the forward and backwards equations are not factorizable, owing to the coupling term $n\left(x,y\right)$, which is proportional to particle number. The total Lagrangian is: $$\begin{aligned} L & =\frac{1}{2}\sum_{\mu}\left(\dot{\phi}^{\mu}+n\right)^{2}-(n+1).\end{aligned}$$ These equations are equivalent to a forward-backwards stochastic equation. The two stochastic equations are almost identical in each time direction, although with opposite drift terms: $$\begin{aligned} x(t) & =x(t_{0})-\int_{t_{0}}^{t}n(x\left(t'\right),y(t'))dt'+\int_{t_{0}}^{t}dw_{x}\nonumber \\ y(t) & =y(t_{f})+\int_{t}^{t_{f}}n(x\left(t'\right),y(t'))dt'-\int_{t}^{t_{f}}dw_{y}\,.\end{aligned}$$ Unlike the previous example, the two time directions are coupled to each other, since $n$ depends on both fields. This implies that scattering takes place between the positive and negative time direction propagating fields. Solving for the quantum dynamical time evolution requires an understanding of the coupled evolution of both quadrature fields. To obtain a dynamical solution from the coupled, forward-backward stochastic equations, we must transform this equation using the real, time-symmetric action principle. In this case, the equivalent extra-dimensional equation is: $$\frac{\partial\phi^{\mu}}{\partial\tau}=\ddot{\phi}^{\mu}\pm\left(1-n^{2}\right)+\zeta^{\mu}\left(t,\tau\right)\,,$$ where $\left\langle \zeta^{\mu}\left(t,\tau\right)\zeta^{\nu}\left(t',\tau'\right)\right\rangle =2\delta_{\mu\nu}\delta\left(\tau-\tau'\right)\delta\left(t-t'\right)$. Thus, these extra-dimensional dynamical equations have a remarkably simple mathematical structure. Summary\[sec:Summary\] ====================== The existence of a time-symmetric probabilistic action principle for quantum fields has several ramifications. It describes a different approach to the computation of quantum dynamics. Neither imaginary time nor oscillatory path integrals are employed. More generally, time evolution through a symmetric stochastic action can be viewed as a dynamical principle in its own right. It is equivalent to the traditional action principle of quantum field theory. The advantage is that it is completely probabilistic, even for real-time quantum dynamics. A property of this method is that it can provide an ontological interpretation of quantum mechanics. In other words, the action principle can give a description of a reality that underlies the Copenhagen interpretation. The picture is that physical fields can propagate both from the past to the future and from the future to the past.
This is a completely time-symmetric interpretation, without requiring any collapse of the wave-function. Such ontological interpretations are different to any hidden variable theory, which only allow causality from past to future. As a result, one can have quantum features including vacuum fluctuations, sharp eigenvalues and even Bell violations [@drummond2019q], within a realistic and local framework. The power of rapidly developing petascale and exascale computers appears well-suited to these approaches. Enlarged spatial lattices and increased parallelism are certainly needed. Yet this may not be as problematic to handle as either exponential complexity or the phase problems that arise in other approaches. It is intriguing that the utility of an extra dimension is widely recognized both in general relativity and quantum field theory. One may speculate that extending this action principle to curved space-time may yield novel quantum theories. This could lead to new approaches to unification. PDD thanks the hospitality of the Institute for Atomic and Molecular Physics (ITAMP) at Harvard University, supported by the NSF, and the Weizmann Institute of Science through a Weston Visiting Professorship. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. It was also funded through an Australian Research Council Discovery Project Grant DP190101480. Appendix {#appendix .unnumbered} ======== In this Appendix the higher-dimensional equilibration equations are obtained for a more general case, with a diffusion term $d^{\mu}\left(t\right)$ that is time and index dependent. We recall that $G\left([\bm{\phi}],\tau\right)$ satisfies a higher-dimensional functional partial differential equation of: $$\frac{\partial G}{\partial\tau}=\int_{t_{0}}^{t_{1}}dt\sum_{\mu}\frac{\delta}{\delta\phi^{\mu}(t)}\left[-\mathcal{A}^{\mu}\left(\bm{\phi},t\right)+d^{\mu}(t)\frac{\delta}{\delta\phi^{\mu}(t)}\right]G\,,\label{eq:functional FPE-1}$$ To obtain the steady-state solution as $\tau\rightarrow\infty$, (\[eq:probability\_solution\]) for $G$, from Eq (\[eq:total probability\]) $\bm{\mathcal{A}}[\bm{\phi}]$ must satisfy: $$\mathcal{A}^{\mu}\left(\bm{\phi},t\right)=-d^{\mu}\left(t\right)\frac{\delta}{\delta\phi^{\mu}\left(t\right)}\int_{t_{0}}^{t_{f}}L_{c}(\bm{\phi},\dot{\bm{\phi}},t)dt\,,$$ where the central-difference Lagrangian is: $$L_{c}=\sum_{\mu}\frac{1}{2d^{\mu}\left(t\right)}\left[\dot{\phi}^{\mu}\left(t\right)-A^{\mu}\left(\bm{\phi},t\right)\right]^{2}-V\left(\bm{\phi},t\right).$$ This leads to Euler-Lagrange type equations with a drift defined as: $$\begin{aligned} \mathcal{A}^{\mu}\left(\bm{\phi},t\right) & =d^{\mu}\left(t\right)\left[\frac{d}{dt}\frac{\partial L}{\partial\dot{\phi}^{\mu}}-\frac{\partial L}{\partial\phi^{\mu}}\right]\nonumber \\ & =d^{\mu}\left(t\right)\left[\frac{d}{dt}v^{\mu}+v^{\nu}\partial_{\nu}A^{\mu}+\partial_{\mu}V\right]\,,\end{aligned}$$ where $$v^{\mu}=\frac{1}{d^{\mu}\left(t\right)}\left[\dot{\phi}^{\mu}\left(t\right)-A^{\mu}\left(\bm{\phi},t\right)\right].$$ Solving for the higher-dimensional drift, the time-derivative term is: $$\begin{aligned} \frac{d}{dt}\frac{\partial L}{\partial\dot{\phi}^{\mu}} & =\frac{d}{dt}\frac{1}{d^{\mu}\left(t\right)}\left[\dot{\phi}^{\mu}\left(t\right)-A^{\mu}\right]\\ & =\left[\ddot{\phi}_{\mu}-\dot{A}_{\mu}\right]-\dot{d}_{\mu}\left(d^{\mu}\right)^{-2}\left[\dot{\phi}^{\mu}-A^{\mu}\right]\,,\nonumber \end{aligned}$$ and the remaining potential term is: $$\begin{aligned} 
\frac{\partial L}{\partial\phi^{\mu}} & =-\sum_{\nu}\partial_{\mu}A^{\nu}\frac{1}{d_{\nu}}\left(\dot{\phi}_{\nu}-A^{\nu}\right)-\partial_{\mu}V.\end{aligned}$$ Introducing first and second derivatives, $\dot{\bm{\phi}}\equiv\partial\bm{\phi}/\partial t$ and $\ddot{\bm{\phi}}\equiv\partial^{2}\bm{\phi}/\partial t^{2}$, there is an expansion for the higher-dimensional drift term $\bm{\mathcal{A}}$ in terms of the field time-derivatives: $$\bm{\mathcal{A}}=\ddot{\bm{\phi}}+\bm{c}\dot{\bm{\phi}}+\bm{a}\,.$$ Here, $\bm{c}$ is a circulation matrix, while $\bm{a}$ is a pure drift: $$\begin{aligned} c^{\mu\nu} & =d_{\nu}^{-1}\left[d^{\mu}\partial_{\mu}A^{\nu}-d^{\nu}\partial_{\nu}A^{\mu}\right]-\delta_{\nu}^{\mu}\frac{\partial}{\partial t}\ln d^{\mu}\nonumber \\ a^{\mu} & =\partial_{\mu}U-d^{\mu}\frac{\partial}{\partial t}\left[A^{\mu}/d^{\mu}\right]\,.\label{eq:FPE-coefficients}\end{aligned}$$ Here partial derivatives with respect to time indicate derivatives for the explicitly time-dependent terms *only*, where the Hamiltonian coefficients are changing in time. The function $U$ is an effective potential, which acts to generate an effective force on the trajectories: $$U=d^{\mu}\left[V-\frac{1}{2}\sum_{\nu}\left(A^{\nu}\right)^{2}/d^{\nu}\right].$$ The functional Fokker-Planck equation given above is then equivalent to a stochastic partial differential equation (SPDE): $$\frac{\partial\bm{\phi}}{\partial\tau}=\bm{\mathcal{A}}[\bm{\phi}]+\bm{\zeta}\left(t,\tau\right)\,,\label{eq:SPDE-1}$$ where the stochastic term $\bm{\zeta}$ is a real delta-correlated Gaussian noise such that $\left\langle \zeta^{\mu}\left(t,\tau\right)\zeta^{\mu}\left(t',\tau'\right)\right\rangle =2d^{\mu}\left(t\right)\delta^{\mu\nu}\delta\left(t-t'\right)\delta\left(\tau-\tau'\right)$. The final stochastic partial differential equation in $\tau$ that $\bm{\phi}$ must satisfy in detail is then: $$\frac{\partial\bm{\phi}}{\partial\tau}=\ddot{\bm{\phi}}+\bm{c}\dot{\bm{\phi}}+\bm{a}+\bm{\zeta}\left(t,\tau\right)\,.\label{eq:finalPSDE-1}$$
--- abstract: 'Rare-earth platinum bismuth ($R$PtBi) has been recently proposed to be a potential topological insulator. In this paper we present measurements of the metallic surface electronic structure in three members of this family, using angle resolved photoemission spectroscopy (ARPES). Our data shows clear spin-orbit splitting of the surface bands and the Kramers’ degeneracy of spins at the $\bar{\Gamma}$ and $\bar{M}$ points, which is nicely reproduced with our full-potential augmented plane wave calculation for a surface electronic state. No direct indication of topologically non-trivial behavior is detected, except for a weak Fermi crossing detected in close vicinity to the $\bar{\Gamma}$ point, making the total number of Fermi crossings odd. In the surface band calculation, however, this crossing is explained by another Kramers’ pair where the two splitting bands are very close to each other. The classification of this family of materials as topological insulators remains an open question.' author: - Chang Liu - Yongbin Lee - Takeshi Kondo - Eun Deok Mun - Malinda Caudle - 'Bruce N. Harmon' - 'Sergey L. Bud’ko' - 'Paul C. Canfield' - Adam Kaminski title: 'Metallic surface electronic state in half-Heusler compounds *R*PtBi (*R* = Lu, Dy, Gd)' --- Introduction ============ The discovery of topologically non-trivial states of matter opens up a new realm of knowledge for fundamental condensed matter physics. Unlike conventional materials, these “topological insulators" exhibit metallic surface states that are protected by time reversal symmetry, while maintaining an insulating bulk electronic structure. This leads to a variety of novel properties including odd number of surface Dirac fermions, strict prohibition of back-scattering, etc., paving the way to potential technical breakthroughs in e.g. quantum computing process via the application of spintronics[@Hasan_review; @Moore]. Recently, extensive theoretical and experimental efforts have led to the realization of such fascinating behaviors in e.g. the HgTe quantum wells[@Zhang_HgTe; @Zhang_HgTe2; @Zhang_HgTe3], the Bi$_{1-x}$Sb$_x$ system[@Hsieh_BiSb; @Hsieh_BiSb2; @Yazdani_BiSb] and the Bi$_2$X$_3$ (X = Te, Se) binary compounds[@Zhang_Bi2Se3; @Shen_Bi2Te3]. Numerous half-Heusler ternary compounds have been proposed, theoretically, to be potential new platforms for topological quantum phenomena[@Zhang_Heusler; @Hasan_Heusler], where the inherent flexibility of crystallographic, electronic and superconducting parameters provide a multidimensional basis for both scientific and technical exploration. The experimental determination of their topological class would set the basis for possible spintronic utilization and further studies on the interplay between the topological quantum phenomena versus e.g. the magnetic[@Canfield], superconducting[@Goll] and heavy Fermionic[@Fisk] behaviors. Theoretically, the topological insulators experience a gapless surface state protected by time reversal symmetry and thus are robust against scattering from local impurities. Such a surface state is “one half" of a normal metal in that the surface bands are strongly spin-polarized, forming a unique spin helical texture[@Hsieh_BiSb2; @Hsieh_Bi2Se3]. On the other hand, the Kramers’ theorem requires that the spin be degenerate at the Kramers’ points - $k$-points of the surface Brillouin zone where time reversal symmetry is preserved[@Kane_Z2]. 
At the interface between, say, a normal spin-orbit system and vacuum, the spin-polarized surface bands connect pairwise (Kramers’ pair), crossing the chemical potential $\mu$ an even number of times between two distinct Kramers’ points. At the interface between a topologically non-trivial material and vacuum, however, one expects the surface bands to cross $\mu$ an odd number of times[@Hasan_review]. In this paper we present a systematic survey of the surface electronic structure of half-Heusler compounds *R*PtBi (*R* = Lu, Dy, Gd) using angle resolved photoemission spectroscopy (ARPES). Our results show clear spin-orbit splitting of the surface bands that cross the chemical potential, which is nicely reproduced in the full-potential augmented plane wave calculation for a surface electronic state. The Kramers’ degeneracy of spin is unambiguously detected at both the $\bar{\Gamma}$ and $\bar{M}$ points. No direct indication of topologically non-trivial behavior is detected, except for the fact that there is a weak Fermi crossing in close vicinity of the $\bar{\Gamma}$ point, making the total number of crossings five. In the surface band calculation, however, this inner crossing is explained by two spin-orbit splitting bands that are very close to each other, forming another Kramers’ pair. In this band configuration, the total Berry phase would be zero for the half-Heusler systems, and they would not be topologically non-trivial. The detailed topological class of this family of materials thus remains an open question, requiring a detailed spin-resolved ARPES study with ultra-high momentum resolution and a direct calculation of the topological invariants based on the first-principles band structure. ![image](Fig1.pdf){width="7.1in"} ![(Color online) Surface electronic structure of GdPtBi: Comparison between ARPES data and calculational result. (a) Fermi map of GdPtBi observed by ARPES, same as Fig. 1(e). (b) Calculational surface Fermi map of GdPtBi at the Bi(111) cleaving plane. See text for details. (c) ARPES band structure along the contour $\bar{\Gamma}$-$\bar{M}$-$\bar{K}$-$\bar{\Gamma}$. The inset of (c) enhances the ARPES intensity near $\bar{M}$ and $\bar{K}$ for better visibility of the bands. (d) Calculational band structure with respect to (c). (e)-(f) Expanded figures for (b) and (d), respectively, showing six Fermi crossings. Panel (e) is rotated by 30$^\circ$ with respect to (b).[]{data-label="Fig2"}](Fig2.pdf){width="3.2in"} ![image](Fig3.pdf){width="4.5in"} Experimental ============ Single crystals of *R*PtBi (*R* = Lu, Dy, Gd) were grown out of a Bi flux and characterized by room temperature powder X-ray diffraction measurements[@Canfield; @Growth]. The crystals grow as partial octahedra with the (111) facets exposed. Typical dimensions of a single crystal are about $0.5\times0.5\times0.5$ $\textrm{mm}^3$. The ARPES measurements were performed at beamline 10.0.1 of the Advanced Light Source (ALS), Berkeley, California using a Scienta R4000 electron analyzer. Vacuum conditions were better than $3\times10^{-11}$ torr. All ARPES data was taken at $T=15$ K, above the magnetic ordering temperatures of all compounds[@Canfield]. The energy resolution was set at $\sim$ 15 meV. All samples were cleaved *in situ*, yielding clean (111) surfaces in which atoms arrange in a hexagonal lattice.
High symmetry points for the surface Brillouin zone are defined as $\bar{\Gamma}(0,0)$, $\bar{K}(k_0,0)$ and $\bar{M}(0, k_0\sqrt{3}/2)$ with unit momentum $k_0=\sqrt{6}\pi/a$, where $a$ is the lattice constant for each type of crystals. We emphasize here that no stress or pulling force is felt by the samples, which ensures that the measured data reveals the intrinsic electronic structure of the single crystals. Results and discussion ====================== We begin this survey in Fig. 1 by showing the Fermi maps of the three half-Heusler compounds *R*PtBi (*R* = Lu, Dy, Gd). Previous theoretical calculations for the bulk electronic structure[@Zhang_Heusler; @Hasan_Heusler; @Antonov] suggested that the Kramers’ crossing at the $\bar{\Gamma}$ point happens very close to $\mu$; the Fermi surface reduces to a single point (Dirac point) at $\bar{\Gamma}$. The data in Fig. 1 shows that, at least in the (111) cleaving plane, this is not the case. Instead there are several bands crossing $\mu$ in the vicinity of both the $\bar{\Gamma}$ and $\bar{M}$ points. The overall Fermi surface for all three half-Heusler compounds are similar, indicating a similar cleaving plane and band structure for all members. By comparing the band structure measured at the (111) surface with results of band calculations for GdPtBi (Fig. 2), we find the cleaving plane to be Bi(111), marked by a red parallelogram in Fig. 1(a). A closer look at Fig. 1(c)-(e) reveals that the $\bar{\Gamma}$ pockets have different sizes for different Heusler members. For example the circular $\bar{\Gamma}$ pockets in LuPtBi are larger in size than those in GdPtBi. This indicates a different effective electron occupancy for different members of the half-Heusler family. One should also note that in Fig. 1(e) the inner of the two bright $\bar{\Gamma}$ pockets is hexagonal in shape, reminiscent of the hexagonal shape of the Dirac cone in Bi$_2$Te$_3$ (Ref. [@Shen_Bi2Te3]), which is explained by higher order terms in the $k \cdot p$ Hamiltonian[@Fu]. This hexagonal shape is very nicely reproduced in the calculation \[Fig. 2(b)\]. For clarifying the topological class of the half-Heuslers, two immediate questions follow the observations in Fig. 1: (1) Are the observed bands actually arising due to the sample surface? (2) Exactly how many times do the bands intersect the chemical potential along the $\bar{\Gamma}$-$\bar{M}$ line segment? ![image](Fig4.pdf){width="4.8in"} ![Band structure analysis at the vicinity of $\bar{M}$ \[red box in Fig. \[Fig1\](c)\]. Data is taken on LuPtBi samples. (a)-(d) Binding energy dependence of band structure near $\bar{M}$. Map location in the surface Brillouin zone is shown in Panel (e). (f) Theoretical band map at the chemical potential for GdPtBi. (g),(h) Band maps for two perpendicular directions marked by red lines in (g). There are in total two Fermi crossings along the $\bar{\Gamma}$-$\bar{M}$ line segment at the vicinity of $\bar{M}$.[]{data-label="Fig5"}](Fig5.pdf){width="3.5in"} Fig. 2 shows the comparison between the ARPES data and a calculational surface state in GdPtBi. For both the band structure and Fermi surface calculation, we used a full-potential linear augmented plane wave (FPLAPW) method[@wien2k] with a local density functional[@LDA]. The crystallographic unit cell is generated such that the (111) direction of the *fcc* Brillouin zone points along the $z$-axis. For calculation of the surface electronic structure, supercells with three unit cell layers and a 21.87 a.u. vacuum is constructed. 
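For bookkeeping, the surface high-symmetry points defined above can be tabulated directly from the lattice constant and strung into the $\bar{\Gamma}$-$\bar{M}$-$\bar{K}$-$\bar{\Gamma}$ contour used for the band maps. The sketch below simply follows the convention $k_0=\sqrt{6}\pi/a$ quoted earlier; the lattice constant is an illustrative number, not one of the measured values, and the helper itself is hypothetical rather than part of the calculation workflow.

```python
import numpy as np

def surface_bz_points(a_angstrom):
    """Surface-Brillouin-zone high-symmetry points for the (111) face,
    following the convention used in the text: k0 = sqrt(6)*pi/a."""
    k0 = np.sqrt(6.0) * np.pi / a_angstrom
    return {
        "Gbar": np.array([0.0, 0.0]),
        "Kbar": np.array([k0, 0.0]),
        "Mbar": np.array([0.0, k0 * np.sqrt(3.0) / 2.0]),
    }

# Illustrative fcc lattice constant in Angstrom (not a measured RPtBi value):
pts = surface_bz_points(6.6)
path = ["Gbar", "Mbar", "Kbar", "Gbar"]   # contour used for the band maps in Fig. 2
for p in path:
    print(p, np.round(pts[p], 3))
```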
We calculated band structures of all six possible surface endings (Gd-Bi-Pt-bulk, Gd-Pt-Bi-bulk, Bi-Gd-Pt-bulk, Bi-Pt-Gd-bulk, Pt-Gd-Bi-bulk, Pt-Bi-Gd-bulk); only the Bi-Pt-Gd-bulk construction shows good agreement with experiment \[Fig. 2(b), (d)-(f)\]. Structural data were taken from a reported experimental result[@Haase]. To obtain the self-consistent charge density, we chose 48 $k$-points in the irreducible Brillouin zone, and set $R_{\textrm{MT}}\times k_{\textrm{max}}$ to 7.5, where $R_{\textrm{MT}}$ is the smallest muffin-tin radius and $k_{\textrm{max}}$ is the plane-wave cutoff. We used muffin-tin radii of 2.5, 2.4 and 2.4 a.u. for Gd, Bi, and Pt respectively. For the non-magnetic-state calculation valid for comparison with ARPES results at 15 K, the seven 4*f* electrons of Gd atoms were treated as core electrons with no net spin polarization. The atoms near the surface (Bi, Pt, Gd) were relaxed along the $z$-direction until the forces exerted on the atoms were less than 2.0 mRy/a.u.. With this optimized structure, we obtained self-consistency with 0.01 mRy/cell total energy convergence. After that, we calculated the band structure and two dimensional Fermi surface in which we divided the rectangular cell connecting four $\bar{K}$-points by $40 \times 40$, yielding 1681 $k$-points. Even at first glance, Fig. 2 gives the impression of remarkable agreement between theory and experiment. All basic features observed by ARPES - the overall shape and location of the Fermi pockets \[Fig. 2(a)-(b)\], the binding energies of the bands \[Fig. 2(c)-(d)\] - are well reproduced by the calculation. The main point of this figure, however, is the fact that band calculations show a total of six Fermi crossings along the $\bar{\Gamma}$-$\bar{M}$ line segment, which is an even number and is not directly consistent of the proposed strong topological insulating phenomenon[@Hasan_Heusler; @Zhang_Heusler]. It should be noted that, in order to take into account the spin-orbit splitting, relativistic effects are applied to the calculation. Similar calculations reproduce clear topological insulating behavior in Bi$_2$Te$_3$ thin films[@Park]. The excellent agreement shown in Fig. 2 also implies the validity of such calculation in half-Heusler compounds. In fact traces for the inner two crossings is also found in the ARPES data, where they appear to be one single crossing, most likely due to finite momentum resolution \[Leftmost part in Fig. 2(c), see also Fig. 3(d)-(h)\]. In Fig. 3 we prove that the observed bands come from the sample surface. This is done by scanning the incident photon energy along both $\bar{\Gamma}$-$\bar{K}$ and $\bar{\Gamma}$-$\bar{M}$ high symmetry directions. Varying the photon energy in ARPES effectively changes the momentum offset along the direction perpendicular to the sample surface. In our case, this direction corresponds to $k_z$ or the (111) direction of the *fcc* Brillouin zone. Figs. 3(a)-(b) show that all observed bands form straight lines along the $k_z$ direction, a clear indication for the lack of $k_z$ dependence. In Fig. 3(c) we compare this to a calculated Fermi surface map for the *bulk* bands, along the same direction as in Fig. 3(a). The difference is clear: the bulk bands are dispersive along the $\Gamma$-$A$ direction; and most of the experimentally observed bands are not present in the calculation. In Figs. 3(d)-(h) we pay special attention to the bands crossing $\mu$ near $\bar{\Gamma}$ by showing the band structure for four different photon energies. 
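Whether the number of chemical-potential crossings between two Kramers' points is even or odd, the bookkeeping on which the topological classification discussed above rests, can be tallied mechanically once band energies along $\bar{\Gamma}$-$\bar{M}$ are available. The sketch below uses synthetic bands purely for illustration and is not derived from the actual FPLAPW output.

```python
import numpy as np

def count_fermi_crossings(bands, mu=0.0):
    """Count chemical-potential crossings along a k path.

    bands: array of shape (n_bands, n_k) with band energies on a dense
    k grid running between two Kramers points (here Gamma-bar to M-bar).
    A crossing is registered whenever E(k) - mu changes sign along the path.
    """
    signs = np.where(bands - mu >= 0.0, 1, -1)
    return int(np.sum(signs[:, 1:] != signs[:, :-1]))

# Synthetic example: three Kramers pairs (six bands) that each cross mu once
k = np.linspace(0.0, 1.0, 201)
bands = np.array([0.5 * k - off for off in (0.05, 0.07, 0.20, 0.22, 0.40, 0.42)])
n_cross = count_fermi_crossings(bands)
print(n_cross, "crossings ->", "even (trivial)" if n_cross % 2 == 0 else "odd (non-trivial)")
```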
In total there are at least three Fermi contours surrounding $\bar{\Gamma}$, the outer two being considerably brighter than the inner one (or two, see the discussion of Fig. 2). As shown in Fig. 3(h), these three (or four) bands cross $\mu$ at exactly the same $k$ positions for all photon energies. Therefore all of them are surface bands. The data in Fig. 3 thus show, unambiguously, that a metallic surface electronic state exists in the half-Heusler compounds. The exact number of Fermi crossings along the $\bar{\Gamma}$-$\bar{M}$ line segment is also examined in Fig. 4. The main conclusion of Fig. 4 is that there are also three (or four) visible Fermi crossings in the vicinity of $\bar{\Gamma}$ between these two Kramers’ points. We show these bands for the LuPtBi and GdPtBi samples. In both the band dispersion maps \[Figs. 4(a)-(b)\] and the momentum distribution curves \[MDCs, Figs. 4(c)-(d)\] we see that there are two bright hole-like bands almost parallel to each other, and a much weaker inner band with lower Fermi velocity. This inner band is not easy to see in the band maps (nonetheless indicated by green arrows), but is clearly visible in the MDCs as small intensity peaks tracing down from the one marked by a green bar \[also marked in green in Figs. 4(e)-(f)\]. The same band also exists in the $\bar{\Gamma}$-$\bar{K}$ direction \[Figs. 3(d)-(h)\]. As in the discussion of Figs. 2 and 3, this inner crossing is reproduced in the band calculation by two closely spaced spin-orbit-split bands that form a Kramers’ pair. The brighter parallel bands form a second Kramers’ pair of opposite spins. In Figs. 4(e)-(f) we show the linear extrapolation of the two brighter bands. In GdPtBi they are likely to reduce to a Dirac point at about 0.4 eV above $\mu$. If the total number of crossings is four, such a configuration will give zero contribution to the total Berry phase. In Fig. 5 we examine the bands near the $\bar{M}$ point. The $k$-space location of the ARPES maps \[Figs. 5(a)-(d)\] is shown in Fig. 5(e). Panels 5(g)-(h) present the band dispersion maps for two cuts crossing $\bar{M}$, whose positions are marked in Panel 5(f) with the band calculation result. Figs. 5(a)-(d) show that the $\bar{M}$ bands form a very distinctive shape. At high binding energies \[$E\sim-0.1$ eV, Fig. 5(d)\], two U-shaped bands are well separated. As the binding energy decreases these two bands merge into each other and hybridize to form a central elliptical contour and two curly-bracket-like segments. The segments near each $\bar{M}$ point link together, forming another large Fermi contour enclosing the zone center $\bar{\Gamma}$. It is clear from Figs. 5(g)-(h) that there are two Fermi crossings in both the $\bar{\Gamma}$-$\bar{K}$ and $\bar{\Gamma}$-$\bar{M}$ directions. The distinctive shape of the Fermi surface is formed by two bands that are likely to be members of another Kramers’ pair. The Kramers’ degeneracy of spin occurs at $\sim30$ meV below $\mu$. All these features are reproduced by our calculation for the surface states \[Figs. 2(b) and 2(e)\]. These two bands also give zero contribution to the total Berry phase. In summary, we performed an ARPES survey of the electronic structure of three half-Heusler compounds *R*PtBi (*R* = Lu, Dy, Gd) which have been proposed to be topological insulators. Our results show unambiguously that these materials have a metallic surface state markedly different from the calculated bulk electronic structure.
This surface state is reproduced with high accuracy in our band calculations. Both experiment and theory reveal several bands that cross the Fermi level. Knowledge of the exact number of these bands is possibly limited by experimental momentum resolution. No direct consistency with the proposed strong topological insulating behavior is found in the ARPES results. For final determination of their topological classes, both an APRES measurement of ultrahigh $k$-resolution and a direct calculation of the first Chern number as a topological invariant [@Qi] are in need. Acknowledgement =============== We thank S.-C. Zhang and J. Schmalian for instructive discussions as well as Sung-Kwan Mo for grateful instrumental support at the ALS. Ames Laboratory was supported by the Department of Energy - Basic Energy Sciences under Contract No. DE-AC02-07CH11358. ALS is operated by the US DOE under Contract No. DE-AC03-76SF00098. [99]{} M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. [**82**]{}, 3045 (2010). J. E. Moore, Nature (London) [**464**]{}, 194 (2010). B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Science [**314**]{}, 1757 (2006). M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Science [**318**]{}, 766 (2007). A. Roth, C. Brüne, H. Buhmann, L. W. Molenkamp, J. Maciejko, X.-L. Qi, and S.-C. Zhang, Science [**325**]{}, 294 (2009). D. Hsieh, D. Qian, L. Wray, Y. Xia, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nature (London) [**452**]{}, 970 (2008). D. Hsieh, Y. Xia, L. Wray, D. Qian, A. Pal, J. H. Dil, J. Osterwalder, F. Meier, G. Bihlmayer, C. L. Kane, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Science [**323**]{}, 919 (2009). P. Roushan, J. Seo, C. V. Parker, Y. S. Hor, D. Hsieh, D. Qian, A. Richardella, M. Z. Hasan, R. J. Cava, and A. Yazdani, Nature (London) [**460**]{}, 1106 (2009). H. Zhang, C.-X. Liu, X.-L. Qi, X. Dai, Z. Fang, S.-C. Zhang, Nature Physics [**5**]{}, 438 (2009). Y. L. Chen, J. G. Analytis, J.-H. Chu, Z. K. Liu, S.-K. Mo, X. L. Qi, H. J. Zhang, D. H. Lu, X. Dai, Z. Fang, S. C. Zhang, I. R. Fisher, Z. Hussain, and Z.-X. Shen, Science [**325**]{}, 178 (2009). S. Chadov, X.-L. Qi, J. Kübler, G. H. Fecher, C. Felser, and S.-C. Zhang, Nature Materials [**9**]{}, 541 (2010). H. Lin, L. A. Wray, Y. Xia, S. Xu, S. Jia, R. J. Cava, A. Bansil, and M. Z. Hasan, Nature Materials [**9**]{}, 546 (2010). P. C. Canfield, J. D. Thompson, W. P. Beyermann, A. Lacerda, M. F. Hundley, E. Peterson, Z. Fisk, and H. R. Ott, J. Appl. Phys. [**70**]{}, 5800 (1991). G. Goll, M. Marz, A. Hamann, T. Tomanic, K. Grube, T. Yoshino, and T. Takabatake Physica B [**403**]{}, 1065 (2008). Z. Fisk, P. C. Canfield, W. P. Beyermann, J. D. Thompson, M. F. Hundley, H. R. Ott, E. Felder, M. B. Maple, M. A. Lopez de la Torre, P. Visani, and C. L. Seaman, Phys. Rev. Lett. [**67**]{}, 3310 (1991). D. Hsieh, Y. Xia, D. Qian, L. Wray, J. H. Dil, F. Meier, J. Osterwalder, L. Patthey, J. G. Checkelsky, N. P. Ong, A. V. Fedorov, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava and M. Z. Hasan, Nature (London) [**460**]{}, 1101 (2009). C. L. Kane and E. J. Mele, Phys. Rev. Lett. [**95**]{}, 146802 (2005). P. C. Canfield and Z. Fisk, Philos. Mag. B [**65**]{}, 1117 (1992). V. N. Antonov, P. M. Oppeneer, A. N. Yaresko, A. Y. Perlov, and T. Kraft, Phys. Rev. B [**56**]{}, 13012 (1997). L. Fu, Phys. Rev. Lett. [**103**]{}, 266801 (2009). P. Blaha, K. Schwarz, G. K. H. Madsen, D. Kvasnick and J. 
Luitz, WIEN2k, An augmented plane wave plus local orbitals program for calculation crystal properties (K. Schwarz, TU wien, Austria, 2001) ISBN 3-9501031-1-2. J. P. Perdew and Y. Wang, Phys. Rev. B [**45**]{}, 13244 (1992). M. G. Haase, T. Schmit, C. G. Richter, H. Block, and W. Jeitschko, J. Solid State Chem. [**168**]{}, 18 (2002). K. Park, J. J. Heremans, V. W. Scarola, and D. Minic, arXiv:1005.3476 (unpublished) (2010). X.-L. Qi, T. L. Hughes, and S.-C. Zhang, Phys. Rev. B [**78**]{}, 195424 (2008) and references therein.
--- author: - 'R. Schulz' - 'R. Morganti' - 'K. Nyland' - 'Z. Paragi' - 'E. K. Mahony' - 'T. Oosterloo' bibliography: - 'References.bib' date: '–; –' title: Mapping the neutral atomic hydrogen gas outflow in the restarted radio galaxy 3C236 --- Introduction {#sec:Intro} ============ ![Top panel: 1.4GHz VLA NVSS image of the large-scale radio emission of 3C236 [@Condon1998]. The blue square and cross highlight the area covered by our VLA observation and the pointing of the VLBI observation, respectively. Middle panel: the gray-colored background shows a zoom-in into an archival *Hubble Space Telescope* V-band image (ACS/HRC/F555W, @ODea2001). The blue contour lines trace the VLBI radio continuum emission starting (for visibility) at $5\times\sigma_\mathrm{noise,VLBI}$. The orange cross marks the position of the VLBI core from [@Taylor2001] to which our VLBI image was aligned in this montage. The dashed lines mark the plot range of the VLBI image shown in the bottom panel. Bottom panel: VLBI image of 3C236 obtained by our observation. The dashed and solid black contour lines trace negative and positive brightness starting from $3\times\sigma_\mathrm{noise,VLBI}$ and increasing logarithmically by a factor of 2.[]{data-label="fig:Collage"}](3C236_collages_paper.png){width="0.97\linewidth"} The evolution of galaxies is considered to be strongly linked to that of their central supermassive black holes (SMBH). The required feedback is commonly explained by a phase of enhanced activity related to the SMBH (e.g., @Heckman2014 [@Kormendy2013]). An active galactic nucleus (AGN) can affect the interstellar medium (ISM) by heating up and expelling gas, which hinders star formation and the accretion of matter onto the SMBH (e.g., @Silk1998 [@DiMatteo2005; @Croton2006; @McNamara2007]). Prominent observational signatures include outflows of ionized, molecular and atomic gas that have been associated with a number of AGN at a range of redshifts. The highest outflow rates have been determined for the cold ISM gas (molecular and atomic). Among the different possible drivers of these outflows are the radio jets launched in some AGN. The complex interplay between the AGN and the ISM requires detailed observational and theoretical studies of each phase of the outflowing gas (see reviews by @Veilleux2005 [@Fabian2012; @Alexander2012; @Wagner2016; @Tadhunter2016; @Harrison2017; @Morganti2017b] and references therein). Here, we focus on the outflows of neutral atomic hydrogen (H I) gas, which have been observed in absorption in a number of radio sources with different radio power (e.g., @Morganti1998 [@Morganti2005; @Morganti2013; @Morganti2016; @Oosterloo2000; @Mahony2013; @Gereb2015; @Allison2015]). Some of these objects host young or re-started AGN where the central radio source shows characteristics of a compact steep spectrum (CSS) object. This provides valuable insight into the evolution of radio galaxies, because CSS sources are considered to be the younger counterparts of the much larger Fanaroff-Riley type radio galaxies (e.g., @ODea1998 [@Kunert-Bajraszewska2010; @Orienti2016]). The radio continuum is commonly a few kpc or less in size, which limits the spatial scales on which the outflow can be observed. In the two radio galaxies 3C305 and 3C293, the outflows were found on kpc scales [@Morganti2005b; @Mahony2013]. However, in most cases sub-arcsecond angular resolution is needed in order to locate the outflow and trace its structure.
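To put such angular scales into physical units at the redshift of 3C236 ($z \approx 0.1$, introduced below), a short sketch using astropy with the flat $\Lambda$CDM parameters adopted later in this paper is given here; the 20 mas beam in the example is only illustrative at this point.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted in this paper: H0 = 70 km/s/Mpc, Omega_m = 0.3 (flat)
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
z = 0.1005                                   # redshift of 3C236

scale = cosmo.kpc_proper_per_arcmin(z).to(u.pc / u.mas)
print(scale)                                 # about 1.8 pc per milliarcsecond
beam = 20 * u.mas                            # an illustrative VLBI restoring beam
print((beam * scale).to(u.pc))               # roughly 36-37 pc projected linear size
```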
The required angular resolution can be achieved with very long baseline interferometry (VLBI), which has been used to study the associated H I gas in absorption in various radio sources (e.g., @Carilli1998 [@Peck1999; @Peck2001; @Vermeulen2003; @Vermeulen2006; @Struve2010; @Struve2012; @Araya2010]). The first detection of a broad H I outflow with VLBI was reported by [@Oosterloo2000] in the Seyfert 2 galaxy IC 5063. Of particular relevance for our study was the successful imaging and mapping of the H I outflow in the young restarted radio galaxy 4C12.50 by [@Morganti2013]. The broad bandwidth and high sensitivity of the VLBI observation revealed the outflow in this source as an extended cloud co-spatial with the southern extent of the radio continuum emission. This is offset from the gas at the systemic velocity, which is located north of the nucleus. The study determined a mass of the cloud of up to 10$^5$ solar masses (M$_\sun$) and a mass outflow rate of $16\mathrm{-}29\mathrm{\,M_\sun\,yr^{-1}}$. A comparison with an unresolved absorption spectrum obtained with the Westerbork Synthesis Radio Telescope (WSRT) showed that all of the absorption was recovered by VLBI. The results provide the strongest evidence so far, for this type of radio AGN, that the outflow is driven by the jet. Based on these results, we performed VLBI observations of H I in absorption for a small sample of sources, including 3C236. This initial work will pave the way for future VLBI observations of H I gas in a larger sample selected from the WSRT absorption survey [@Gereb2015; @Maccagni2017]. In this paper, we focus on 3C236, at a redshift of $z=0.1005$ [@Hill1996], which is one of the largest known radio galaxies, extending about 4.5Mpc [@Willis1974; @Barthel1985; @Schilizzi2001]. This source represents a re-started AGN, i.e., it exhibits signs of different stages of AGN activity. The large-scale morphology (top panel in Fig. \[fig:Collage\]) stems from a previous cycle of activity, whereas the CSS-type radio source in its inner 2kpc region is the result of the most recent cycle. The inner radio emission has a dynamical time scale consistent with the age of the young star formation region [@ODea2001; @Schilizzi2001; @Tremblay2010]. The host galaxy of 3C236 features a large outer and a smaller inner dust lane, which are slightly offset in position angle (PA) with respect to each other [@ODea2001; @Schilizzi2001; @Labiano2013]. The inner dust lane has a PA of $\sim30\degr$, which is almost perpendicular to the sub-kpc scale radio jet. VLBI observations by [@Schilizzi2001] showed that the jet is oriented in the north-west direction, extending from the brightest feature, which is synchrotron self-absorbed and thus likely to be the core region. The south-east lobe produced by the counter jet is positionally coincident with parts of the inner dust lane and its morphology is considered to be partially a result of jet-ISM interaction. The background image in the middle panel of Fig. \[fig:Collage\] shows a zoom-in of the inner dust lane overlaid with the brightness distribution of the radio source as obtained in this paper (bottom panel of Fig. \[fig:Collage\], see Sect. \[sec:Results\]). Low-resolution H I absorption spectra reveal a deep, narrow absorption feature near the systemic velocity [@vanGorkom1989] and a broad (up to 1000kms$^{-1}$), shallow blue wing corresponding to a mass outflow rate of $\sim 47\mathrm{M_\sun\,yr^{-1}}$ [@Morganti2005].
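A profile of this kind, a deep narrow feature near the systemic velocity plus a broad shallow blue wing, is commonly separated by fitting two Gaussian components to the spectrum. The sketch below illustrates such a decomposition on synthetic data; the amplitudes, centres and widths are invented for illustration and are not fits to the 3C236 spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a1, v1, s1, a2, v2, s2):
    """Sum of a deep narrow and a broad shallow (negative) Gaussian in velocity space."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

# Synthetic absorption spectrum (mJy) on a velocity grid (km/s); values are illustrative
v = np.arange(28500.0, 30900.0, 20.0)
rng = np.random.default_rng(1)
truth = two_gaussians(v, -40.0, 29800.0, 40.0, -4.0, 29450.0, 450.0)
spec = truth + rng.normal(0.0, 1.0, v.size)

p0 = [-30.0, 29800.0, 50.0, -2.0, 29500.0, 400.0]     # initial guesses
popt, _ = curve_fit(two_gaussians, v, spec, p0=p0)
narrow, broad = popt[:3], popt[3:]
print("narrow component (amp, centre, sigma):", np.round(narrow, 1))
print("broad  component (amp, centre, sigma):", np.round(broad, 1))
```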
Based on VLBI observations by [@Struve2012], the narrow component has been interpreted as H I gas located in a regular rotating disk which is co-spatial with the south-east lobe, about 250mas from the nucleus. Because of bandwidth limitations, the VLBI data were not able to cover the velocity range of the outflow. The optical AGN has been classified as a low-excitation radio galaxy (LERG, @Buttiglione2010 [@Best2012]), which makes it less likely that strong quasar- or starburst-driven winds are the origin of the outflow and instead points to the jets. There are also signs of an outflow of ionized gas [@Labiano2013]. The cold (CO) and warm (H$_2$) molecular gas have only been detected in a disk-like geometry aligned with the position angle of the inner dust lane, but the latter has a significant turbulent component [@Nesvadba2011; @Labiano2013]. This paper presents new VLBI observations with a larger bandwidth than previous data in order to localise the outflow with respect to the radio jet and constrain its properties, in combination with new lower-resolution Very Large Array (VLA) data. It is structured as follows: in Sect. \[sec:Obs\] we present the data and the subsequent calibration procedure. This is followed by a presentation of our results in Sect. \[sec:Results\] and a discussion in Sect. \[sec:Discussion\]. We end the paper with our conclusions and a summary in Sect. \[sec:Summary\]. Throughout this paper, we adopt a standard $\Lambda$CDM cosmology ($H_0 = 70 \mathrm{\,km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m=0.3$, $\Omega_\lambda=0.7$), based on which 1.0mas corresponds to about 1.8pc for 3C236. It is important to point out that a range of values is available for the systemic velocity $v_\mathrm{sys}$ (see also [@Struve2012]). [@Labiano2013] determined $v_\mathrm{sys}^\mathrm{CO} \approx 29761\mathrm{\,km\,s^{-1}}$ based on the CO spectrum, though the spectral setup prohibited sampling of the continuum emission at low velocities, limiting the Gaussian fit to the spectrum. Nevertheless, this value is close to the SDSS value of $v_\mathrm{sys}^\mathrm{SDSS} \approx 29740\mathrm{\,km\,s^{-1}}$. [@Struve2012] reported a value of $v_\mathrm{sys}^\mathrm{\ion{H}{I}} \approx 29820\mathrm{\,km\,s^{-1}}$, assuming that part of the observed H I is contained within a disk. However, the H I was detected in absorption, which makes it difficult to measure the full extent of the disk (see also Sect. \[sec:Discussion:disk\]). Therefore, we use $v_\mathrm{sys}^\mathrm{CO}$ throughout this paper as a reference value.

| Array | Code | Date | $\nu_\mathrm{obs}$ \[GHz\] | $T_\mathrm{obs}$ \[min\] | Pol. | Correlator pass | IFs | BW \[MHz\] | $N_\mathrm{ch}$ | $\Delta\nu$ \[kHz\] |
|-------|------|------|------|------|------|------|-----|------|------|------|
| VLBI | GN002B | 2015-06-13 | – | $\sim$900 | – | continuum | 4 | 16 | 32 | 500 |
| | | | | | | spectral-line | 1 | 16 | 512 | 31.25 |
| VLA | 11A-166 | 2011-08-10 | 1.283 | 40 | Dual | spectral-line | 1 | 16 | 256 | 62.5 |

: Overview of the observations.[]{data-label="tab:Data:Observation"}

| Data | $\sigma_\mathrm{noise}$ \[mJy beam$^{-1}$ ch$^{-1}$\] | Beam | $\Delta v$ \[km s$^{-1}$\] | $N_\mathrm{ch}$ | $S_\mathrm{peak}$ \[Jy beam$^{-1}$\] | $S_\mathrm{tot}$ \[Jy\] |
|------|------|------|------|------|------|------|
| VLBI: | | | | | | |
| Continuum | 0.23 | $20\,\mathrm{mas}\times 20\,\mathrm{mas}$ | – | 1 | $0.145\pm 0.020$ | $1.35\pm 0.20$ |
| Cube | 0.37 | $20\,\mathrm{mas}\times 20\,\mathrm{mas}$ | 21.7 (43.4) | 143 | – | – |
| VLA: | | | | | | |
| Continuum | 2.1 | $1\farcs7\times 1\farcs2$, $-67\degr$ | – | 1 | $2.883\pm 0.086$ | $3.37\pm 0.10$ |
| Cube (VLBI) | 1.1 | $2\farcs0\times 1\farcs5$, $-81\degr$ | 21.7 (43.4) | 157 | – | – |
| Cube (WSRT) | 1.1 | $2\farcs0\times 1\farcs5$, $-81\degr$ | 20.0 (40.0) | 186 | – | – |

: Parameters of the continuum images and spectral cubes.[]{data-label="tab:Data:Image"}

Observation & Data reduction {#sec:Obs} ============================ VLBI Observation {#sec:Obs:VLBI} ---------------- 3C236 was observed with a global VLBI array of 14 telescopes on 2015 Jun 13 (project code: GN002B). The observational setup is summarized in Table \[tab:Data:Observation\]. The array included the full VLBA and stations from the European VLBI Network (EVN). In addition, Arecibo (Puerto Rico) participated for two hours. The observation lasted a total of 15 hours, with the EVN and VLBA observing for $\sim 11$ hours and $\sim 12$ hours, respectively, and including an overlap between both arrays of $\sim 8$ hours. The stations Onsala and Kitt Peak (VLBA) were flagged during the calibration due to unusually high system temperatures and bandpass problems, respectively. The data were correlated at the Joint Institute for VLBI ERIC (JIVE), providing two data sets, i.e., one with 4 IFs each with 32 channels (‘continuum pass’) and one with a single IF with 512 channels (‘spectral-line pass’). A phase-reference calibrator was observed, while two further sources served as bandpass calibrators. The data were calibrated in two steps using standard procedures in the Astronomical Image Processing Software (<span style="font-variant:small-caps;">AIPS</span>, version 31DEC15) package [@AIPS1999], beginning with the data from the continuum pass. The amplitude calibration and initial flagging were provided by the EVN pipeline. As a next step, manual phase calibration on a single scan of the calibrator was performed to remove the instrumental delay. This was followed by a global fringe fit of the calibrators to correct for the phase, delay and rate, with the solutions applied to the target source. Finally, the bandpass was corrected using the bandpass calibrators and the solutions were applied to the phase reference and target source. For the spectral-line pass, the amplitude calibration and initial flagging were also taken from the EVN pipeline. The phase calibration was performed using the solutions from the manual phase calibration and global fringe fit of the continuum pass, which was followed by the bandpass calibration. Afterwards, the data were separated into a data cube with full spectral resolution and a continuum data set with all channels averaged together. The continuum data were further processed in <span style="font-variant:small-caps;">Difmap</span> [@Shepherd1994; @Difmap2011].
This entailed imaging of the brightness distribution of the source using the <span style="font-variant:small-caps;">clean</span>-algorithm [@Hoegbom1974] in combination with phase self-calibration and flagging of corrupted visibilities. Once a sufficiently good model was found, a time-independent gain correction factor was determined for each telescope through amplitude self-calibration. The iterative process of imaging and phase self-calibration with subsequent time-dependent amplitude self-calibration was repeated several times with decreasing solution interval for amplitude self-calibration. The resulting continuum image was used to perform a single phase self-calibration of the data cube with full spectral resolution in <span style="font-variant:small-caps;">AIPS</span>. After carefully inspecting the channels and flagging of corrupted visibilities, the continuum was subtracted in the visibility domain using a linear fit to the first and last 100 channels (<span style="font-variant:small-caps;">AIPS</span> task <span style="font-variant:small-caps;">UVLIN</span>). Since we focus on the faint and broad component of the absorption, we averaged over three consecutive channels to improve the sensitivity. The data were corrected for the Doppler shift in frequency caused by the rotation and movement of the Earth. Finally, a redshift correction was applied to the channel width in observed frequency to convert into rest-frame velocity following [@Meyer2017]. Each channel of the spectral-line cube was imaged individually with robust weighting set to $1$ and a $(u,v)$-taper of 10M$\lambda$ to further improve the sensitivity. We found that this tapering of the visibility data provides the best combination of resolution and sensitivity. It is similar to [@Struve2012]. The channels were only imaged if significant negative flux density was found in the area covered by continuum emission. The resulting cube covers a velocity range of $28117$–$31209\mathrm{\,km\,s^{-1}}$ at channel resolution of $21.7\mathrm{\,km\,s^{-1}}$ which is effectively doubled to $43.4\mathrm{\,km\,s^{-1}}$ due to Hanning-smoothing. In order to compare the image cube and the continuum image, both were restored with the same circular restoring beam of 20mas. This is close to the synthesized beam due to the $(u,v)$-tapering and similar to [@Struve2012]. The noise levels were determined by fitting a Gaussian distribution to the pixels which are not contaminated by emission from the target following the procedure outlined in [@Boeck2012]. For the subsequent analysis the average noise level of the cube is used as a reference value. An overview of the image parameters is given in Table \[tab:Data:Image\]. The overall amplitude calibration uncertainty of the VLBI data was estimated to be around 15% based on multiple iterations of imaging and self-calibration of the continuum data. The total flux density was measured in the image plane using the CASA Viewer [@CASA2011]. We estimated the uncertainty of the peak and total flux density measurement as $\sqrt{(N_\mathrm{beam}\times \sigma_\mathrm{noise})^2 + (0.15 \times S_\mathrm{tot})^2}$ following [@Nyland2016] where $N_\mathrm{beam}$ corresponds to the number of beams covered by the source. However, we noticed that in our case the first term has only a marginally impact on the uncertainty. VLA Observation --------------- The VLA observed for 40min in A-array configuration (project code: 11A-166) on 2011 Aug 10. The setup is summarized in Table \[tab:Data:Observation\]. 
() was used as the flux density calibrator and () as the complex gain calibrator. The data were processed with <span style="font-variant:small-caps;">Miriad</span> [@Sault1995; @Miriad2011]. Due to technical difficulties one polarisation was lost and had to be flagged completely. The data were calibrated using standard procedures for the VLA. A continuum image was produced using <span style="font-variant:small-caps;">CLEAN</span> and self-calibration. The spectral-line data were continuum subtracted using only channels devoid of the broad absorption. Further processing was performed in <span style="font-variant:small-caps;">AIPS</span>. The VLA cube was averaged to match the velocity resolution of the VLBI data, cleaned in the region of the continuum emission and afterwards Hanning smoothed. Consistent with the VLBI spectral-line data, the channel width was redshift-corrected to the rest-frame velocity width. The resulting image cube covers a velocity range from $(28316$–$31706)\mathrm{\,km\,s^{-1}}$. For a comparison with the WSRT spectrum presented in [@Morganti2005], we also created a second cube which matches the velocity resolution of the WSRT data. The parameters of the continuum image and the cube are given in Table \[tab:Data:Image\]. We estimate the uncertainty of the absolute flux density scale at this frequency and given the calibrators to be $\sim 3\%$ based on [@Perley2013a]. Results {#sec:Results} ======= VLA and VLBI continuum {#sec:Results:Cont} ---------------------- The VLA continuum image covers the central $6\arcmin\times6\arcmin$ ($650\mathrm{\,kpc}\times650\mathrm{\,kpc}$) of 3C236 (see Fig. \[fig:Collage\]). The recovered radio emission is unresolved and yields a flux density of about $3.37\pm0.10\mathrm{\,Jy}$. No further extended emission is detected. The flux density is consistent with the value of $3.324\pm0.097\mathrm{\,Jy}$ obtained from the central region of the radio emission by the lower-resolution NVSS survey at 1.4GHz [@Condon1998]. The full VLBI continuum emission of from our observation is shown in the bottom panel of Fig. \[fig:Collage\]. The total flux density is about $1.35\pm\,0.20\mathrm{\,Jy}$ which is consistent with the value obtained by [@Struve2012] (1.36Jy). It corresponds to about 40% of the flux density measured by the VLA observation. The source is significantly extended covering approximately 1($\sim 1.8\mathrm{\,kpc}$), but most of the radio emission is localized within 400mas ($\sim 720\mathrm{\,pc}$). The difference in flux density between VLBI and VLA of about 2Jy can have different reasons. Firstly, the shortest baseline of the VLBI array limits the largest angular scale on which emission can be recovered to about 600mas. Any extended emission on larger scales is resolved out by the inteferometer. However, there could still be a significant amount of emission within the largest angular scale limit. Assuming the emission is uniformly distributed over the entire area, then an integrated flux density of more than 600mJy is necessary to achieve a brightness at the $3\sigma_\mathrm{VLBI,cont}$ sensitivity limit of the VLBI image. Secondly, the consistency between our VLA and the NVSS flux density shows that all of the undetected emission must be within the area covered by the synthesized beam of the VLA observation, i.e. $1\farcs7\times 1\farcs2$ ($3.1\mathrm{\,kpc}\times2.2\mathrm{\,kpc}$). Assuming again a uniform distribution of the undetected emission over this region leads to a brightness of about $0.6\mathrm{\,mJy\,beam^{-1}}$. 
This is just below the $3\sigma_\mathrm{VLBI,cont}$-limit. Thus, a global VLBI experiment including the VLA and eMERLIN would provide the sensitivity and short spacing to recover all of the emission. The morphology is overall consistent with previous VLBI observations by [@Schilizzi2001] and [@Struve2012]. The sensitivity of our VLBI image is about a factor of three better than the image from [@Struve2012], but we do not detect significantly more extended emission or any movement of features in the jet at the given resolution. Following [@Schilizzi2001], we consider the location of the VLBI core region to coincide with the brightest feature at the phase center of the image and the jet to extend to the north-west. The emission in the south-east direction from the core region corresponds to the radio lobe created by the counter jet. Due to the chosen restoring beam, the brightest feature is actually a blend of the emission from the VLBI core and part of the innermost jet. We refer to this as the nuclear region from here on. In the following, we continue to focus on the inner most 400mas of the source where the bulk of the radio emission is located. We determine the position angle of the jet as the angle along which the brightest features of the south-east jet are best aligned on to be about 116which is consistent with the measurement of [@Struve2012] of 117. ![ absorption spectra of . The dashed, vertical line marks the systemic velocity from [@Labiano2013]. Top panel: WSRT (black) spectrum from [@Morganti2005] and VLA (red) spectrum between $28500$–$30900\mathrm{\,km\,s^{-1}}$. Here, the velocity resolution of the VLA spectrum was matched to the WSRT data. Middle panel: The spatially integrated VLBI with (darkblue, dashed) and without (blue, solid) clipping of the cube pixels at the $3\sigma_\mathrm{VLBI,Cube}$ between $29000$–$30400\mathrm{\,km\,s^{-1}}$. For the VLA spectrum (red), the spectral resolution was matched to the VLBI spectrum. Bottom panel: Same as the middle panel, but zoomed-in in flux density.[]{data-label="fig:Spec2"}](Spectrum_WSRT_VLA_paper.pdf "fig:"){width="0.97\linewidth"} ![ absorption spectra of . The dashed, vertical line marks the systemic velocity from [@Labiano2013]. Top panel: WSRT (black) spectrum from [@Morganti2005] and VLA (red) spectrum between $28500$–$30900\mathrm{\,km\,s^{-1}}$. Here, the velocity resolution of the VLA spectrum was matched to the WSRT data. Middle panel: The spatially integrated VLBI with (darkblue, dashed) and without (blue, solid) clipping of the cube pixels at the $3\sigma_\mathrm{VLBI,Cube}$ between $29000$–$30400\mathrm{\,km\,s^{-1}}$. For the VLA spectrum (red), the spectral resolution was matched to the VLBI spectrum. Bottom panel: Same as the middle panel, but zoomed-in in flux density.[]{data-label="fig:Spec2"}](3C236_Spectrum_VLBI_VLA_restframe_paper.pdf "fig:"){width="0.97\linewidth"} ![ absorption spectra of . The dashed, vertical line marks the systemic velocity from [@Labiano2013]. Top panel: WSRT (black) spectrum from [@Morganti2005] and VLA (red) spectrum between $28500$–$30900\mathrm{\,km\,s^{-1}}$. Here, the velocity resolution of the VLA spectrum was matched to the WSRT data. Middle panel: The spatially integrated VLBI with (darkblue, dashed) and without (blue, solid) clipping of the cube pixels at the $3\sigma_\mathrm{VLBI,Cube}$ between $29000$–$30400\mathrm{\,km\,s^{-1}}$. For the VLA spectrum (red), the spectral resolution was matched to the VLBI spectrum. 
Bottom panel: Same as the middle panel, but zoomed-in in flux density.[]{data-label="fig:Spec2"}](3C236_Spectrum_VLBI_VLA_restframe_paper_Zoom.pdf "fig:"){width="0.97\linewidth"} H I absorption spectrum {#sec:Results:Spectrum} -------------------- Figure \[fig:Spec2\] (top panel) shows the unresolved VLA spectrum between $28500\mathrm{\,km\,s^{-1}}$ and $30900\mathrm{\,km\,s^{-1}}$ in combination with the WSRT spectrum from [@Morganti2005]. The spectra taken with both instruments are consistent and show the same features, i.e., a deep and narrow absorption that smoothly falls off towards higher velocities, but has a complex, broad wing towards lower velocities. The consistency between both spectra implies that all of the absorption stems from scales smaller than the beam size of the VLA. Two spatially integrated VLBI spectra are shown in the bottom panel of Fig. \[fig:Spec2\] between $29000\mathrm{\,km\,s^{-1}}$ and $30400\mathrm{\,km\,s^{-1}}$. Both were compiled by considering those pixels in the image cube that are located within the region marked by the $3\sigma_\mathrm{VLBI,cont}$ contour line of the continuum image. They differ in terms of the selection of pixels in the cube. For the dashed, blue line labelled ‘VLBI (clipped)’ a very conservative limit of $|S_\mathrm{pixel,cube}|\geq 3\times\sigma_\mathrm{VLBI,cube}$ was used, where $|S_\mathrm{pixel,cube}|$ is the absolute value of the pixel brightness. No such limit was applied for the compilation of the spectrum marked by the solid, blue line (labelled ‘VLBI’) or for the VLA spectrum (solid, red line), which is spatially unresolved in contrast to the VLBI spectrum. The VLBI spectrum without clipping shows a deep and narrow absorption feature consistent with the previous measurement by [@Struve2012], but its depth and width do not match the VLA reference spectrum. Because the deep absorption is likely due to gas associated with the extended dust lane (see Sect. \[sec:Results:GasDistribution\]), the undetected absorption flux density is likely related to structure resolved out, as seen from the missing continuum flux density. However, some unsettled gas appears in our VLBI observations (see Sect. \[sec:Results:GasDistribution\]). More interestingly, the observations reveal some of the outflowing gas (bottom panel of Fig. \[fig:Spec2\]). However, the VLBI observations recover only a small fraction of the blue-shifted wing of the profiles. We will discuss the possible implications of this for the distribution of the outflowing gas in Sect. \[sec:Discussion:outflow\]. ![image](3C236_collage_tau){width=".95\linewidth"} H I gas distribution {#sec:Results:GasDistribution} ----------------- The spatial distribution of the H I gas is shown in the central panel of Fig. \[fig:tau\]. It shows the optical depth $\tau$ integrated over the same velocity range as in the bottom panel of Fig. \[fig:Spec2\], i.e., between $29000\mathrm{\,km\,s^{-1}}$ and $30400\mathrm{\,km\,s^{-1}}$. The optical depth is defined as $\tau=-\ln\left(1+\Delta S_\mathrm{abs}/(c_f S_\mathrm{cont})\right)$, where $\Delta S_\mathrm{abs}$ (negative in absorption) and $S_\mathrm{cont}$ correspond to the absorbed and the continuum flux density, respectively, and the covering factor $c_f$ is assumed to be unity. In order to avoid integrating over noise and to get a reliable albeit conservative distribution of the gas, we take into account only channels with flux densities $\leq-3\sigma_\mathrm{VLBI,cube}$.
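A minimal sketch of how such an integrated optical-depth map can be assembled from a continuum-subtracted cube and the continuum image, following the definition and clipping just described, is given below; the array shapes, noise level and flux densities are placeholders rather than the actual data.

```python
import numpy as np

def integrated_tau(cube, cont, sigma_cube, dv, cf=1.0, clip=3.0):
    """Integrate optical depth over velocity for each pixel.

    cube : continuum-subtracted spectral cube (n_chan, ny, nx); absorption is negative
    cont : continuum image (ny, nx); assumes |dS| < cf * S_cont everywhere
    Only channels below -clip*sigma_cube enter the integral, as in the text.
    """
    tau = -np.log(1.0 + cube / (cf * cont))     # tau = -ln(1 + dS / (cf * S_cont))
    mask = cube <= -clip * sigma_cube           # conservative channel selection
    return np.sum(np.where(mask, tau, 0.0), axis=0) * dv

# Placeholder cube: one pixel with a shallow absorption feature (illustrative numbers)
n_chan, dv, sigma = 64, 21.7, 0.37e-3           # channels, km/s per channel, Jy/beam
cont = np.full((1, 1), 0.145)                   # 145 mJy/beam continuum pixel
cube = np.zeros((n_chan, 1, 1))
cube[30:34, 0, 0] = -2.0e-3                     # a -2 mJy/beam dip over four channels
print(integrated_tau(cube, cont, sigma, dv))    # integrated optical depth in km/s
```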
In addition to the integrated optial depth $\int\tau dv$, this figure shows single-pixel spectra of optical depth extracted at specific locations of the radio continuum, for which the detection limit of channels was not applied. The map of $\int\tau dv$ reveals a complex gas distribution across the south-east lobe and compact absorption towards the nucleus. In particular, the gas covering the south-east lobe exhibits significant changes in $\int\tau dv$. In the central and brightest part of the lobe $\int\tau dv$ reaches its lowest values, but there are several gaps in the gas distribution, in particular towards the nuclear region. The highest values of $\int\tau dv$ are measured towards the end of the south-east lobe. This is the region where the majority of the gas leading to the narrow, deep feature in the integrated spectrum, is situated. The spectra of $\tau$ in Fig. \[fig:tau\]a–h shows the range of optical depth probed by our observation. The lowest optical depth is reached towards the nuclear region as it has the brightest part of the radio source (Fig. \[fig:tau\]c). Here, we are sensitive down to $\tau\gtrsim 0.0077$ at the $3\sigma_\mathrm{VLBI,cube}$-noise level and find three distinct kinematic features which will be discussed in greater detail later in this section. Towards the south-east lobe, the optical depth sensitivity varies. The lowest optical depth is reached in the central region of the lobe (Fig. \[fig:tau\]a) with $\tau\gtrsim 0.013$ where we also find a more complex kinematic structure than in the other regions. In order to investigate the spatial and velocity distribution of these features in greater detail, we show position-velocity plots along different position angles in Fig. \[fig:PV\]a–e. Again, we focus on the velocity range of $29000\mathrm{\,km\,s^{-1}}$ and $30400\mathrm{\,km\,s^{-1}}$. For comparison, the central panel shows the same map of $\int\tau dv$ as in Fig. \[fig:tau\]. Figure \[fig:PV\]c and d show the velocity structure of the gas along the jet position angle and along the position angle of the inner dust lane, respectively. In particular, Fig. \[fig:PV\]d reveals a gradient in velocity similar to [@Struve2012] which has been interpreted as a signature of an disk aligned with the inner dust lane. Our new data shows that this gradient is even more prominent across the central part of the lobe (Fig. \[fig:PV\]e). At the $-2\sigma_\mathrm{VLBI,cube}$-level, the gradient would cover almost 300kms$^{-1}$. We label the disk-related feature in the following as S1. However, more interesting and relevant from these observations are structures that appear to have a disturbed kinematics (see Figure \[fig:PV\]c). S2b is located towards the central part of the lobe and is slightly blue-shifted with respect to the peak of the absorption by about $150\mathrm{\,km\,s^{-1}}$. At the $3\sigma_\mathrm{VLBI,cube}$-level it is not connected to S1 spatially or in velocity. The clear separation in velocity between S2b and S1 could indicate that S2b traces a different component of the gas than S1. The most important result from the new observations is the finding that the gas co-spatial to the nucleus is entirely blue-shifted with respect to the peak absorption by up to $\sim 640\mathrm{\,km\,s^{-1}}$ (Fig. \[fig:PV\]a,c). The projected size of this region is only about 36pc or even smaller. It comprises three distinct features labelled S2a, S3, and S4, which are separated in velocity by a few channels. 
While it is possible that this separation could be due to changes in sensitivity across the channels, we consider this to be the least likely explanation as the variation in sensitivity are not significant. This leaves three other possibilities (see also Sect. \[sec:Discussion:outflow\]). First, there is no further gas in this region or the optical depth of the gas is too low to be detectable. In this case the majority of the remaining ouflowing gas would have to be located elsewhere. Second, the cold gas clouds could be entrained by warmer gas which has a higher spin temperature and thus, a lower optical depth. Third, the gas within the outflow is highly clumpy which could imply differnces in the covering factor or spin temperature of the gas. We cannot exclude the possibility of gas co-spatial to the north-west part of the jet. Figure \[fig:PV\]b and the map of $\int\tau dv$ suggest that there could be gas in this region that is redshifted with respect to the deep absorption. However, these features are just at the $3\sigma_\mathrm{VLBI,cube}$-level, very narrow and generally located at the edge of the continuum emission. Therefore, we cannot consider them reliable detections. ![image](3C236_collage_vr){width=".84\linewidth"} Discussion {#sec:Discussion} ========== Our VLBI observation has successfully recovered part of the outflowing gas in 3C236 in the form of distinct, compact clouds (S2a, S3, S4). They are located primarily co-spatial, in projection, to the nuclear region which has a projected size of $\lesssim 36\mathrm{\,pc}$ with one possible exception (S2b). The clouds cover velocities of $150\text{--}600\mathrm{\,km\,s^{-1}}$ blue-shifted with respect to the that is likely related to a rotating disk aligned with inner dust lane (S1). The disk-related gas extends over most over the south-east radio lobe. The overall gas distribution is clumpy with the majority of the gas concentrated at the end of the lobe. An important characteristic of the VLBI data is the significant amount of absorption that is not detected, but inferred from low-angular resolution observations with the VLA and the WSRT. The high-velocity I gas {#sec:Discussion:outflow} ----------------------- As mentioned in Sect. \[sec:Intro\], is classified as a LERG, which makes it less likely that the AGN is able to produce powerful radiative winds which could couple to the dust of the galaxy and create strong gaseous outflows. This in turn leaves the radio jet as the main driver for the outflow, i.e., features S2a, S3, S4, and perhaps S2b. However, we cannot exclude the possibility of S2b being part of the disk (see also Sect. \[sec:Discussion:disk\]). The location of S2a, S3, and S4 suggests that the outflow starts already very close to the nucleus of (see also Fig. \[fig:nh\]a, b, and c). As these structures are unresolved, we use the restoring beam as an upper limit of the extent of the gas ($\lesssim36\mathrm{\,pc}$ in projection). In contrast, S2b is extended and we estimate its projected size to be about $54\mathrm{\,pc}\times 15\mathrm{\,pc}$. There are several implications for the undetected absorbed flux density in our VLBI observation. The diffuse extended continuum emission that is resolved out by the high-resolution of VLBI (see Section \[sec:Results:Cont\]) limits the area over which we can probe for absorption. However, we could still be able to detect absorption in regions outside of the detected continuum emission if the absorption is compact. 
In fact, in some channels the absorption does extend marginally outside of the $3\sigma_\mathrm{VLBI,cont}$ contour lines, but is below the $3\sigma_\mathrm{VLBI,cube}$-level. Another possibility is related to the small fraction of the outflow detected only against the nuclear region. This region is the brightest one with a peak flux density of about $145\mathrm{\,mJy}$ which corresponds to an optical depth limit of 0.0026 at the $1\sigma_\mathrm{VLBI,cube}$-level or 0.0077 at the $3\sigma_\mathrm{VLBI,cube}$-level. It is worth noting that some of the channels between S4 and S3 and between S3 and S2a have optical depths above the $1\sigma_\mathrm{VLBI,cube}$-level at the location of the peak continuum flux density. However, the peak flux density of the lobe is about $87\mathrm{\,mJy}$, which corresponds to an optical depth limit of 0.0043 at the $1\sigma_\mathrm{VLBI,cube}$-level or 0.013 at the $3\sigma_\mathrm{VLBI,cube}$-level. The peak optical depth of the clouds S3 and S4 is about 0.014 which is consistent with the $3\sigma_\mathrm{VLBI,cube}$-level within in our amplitude calibration uncertainty. Thus, we cannot entirely exclude the existence of clouds such as S3 and S4 at larger distances from the nucleus, i.e., agains the lobe. However, we cannot detect these clouds because of sensitivity limits. This could be a likely situation and entails that the outflow is actually more extended but made up of clouds similar to the one detected against the nucleus. We cannot exclude a component of diffuse gas, but this would be also too faint to be detectable. We can exclude the presence of single clouds producing a depth of the absorption similar to the one detected by the VLA and WSRT (see Fig. \[fig:Spec2\]). Taking the peak flux density of the lobe and the lowest absorption recovered by the WSRT at around $28800\mathrm{\,km\,s^{-1}}$ would correspond to an optical depth of 0.02. Therefore, such clouds would have been detected at least against the brighter part of the lobe. We would also not have seen this cloud towards the nuclear region, the peak flux density implies an optical depth of about 0.015 similar to S3 and S4. ### Properties of the outflowing gas Figure \[fig:nh\] shows the column density $N_\mathrm{H}$ normalized by the spin temperature $T_\mathrm{spin}$ following $N_\mathrm{\ion{H}{I}}T_\mathrm{spin}^{-1} \approx 1.823\times 10^{18} \int\tau(v)dv \mathrm{\,cm^{-2}\,K^{-1}}$, where $v$ is in units of $\mathrm{km\,s^{-1}}$. Again, we focus on the velocity range of $\sim29000\mathrm{\,km\,s^{-1}}$ to $\sim30400\mathrm{\,km\,s^{-1}}$ considering only channels with $\leq -3\sigma_\mathrm{VLBI,cube}$. The spin temperature of the gas is unknown in . However, it is very likely that it reaches higher values in the nuclear region than in the south-east lobe. A similar argument was made by [@Morganti2005b] and a $T_\mathrm{spin}$ of $\sim 1000\mathrm{\,K}$ is often used for gas close to the active nucleus or gas with extremely disturbed kinematics, although measurements of this quantity are scarce (see @Morganti2016 [@Holt2006]). Thus, it seems reasonable to assume $T_\mathrm{spin}=1000\mathrm{\,K}$ for S2a, S3 and S4 and $T_\mathrm{spin}=100\mathrm{\,K}$ for S2b. This results in column densities of $1.8\text{--}4.0\times 10^{21}\mathrm{cm^{-2}}$ for the clouds co-spatial to the nucleus and $1.8\times 10^{20}\mathrm{cm^{-2}}$ for S2b (see Table \[tab:Data:Outflow\]). 
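For reference, the conversion from integrated optical depth to column density quoted above can be evaluated as in the short sketch below; the integrated optical depths and spin temperatures are illustrative inputs, not measurements of individual clouds.

```python
def hi_column_density(int_tau_kms, t_spin_K):
    """N(HI) in cm^-2 from the integrated optical depth (km/s) and spin temperature (K),
    using N(HI)/T_spin ~= 1.823e18 * int(tau dv) cm^-2 K^-1."""
    return 1.823e18 * t_spin_K * int_tau_kms

# Illustrative values: a cloud seen against the bright nucleus (high assumed T_spin)
# and one against the fainter lobe (low assumed T_spin); both numbers are placeholders.
print(f"{hi_column_density(2.0, 1000.0):.2e} cm^-2")   # ~3.6e21 for 2 km/s at 1000 K
print(f"{hi_column_density(1.0,  100.0):.2e} cm^-2")   # ~1.8e20 for 1 km/s at  100 K
```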
We can further estimate the mass outflow rate $\dot{M}_\mathrm{\ion{H}{I}}$ following [@Heckman2002] $$\begin{aligned} \dot{M}_\mathrm{\ion{H}{I}} \sim 30 \frac{r_\star}{\mathrm{kpc}}\frac{N_\mathrm{\ion{H}{I}}}{10^{21}\mathrm{\,cm^{-2}}}\frac{v}{300\mathrm{\,km\,s^{-1}}}\frac{\Omega}{4\pi}M_\sun\mathrm{\,yr^{-1}} \end{aligned}$$ where $r_\star$ is the distance of the cloud, $v$ its velocity and $\Omega$ its solid angle which is assumed to be $\pi$. We estimate $v$ for each of the features relative to the peak velocity of S1 (see Fig. \[fig:PV\]c) and assume that $r_\star$ is given by the size of the beam. The only exception is S2b for which we measure a distance of $\sim150\mathrm{\,mas}$ in Fig. \[fig:PV\]c leading to $r_\star \sim 310\mathrm{\,kpc}$ (de-projected). The resulting values are listed in Table \[tab:Data:Outflow\]. As both $N_\mathrm{H}$ and $r_\star$ are lower limits, the actual mass outflow produced by these features is likely to be higher. Due to the order-of-magnitude lower column density compared to the clouds co-spatial to the nucleus, S2b has the lowest value. This is also the case for the density of all four clouds which ranges from $30\text{--}50\mathrm{\,cm^{-3}}$ for S2a, S3 and S4 compared to $2\mathrm{\,cm^{-3}}$ for S2b (Table \[tab:Data:Outflow\]. For the former clouds a spherical geometry was assumed and for the latter an ellipsoidal geometry. [@Morganti2005] estimated the total mass outflow rate to be $\sim47M_\sun\mathrm{\,yr^{-1}}$ based on the full extent of the unresolved absorption spectrum ($v\sim 1500\mathrm{\,km\,s^{-1}}$) and assuming an homogeneous distribution of the gas over a radius of 0.5kpc. Thus, we consider this value an upper limit on the total mass outflow rate. Integrating over S2a, S3 and S4 yields $N_\mathrm{H}\sim 7.8\times10^{21}\mathrm{\,cm^{-2}}$ and $\dot{M}_\mathrm{\ion{H}{I}}\sim 5M_\sun\mathrm{\,yr^{-1}}$. This suggests that at least 10% of the estimated total mass outflow rate is located close (in projection) to the nucleus. Following [@Holt2006], the kinetic energy of an outflow is given by $$\begin{aligned} E_\mathrm{kin} \approx 6.34\times 10^{35}\frac{\dot{M}}{2}\left(v_\mathrm{out}^2 + \frac{\mathrm{FWHM}^2}{1.85}\right) \end{aligned}$$ where $\mathrm{FWHM}$ is the full width at half maximum of the line. [@Labiano2013] fitted three Gaussian distributions to the WSRT spectrum. One of the distributions described the broad, blue wing of the spectrum with the following parameters: $v_\mathrm{hel}=29474\mathrm{\,km\,s^{-1}}$ and $\mathrm{FWHM}\sim 1100\mathrm{\,km\,s^{-1}}$. Using these values and $\dot{M}_\mathrm{\ion{H}{I},max}\sim 47M_\sun\mathrm{\,yr^{-1}}$ yields $E_\mathrm{kin}\sim 9.4\times 10^{42}\mathrm{erg\,s^{-1}}$. The estimated mass of the central supermassive black hole of is $\log\,m_\mathrm{SMHB}\approx8.5\mathrm{M_\sun}$ [@Mezcua2011] and implies that the maximum kinetic energy of the outflow is up to 0.02% of the Eddington luminosity. For the VLBI detected outflow, we take the integrated mass outflow rate ($\gtrsim 5M_\sun\mathrm{\,yr^{-1}}$), the central velocity of S3 and the full width at zero intensity (FWZI) of $500\mathrm{km\,s^{-1}}$ (Fig. \[fig:PV\]c). This suggests that the outflow close to the nucleus has at least 4% of the maximum kinetic energy. Recent numerical simulations of jet interaction with an inhomogeneous multi-phase ISM have reached resolutions which allow a comparison with VLBI measurements (by e.g., @Wagner2012 [@Mukherjee2016; @Mukherjee2017; @Cielo2017b]). 
This allows us to qualitatively consider the implications for our measurements. In general, the simulations demonstrated the strong impact that powerful jets have on the velocity, temperature, pressure and density distribution of the ISM gas. When the jet hits the ISM, the gas at the shock front of the jet is being accelerated to the highest velocities. While the jet continues to push through the ISM, the already accelerated gas moves outwards along and transversal to the jet axis forming a kind of expanding cocoon around the jet. This disrupts the ISM in particular in the proximity of the jet and decreases the overall density of the gas. At some point the jet breaks through the ISM and the gas primarily expands transversal to the jet axis. However, the expansion of the jet can also be halted if the jet power is too low and/or the density of the medium is too high. This can prevent the jet from pushing all the way through the ISM. The velocities and densities of the clouds that we measure are within the range of values expected from these simulations. Given the properties of the clumpy gas and the morphology of the radio emission in 3C236, it seems likely that we see the jet- interaction in an already advanced stage in its evolution. However, projection effects and the undetected gas make it difficult to assess whether the VLBI jet has already entirely pushed through the gas. A more quantitive comparison is difficult as most of the available simulations consider only the warm and hot ISM gas ($\gtrsim10^4\mathrm{\,K}$). An exception is the recent study by [@Mukherjee2018] of which traces the cold molecular gas down to $10^2\mathrm{\,K}$. In contrast to the jet axis in is aligned with the disk. In this particular work, the simulations were able to reconstruct kinematic features of the cold gas as seen in observations by [@Morganti2015]. Numerical simulations like the one performed by [@Mukherjee2018] are essential to understand the interaction of the radio jet and cold ISM gas. ![image](3C236_nhT_collage){width="1\linewidth"} The disk-related gas {#sec:Discussion:disk} -------------------- [@Struve2012] related the rather symmetric gradient around the deep absorption to a disk of gas (see Fig. \[fig:PV\]d). They did not consider the gas detected within a distance of $<200\mathrm{\,mas}$ from the nucleus to be connected to it due to the lack of spatial and kinematic structure. However, our observations reveal a velocity gradient also across the central part of the lobe (Fig. \[fig:PV\]d). It is larger than the one at the location of the deep absorption (Fig. \[fig:PV\]a). Because the absorption extends to the edges of the continuum and not all of the absorption is recovered, it is difficult to measure the full width of the disk. Thus, we do not calculate the properties of the disk at this point as it would require more detailed modelling that is beyond the scope of this work. The column density of the gas changes significantly across the lobe. Figure \[fig:nh\]d depicts the gas related to S1. [@Struve2012] reported a value of $N_\mathrm{\ion{H}{I}}\approx 6.1\times 10^{21}\mathrm{cm^{-2}}$ at the location of the peak of the absorption assuming a conservative value of $T_\mathrm{spin}=100\mathrm{\,K}$. This is similar to our measurements, but we find that there is variation over an order of magnitude across the lobe. The column densities are up to an order of magnitude lower than the column density from CO estimated by [@Labiano2013]. 
Assuming that the disk is aligned with the inner dust lane, we can estimate the height of the disk using the extent of S1 in Fig. \[fig:PV\]c. This yields $\gtrsim 200\mathrm{\,mas}$ or $\gtrsim360\mathrm{\,pc}$ in projection and has to be considered a lower limit due to the undetected gas. [@Schilizzi2001] estimated an apparent inclination of the radio jet to the line of sight of $\sim 60^\circ$ based on the ellipticity of the host galaxy and assuming that the jets are perpendicular to the dust lanes. This yields a de-projected height of the disk of $\gtrsim 420\mathrm{\,pc}$. The major axis of the inner dust lane is about 1.8kpc in projected size [@ODea2001] and the CO disk extends up to about 1.3kpc. Assuming that the disk has the radial extent of the CO disk, this implies a thick rather than a thin disk. [@Nesvadba2011] also suggested an ellipsoidal configuration instead of a thin disk for the H$_2$ gas on larger scales. The distribution of the gas across the south-east radio lobe seen in our VLBI image and in [@Struve2012], in addition to the co-spatiality with the inner dust lane, provides further support for the interpretation that the morphology of the lobe is, to some extent, the result of interaction between the jet and the dust lane [@ODea2001]. Such an interaction would affect the morphology and kinematics of the disk. The location and kinematic properties of S2b (see Fig. \[fig:PV\]c,d and Fig. \[fig:nh\]d) could be a signature of this interaction instead of being related to outflowing gas. In this context, it is interesting that [@Labiano2013] required two Gaussian functions to fit the deep part of the absorption spectrum, a deep, narrow component ($v_\mathrm{hel}=29828\mathrm{\,km\,s^{-1}}$, $\mathrm{FWHM}\sim 80\mathrm{\,km\,s^{-1}}$) and a shallower, broader one ($v_\mathrm{hel}=29846\mathrm{\,km\,s^{-1}}$, $\mathrm{FWHM}\sim 300\mathrm{\,km\,s^{-1}}$). Further investigations are necessary and would require detailed numerical simulations of the interaction between the jet and the cold ISM.

  ----------- --------------------------------------------- ------------------------- --------------- ------------------------- ------------------------- ---------------- -------------- -------------------------------
  Component   $N_\mathrm{\ion{H}{I}}T_\mathrm{spin}^{-1}$   $N_\mathrm{\ion{H}{I}}$   $d$             $n_\mathrm{\ion{H}{I}}$   $m_\mathrm{\ion{H}{I}}$   $v$              $r_\star$      $\dot{M}_\mathrm{\ion{H}{I}}$
              \[$10^{19}$cm$^{-2}$K$^{-1}$\]                \[$10^{19}$cm$^{-2}$\]    \[pc\]          \[cm$^{-3}$\]             \[$10^4$M$_\sun$\]        \[kms$^{-1}$\]   \[pc\]         \[M$_\sun$yr$^{-1}$\]
  S4          0.18                                          18                        $\lesssim36$    30                        0.65                      640              $\lesssim40$   1.2
  S3          0.40                                          40                        $\lesssim36$    60                        1.5                       420              $\lesssim40$   1.7
  S2a         0.33                                          33                        $\lesssim36$    50                        1.2                       150              $\lesssim40$   0.5
  Nucleus     0.78                                          78                        $\lesssim36$    120                       2.8                       640              $\lesssim40$   5
  S2b         0.18                                          1.8                       $56\times 15$   2                         0.28                      150              310            0.2
  ----------- --------------------------------------------- ------------------------- --------------- ------------------------- ------------------------- ---------------- -------------- -------------------------------

Comparison with 4C12.50 {#sec:Discussion:Comparison}
-----------------------

As mentioned in Sect. \[sec:Intro\], the compact radio galaxy 4C12.50, located at a redshift of 0.1217, is currently the only other powerful radio galaxy for which the outflow has also been studied with VLBI. There have been other radio galaxies in which a strong outflow has been detected and partially resolved, e.g., 3C305 [@Morganti2005b] and 3C293 [@Mahony2013].
However, these observations were obtained at lower angular resolution and thus probed larger spatial scales. Therefore, we focus our comparison on 4C12.50. [@Morganti2013] show that the gas is distributed on either end of its projected 200pc-size radio structure, i.e., the deep absorption is located at the northern extent of the source, while the broad outflow is co-spatial with the hot spot in the southern part of the source. In contrast to 3C236, no absorption was reported in the nuclear region, and the high- and low-resolution spectra match well, which suggests that all of the absorption has been recovered by the VLBI observation. [@Morganti2013] measured a column density of the blue-shifted clouds in 4C12.50 of $4.6\times 10^{21}\mathrm{cm^{-2}}$, using $T_\mathrm{spin}=100\mathrm{\,K}$ due to the distance of the gas to the nucleus. This is comparable to S2a, S3 and S4 even though these are located co-spatial to the nucleus, i.e., a higher value for $T_\mathrm{spin}$ was assumed. The mass outflow rate of the H I in 4C12.50 was determined to range between $16\,M_\sun\mathrm{\,yr^{-1}}$ and $29\,M_\sun\mathrm{\,yr^{-1}}$. However, it is difficult to compare with 3C236 as we only measure lower limits. Although S2b would be better suited for comparison in terms of its location, its column density and total mass are an order of magnitude lower than what was determined for 4C12.50. The mass outflow rate and outflow velocity suggest a kinetic energy of about 0.02–0.03% of the Eddington luminosity for 4C12.50, depending also on the assumed black hole mass [@Dasyra2006; @Dasyra2011b; @Son2012]. This value range is similar to what we have estimated as a possible upper limit for the kinetic energy of the outflow in 3C236. There is also potentially a large difference in the total mass. [@Morganti2013] estimated a mass of $\sim1.4\times 10^5M_\sun$ for 4C12.50, which represents a lower limit as the gas could be distributed beyond the radio continuum. [@Struve2012] derived a value of 5.9–9$\times 10^9M_\sun$ assuming the H I at the southern end of the lobe in 3C236 is contained within a regularly rotating disk. The differences observed between these two objects can be the result of a combination of differences in size and age of the two sources and differences in the conditions of the ISM. The gas in 3C236 could be more settled than in 4C12.50. This may suggest that at the distance of the south-east radio lobe of 3C236 from the nuclear region, about 0.3 kpc, there are no more dense clouds and the jet has already broken through the denser gas. This would open the possibility that both sources represent different stages of evolution, at least with respect to the jet-ISM interaction. 3C236 could be further advanced in its evolution than 4C12.50. The ages of both radio sources have been estimated, based on the cooling time, to be $\sim10^4\mathrm{\,years}$ (4C12.50, @Morganti2013) and $\sim10^5\mathrm{\,years}$ (3C236, @ODea2001 [@Tremblay2010]). The jet in 3C236 could have had more time to interact with the ISM, dispersing the gas to a greater extent. Thus, the combination of these parameters needs to be considered when the presence (or absence) of outflows and their properties are investigated.

Summary & Conclusion {#sec:Summary}
====================

In this paper, we have presented results on the gas distribution in the central 1kpc of the radio source 3C236 as detected in absorption by milli-arcsecond global VLBI and arc-second VLA observations. We find that all of the gas recovered by VLBI is contained within the nuclear region and the south-east lobe of the radio source.
Compared to the lower-resolution VLA data, the VLBI data recover a significant amount of the absorption from the disk-related and the outflowing gas components, as well as about 40% of the continuum flux density. The latter implies substantial extended low-surface-brightness emission that is resolved out by the high resolution of VLBI. For the first time, we have been able to localise part of the broad blue-shifted component of the gas in 3C236 in the form of distinct clouds located almost exclusively in the compact nuclear region, which is $\lesssim 40\mathrm{\,pc}$ in size (in projection). The clouds cover a velocity range of about 600 km s$^{-1}$ with respect to the peak of the disk-related gas and we estimate that they have a density of $\gtrsim 60\mathrm{cm^{-3}}$. There is also one cloud co-spatial to the south-east lobe and well aligned with the position angle of the jet that appears to be kinematically disturbed gas. While it could be part of the disk, it is also possible that it represents outflowing gas. In the latter case, its location and extended size imply a density as low as $\sim 2\mathrm{cm^{-3}}$. Overall, the mass outflow rate of the VLBI-detected outflowing gas is about 10% of the total mass outflow rate of $47\mathrm{M_\sun\,yr^{-1}}$ estimated from unresolved spectra. The clouds co-spatial to the nuclear region account for about 4% of the total kinetic energy of the outflow. Because 3C236 is classified as a LERG, we consider the radio jets as the most likely driver of the outflow. The discrepancy between the low- and high-resolution absorption spectra, in combination with the distribution of the detected gas, implies that both the observed and the undetected outflow are clumpy. However, we cannot exclude the possibility of a highly diffuse gas component. A qualitative comparison with numerical simulations suggests that the interaction of the jet with the gas has already been going on for a long time in 3C236. In this scenario the high-velocity gas that we do not detect has been dispersed significantly as a result of the jet interaction. Even the disk-related gas could have been affected. We compare our results to 4C12.50, in which no absorption was detected co-spatial to the nuclear region and all of the H I was recovered by VLBI, including the jet-driven outflowing gas. The differences to 4C12.50 are very intriguing, as they could be a sign that the gas is more settled in 3C236. However, the available data do not allow us to draw strong conclusions on whether both sources represent different stages in AGN evolution. Additional data on other sources are required. This work is part of our ongoing effort to spatially resolve the jet-driven outflow in young and re-started powerful radio galaxies. It demonstrates that great care is required when physical quantities such as density and mass of the gas are derived from unresolved spectra. This shows the need for high-resolution follow-up observations of upcoming large absorption surveys conducted by, e.g., Apertif [@Oosterloo2010; @Maccagni2017], MeerKat [@Gupta2017], and ASKAP [@Allison2015]. RS gratefully acknowledges support from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Advanced Grant RADIOLIFE-320745. EKM acknowledges support from the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. The European VLBI Network is a joint facility of independent European, African, Asian, and North American radio astronomy institutes.
Scientific results from data presented in this publication are derived from the following EVN project code: GN002. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The Long Baseline Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The Arecibo Observatory is a facility of the National Science Foundation (NSF) operated by SRI International in alliance with the Universities Space Research Association (USRA) and UMET under a cooperative agreement. The Arecibo Observatory Planetary Radar Program is funded through the National Aeronautics and Space Administration (NASA) Near-Earth Objects Observations program. Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This research has made use of NASA’s Astrophysics Data System Bibliographic Services. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research made use of Astropy, a community-developed core Python package for Astronomy [@Astropy2013]. This research made use of APLpy, an open-source plotting package for Python [@Aplpy2012].
--- abstract: 'This paper presents a micromechanical study of unsaturated granular media in the pendular regime, based upon numerical experiments using the discrete element method, compared to a microstructural elastoplastic model. Water effects are taken into account by adding capillary menisci at contacts and their consequences in terms of force and water volume are studied. Simulations of triaxial compression tests are used to investigate both macro and micro-effects of a partial saturation. The results provided by the two methods appear to be in good agreement, reproducing the major trends of a partially saturated granular assembly, such as the increase in the shear strength and the hardening with suction. Moreover, a capillary stress tensor is exhibited from capillary forces by using homogenisation techniques. Both macroscopic and microscopic considerations emphasize an induced anisotropy of the capillary stress tensor in relation with the pore fluid distribution inside the material. In so far as the tensorial nature of this fluid fabric implies shear effects on the solid phase associated with suction, a comparison has been made with the standard equivalent pore pressure assumption. It is shown that water effects induce microstructural phenomena that cannot be considered at the macro level, particularly when dealing with material history. Thus, the study points out that unsaturated soil stress definitions should include, besides the macroscopic stresses such as the total stress, the microscopic interparticle stresses such as the ones resulting from capillary forces, in order to interpret more precisely the implications of the pore fluid on the mechanical behaviour of granular materials. \[published, DOI: 10.1002/nag.767\]' address: - 'Laboratoire 3S-R (Sols, Solides, Structures - Risques), Grenoble Universités, Domaine Universitaire BP 53, 38041 Grenoble cedex 9, France.' - 'Institut de Recherche en Génie Civil et Mécanique GeM CNRS, Ecole Centrale de Nantes, BP 92101, 44321 Nantes Cedex 3, France.' - 'Cemagref - Unité de recherche Erosion Torrentielle Neige et Avalanches, Domaine Universitaire BP 76, 38402 Saint-Martin d’Hères cedex, France.' author: - 'L. Scholtès' - 'P.-Y. Hicher' - 'F. Nicot' - 'B. Chareyre' - 'F. Darve' date: '01/30/2009' title: 'On the capillary stress tensor in wet granular materials.' --- micromechanics; granular materials; unsaturated; DEM; capillary forces; microstructure INTRODUCTION ============ Macroscopic properties of granular materials such as soils depend on particle interactions. In dry granular materials, interparticle forces are related to the applied external loads, as different studies have shown [@NicotDarve2005; @Cundall1979]. In unsaturated soils subjected to capillary effects, new features must be accounted for in order to properly understand their behaviour. The presence of water leads to the formation of water menisci between neighboring grains, introducing new interparticle forces. The effects of these forces depend on the degree of saturation of the medium. For low water content levels corresponding to disconnected liquid bridges between grains, capillary theory allows the force induced by those bridges to be linked to the local geometry of the grains and to the matric suction or capillary pressure inside the medium [@Fisher1926].
Since the disconnected menisci assumption is not valid for high water content levels due to water percolation, we consider here only the unsaturated state where the discontinuity of the water phase can be assumed, the so-called pendular regime.\ There has been a wide debate on the various interpretations for the mechanical behaviour of unsaturated soils. At early stages of soil mechanics, Terzaghi [@Terzaghi1925] first introduced the concept of effective stress for the particular case of saturated soils, enabling the conversion of a multiphase porous medium into a mechanically equivalent single-phase continuum. In unsaturated soils, water-induced stresses are still debated. The common practice [@Bishop1959; @FredlundMorgenstern1978] is to use the suction or a modified version of it as a second stress variable within a complete hydro-mechanical framework. An alternative approach consists in developing homogenisation techniques in order to derive stress-strain relationships from forces and displacements at the particle level, as proposed in [@NicotDarve2005] for dry granular materials. The basic idea is to consider the material as represented by a set of micro-systems, postulating that the behaviour of a material volume element depends on the intergranular interactions belonging to this volume. We propose here to extend this micro-mechanical approach to unsaturated granular materials, following Li [@Li2003], Jiang et al. [@Jiang2004] and Lu and Likos [@LuLikos2006].\ Along these lines we present two micromechanical models which take into account capillary forces. The first one is a three-dimensional numerical model based on the Discrete Element Method (hereafter designated as the DEM model) pioneered by Cundall and Strack [@Cundall1979], and the second one is an analytical model (hereafter designated as the microstructural model) recently proposed by Hicher and Chang [@HicherChang2006]. The microstructural model is a stress-strain relation which considers interparticle forces and displacements. Thanks to analytical homogenisation/localisation techniques, the macroscopic relation between stress and strain can be derived. In the DEM model, a granular medium is modelled by a set of grains interacting according to elementary laws. Direct simulations are carried out on grain assemblies, computing the response of the material along a given loading path. By studying their effects under triaxial loading, we investigate the implications of capillary forces at the macroscopic level, and offer an insight into the unsaturated soil stress framework by introducing a capillary stress tensor as a result of homogenisation techniques. UNSATURATED SOIL STRESSES ========================= Macroscopic views ----------------- Macroscopic interpretations of the mechanical behaviour of unsaturated soils have been mainly developed in the framework of elastoplasticity [@NuthLaloui2007]. Most of these models consider that the strain tensor is governed by the net stress tensor $\sigma_{ij}-u_a\delta_{ij}$ ($u_a$ being the pore air pressure) and the matric suction or capillary pressure $u_a-u_w$ ($u_w$ being the pore water pressure) inside the medium [@AlonsoGens1990; @WheelerSivakumar1995]. In particular, they consider a new yield surface, called the Loading Collapse (LC) surface, in the plane (($\sigma_{ij}-u_a\delta_{ij}$),($u_a-u_w$)), which controls the volume changes due to the evolution of the degree of saturation for a given loading path.
As a matter of fact, all these formulations can be considered as extensions of the relationship initially proposed by Bishop and Blight [@Bishop1963] for unsaturated soils: $$\sigma'_{ij} = (\sigma_{ij} - u_a\delta_{ij})+\chi(S_r)(u_a-u_w)\delta_{ij}$$ where $\chi(S_r)$ is called the effective stress parameter or Bishop’s parameter, and is a function of the degree of saturation $S_r$ of the medium ($\chi=0$ for a dry material, $\chi=1$ for a fully saturated material).\ Obviously, since the effective stress principle is by definition a macroscopic concept, several authors (Lu and Likos [@LuLikos2006] or Li [@Li2003]) have proposed to examine it from a micromechanical point of view. In order to further investigate unsaturated soil stresses at this level, we propose here a micromechanical analysis of the problem, examining the local water-induced effects through a set of simulated laboratory experiments. Micromechanical interpretation ------------------------------ Let us consider a Representative Volume Element (RVE) of a wet granular material, subjected to an assigned external loading. When the water content decreases inside a saturated granular sample, the air breaks through at a given state. The capillary pressure ($u_a-u_w$) corresponding to that point is called the air-entry pressure, and strongly depends on the pore sizes. Thereafter, the sample becomes unsaturated and capillary forces start to grow due to interface actions between air and water. Since the gaseous phase is discontinuous, this is the capillary regime. From this state, a constant decrease in the degree of saturation corresponds to a gentle increase in the capillary pressure. The pendular regime starts when the water phase is no longer continuous. In this state, the equilibrium between the fluids is obtained through the vapor pressure. Analytical and experimental results [@Haines1925; @Fisher1926] demonstrate that capillary effects at particle contacts produce a kind of bond between particles as a result of menisci (Fig.\[fig1\]). Liquid bridges may form between some pairs of adjoining particles not necessarily in contact, generating an attractive capillary force between the bonded particles. If the drying process continues, these water bridges begin to fail, starting from the non-contacting grains, until the complete disappearance of capillary forces inside the assembly.\ As the pendular regime is considered throughout this paper, water is considered to be solely composed of capillary menisci: each liquid bridge is assumed to connect only two particles. Therefore, two types of forces coexist within the granular medium. For dry contacts, a contact force develops between contacting granules. This repulsive force, which is a function of the relative motion between the contacting grains, is usually well described by an elastoplastic contact model. For water-bonded particles, a specific attractive force exists. This water-induced attractive interaction can be described by a resulting capillary force, rather than by a stress distribution, as mentioned by Haines [@Haines1925] or Fisher [@Fisher1926]. This capillary force is a function of the bridge volume, of the size of the particles, and of the fluid nature (see section 3.1.1 for the details). The objective of this section is to derive, in a simple manner, an expression relating the overall stress tensor within the RVE to this internal force distribution.\ For this purpose, the Love [@Love1927] static homogenisation relation is used.
This relation expresses the mean stress tensor $\sigma$ within a granular volume $V$ as a function of the external forces $\vec{F}^{ext,p}$ applied to the particles $p$ belonging to the boundary $\partial V$ of the volume: $$\sigma_{ij} = \frac{1}{V} \sum_{p \epsilon \partial V} F_i^{ext,p} x_j^p$$ where $x_j^p$ are the coordinates of the particle $p$ with respect to a suitable frame. It is worth noting that this relation is valid whatever the nature of the interactions between grains.\ Taking into account the mechanical balance of each particle of the volume $V$ (including the boundary $\partial V$), Eq.(2) can be written as: $$\sigma_{ij} = \frac{1}{V} \sum_{p=1}^N \sum_{q=1}^N F_i^{q,p} l_j^{q,p}$$ where $N$ is the number of particles within the volume, $\vec{F}^{q,p}$ is the interaction force exerted by the particle $q$ onto the particle $p$, and $\vec{l}^{q,p}$ is the branch vector pointing from particle $q$ to particle $p$ ($\vec{l}^{q,p} = \vec{x}^p - \vec{x}^q$).\ As we consider partially saturated granular media, two independent kinds of interparticle forces can be distinguished: 1. if particles $p$ and $q$ are in contact, a contact force $\vec{F}_{cont}^{q,p}$ exists. 2. if particles $p$ and $q$ are bonded by a liquid bridge, a capillary force $\vec{F}_{cap}^{q,p}$ exists. Actually, depending on the local geometry, a liquid bond can exist between two grains in contact. In that case, solid contacts are surrounded by the continuous liquid phase, ensuring the simultaneity of contact and capillary forces. The two contributions have therefore to be accounted for by summation.\ Finally, in all cases and for any couple $(p,q) \epsilon [1,N]^2$, it can be written that: $$\vec{F}^{q,p} = \vec{F}_{cont}^{q,p} + \vec{F}_{cap}^{q,p}$$ Thus, by combining Eqs.(3) and (4), it follows that: $$\sigma_{ij} = \frac{1}{V} \sum_{p=1}^N \sum_{q=1}^N F_{cont,i}^{q,p} l_j^{q,p} + \frac{1}{V} \sum_{p=1}^N \sum_{q=1}^N F_{cap,i}^{q,p} l_j^{q,p}$$ As a consequence, Eq.(5) indicates that the stress tensor is split into two components: $$\sigma_{ij} = \sigma_{ij}^{cont} + \sigma_{ij}^{cap}$$ 1. A first component $\sigma_{ij}^{cont} = \frac{1}{V} \sum_{p=1}^N \sum_{q=1}^N F_{cont,i}^{q,p} l_j^{q,p}$ accounting for the contact forces transmitted along the contact network. 2. A second component $\sigma_{ij}^{cap} = \frac{1}{V} \sum_{p=1}^N \sum_{q=1}^N F_{cap,i}^{q,p} l_j^{q,p}$ representing the capillary forces existing within the assembly. It is to be noted that $\sigma_{ij}^{cont}$ is a stress quantity standing for intergranular contact forces in the same way as in saturated or dry conditions. Considering the concept as initially introduced by Terzaghi, $\sigma_{ij}^{cont}$ plays the role of the so-called effective stress by governing soil deformation and failure. Besides, $\sigma_{ij}^{cap}$ is the tensorial counterpart of capillary water effects or, by extension, of suction. By analogy with Eq.(1), we can therefore define a microstructural effective stress $$\sigma_{ij}^{cont} = \sigma_{ij} - \sigma_{ij}^{cap}$$ where $\sigma_{ij}$ can be identified with the net stress, representing the apparent stress state in the material. Compared with Eq.(1), where the effect of water is intrinsically isotropic, $\sigma_{ij}^{cap}$ gives the water effects a tensorial character.\ In fact, in both terms $\sigma_{ij}^{cont}$ and $\sigma_{ij}^{cap}$, a fabric tensor can emerge from the summation [@Love1927; @Christofferson1981; @Rothenburg1981].
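As an illustration of the decomposition in Eqs.(5)–(7), the sketch below assembles the two partial stress tensors from a list of pairwise interactions and also accumulates the normalized fabric tensor discussed next. The data layout (force vector, branch vector, capillary flag) is an assumption made for the example and is not tied to any specific code.

```python
import numpy as np

# Minimal sketch (assumed data structures, not an actual simulation code) of the
# stress splitting of Eqs.(5)-(7): every interaction is described by its force
# vector, its branch vector l (from particle q to particle p) and a flag telling
# whether it is a capillary bond or a solid contact.

def partial_stresses(interactions, volume):
    """Return (sigma_cont, sigma_cap) as 3x3 arrays from (force, branch, is_capillary) tuples."""
    sigma_cont = np.zeros((3, 3))
    sigma_cap = np.zeros((3, 3))
    for force, branch, is_capillary in interactions:
        contribution = np.outer(np.asarray(force), np.asarray(branch)) / volume
        if is_capillary:
            sigma_cap += contribution
        else:
            sigma_cont += contribution
    return sigma_cont, sigma_cap   # the total stress of Eq.(6) is their sum

def fabric_tensor(interactions):
    """Normalized fabric tensor (1/N) * sum of n_i n_j over the interaction normals."""
    normals = [np.asarray(branch, dtype=float) / np.linalg.norm(branch)
               for _, branch, _ in interactions]
    return sum(np.outer(n, n) for n in normals) / max(len(normals), 1)
```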
The fabric tensor is useful to characterize the contact anisotropy of the assembly, which is known to be a basic feature of granular assemblies. In dry granular materials, an induced anisotropy can develop when a deviatoric stress loading is applied. In a partially saturated assembly, due to the possibility of interactions without contact, the conclusion is not so trivial. As an illustration, if we restrict our analysis to spherical particles [@NicotDarve2005], it can be inferred that: $$\sigma_{ij}^{cap} = \frac{1}{V} \sum_{p=1}^N \sum_{q=1}^N F_{cap}^{q,p} l^{q,p} n_i^{p,q} n_j^{p,q}$$ This relation points out that, contrary to the contact term where the fabric tensor $\frac{1}{N} \sum_{q=1}^N n_i^{p,q} n_j^{p,q}$ is directly linked to the induced anisotropy (a contact is associated with a force), two causes can be invoked for the capillary term. First, the distribution of the liquid bonds can be anisotropic. Second, the geometry of the bonds being obviously dependent on the local geometry, it is possible that the distribution of both terms $F_{cap}^{q,p}$ and $l^{q,p}$ is also anisotropic. This is significant because the anisotropic attribute yields shear effects associated with the pore fluid, which could strongly influence the material behaviour.\ In order to enrich our discussion, we present numerical investigations of these features in the following sections using both DEM and micromechanical simulations. MICROSTRUCTURAL INVESTIGATION ON THE CAPILLARY STRESS TENSOR USING DEM ====================================================================== We present here a numerical analysis of the stress variables using a micromechanical model based upon the Discrete Element Method initially introduced by Cundall and Strack [@Cundall1979]. This technique starts with basic constitutive laws between interacting particles and can provide the macroscopic response of a particle assembly to loading changes at the boundaries. Each particle of the material is a rigid sphere identified by its own mass, $m$, radius, $R$, and moment of inertia, $I_0$. For every time step of the computation, interaction forces between particles, and consequently the resulting forces acting on each of them, are deduced from sphere positions through the interaction law. Newton’s second law is then integrated through an explicit second-order finite difference scheme to compute the new sphere positions. The discrete element model -------------------------- A 3D software package called YADE (Yet Another Dynamic Engine), developed by Kozicki and Donzé [@YADE], has been enhanced in order to properly simulate partially saturated granular material features. ### Inter-particle behaviour The contact interaction is described by an elastic-plastic relation between the force $F$ and the relative displacement $U$ of two interacting particles.
A normal stiffness $K_n$ is therefore defined to relate the normal force $F_n$ to the intergranular normal relative displacement $U_n$: $$F_n = \left\{ \begin{array}{ll} K_n U_n \quad \textrm{if} \quad U_n\leq0\\ 0 \quad \textrm{if} \quad U_n > 0 \end{array} \right.$$ and a tangential stiffness $K_t$ allows us to deduce the shear force $F_t$ induced by the incremental tangential relative displacement $dU_t$; this tangential behaviour obeys the Coulomb friction law: $$\left\{ \begin{array}{ll} dF_t = -K_t dU_t\\ F_t^{max} = -\mu F_n \end{array} \right.$$ where $\mu$ is the Coulomb friction coefficient defined as $\mu = tan(\phi)$, with $\phi$ the intergranular friction angle.\ In the work presented here, $K_n$ and $K_t$ are functions of the interacting particle sizes and of a characteristic modulus of the material, denoted $E$: $$\left\{ \begin{array}{ll} K_n = 2E \frac{R_1 R_2}{R_1+R_2}\\ K_t = \alpha\,K_n \end{array} \right.$$ This definition results in a constant ratio between $E$ and the effective bulk modulus of the packing, whatever the size of the particles.\ For simplicity, we assume that the water inside the sample is solely composed of capillary water as defined in the pendular state, with a discontinuous liquid phase. ![Illustration of a liquid bridge between two particles of unequal sizes: (a) global geometry, (b) details of the bridge.[]{data-label="fig1"}](Scholtes1_doublet.eps){width="130mm"} Much attention has been given to these pendular liquid bridges [@Hotta1974; @Lian1993; @Willet2000]. Their exact shape between spherical bodies is defined by the Laplace equation, relating the pressure difference $\Delta u = u_a - u_w$ across the liquid-gas interface to the mean curvature of the bridge and the surface tension of the liquid phase $\gamma$: $$\Delta u = \gamma \left( \frac{1}{r_1} + \frac{1}{r_2} \right)$$ In the Cartesian coordinates of Fig.\[fig1\](b), the two curvature radii $r_1$ and $r_2$ (Fig.\[fig1\](a)) are given by: $$\frac{1}{r_1} = \frac{1}{y(x)\sqrt{1+y'^2(x)}}$$ and $$\frac{1}{r_2} = \frac{y''(x)}{(1+y'^2(x))^{3/2}}$$ where $y(x)$ defines the profile of the liquid-gas interface curve. The $x$ axis coincides with the axis of symmetry of the bridge, passing through the centers of the connected spheres (Fig.\[fig1\](b)).
According to the Laplace equation, the profile of the liquid bridge is thus related to the capillary pressure $\Delta u$ through the following differential equation: $$\frac{\Delta u}{\gamma}(1+y'^2(x))^{3/2} + \frac{1+y'^2(x)}{y(x)} - y''(x) = 0$$ The corresponding liquid bridge volume $V$ and intergranular distance $D$ can be obtained by considering the $x$-coordinates ($x_1$ and $x_2$) of the three-phase contact lines defining the solid-liquid-gas interface, as defined by Soulié et al. [@Soulie2006]: $$\begin{array}{cc} V = \pi \int_{x_1}^{x_2} y^2(x)dx - \frac{1}{3} \pi R_1^3 (1-acos(x_1))^2(2+acos(x_1))\\ - \frac{1}{3} \pi R_2^3 (1-acos(x_2))^2(2+acos(x_2)) \end{array}$$ and $$D = R_2(1-acos(x_2))+x_2+R_1(1-acos(x_1))-x_1$$ The capillary force due to the liquid bridge can be calculated at the profile apex $y_0$ according to the ‘gorge method’ [@Hotta1974] and consists of a contribution of the capillary pressure $\Delta u$ as well as of the surface tension $\gamma$: $$F_{cap} = 2\pi y_0\gamma + \pi y_0^2\Delta u$$ The relation between the capillary pressure and the configuration of the capillary doublet is thus described by a system of coupled non-linear equations (15, 16, 17, 18), in which the local geometry ($D$) and the water volume arise as a result of the solved system [@Soulie2006]. So, to account for capillarity in the code, an interpolation scheme on a set of discrete solutions of the Laplace equation has been developed in order to link directly the capillary pressure to the capillary force and water volume of the liquid bridge for a given grain-pair configuration. This results in a suction-controlled model where, at every time-step during the simulation, capillary forces and water volumes ($F_{cap}, V$) are computed based upon the microstructure geometry $D$ and the imposed suction level $\Delta u$. $$( F_{cap}, V ) = \Im (\Delta u; D)$$ A schematic diagram of the implemented capillary law is shown in Fig.\[fig2\] for a given value of the suction. ![Evolution of the capillary force $F_{cap}$ with the intergranular distance $D$ for a given suction value: a meniscus can form for $D < D_{creation}$ and breaks off for $D > D_{rupture}$.[]{data-label="fig2"}](Scholtes2_ForceDiagram.eps){width="75mm"} In this paper, the choice was made to define the appearance of a bridge when particles come strictly into contact ($D_{creation} =0$), neglecting the possibility of adsorbed water effects. The capillary force is considered constant over the range of the elastic deformation ($D \leq 0$), assuming local displacements to be very small compared to particle radii. Let us note that the formulation intrinsically defines the distance at which the meniscus breaks off as depending on the given capillary pressure and on the local geometry. This maximum distance $D_{rupture}$ corresponds to the minimum $D$ value for which the Laplace equation has no solution. ### Stress tensors Since this study covers the macroscopic and microscopic aspects of unsaturated granular media, stress tensors are calculated by both macro- and micro-methods. The macro-method is the conventional way used in the laboratory to measure stresses in experiments, that is to say: $$\sigma_{ij} = (\sum F_i)/S_j$$ where $F_i$ is the normal force acting on the boundary, and $S_j$ the surface of the boundary oriented by the normal direction $j$.
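Returning to the capillary law of Eq.(19), such a pre-tabulated set of Laplace solutions can be queried at run time by a simple interpolation of the kind sketched below. The grids and tables are placeholders (to be filled offline by a Laplace solver), not the actual data set used in YADE, and a realistic table also spans the grain-size ratio.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of the suction-controlled capillary law (F_cap, V) = f(suction, D):
# the arrays below are placeholders for a table of pre-computed Laplace solutions.
suction_grid = np.linspace(1e3, 50e3, 20)       # Pa, assumed range
distance_grid = np.linspace(0.0, 5e-6, 50)      # m, assumed range
force_table = np.zeros((20, 50))                # to be filled offline by a Laplace solver
volume_table = np.zeros((20, 50))               # idem

force_of = RegularGridInterpolator((suction_grid, distance_grid), force_table,
                                   bounds_error=False, fill_value=0.0)
volume_of = RegularGridInterpolator((suction_grid, distance_grid), volume_table,
                                    bounds_error=False, fill_value=0.0)

def capillary_interaction(suction, distance):
    """Return (F_cap, V) for a grain pair; zero once the bridge has ruptured."""
    if distance < 0.0:      # overlapping grains: force kept constant (Fig. 2)
        distance = 0.0
    return float(force_of((suction, distance))), float(volume_of((suction, distance)))
```

The tabulated approach is what makes a strictly suction-controlled simulation practical, since no Laplace problem has to be solved during time stepping.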
This wall-based stress is equivalent to the net stress $(\sigma_{ij} - u_a\delta_{ij})$ used in unsaturated soil mechanics, with $u_a$ used as the reference pressure because the pore air pressure is effectively zero in many practical problems (as well as in this study).\ As seen in section 2.2, two other stress tensors can be considered through homogenisation techniques: the intergranular stress tensor $\sigma^{cont}$ computed from intergranular forces, and the capillary stress tensor $\sigma_{ij}^{cap}$ computed from capillary forces. ### Sample description and testing programme The studied particle assembly is a cubic sample of 1 mm side length composed of 10000 spheres, with a grain size distribution ranging from 0.025 mm to 0.08 mm, as shown in Fig.\[fig3\], and a porosity of 0.385. The input parameters are listed in Table 1, referring to equation (10).

  ---------------- ------------------- ----------------
  Global Modulus   $\frac{K_t}{K_n}$   Friction angle
  $E$ (MPa)        $\alpha$            $\phi$ (deg.)
  150              0.5                 30
  ---------------- ------------------- ----------------

  : DEM model parameters \[Table1\]

![Sample description.[]{data-label="fig3"}](Scholtes3_DistributionDense1.eps){width="120mm"} The sample was prepared by an isotropic-compaction technique, which can be described in two stages.

- All particles are randomly positioned inside a cube made up of 6 rigid walls in such a manner that no overlap/contact force develops between any pair of particles. The interparticle friction angle is set to a small value (the smaller the friction angle, the denser the assembly; a value of 0.2 degrees has been chosen here) and particle radii are then homogeneously increased, whereas boundary walls stay fixed. The process is run until the confining pressure (15 kPa) is reached and equilibrium between the internal stress state and the external load is satisfied [@Mahboubi1996].

- The interparticle friction coefficient is then changed to a value classically used in DEM simulations to reproduce an acceptable shear strength (30 degrees) and boundary walls are servo-controlled in displacement to maintain the equilibrium state.

Starting from the initially stable configuration, a given suction is then applied. In this case, capillary forces (Eq.(18)) are added to all existing contact forces ($D_{creation} = 0$, see figure \[fig2\]). This process simulates the appearance of liquid bridges at body contacts as it would take place during capillary condensation when the relative humidity of the surrounding air is increased, and coincides with a wetting of the material. The sample then reaches a new equilibrium state from which different stress paths can be imposed. Note that capillary forces are assumed to be zero between a sphere and a wall.\ Suction-controlled triaxial compression tests have been carried out on the generated specimen, which was taken from the initial state at $15\,kPa$ to higher confining pressures by isotropic compaction through wall displacements. The capillary pressure value ($10\,kPa$) has been chosen so that liquid bridge volumes are small enough to avoid the possibility of interconnected liquid bridges, and hence to ensure the pendular regime in the medium. The water content can be simply computed as the sum of all the liquid bridge volumes. In this case, the associated initial degree of saturation of the sample is about $20\,\%$. A constant compression rate is then applied in the axial direction, controlling the lateral walls in displacement to keep the confining pressure constant.
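For reference, two quantities quoted above, the wall-based stress and the degree of saturation, follow directly from the wall forces and the bridge volumes. The sketch below assumes these are available as plain arrays and is illustrative only, not the actual post-processing used here.

```python
import numpy as np

# Illustrative helpers (assumed inputs): the wall-based 'macro' stress and the
# degree of saturation of the sample in the pendular regime.

def wall_stress(normal_forces_on_wall, wall_area):
    """Boundary stress: sum of the normal forces acting on a wall over its area."""
    return np.sum(normal_forces_on_wall) / wall_area

def degree_of_saturation(bridge_volumes, sample_volume, porosity):
    """Sr = total liquid-bridge volume / pore volume (pendular water only)."""
    return np.sum(bridge_volumes) / (porosity * sample_volume)
```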
To satisfy the quasi-static assumption, the loading rate was fixed sufficiently small so that the normalized mean resultant force on particles (which is 0 at static equilibrium) does not exceed $1\,\%$ at each loading step. Stress tensor analysis ---------------------- ### Macroscopic evidence for the capillary stress tensor The quasi-static assumption ensures that equilibrium between the internal stress state and the external load is satisfied along every loading path. The internal stress therefore has to be the sum of the repulsive stress due to the elastic forces $F_n$ and the tensile one due to the capillary forces $F_{cap}$, as presented by Eq.(6) in subsection 2.2. $$\sigma_{ij} = \sigma_{ij}^{cont} + \sigma_{ij}^{cap}$$ As shown in figure \[fig4\] for a triaxial loading path under a given confining pressure of 15kPa, this additivity is perfectly verified. ![Axial stresses resulting from elastic forces ($\sigma_{cont}$) and capillary forces ($\sigma_{cap}$), as well as the external load on the wall ($\sigma_{wall}$) under a suction-controlled triaxial compression.[]{data-label="fig4"}](Scholtes4_compAxialStress.eps){width="100mm"} The quasi-static assumption is numerically confirmed and the existence of a capillary stress tensor is therefore demonstrated for the case of wet granular materials. ### Capillary stress tensor analysis This suction-induced stress raises questions about its structure and, more generally, about the water distribution inside the material. In fact, classical considerations of unsaturated materials often assimilate suction effects to an equivalent pressure which consequently acts in the medium independently from its anisotropy (the “hydraulic” component of the effective stress is generally considered as an isotropic tensor).\ Computing the principal components of this capillary stress tensor along the deviatoric loading path of a triaxial test challenges this assumption, as shown in Fig.\[fig5\]. ![Evolution of the principal capillary stress tensor components during a suction-controlled triaxial compression test ($P_0 = 15 kPa$, $u = 10 kPa$).[]{data-label="fig5"}](Scholtes5_capStress.eps){width="100mm"} This is all the more remarkable in that the model ensures a uniform distribution of the capillary pressure inside the medium. It is clear that for the initial state, corresponding to an isotropic configuration of the assembly, the capillary stress tensor is almost spherical with an initial mean value of about 3.6 kPa for both axial and lateral components. Nevertheless, the anisotropy rapidly evolves with the one induced by loading, producing a difference between the principal components. If we define $\alpha = 2 \frac{\sigma_1^{cap} - \sigma_2^{cap}}{(\sigma_1^{cap}+\sigma_2^{cap})}$ as a representative index of the tensor sphericity, then $\alpha = 0$ for the initial isotropic state; it then slightly evolves to a quasi-constant value of $0.12$ from the $7\,\%$ deformation level until the final $15\,\%$ computed state.\ The causes of this evolution can be analysed by examining the volumetric deformation of the sample (Fig.\[fig6\](a)) and the associated packing rearrangement through the average number of contacts per particle, $K$ (Fig.\[fig6\](b)).
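Both scalar indicators used in this analysis are straightforward to extract from the simulation output; a sketch is given below. The choice of the axial axis and the averaging of the two lateral diagonal components are assumptions of the example, not prescriptions from the model.

```python
import numpy as np

# Sketch of the two indicators discussed above: the sphericity index of the
# capillary stress tensor and the average number of contacts per particle K.
# Inputs are assumed arrays / counts, not the actual simulation output.

def sphericity_index(sigma_cap, axial_axis=1):
    """alpha = 2 (sigma_axial - sigma_lateral) / (sigma_axial + sigma_lateral)."""
    s = np.asarray(sigma_cap, dtype=float)
    axial = s[axial_axis, axial_axis]
    lateral_axes = [i for i in range(3) if i != axial_axis]
    lateral = 0.5 * (s[lateral_axes[0], lateral_axes[0]] + s[lateral_axes[1], lateral_axes[1]])
    return 2.0 * (axial - lateral) / (axial + lateral)

def coordination_number(n_contacts, n_particles):
    """K = 2 * (number of contacts) / (number of particles); each contact is shared by two grains."""
    return 2.0 * n_contacts / n_particles
```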
![Evolutions of the porosity and of the average number of contacts per particle $K$ under a suction-controlled triaxial compression ($P_0 = 15\,kPa$, $u = 10\,kPa$).[]{data-label="fig6"}](Scholtes6_Poro_K.eps){width="140mm"} As these global considerations have to be completed by a micromechanical analysis in order to gain a clear insight into the microstructural origins of the phenomenon, Fig.\[fig6\] will be commented on in the following section. ### Micromechanical investigation Here we develop a micromechanical analysis of the stress variables by considering both contact and liquid bridge distributions through the whole assembly. As the local interactions (dry contacts or menisci interactions) involve normal directions, a database can be defined in terms of orientations for all the grains of the sample. It is proposed to examine this normal direction network.\ The search has been done considering the given direction angle $\theta$ as presented in Fig.\[fig.7\], with $\theta$ corresponding to the angle of the unit normal vector $(\vec n )$ from the axis of axisymmetry of the sample ($Y$). ![Description of the method used for interaction orientation distribution.[]{data-label="fig.7"}](Scholtes7_searchContactMethod1.eps){width="70mm"} As seen in section 2.2, $\sigma_{ij}^{cap}$ can be written as: $$\sigma_{ij}^{cap} = \frac{1}{V} \sum_{p=1}^N \sum_{q=1}^N F_{cap}^{q,p} l^{q,p} n_i^{p,q} n_j^{p,q}$$ which points out the possible induced anisotropy of the capillary tensor by means of both liquid bridge and force intensity distributions. If we now introduce $P_{meniscus}(\vec{n})$ as the menisci orientation distribution inside the sample (in fact, the number of menisci along the direction $\vec{n}$) defined by $\int_{V} P_{meniscus}(\vec{n}) dV =1 $, Eq.(22) becomes: $$\sigma_{ij}^{cap} = \frac{1}{V} \int_{V} <F_{cap}.l>_{\vec{n}} P_{meniscus}(\vec{n}) \vec{n} \otimes \vec{n} dV$$ where $<F_{cap}.l>_{\vec{n}}$ is the mean value of $\vec{F_{cap}}.\vec{l} $ along the direction $\vec{n}$. It is therefore possible to compute separately the geometric distribution ($P_{meniscus}(\vec{n})$) and the static distribution of the quantity ($<F_{cap}.l>_{\vec{n}}$), which involves the mean force intensity, for every direction characterized by $\vec{n}$.\ The $P(\vec{n})$ (Fig.\[fig8\]) and $<F.l>_{\vec{n}}$ (Fig.\[fig9\]) distributions with $\theta$ are plotted for several deformation levels for both contact and menisci contributions. $<F_{cont}.l>_{\vec{n}}$ and $<F_{cap}.l>_{\vec{n}}$ are simply normalized by their mean value so that they can be qualitatively compared. The search has been done for different deformation levels on the sample confined under $15\,kPa$ and subjected to a constant capillary pressure of $10\,kPa$. Different snapshots have been taken, starting from the assumed isotropic initial state, up to a $15\,\%$ axial strain level where the deformation regime appears almost permanent. ![Contacts and menisci orientation distribution $P(\vec{n})$ for different deformation levels[]{data-label="fig8"}](Scholtes8_interactions.eps){width="130mm"} First, as menisci are added at contacts to simulate capillary condensation, the distributions of both the contact and the capillary terms are identical in the initial state.
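The distributions of Figs. \[fig8\] and \[fig9\] can be extracted with a binning procedure of the following kind; this is a sketch with an assumed data layout, not the actual post-processing script, and the evolution of the distributions with strain is discussed next.

```python
import numpy as np

# Sketch of the orientation binning: each interaction contributes to the bin of
# the angle theta between its unit normal and the axis of axisymmetry Y, and the
# quantity F.l is averaged per bin and normalized by its overall mean.

def orientation_distributions(normals, forces, branches, n_bins=18):
    y_axis = np.array([0.0, 1.0, 0.0])
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    theta = np.arccos(np.clip(np.abs(n @ y_axis), 0.0, 1.0))        # 0..pi/2, sign-independent
    fl = np.einsum('ij,ij->i', np.asarray(forces, dtype=float),
                   np.asarray(branches, dtype=float))               # F . l per interaction
    edges = np.linspace(0.0, np.pi / 2.0, n_bins + 1)
    idx = np.minimum(np.digitize(theta, edges) - 1, n_bins - 1)
    counts = np.array([np.sum(idx == b) for b in range(n_bins)])
    mean_fl = np.array([fl[idx == b].mean() if np.any(idx == b) else 0.0
                        for b in range(n_bins)])
    return counts / counts.sum(), mean_fl / fl.mean()   # P(n) and normalized <F.l>
```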
The structural isotropy of the sample clearly appears with a uniform distribution of $P(\vec{n})$ and $<F.l>_{\vec{n}}$ for contacts and menisci along all the directions (Figs.\[fig8\](a) and \[fig9\](a)), confirming here the accuracy of the generation process.\ The evolutions of both contact and liquid bridge distributions during the loading directly result from the deformations of the assembly. As the sample reacts just like a dense granular material (Fig.\[fig6\](a)), the initial contractancy gives rise to a brief increase in the coordination number (Fig.\[fig6\](b)), leading to the development of new liquid bridges. The corresponding growth of both axial and lateral capillary stress components can be seen in Fig.\[fig5\]. As a consequence of the persistence of menisci at low interparticle distances, this combined augmentation persists until $\epsilon_1 = 3\,\%$ even though $K$ strongly drops before rising up again. The small difference between the axial and the lateral components is simply due to the deviatoric loading which produces more contacts, and consequently more menisci, in the active loading direction ($Y$) than in the passive stress-controlled one.\ After $\epsilon_1 = 3\,\%$, the lateral capillary tensor component clearly starts to decrease. As pointed out in Fig.\[fig8\](b), this results from the lateral spreading of the particles produced by the dilatancy of the assembly. Even though the menisci distribution $P_{meniscus}(\vec{n})$ does not seem to follow the induced fabric anisotropy because of the persistence of the liquid bridges, lateral capillary forces tend to diminish due to increasing interparticle distances (Fig.\[fig2\]). On the other hand, the axial component of the capillary stress tensor rises steadily until the initiation of a permanent regime in the deformation process (near $\epsilon_1 = 7\,\%$), where $K$ remains quasi-constant. From this state, as the number of contacts stabilizes while dilatancy persists, $<F_{cap}.l>_{\vec{n}}$ cannot increase any further and the axial component of the capillary tensor starts to diminish because of the increased spreading of the grains.\ ![Contacts and capillary forces distribution orientations ($<F.l>_{\vec{n}}$) for different deformation levels[]{data-label="fig9"}](Scholtes9_Forces.eps){width="130mm"} Regarding the fabric-induced anisotropy, the $P_{cont}(\vec{n})$ distribution seems to be quite constant from $3$ to $15\,\%$. However, it can be noted that the $<F_{cont}.l>_{\vec{n}}$ contribution appears to rapidly reach a maximum anisotropic state before slightly reducing to the final one. This maximum anisotropic strength state corresponds fairly well to the peak shear strength of the sample ($\epsilon_1 = 3\,\%$), where vertical contact force chains are subjected to maximum loading before breaking off. Concerning the final distributions, they are well representative of the so-called critical state, where a stabilization of the stresses occurs with no further significant change in the evolution of the anisotropy. Notably, menisci do not seem to follow the same evolution. Indeed, as liquid bridges can exist over a certain range of increasing intergranular distances, their distribution in the medium is not driven in the same way by the fabric-induced anisotropy and tends to stay close to the initial state, particularly in the range of small deformations.
Nevertheless, for large deformations, due to the sample dilatancy, a small induced anisotropy arises from the disappearance of liquid bridges in the lateral directions. It is evident that the history of the material is essential when dealing with the water distribution.\ To sum up, the analysis reveals a slight induced anisotropy of the capillary stress tensor as a function of the medium fabric. The pore fluid in unsaturated soil has its own fabric that may be readily altered with changes in the granular fabric and is also strongly dependent on the water distribution inside the medium. The global approximation which characterizes water effects in unsaturated materials by an equivalent pore pressure is, therefore, in essence unable to account for this intrinsically anisotropic microstructural force contribution. However, depending on the hydric history, the evolving anisotropy of the pore water distribution can validate the assumption by counterbalancing the induced fabric anisotropy in the material.\ Since DEM analyses result from direct simulations of a granular assembly, the purpose of the next section is to compare the results with those of a microstructural model where the behaviour of the material is obtained through a micromechanically based constitutive relation. COMPARISON WITH A MICRO-MECHANICAL MODEL ======================================== In this section, we first present the microstructural model used for comparison with the DEM simulations. This is a stress-strain model ([@Cambou1989; @NematNasser2000; @NicotDarve2005]) proposed by Chang and Hicher [@Chang2005] which considers inter-particle forces and displacements. Its capability has recently been extended to unsaturated states by incorporating the influence of capillary forces at the micro level [@HicherChang2006]. By comparing the predicted triaxial loading results obtained by the two approaches for the granular assembly of section 3, we confirm the stress concepts introduced previously, focusing on the capillary stress tensor as defined in section 2.2. Stress-Strain Model ------------------- In this model, we envision a granular material as a collection of particles. The deformation of a representative volume of this material is generated by the mobilisation of the contacts between particles in all orientations. Thus, the stress-strain relationship can be derived as an average of the mobilisation behaviour of local contact planes in all orientations. The forces and movements at the contact planes of all orientations are suitably superimposed to obtain the macroscopic stress and strain tensors using the static homogenisation presented in Section 2.2. ### Inter-particle behaviour - *Contact forces and capillary forces* For dry samples, contact forces are directly determined from the external stresses $\sigma$ applied on the granular assembly (Eq.(2)). In the case of wetted samples, different stages of saturation can be identified. The fully saturated regime corresponds to a two-phase material with water completely filling the voids between grains. The water pressure $u_w$ can either be positive or negative (suction), but in both cases the effective stress concept [@Terzaghi1925] can be applied and the contact forces determined by considering the effective stresses $\sigma'$ as the external stresses ([@DeBuhan1996; @Hicher1998]): $$\sigma' = \sigma - u_w I$$ As seen previously, in the case of partially saturated samples in the pendular regime, the liquid phase is distributed in menisci located between close grains.
As a consequence, capillary forces are applied on the grains and are added to the contact forces defined above. The attractive capillary force between two grains connected by a water bridge is a decreasing function of the distance between the grains until the bridge fails (Fig.\[fig2\]). This function depends on the volume of liquid found between the grains. Different mathematical expressions have been proposed for these capillary forces, $F_{cap}$. Eq.(18) presents the expression used in the DEM model. $F_{cap}$ depends on the capillary pressure, defined as the pressure jump across the liquid-air interface, on the liquid-air interface surface tension, as well as on the geometry of the menisci governed by the solid-liquid contact angle and the filling angle. One can see that $F_{cap}$ depends on the geometry of the liquid bridge, which is a function of the amount of pore water and of the distance between two neighboring grains. The use of Eq.(18) for determining the amplitude of the capillary forces is not straightforward and therefore, in the micro-mechanical model, a simplified approach is adopted which considers an empirical relation between the capillary force and the degree of saturation Sr: $$F^n_{cap} = F_{max} e^{-c (\frac{D}{R})}$$ where $F^n_{cap}$ is the capillary force between two neighbouring grains, not necessarily in contact, $F_{max}$ the value of $F^n_{cap}$ for two grains in contact and $R$ the mean grain radius. $D$ represents the distance between two grains and is equal to $l-2R$, $l$ being the branch length given as a distribution function of the grain size and the void ratio; $c$ is a material parameter, dependent on the grain morphology and on the water content. $F_{max}$ is given by: $$\begin{array}{ll} F_{max} = F_0 \frac{S_r}{S_0} \quad \textrm{for} \quad 0 < S_r < S_0 \\ F_{max} = F_0 \frac{S_0 (1-S_r)}{S_r (1-S_0)} \quad \textrm{for} \quad S_0 < S_r < 1 \end{array}$$ where $F_0$ and $S_0$ are material parameters. $F_0$ depends on the grain size distribution, $S_0$ represents the degree of saturation at which any further drying of the specimen will cause substantial breaking of the menisci in the pendular domain. $S_0$ depends on the nature of the granular material.\ Since the menisci are not necessarily all formed in the funicular regime, Eq.(25) may not be applicable for high degrees of saturation. However, in this first approach, we decided to extend it to the whole range of saturation, considering that the amplitudes of capillary forces were small for degrees of saturation higher than 80% and could thus be approached with sufficient accuracy by using the same equation. - *Elastic relationship* The contact stiffness of a contact plane includes normal stiffness, $k_n^\alpha$, and shear stiffness, $k_t^\alpha$. The elastic stiffness tensor is defined by $$F_i^\alpha = k_{ij}^{\alpha e} \delta_{j}^{\alpha e}$$ which can be related to the contact normal and shear stiffness by $$k_{ij}^{\alpha e} = k_n^{\alpha} n_i^\alpha n_j^\alpha + k_t^{\alpha} (s_i^\alpha s_j^\alpha + t_i^\alpha t_j^\alpha)$$ The value of the stiffness for two elastic spheres can be estimated from the Hertz-Mindlin formulation. For sand grains, a revised form was adopted [@Chang1989], given by $$k_n = k_{n0} (\frac{F_n}{G_g l^2})^n \quad k_t = k_{t0} (\frac{F_n}{G_g l^2})^n$$ where $G_g$ is the elastic modulus for the grains, $F_n$ is the contact force in the normal direction, $l$ is the branch length between the two particles, and $k_{n0}$, $k_{t0}$ and $n$ are material constants.
- *Plastic relationship* Plastic sliding often occurs along the tangential direction of the contact plane with an upward or downward movement; thus shear dilation/contraction takes place. The dilatancy effect can be described by $$\frac{d \delta_n^p}{d \Delta^p} = \frac{T}{F_n} - tan \phi_0$$ where $\phi_0$ is a material constant which, in most cases, can be considered equal to the internal friction angle $\phi_{\mu}$. This equation can be derived by equating the dissipation work due to plastic movements and friction in the same orientation. Note that the shear force $T$ and the rate of plastic sliding $d \Delta^p$ can be defined as $T = \sqrt{F_s^2 + F_t^2}$ and $d \Delta^p =\sqrt{(d \delta_s^p)^2 + (d \delta_t^p)^2}$. The yield function is assumed to be of the Mohr-Coulomb type, $$F(F_i, \kappa) = T - F_n \kappa (\Delta^p) = 0$$ where $\kappa(\Delta^p)$ is an isotropic hardening/softening parameter defined as: $$\kappa = \frac{k_{p0} tan (\phi_p) \Delta^p }{\vert F_n \vert tan (\phi_p) + k_{p0} \Delta^p}$$ The hardening function is defined by a hyperbolic curve in the $\kappa - \Delta^p$ plane, which involves two material constants: $\phi_p$ and $k_{p0}$. On the yield surface, under a loading condition, the shear plastic flow is determined by a normality rule applied to the yield function. However, the plastic flow in the direction normal to the contact plane is governed by the stress-dilatancy equation in Eq.(32). Thus, the flow rule is non-associated. - *Interlocking influence* The internal friction angle $\phi_\mu$ is a constant for the material. However, the peak friction angle, $\phi_p$, on a contact plane is dependent on the degree of interlocking by neighboring particles, which can be related to the state of the packing void ratio $e$ by: $$tan(\phi_p) = (\frac{e_c}{e})^m tan(\phi_\mu)$$ where $m$ is a material constant [@Biarez1994]. The state of packing is itself related to the void ratio at critical state, $e_c$. The critical void ratio $e_c$ is a function of the mean stress. The relationship has traditionally been written as: $$e_c = \Gamma - \lambda log(p') \quad or \quad e_c = e_{ref} - \lambda log(\frac{p'}{p_{ref}})$$ where $\Gamma$ and $\lambda$ are two material constants, $p'$ is the mean stress of the packing, and ($e_{ref}$, $p_{ref}$) is a reference point on the critical state line.\ For dense packing, the peak frictional angle $\phi_p$ is greater than $\phi_\mu$. When the packing structure dilates, the degree of interlocking and the peak frictional angle are reduced, which results in a strain-softening phenomenon. - *Elasto-plastic relationship* With the elements discussed above, the final incremental stress-strain relations of the material, which include both elastic and plastic behaviour, can be derived and are given by $\dot F_i^\alpha = k_{ij}^{\alpha p} \dot \delta_j^{\alpha}$. The detailed expression of the elasto-plastic stiffness tensor is given in [@Chang2005]. ### Stress-strain relationship - *Macro micro relationship* The stress-strain relationship for an assembly can be determined by integrating the behaviour of inter-particle contacts in all orientations. During the integration process, a relationship is required to link the macro and micro variables. Using the static hypotheses proposed by Liao et al.
al [@Liao1997], we obtain the relation between the macro strain and inter-particle displacement (finite strain condition not being considered here): $$u_{i,j} = A_{ik}^{-1} \sum_{\alpha = 1}^N \delta_j^\alpha l_k^\alpha$$ where $\delta_j^\alpha$ is the relative displacement between two contact particles and the branch vector $l_k$ is the vector joining the centers of two contacting particles. Using both the principle of energy balance and Eq.(36), the mean force on the contact plane of each orientation is $$F_i^{\alpha} = \sigma_{ij} A_{jk}^{-1} l_k^\alpha V$$ The stress increment $\sigma_{ij}$ induced by the loading can then be obtained through the contact forces and branch vectors for contacts in all orientations [@Christofferson1981; @Rothenburg1981]. Since $l_k^\alpha$ represents the mean branch vector for the $\alpha^{th}$ orientation including both contact and non-contact particles, the value of $F_i^\alpha$ in Eq.(37) represents the mean of contact forces in the $\alpha^{th}$ orientation. $$\sigma_{ij} = \frac{1}{V} \sum_{\alpha =1}^N F_i^{\alpha} l_j^\alpha$$ When the defined contact force is applied in Eq.(37), Eq.(38) is unconditionally satisfied.\ Using the definition of Eq.(38), the stress induced by capillary forces can be computed and is termed as capillary stress, given by $$\sigma_{ij}^{cap} = \frac{1}{V} \sum_{\alpha =1}^N F_{cap,i}^{\alpha} l_j^\alpha$$ As mentionned in section 2.2, it is noted that this term is not analogous to the usual concept of capillary pressure or suction which represents the negative pore water pressure inside the unsaturated material. In agreement with the results obtained through DEM simulations, the capillary stress depends on the geometry of the pores and is a tensor rather than a scalar. Only for an isotropic distribution of the branch lengths $l^\alpha$, this water associated stress can be reduced to an isotropic tensor. This is the case for an initially isotropic structure during isotropic loading, but during deviatoric loading, an induced anisotropy is created and the capillary tensor is no longer isotropic.\ At the equilibrium state, the effective intergranular forces will therefore be the difference between the repulsive forces due to the external stresses (Eq.(38)) and the attractive capillary forces (Eq.(39)). In a similar way to Eq.(7), we can thus define a generalized intergranular stress tensor $\sigma^*$, defined by: $$\sigma^* = \sigma - \sigma^{cap}$$ Assuming that $\sigma^*$ can stand as an appropriate definition of the effective stress, this equation could represent a generalisation of Eq.(1) proposed by Bishop, in which the capillary stress is reduced to an isotropic tensor. Dangla et al. [@Dangla1998] demonstrated the validity of the effective stress approach in elasticity by means of an energy approach. They obtained an expression similar to Eq.(40) but with an additional term corresponding to the work of the interfaces. As pointed out before, capillary forces in our models, and consequently the capillary stresses, depend on the negative pore water pressure, or suction, and on the water-air interface surface tension. A similar approach can be found in the work of Fleureau and al. [@Fleureau2003] who obtained an explicit expression of the capillary stress as a function of the suction for regular arrangements of spherical grains by neglecting the surface tension. 
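To make these relations concrete, the following minimal Python sketch (an illustration only, not the code used in the DEM or micromechanical simulations reported here) evaluates the capillary force law and assembles the capillary stress tensor defined above for a toy set of contacts; the parameter values are those quoted for the simulated assemblies in the next subsections, while the single-contact example and the control volume are purely illustrative assumptions.

```python
import numpy as np

def f_max(sr, f0=0.045, s0=0.01):
    """Maximum capillary force (grains in contact) as a function of the
    degree of saturation Sr, following the piecewise relation given above."""
    if 0.0 < sr <= s0:
        return f0 * sr / s0
    if s0 < sr < 1.0:
        return f0 * s0 * (1.0 - sr) / (sr * (1.0 - s0))
    return 0.0

def f_cap(dist, r_mean, sr, c=4.0):
    """Capillary force between two (possibly non-touching) grains:
    F_cap = F_max(Sr) * exp(-c * D / R)."""
    return f_max(sr) * np.exp(-c * dist / r_mean)

def capillary_stress(forces, branches, volume):
    """Capillary stress tensor: (1/V) * sum over orientations of the
    dyadic product of the capillary force and the branch vector."""
    sigma = np.zeros((3, 3))
    for f, l in zip(forces, branches):
        sigma += np.outer(f, l)
    return sigma / volume

# Toy example: one pair of touching grains of mean radius R = 0.0225 mm at
# a degree of saturation of 20 % (the contact list and volume are illustrative).
r = 0.0225e-3
n = np.array([1.0, 0.0, 0.0])        # contact normal
force = f_cap(0.0, r, 0.20) * n      # attractive force carried along the normal
branch = 2.0 * r * n                 # branch vector joining the grain centres
sigma_cap = capillary_stress([force], [branch], volume=(4.0 * r) ** 3)
```

In an actual simulation the sum would run over all pairs of neighbouring grains carrying a meniscus, and the resulting tensor would enter the generalized intergranular stress $\sigma^* = \sigma - \sigma^{cap}$ defined above.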
The definition of the capillary stress tensor in Eq.(39) can therefore be considered as an extension of the results obtained from these previous studies to cases of non isotropic granular assemblies. ### Summary of parameters One can summarize the material parameters as: - Normalized contact number per unit volume: $\frac{Nl^3}{V}$. - mean particle size, $2R$. - Inter-particle elastic constants: $k_{n0}$, $k_{t0}$ and $n$. - Inter-particle friction angle: $\phi_\mu$ and $m$. - Inter-particle hardening rule: $k_{p0}$ and $\phi_0$. - Critical state for packing: $\lambda$ and $\Gamma$ or $e_{ref}$ and $p_{ref}$. - Capillary force equation: $f_0$, $S_0$ and $c$. Other than critical state parameters, all parameters are inter-particles. Standard values for $k_{p0}$ and $\phi_0$ are the following: $k_{p0} = k_{n0}$ and $\phi_0 = \phi_\mu$ and a typical ratio $\frac{k_{t0}}{k_{n0}} = 0.4$ can generally be assumed [@HicherChang2006]. Therefore, for dry or saturated samples, only six parameters have to come from experimental results and these can all be determined from the stress-strain curves obtained from drained or undrained compression triaxial tests. For unsaturated states, three more parameters need to be determined, using specific triaxial tests on partially saturated samples. Numerical simulations --------------------- Several simulations of triaxial loading have been performed in order to compare DEM and Micromechanical Model results. These are based on computations on a Representative Volume Element of about 10 000 spherical elements. ### Dry samples The model parameters were determined from the following procedure. The granular assembly defined in the discrete element model is made of spherical particles with the grain size distribution presented in Fig.\[fig3\]. The particle size $2R$ was selected equal to $d_{50} = 0.045\,mm$. The elastic parameters could not be directly derived from the inter-particle behaviour used in the DEM simulations, which consider a linear contact stiffness. From previous studies on glass beads assemblies [@Hicher1996; @HicherChang2006], typical values were adopted in this study. The plastic parameters were determined from the numerical results obtained by DEM. Table II summarizes the set of parameters used for modelling the dry sample behaviour. $k_{n0}$ (N/m) $\frac{k_{t0}}{k_{n0}}$ $n$ $\phi_\mu$ (deg.) $\lambda$ $m$ ---------------- ------------------------- ----- ------------------- ----------- ----- 300 0.5 30 20 0.05 0.5 : Microstructural Model parameters for the glass beads assembly \ Fig.\[fig11\] presents the numerical simulations of three triaxial tests performed at three different confining pressures of $15$, $30$ and $60\,kPa$ for the simulated dry assembly having an initial void ratio equal to $0.38$. One can see that the results obtained with the Microstructural Model compared well to DEM ones. ![DEM and MicroMechanical simulations of triaxial compression tests on a similar dry granular assembly.[]{data-label="fig11"}](Scholtes11_q_epsV.eps){width="100mm"} ### Wet samples DEM simulations were performed on unsaturated assemblies with an initial saturation degree of about $20\,\%$, corresponding to a suction value equal to $10\,kPa$. In order to determine the corresponding capillary forces, Eq.(25) includes two material parameters $c$ and $d$ which control the evolution of the water induced forces with the distance between two particles. 
According to experimental results presented in different studies [@Lian1993; @Soulie2006], we decided to take a value of $c = 4$ and of $d = 0.05\,mm$. These values give a standard evolution of the capillary forces as a function of the distance between particles, as well as an initial isotropic distribution of these forces if the material structure is isotropic. The evolution of the capillary forces with the degree of saturation requires two more parameters: $S_0$ and $f_0$. According to previous studies [@HicherChang2006], we selected a value of $S_0 = 1\,\%$ and determined the value of $f_0 = 0.045\,N$ in order to obtain an initial value of the capillary stress in accordance with the one computed by DEM. We then performed numerical simulations of triaxial tests on wet samples for different confining pressures. Contrary to the DEM suction-controlled simulations, the Microstructural Model tests are water-content controlled. In order to compare those two approaches, samples were initially wetted at a common degree of saturation of about $20\,\%$. ![Evolution of the saturation degree during DEM and Micromechanical simulated triaxial tests.[]{data-label="fig12"}](Scholtes12_Sr.eps){width="90mm"} One should notice that, even if the test conditions are not strictly identical, the changes in the degree of saturation during loading obtained for both tests are sufficiently similar (Fig.\[fig12\]) for us to compare the results obtained by the two approaches. ![DEM and Micromechanical simulations of triaxial compression tests on a similar wet granular assembly.[]{data-label="fig13"}](Scholtes13_q_epsVW.eps){width="95mm"} \ As presented in Fig.\[fig13\], the two models give quite similar results. One can see that a material strength increase is obtained for unsaturated samples compared to dry ones at the same confining pressure. The volume changes during triaxial loading create a small change in the degree of saturation (Fig.\[fig12\]). As a consequence, the capillary forces evolve according to Eq.(5). ![Evolution of the principal capillary stress tensor components during a triaxial compression test on a wet granular assembly.[]{data-label="fig14"}](Scholtes14_ScapM.eps){width="100mm"} Fig.\[fig14\] shows the evolution of the principal components of the capillary stress tensor during constant water content triaxial tests. The initial state corresponds to an isotropic capillary stress tensor with a mean stress equal to $3.6\,kPa$, as obtained in DEM. During loading, a structural anisotropy is created due to the evolution of the fabric tensor defined in Eq.(38). Therefore, the principal components of the capillary stress tensor evolve differently. In the studied cases, this difference remains small and corresponds at the end of the test to a relative difference of less than $10\,\%$. This small difference can be explained by two causes. The first one is the small amount of induced anisotropy obtained by the evolution of the fabric tensor. This evolution is due to the change in the branch length $l_i^\alpha$ for each $\alpha$ direction which, in this version of the model, is only due to elastic deformations of the grains in contact. Since all our numerical tests were performed at small confining stresses, the amount of elastic deformation remained quite small. The second reason is linked to the fact that the capillary bridges can exist for non-touching neighboring grains.
This has been taken into account in calculating the mean capillary force and also in determining the capillary stress tensor (Eqs.(27) and (41)). This result is in agreement with the contact and menisci distributions computed by DEM (Figs.\[fig8\] and \[fig9\]).\ Regarding the constitutive behaviour at contacts between solid particles, the results provided by the two methods are in rather good agreement concerning the influence of capillary forces at the macroscopic level. The increase in the shear strength classically observed for partially saturated materials is clearly recovered starting from microscopic considerations, and the slight induced anisotropy of the capillary stress tensor is observed, confirming that suction effects in unsaturated materials cannot be precisely accounted for by an equivalent pore pressure assumption. CONCLUSION ========== Starting from local capillary forces, a stress variable, denoted as the capillary stress tensor and intrinsically associated with water effects, has been defined through homogenisation techniques. Triaxial compression test simulations from two fundamentally different micromechanical models were performed on a granular assembly under several confining pressures for dry and partially saturated conditions. Both models reproduce in quite good agreement the main features of unsaturated granular materials, in particular the increase of the shear strength due to capillary effects.\ The results also suggest that, in partially saturated materials within the pendular regime, the effects of the pore fluid are adequately represented by a discrete distribution of forces rather than by an averaged pressure in the fluid phase. Effectively, as a representative quantity of the pore fluid distribution inside unsaturated materials, this suction-associated stress tensor reveals that the pore fluid has its own fabric, which is inherently anisotropic and strongly dependent on the combined loading and hydric history of the material. Even if the induced anisotropy of the capillary stress tensor appears slight in this study, it is clear that this tensorial nature of water in unsaturated materials implies that suction produces shear effects on the solid phase. This suction-induced shear effect consequently makes it difficult to associate an isotropic quantity with water, as is done in Bishop’s effective stress. Pore pressure is no longer an isotropic stress in unsaturated soil, and the pore fluid can therefore not be treated as an equivalent continuous medium. The analysis finally confirms that suction is a pore-scale concept, and that stress definitions for unsaturated soils should also include microscopic interparticle stresses such as the ones resulting from capillary forces.\ The multi-scale approach presented here appears to be a pertinent complementary tool for the study of unsaturated soil mechanics. More precisely, discrete methods should convey new insights into the discussion about the controversial concept of generalized effective stress by relating basic physical aspects to classical phenomenological views. [9]{} Nicot F, Darve F. A multiscale approach to granular materials. *Mechanics of Materials* 2005; **37**(9):980-1006. Cundall PA, Strack ODL. A discrete numerical model for granular assemblies. *Géotechnique* 1979; **29**(1):47-65. Fisher RA. On the capillarity forces in an ideal soil; correction of formulae given by W.B. Haines. *Journal of Agricultural Science* 1926; **16**:492-505. Terzaghi K.
Principles of soil mechanics, a summary of experimental results of clay and sand. *Engineering News-Record*, 1925; 3-98. Bishop AW. The principle of effective stress. *Teknisk Ukeblad* 1959; **39**:859-863. Fredlund DG, Morgenstern NR, Widger RA. The shear strength of unsaturated soils. *Canadian Geotechnical Journal* 1978; **15**(3):313-321. Li XS. Effective stress in unsaturated soil: A microstructural analysis. *Géotechnique* 2003; **53**:273-277. Jiang MJ, Leroueil S, Konrad JM. Insight into shear strength functions of unsaturated granulates by DEM analyses. *Computers and Geotechnics* 2004; **31**:473-489. Lu N, Likos WJ. Suction stress characteristic curve for unsaturated soil. *J. of Geotechnical and Geoenvironmental Engineering* 2006; **132**(2):1090-0241. Hicher P-Y, Chang CS. A microstructural elastoplastic model for unsaturated granular materials. *Int. Journal of Solids and Structures* 2007; **44**:2304-2323. Nuth M, Laloui L. Effective stress concept in unsaturated soils: clarification and validation of a unified framework. *Int. Journal for Numerical and Analytical Methods in Geomechanics* 2008; **32**:771-801. Alonso EE, Gens A, Josa A. A constitutive model for partially saturated soils. *Géotechnique* 1990; **40**(3):405-430. Wheeler SJ, Sivakumar V. An elasto-plastic critical state framework for unsaturated soils. *Géotechnique* 1995; **45**(1):35-53. Bishop AW, Blight GE. Some aspects of effective stress in saturated and partly saturated soils. *Géotechnique* 1963; **13**(3):177-197. Haines WB. Studies of the physical properties of soils. II. A note on the cohesion developed by capillarity forces in an ideal soil. *Journal of Agricultural Science* 1925; **15**:529-535. Love AEH. *A treatise on the mathematical theory of elasticity*. Cambridge University Press, Cambridge, 1927. Christofferson J, Mehrabadi MM, Nemat-Nasser S. A micromechanical description of granular material behaviour. *ASME Journal of Applied Mechanics* 1981; **48**:339-344. Rothenburg L, Selvadurai APS. Micromechanical definition of the Cauchy stress tensor for particulate media. In: Selvadurai APS (ed.), *Mechanics of Structured Media*. Amsterdam, The Netherlands: Elsevier, 1981; 469-486. Kozicki J, Donze FV. A new open-source software using a discrete element method to simulate granular material. *Computer Methods in Applied Mechanics and Engineering* 2008; **197**:4429-4443. Hotta K, Takeda K, Ionya K. The capillary binding force of a liquid bridge. *Powder Technology* 1974; **10**:231-242. Lian G, Thornton C, Adams MJ. A theoretical study of the liquid bridge force between rigid spherical bodies. *Journal of Colloid and Interface Science* 1993; **161**:138-147. Willet CD, Adams MJ, Simon AJ, Seville JPK. Capillary bridges between two spherical bodies. *Langmuir* 2000; **16**:9396-9405. Soulié F, Cherblanc F, El Youssoufi MS, Saix C. Influence of liquid bridges on the mechanical behaviour of polydisperse granular materials. *Int. Journal for Numerical and Analytical Methods in Geomechanics* 2006; **30**:213-228. Mahboubi A, Ghaouti A, Cambou B. La simulation numérique discrète du comportement des matériaux granulaires. *Revue Française de Géotechnique* 1996; **76**:45-61. Cambou B, Jafari K. A constitutive model for non-cohesive soils. *Computers and Geotechnics* 1989; **7**(4):341-359. Nemat-Nasser S. A micro-mechanically based constitutive model for frictional deformation of granular materials. *Journal of the Mechanics and Physics of Solids* 2000; **48**:1541-1563. Chang CS, Hicher P-Y.
An elastoplastic model for granular materials with microstructural consideration. *Int. Journal of Solids and Structures* 2005; **42**(12):4258-4277. De Buhan P, Dormieux L. On the validity of the effective stress concept for assessing the strength of saturated porous materials: a homogenization approach. *Journal of the Mechanics and Physics of Solids* 1996; **44**(10):1649-1667. Hicher P-Y. Experimental behaviour of granular materials. In: Cambou B (ed.), *Behavior of granular materials*. Springer, Wien New York, 1998; 1-97. Chang CS, Sundaram SS, Misra A. Initial moduli of particulate mass with frictional contacts. *Int. Journal for Numerical and Analytical Methods in Geomechanics* 1989; **13**(6):626-641. Biarez J, Hicher P-Y. *Elementary Mechanics of Soil Behaviour*. Balkema, Rotterdam; 1994, p. 208. Liao CL, Chang TP, Young D, Chang CS. Stress-strain relationship for granular materials based on the hypothesis of best fit. *Int. Journal of Solids and Structures* 1997; **34**:4087-4100. Dangla P, Coussy O, Eymard R. Non-linear poroelasticity for unsaturated porous materials: an energy approach. In: *Poromechanics, a tribute to M.A. Biot*, Proceedings of the Biot Conference on Poromechanics, Balkema, 1998; 59-64. Fleureau J-M, Hadiwardoyo S, Gomes Correia A. Generalised effective stress analysis of strength and small strains behaviour of a silty sand, from dry to saturated state. *Soils and Foundations* 2003; **43**(4):21-33. Hicher P-Y. Elastic properties of soils. *Journal of Geotechnical Engineering*, ASCE, 1996; **122**(8):641-648.
--- address: 'Department of Mathematics, Bilkent University, 06800 Bilkent, Ankara, Turkey' author: - Franz Lemmermeyer title: | Higher Descent on Pell Conics.\ I. From Legendre to Selmer --- Introduction {#introduction .unnumbered} ============ The theory of Pell’s equation has a long history, as can be seen from the huge number of references collected in Dickson [@Dick], from the two books on its history by Konen [@Kon] and Whitford [@Whit], or from the books by Walfisz [@Wal], Faisant [@Fai], and Barbeau [@Bar]. For the better part of the last few centuries, the continued fractions method was the undisputed method for solving a given Pell equation, and only recently faster methods have been developed (see the surveys by Lenstra [@Len] and H.C. Williams [@Wil]). This is the first in a series of articles which has the goal of developing a theory of the Pell equation that is as close as possible to the theory of elliptic curves: we will discuss $2$-descent on Pell conics, introduce Selmer and Tate-Shafarevich groups, and formulate an analogue of the Birch–Swinnerton-Dyer conjecture. In this article, we will review the history of results that are related to this new interpretation. We will briefly discuss the construction of explicit units in quadratic number fields, and then deal with Legendre’s equations and the results of Rédei, Reichardt and Scholz on the solvability of the negative Pell equation. The second article [@L2] is devoted to references to a “second $2$-descent” in the mathematical literature from Euler to our times, and in [@L3] we will discuss the first $2$-descent and the associated Selmer and Tate-Shafarevich groups from the modern point of view. Explicit Units ============== Since finding families of explicit units in number fields is only indirectly related to our topic, we will be rather brief here. The most famous families of explicitly given units live in fields of Richaud-Degert type: if $d = b^2 + m$ and $m \mid 4b$, then $\alpha = b+\sqrt{d}$ has norm $-m$, hence is a unit if $m \in \{\pm 1, \pm 4\}$; if $\alpha$ is not a unit, then $\frac1m \alpha^2$ is. Brahmagupta observed in 628 AD that if $a^2 - nb^2 = k$, then $x = \frac{a^2 + nb^2}k$, $y = \frac{2ab}k$ satisfy the Pell equation $x^2 - ny^2 = 1$. If $k = \pm 1, \pm 2$, these solutions are integral; if $k = \pm 4$, Brahmagupta showed how to construct an integral solution. Note that $x$ and $y$ are necessarily integral if $k$ divides the squarefree integer $n$. According to C. Henry (see [@Male] and Dickson [@Dick v. II, p. 353]), Malebranche (1638–1715) claimed that the Pell equation $Ax^2 + 1 = y^2$ can be solved easily if $A = b^2 + m$ with $m|2b$. Euler mentioned in [@Eul4 p. 99] and later again in [@Euler] that if $d = b^2c^2 \pm 2b$ or $d = b^2c^2 \pm b$, then the Pell equation $x^2 - dy^2 = 1$ can be solved explicitly. This result was rediscovered e.g. by M. Stern [@Stern], Richaud [@Ri3], Hart [@Hart2], Speckman [@Spec] and Degert [@Deg]. Special cases are due to Moreau [@Mor73], de Jonquières [@Jon], Ricalde [@Ric], Boutin [@Bout], Malo [@Mal], and von Thielmann [@vTh]. The quadratic fields ${\mathbb Q}(\sqrt{d}\,)$ with $d = a^2 \pm r$ and $r \mid 2a$ were called fields of Richaud-Degert type by Hasse [@Has]. Degert’s results were generalized by Yokoi [@Yok; @Yok2], Kutsuna [@Kut], Katayama [@Kat1; @Kat2], Takaku & Yoshimoto [@TY], Ramasamy [@Ram], and Mollin [@Mol1]. The results about units in “fields of Richaud-Degert type” had actually been generalized long before Degert.
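As a quick numerical check of Brahmagupta’s composition quoted above (the function name and the sample values below are ours, chosen purely for illustration), one may verify that a solution of $a^2 - nb^2 = k$ with small $k$ indeed produces a solution of the Pell equation:

```python
from fractions import Fraction

def brahmagupta(a, b, n):
    """Given a^2 - n*b^2 = k, return (x, y) with x^2 - n*y^2 = 1, namely
    x = (a^2 + n*b^2)/k and y = 2*a*b/k; the values are rational in
    general and integral for k in {1, -1, 2, -2}."""
    k = a * a - n * b * b
    x = Fraction(a * a + n * b * b, k)
    y = Fraction(2 * a * b, k)
    assert x * x - n * y * y == 1
    return x, y

print(brahmagupta(3, 1, 7))   # 3^2 - 7*1^2 = 2 yields (8, 3), and 8^2 - 7*3^2 = 1
```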
Observe that the computation of the fundamental solution of $X^2 - dY^2 = 1$ for $d = m^2+1$ with the method of continued fractions is trivial because the development of $\sqrt{d}$ has period $1$. Similarly, the developments of $\sqrt{d}$ for $d$ of Richaud-Degert type have small periods; here are some examples: - $d = k^2+k$: $\sqrt{d} = [k,\overline{2,2k}]$; - $d = k^2+2k$: $\sqrt{d} = [k,\overline{1,2k}]$; - $d = a^2k^2+a$: $\sqrt{d} = [ak,\overline{2k,2ak}]$; - $d = a^2k^2+2a$: $\sqrt{d} = [ak,\overline{k,2ak}]$. Actually, already Euler [@Eul4] gave the continued fraction expansions for $d = n^2+1$, $n^2+2$, $n^2+n$, $9n^2+3$ and a few other values of $d$. Perron [@Perr] gave examples of polynomials $f(x)$ for which the continued fraction expansion of $\sqrt{f(x)}$ can be given explicitly; the period lengths of his examples were $\le 6$. More examples were given by Yamamoto [@Yam] and Bernstein [@Ber1; @Ber2] (who produced units whose continued fraction expansions have arbitrarily long period), as well as by Azuhata [@Azh1; @Azh2], Tomita [@Tom1; @Tom2], Levesque & Rhin [@LR], Levesque [@Lev], Madden [@Mad], Mollin [@Mol], H.C. Williams [@WilS], van der Poorten & H.C. Williams [@PW], J. McLaughlin [@McL1; @McL2], and probably many others. Nathanson [@Nat] proved that $X^2 - DY^2 = 1$, where $D= x^2+d$, has nontrivial solutions $X, Y \in {\mathbb Z}[x]$ if and only if $d = \pm 1, \pm 2$. This result was generalized by Hazama [@Haz] and by Webb & Yokota [@WY]. For connections with elliptic curves, see Berry [@Ber] and Avanzi & Zannier [@AZ]. Legendre ======== Legendre’s Théorie des Nombres ------------------------------ § VII of Legendre’s book [@LegTN] on number theory had the title > Théorèmes sur la possibilité des équations de la forme $Mx^2 - Ny^2 = \pm 1$ ou $\pm 2$.[^1] Legendre starts his investigation by assuming that $A$ is prime, and that $p$ and $q$ are the smallest positive solutions of the equation $$\label{ELe} p^2 - Aq^2 = 1.$$ Writing this equation as $(p-1)(p+1) = Aq^2$ he deduces that $q = fgh$ with $f \in \{1, 2\}$ and $$\left. \begin{array}{rcl} p+1 & = & fg^2A \\ p-1 & = & fh^2 \end{array} \right\} \quad \text{or} \quad \left. \begin{array}{rcl} p+1 & = & fg^2 \\ p-1 & = & fh^2A \end{array} \right\}$$ Subtracting these equations from each other he gets equations of the form $\pm \frac2f = x^2 - Ay^2$. For a prime $A \equiv 1 \bmod 4$, the case $f = 1$ leads to contradictions modulo $4$, while the positive sign with $f = 2$ contradicts the minimality of $p$ and $q$; thus the negative sign with $f = 2$ must occur, and therefore $-1 = x^2 - Ay^2$ (see [@LegTN p. 55]). Similar arguments easily yield \[PL1\] Let $p$ be a prime. 1. If $p \equiv 1 \bmod 4$, then $X^2 - pY^2 = -1$ has integral solutions. 2. If $p \equiv 3 \bmod 8$, then $X^2 - pY^2 = -2$ has integral solutions. 3. If $p \equiv 7 \bmod 8$, then $X^2 - pY^2 = +2$ has integral solutions. Next Legendre considers composite values of $A$: if $A = MN$ is the product of two odd primes $\equiv 3 \bmod 4$, then he shows that $Mx^2 - Ny^2 = \pm 1$ is solvable;[^2] if $M \equiv N \equiv 1 \bmod 4$, however, none of the equations $x^2 - MNy^2 = -1$ and $Mx^2 - Ny^2 = \pm 1$ can be excluded. He also states that for given $A$ at most one of these equations can have a solution, but the argument he offers is not conclusive.
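Legendre’s reduction is easy to carry out numerically. The following sketch (an illustration with our own helper names, not Legendre’s procedure; the bookkeeping with greatest common divisors anticipates Dirichlet’s formulation quoted in the next section) computes the fundamental solution of $p^2 - Aq^2 = 1$ from the continued fraction expansion of $\sqrt{A}$ and reads off the auxiliary equation $Mr^2 - Ns^2 = c$ with $c \in \{1, 2\}$ to which it leads:

```python
from math import gcd, isqrt

def pell_fundamental(A):
    """Fundamental solution (p, q) of p^2 - A*q^2 = 1 for non-square A > 1,
    computed from the continued fraction expansion of sqrt(A)."""
    a0 = isqrt(A)
    m, d, a = 0, 1, a0
    p, p_prev = a0, 1            # convergent numerators
    q, q_prev = 1, 0             # convergent denominators
    while p * p - A * q * q != 1:
        m = d * a - m
        d = (A - m * m) // d
        a = (a0 + m) // d
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
    return p, q

def legendre_equation(A):
    """For squarefree A, return (M, N, c) with M*N = A and M*r^2 - N*s^2 = c,
    c in {1, 2}, obtained from the fundamental solution of p^2 - A*q^2 = 1."""
    p, q = pell_fundamental(A)
    if p % 2 == 0:               # p+1 = M r^2,  p-1 = N s^2,  M r^2 - N s^2 = 2
        M, c = gcd(A, p + 1), 2
        N = A // M
        r2, s2 = (p + 1) // M, (p - 1) // N
    else:                        # p+1 = 2M r^2, p-1 = 2N s^2, M r^2 - N s^2 = 1
        M, c = gcd(A, (p + 1) // 2), 1
        N = A // M
        r2, s2 = (p + 1) // (2 * M), (p - 1) // (2 * N)
    assert isqrt(r2) ** 2 == r2 and isqrt(s2) ** 2 == s2 and M * r2 - N * s2 == c
    return M, N, c

print(legendre_equation(13))    # (13, 1, 1): 13 r^2 - s^2 = 1, i.e. s^2 - 13 r^2 = -1
print(legendre_equation(79))    # (1, 79, 2): r^2 - 79 s^2 = 2
```

Both outputs agree with Proposition \[PL1\]: $13 \equiv 1 \bmod 4$ gives the negative Pell equation, while $79 \equiv 7 \bmod 8$ gives $x^2 - 79y^2 = 2$.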
He then treats the general case and comes to the conclusion > Étant donné un nombre quelquonque non quarré $A$, il est toujours possible de décomposer ce nombre en deux facteurs $M$ et $N$ tels que l’une des deux équations $Mx^2 - Ny^2 = \pm 1$, $Mx^2 - Ny^2 = \pm 2$ soit satisfaite, en prenant convenablement le signe du second membre.[^3] Again, Legendre adds the remark that among these equations there is exactly one which is solvable; he gives a different argument this time, which again is not conclusive since he is only working with the minimal solution of the Pell equation. Dirichlet’s Exposition ---------------------- Dirichlet [@Dir] gave an exposition of Legendre’s technique, which we will quote now; his § 1 begins > Wir beginnen mit einer kurzen Darstellung der Legendre’schen Methode. Es bezeichne $A$ eine gegebene positive Zahl ohne quadratischen Factor, d.h. deren Primfactoren alle von einander verschieden sind, und es seien $p$ und $q$ die kleinsten Werthe $(p = 1$ und $q = 0$ ausgenommen), welche der bekanntlich immer lösbaren Gleichung: $$\label{ED1} \tag{1} p^2 - Aq^2 = 1$$ Bringt man dieselbe in die Form $(p+1)(p-1) = Aq^2$, und bemerkt man, dass $p+1$ und $p-1$ relative Primzahlen sind oder bloss den gemeinsamen Factor $2$ haben, je nachdem $p$ gerade oder ungerade ist, so sieht man, dass die Gleichung (\[ED1\]) im ersten Falle die folgenden nach sich zieht: $$p+1 = Mr^2, \quad p-1 = Ns^2, \quad A = MN, \ q = rs,$$ und ebenso im zweiten: $$p+1 = 2Mr^2, \quad p-1 = 2Ns^2, \quad A = MN, \ q = 2rs,$$ wo $M, N$ und mithin $r, s$ durch $p$ völlig bestimmt sind. Es sind nämlich $M$, $N$ im ersten Fall respective die grössten gemeinschaftlichen Theiler von $A$, $p+1$ und $A$, $p-1$, im andern dagegen von $A$, $\frac{p+1}2$ und $A$, $\frac{p-1}2$. Aus diesen Gleichung folgt $$\label{ED2} > \tag{2} Mr^2- Ns^2 = 2, \quad Mr^2- Ns^2 = 1.$$ > > Hat man die Gleichung (\[ED1\]) nicht wirklich aufgelöst, und ist also $p$ nicht bekannt, so weiss man bloss, dass eine dieser Gleichungen stattfinden muss, und da unter dieser Voraussetzung $M$ und $N$ nicht einzeln gegeben sind, so enthält jede der Gleichungen (\[ED2\]) mehrere besondere Gleichungen, die man erhält, indem man successive für $M$ alle Factoren von $A$ ($1$ und $A$ mit eingeschlossen) nimmt und $N = \frac{A}{M}$ setzt. [^4] Thus we have The solvability of the Pell equation $X^2 - AY^2 = 1$ implies the solvability of one of the equations $Mr^2 - Ns^2 = 1$ or $Mr^2 - Ns^2 = 2$, where $MN = A$. This set of auxiliary equations (\[ED2\]) was derived using continued fractions by Arndt [@Arn1; @Arn2], and later also by Richaud [@Ri2; @Ri3] and Roberts [@Rob]. Catalan [@Cat] also studied the equations (\[ED2\]). Legendre’s method, Dirichlet writes, consists in showing that all but one of these equations are unsolvable, thus demonstrating that the remaining equation must have integral solutions. Dirichlet then shows how to apply this technique by proving Legendre’s Proposition \[PL1\] as well as the following result: \[PL2\] Let $p$ denote a prime. 1. If $p \equiv 3 \bmod 8$, then $pr^2 - 2s^2 = 1$ has integral solutions. 2. If $p \equiv 5 \bmod 8$, then $2pr^2 - s^2 = 1$ has integral solutions. 3. If $p \equiv 7 \bmod 8$, then $2r^2 - ps^2 = 1$ has integral solutions. If $p \equiv 1 \bmod 8$, either of the three equations may be solvable. Richaud ------- These results were extended to $d$ having many prime factors by Richaud. 
In [@Ri1], he states without proof some results of which the following are special cases (Richaud considered values of $d$ that were not necessarily squarefree): 1. If $p$ and $q$ are primes congruent to $5 \bmod 8$, then the equations $X^2 - 2pY^2 = -1$ and $X^2 - 2pqY^2 = -1$ are solvable in integers. 2. If $p, q$ and $r$ are primes congruent to $5 \bmod 8$, and if $(2p/q) = (2p/r) = -1$, then the equation $X^2 - 2pqrY^2 = -1$ is solvable in integers. 3. If $p \equiv 5 \bmod 8$ and $a \equiv b \equiv 1 \bmod 8$ are primes such that $(2p/a) = (2p/b) = -1$, then the equations $X^2 - 2apY^2 = -1$ and $X^2 - 2abpY^2 = -1$ are solvable in integers. 4. If $p \equiv q \equiv 5 \bmod 8$ and $a \equiv 1 \bmod 8$ are primes such that $(pq/a) = -1$, then the equation $X^2 - 2apqY^2 = -1$ is solvable in integers. Here are some results from Richaud [@Ri2; @Ri3; @Ri4]: Let $p_i$ denote primes. 1. If $d = p_1p_2p_3p_4$, $p_i \equiv 5 \bmod 8$, if $X^2 - p_1p_2p_3 Y^2 = -1$, and if $(p_4/p_1) = (p_4/p_2) = (p_4/p_3) = 1$, then $X^2 - dY^2 = -1$ is solvable. This generalizes to arbitrary numbers of primes. 2. If $d = p_1p_2p_3p_4$, $p_i \equiv 5 \bmod 8$, if $X^2 - p_1p_2p_3 Y^2 = -1$, and if $(p_1/p_3) = (p_2/p_3) = -1$, $(p_1/p_4) = (p_2/p_4) = (p_3/p_4) = -1$, then $X^2 - dY^2 = -1$ is solvable. This generalizes to arbitrary numbers of primes. 3. If $d = p_1p_2p_3p_4$, where $p_1 \equiv \ldots \equiv p_4 \equiv 5 \bmod 8$, then $X^2 - dY^2 = -1$ is solvable if $(p_1p_2/p_3) = (p_1p_2/p_4) = (p_3p_4/p_1) = (p_3p_4/p_2)$. The following result is due to Tano [@Tano]; the special case $n = 3$ was already proved by Dirichlet. \[PTa\] Let $p_1$, …, $p_n$ denote primes $p_i \equiv 1 \bmod 4$, where $n \ge 3$ is an odd number, and put $d = p_1\cdots p_n$. Assume that $(p_i/p_j) = +1$ for at most one pair $(i,j)$. Then $X^2 - dY^2 = -1$ has an integral solution. Proposition \[PTa\] was generalized by Trotter [@Tro], who was motivated by the results of Pumplün [@Pum]: Let $p_1$, …, $p_n$, where $n$ is an odd integer, denote primes $p_i \equiv 1 \bmod 4$. If there is no triple $i, j, k$ such that $(p_i/p_j) = (p_j/p_k) = +1$, then $X^2 - dY^2 = -1$ has an integral solution. Newman [@New] rediscovered a weaker form of Proposition \[PTa\]: he assumed that $(p_i/p_j) = -1$ for all $i \ne j$. There are a lot more results of this kind concerning the solvability of the negative Pell equation $X^2 - dY^2 = -1$ in terms of Legendre symbols than we can (or may want) to mention here. We will see in [@L3] that these results can be interpreted as computations of Selmer groups for certain types of discriminants. In fact, as Dickson [@DiSN §25] observed, Legendre’s set of equations $Mr^2 - Ns^2 = 1, 2$ admit a group structure; in modern language, it is induced by identifying $Mr^2 - Ns^2 = 1$ and $Mr^2 - Ns^2 = 2$ with the elements $M{{\mathbb Q}^{\times\,2}}$ and $2M{{\mathbb Q}^{\times\,2}}$ of the multiplicative group ${{\mathbb Q}^\times}/{{\mathbb Q}^{\times\,2}}$. This group will be called the $2$-Selmer group of the corresponding Pell equation (see [@L3]). Dirichlet ========= After having explained Legendre’s technique, Dirichlet [@Dir] refined this method by invoking the quadratic reciprocity law. His first result going beyond Legendre is the following (for information on quartic residue symbols see [@LRL]): \[PDi1\] For primes $p \equiv 9 \bmod 16$ such that $(2/p)_4 = -1$, the equation $2pr^2 - s^2 = 1$ has an integral solution. Consider the equation $2r^2 - ps^2 = 1$. We see that $s$ is odd, and that $(2/s) = +1$. 
Thus $s \equiv \pm 1 \bmod 8$, therefore $s^2 \equiv1 \bmod 16$, and then $2r^2 \equiv 2 \bmod 16$ and $p \equiv 9 \bmod 16$ yield a contradiction. Next consider $pr^2 - 2s^2 = 1$. Here $1 = (2/p)_4(s/p)$, and $(s/p) = (p/s) = 1$. Thus solvability implies $(2/p)_4 = 1$, which is a contradiction. Dirichlet next considers other cases where $d$ has two or three prime factors; we are content with mentioning \[PDi2\] Let $p \equiv q \equiv 1 \bmod 4$ be distinct primes. 1. If $(p/q) = -1$, then $s^2 - pqr^2 = -1$ has an integral solution. 2. If $(p/q)_4 = (q/p)_4 = -1$, then $s^2 - pqr^2 = -1$ has an integral solution. Using similar as well as some other techniques, Dirichlet’s results such as Propositions \[PDi1\] and \[PDi2\] were generalized by Tano [@Tano]. Dirichlet was the first who observed that there is essentially only one among the equations derived by Legendre which has an integral solution: \[TDi0\] Let $A$ be a positive squarefree integer. Then there is exactly one pair of positive integers $(M,N) \ne (1,A)$ with $MN = d$ such that $Mr^2 - Ns^2 = 1$ or $Mr^2 - Ns^2 = 2$ is solvable. We know that all solutions of the Pell equation $P^2 - AQ^2 = 1$ are given by $P + Q\sqrt{A} = \pm (p+q\sqrt{A})^m$, where $m \in {\mathbb Z}$ and where $p+q\sqrt{A}$ is the fundamental solution. Up to sign we thus have $ P = \frac12((p+q\sqrt{A})^m + (p-q\sqrt{A})^m). $ This shows that $P \equiv p^m \bmod A$. Assume first that $m$ is odd. Then $P \equiv p \bmod 2$. If $p$ is even, then we have $$p \equiv -1 \bmod M, \qquad p \equiv 1 \bmod N,$$ which implies that $$P \equiv -1 \bmod M, \qquad P \equiv 1 \bmod N.$$ Thus $P+Q\sqrt{A}$ leads to the very same equation $Mr^2 - Ns^2 = 2$ as the fundamental solution $p+q\sqrt{A}$. If $p$ is odd, then we find similarly that $$p \equiv -1 \bmod 2M, \qquad p \equiv 1 \bmod 2N,$$ and again $P+Q\sqrt{A}$ leads to the same equation $Mr^2 - Ns^2 = 1$ as $p+q\sqrt{A}$. Finally, if $m$ is even, then it is easy to see that $P+Q\sqrt{A}$ leads to $r^2 - As^2 = 1$. This shows that there are exactly two equations with integral solutions. Proofs of results equivalent to Theorem \[TDi0\], based on the theory of continued fractions, were given by Petr [@Pet1; @Pet2], and Halter-Koch [@HK]; different proofs are due to Nagell [@Nag3], Kaplan [@Kap], and Mitkin [@Mit]; Trotter [@Tro] proved the special case where the fundamental unit has negative norm. Pall [@Pal] showed that Theorem \[TDi0\] follows easily from a result that Gauss proved in his Disquisitiones Arithmeticae. See also Walsh [@Walth; @Walsh]. In the ideal-theoretic interpretation, Theorem \[TDi0\] says that if $K = {\mathbb Q}(\sqrt{d}\,)$ has a fundamental unit of norm $+1$, then there is exactly one nontrivial relation in the usual class group among the ambiguous ideals, the trivial relation coming from the factorization of the principal ideal $(\sqrt{d}\,)$. From this point of view, Theorem \[TDi0\] is an important step in the proof of the ambiguous class number formula for quadratic number fields. Rédei, Reichardt, Scholz ======================== In 1932, Rédei started studying the $2$-class group of the quadratic number field $k = {\mathbb Q}(\sqrt{d}\,)$, and applied it in [@RedPl] to problems concerning the solvability of the negative Pell equation $X^2 - dY^2 = -4$. 
In the following, let $e_2$ and $e_4$ denote the $2$-rank and the $4$-rank of the class group $C = {{\operatorname{Cl}}}_2^+(k)$ of $k$ in the strict sense, that is, put $e_2 = \dim_{{\mathbb F}_2} C/C^2$ and $e_4 = \dim_{{\mathbb F}_2} C^2/C^4$. A factorization of the discriminant $d = {{\operatorname{disc}}\,}k$ into discriminants $d = \Delta_1 \Delta_2$ is called a splitting of the second kind (or $C_4$-decomposition) if $(\Delta_1/p_2) = (\Delta_2/p_1) = +1$ for all primes $p_i \mid \Delta_i$. The main result of [@Red32] (see also [@RR33]) is The $4$-rank $e_4$ equals the number of independent splittings of the second kind of $d$. This result of Rédei turned out to be very attractive; new variants of proofs were given by Iyanaga [@Iya], Bloom [@Blo], Carroll [@Car], and Kisilevsky [@Kis]. Inaba [@Ina], Fröhlich [@Froa] and G. Gras [@Gr73] investigated the $\ell$-class group of cyclic extensions of prime degree $\ell$; see Stevenhagen [@Ste] for a modern exposition. For generalizations of Rédei’s technique to quadratic extensions of arbitrary number fields see G. Gras [@Gr92]. Morton [@Mor0] and Lagarias [@Laga] gave modern accounts of Rédei’s method for computing the $2$-part of the class groups of quadratic number fields. Damey & Payan [@DP] proved Let $k^+ = {\mathbb Q}(\sqrt{m}\,)$ be a real quadratic number field, and put $k^- = {\mathbb Q}(\sqrt{-m}\,)$. Then the $4$-ranks $r_4^+(k^+)$ and $r_4(k^-)$ of ${{\operatorname{Cl}}}^+(k^+)$ (the class group of $k^+$ in the strict sense) and ${{\operatorname{Cl}}}(k^-)$ satisfy the inequalities $r_4^+(k^+) \le r_4(k^-) \le r_4^+(k^+) + 1$. Other proofs were given by G. Gras [@Gr73], Halter-Koch [@HK84], Uehara [@Ueh89] and Sueyoshi [@Sue95]; see also Sueyoshi [@Sue97; @Sue00]. Bouvier [@Bou1; @Bou2] proved that the $4$-ranks and $8$-ranks of ${\mathbb Q}(\sqrt{2},\sqrt{m}\,)$ and ${\mathbb Q}(\sqrt{2},\sqrt{-m}\,)$ differ at most by $4$. This was generalized considerably by Oriat [@Ori; @Oria; @Orib]; the following proposition is a special case of his results: Let $k$ be a totally real number field with odd class number, let $2^m \ge 4$ be an integer with the property that $k$ contains the maximal real subfield of the field of $2^m$-th roots of unity, and let $d \in k^\times$ be a nonsquare. Then the $2^m$-ranks $r_m(K)$ and $r_m(K')$ of the class groups in the strict sense of $K = k(\sqrt{d}\,)$ and $K' = k(\sqrt{-d}\,)$ satisfy $$r_m(K) - r_m(K') \le R^- - r,$$ where $R^-$ and $r$ denote the unit ranks of $K'$ and $k$, respectively. Rédei & Reichardt [@RR33; @Red53] and Iyanaga [@Iya] observed the following \[ProEl\] If ${{\operatorname{Cl}}}_2^+(k)$ is elementary abelian, then $N{\varepsilon}= -1$. Since $e_4 = 0$ is equivalent to the rank of $R(d)$ being maximal, this can be expressed by saying that if the Rédei matrix has maximal possible rank $n-1$, then $N{\varepsilon}= -1$. Rédei introduced what is now called the Rédei matrix of the quadratic field with discriminant $d = d_1 \cdots d_n$. It is defined as the $n \times n$-matrix $R(d) = (a_{ij})$ with $a_{ij} = (d_i/p_j)$ for $i \ne j$ and $a_{ii} = \sum_{j \ne i} a_{ij}$. Rédei proved that the $4$-rank of ${{\operatorname{Cl}}}^+(k)$ is given by $$e_4 = n-1- {{\operatorname{rank}}\,}R(d).$$ Rédei (see e.g.
[@Red37]) introduced a group structure on the set of splittings of the second kind by taking the product of two such factorizations $d = \Delta_1 \Delta_2$ and $d = \Delta_1' \Delta_2'$ to be the factorization $d = \Delta_1'' \Delta_2''$, where $$\Delta_1'' = \frac{\Delta_1 \Delta_1'}{\gcd(\Delta_1,\Delta_1')^2}.$$ This product is well defined because of the relations $$\frac{\Delta_1 \Delta_1'}{\gcd(\Delta_1,\Delta_1')^2} = \frac{\Delta_2 \Delta_2'}{\gcd(\Delta_2,\Delta_2')^2}, \quad \frac{\Delta_1 \Delta_2'}{\gcd(\Delta_1,\Delta_2')^2} = \frac{\Delta_2 \Delta_1'}{\gcd(\Delta_2,\Delta_1')^2}.$$ Rédei & Reichardt [@RR33] and Scholz [@Sch34] started applying class field theory to finding criteria for the solvability of the negative Pell equation $X^2 - dY^2 = -4$. The immediate connection is provided by the following simple observation: we have $N{\varepsilon}= -1$ if and only if ${{\operatorname{Cl}}}(K) \simeq {{\operatorname{Cl}}}^+(K)$, which in turn is equivalent to the fact that the Hilbert class fields in the strict (unramified outside $\infty$) and in the usual sense (unramified everywhere) coincide. This proves The fundamental unit of the quadratic field $K$ has negative norm if and only if the Hilbert class field in the strict sense is totally real. The following theorem summarizes the early results of Rédei and Scholz; observe that ‘unramified’ below means ‘unramified outside $\infty$’: \[RedRe\] Let $k$ be a quadratic number field with discriminant $d$. There is a bijection between unramified cyclic $C_4$-extensions and $C_4$-factorizations of $d$. If $K/k$ is an unramified $C_4$-extension, then $K/{\mathbb Q}$ is normal with ${{\operatorname{Gal}}}(K/{\mathbb Q}) \simeq D_4$. The quartic normal extension $F/{\mathbb Q}$ contained in $K$ can be written in the form $F = {\mathbb Q}(\sqrt{\Delta_1},\sqrt{\Delta_2}\,)$. A careful examination of the decomposition and inertia groups of the ramifying primes shows that $(\Delta_1, \Delta_2) = 1$ and that $d = \Delta_1\cdot \Delta_2$ is a $C_4$-factorization. Conversely, if $d = \Delta_1 \Delta_2$ is a $C_4$-factorization of $d$, then the diophantine equation $ X^2 - \Delta_1Y^2 = \Delta_2Z^2$ has a nontrivial solution $(x,y,z)$, and the extension $K = k(\sqrt{\Delta_1}, \sqrt{\mu}\,)$, where $\mu = x+y\sqrt{\Delta_1}$, is a $C_4$-extension of $k$ unramified outside $2\infty$. By choosing the signs of $x,y,z$ suitably one can make $K/k$ unramified outside $\infty$. The question of whether the cyclic quartic extension $K/k$ constructed in Theorem 13.1. is real or not was answered by Scholz [@Sch34]. Clearly this question is only interesting if both $\Delta_1$ and $\Delta_2$ are positive. Moreover, if one of them, say $\Delta_1$, is divisible by a prime $q \equiv 3 \bmod 4$, then there always exists a real cyclic quartic extension $K/k$: this is so because $\alpha = x+y\sqrt{\Delta_1}$ as constructed above is either totally positive or totally negative (since it has positive norm), hence either $\alpha \gg 0$ or $-q\alpha \gg 0$, so either $k(\sqrt{\alpha}\,)$ or $k(\sqrt{-q\alpha}\,)$ is the desired extension. We may therefore assume that $d$ is not divisible by a prime $q \equiv 3 \bmod 4$, i.e. that $d$ is the sum of two squares. Then Scholz [@Sch34] has shown Let $k$ be a real quadratic number field with discriminant $d$, and suppose that $d$ is the sum of two squares. Assume moreover that $d = \Delta_1\cdot \Delta_2$ is a $C_4$-factorization. 
Then the cyclic quartic $C_4$-extensions $K/k$ containing ${\mathbb Q}(\sqrt{\Delta_1},\sqrt{\Delta_2}\,)$ are real if and only if $(\Delta_1/\Delta_2)_4 (\Delta_2/\Delta_1)_4 = +1$. Moreover, if there exists an octic cyclic unramified extension $L/k$ containing $K$, then $(\Delta_1/\Delta_2)_4 = (\Delta_2/\Delta_1)_4$. If $\Delta_1$ and $\Delta_2$ are prime, we can say more ([@Sch34]): Let $k = {\mathbb Q}(\sqrt{d}\,)$ be a real quadratic number field, and suppose that $d = {{\operatorname{disc}}\,}k = \Delta_1\Delta_2$ is the product of two positive prime discriminants $\Delta_1, \Delta_2$. Let $h(k)$, $h^+(k)$ and ${\varepsilon}$ denote the class number, the class number in the strict sense, and the fundamental unit of ${\mathcal O}_k$, respectively; moreover, let ${\varepsilon}_1$ and ${\varepsilon}_2$ denote the fundamental units of $k_1 = {\mathbb Q}(\sqrt{\Delta_1}\,)$ and $k_2 = {\mathbb Q}(\sqrt{\Delta_2}\,)$. There are the following possibilities: 1. $(\Delta_1/\Delta_2) = -1$: then $h(k) = h^+(k) \equiv 2 \bmod 4$, and $N{\varepsilon}= -1$. 2. $(\Delta_1/\Delta_2) = +1$: then $({\varepsilon}_1/\Delta_2) = ({\varepsilon}_2/\Delta_1) = (\Delta_1/\Delta_2)_4 (\Delta_2/\Delta_1)_4$, and \[RedRe\] shows that there is a cyclic quartic subfield $K$ of $k^1$ containing $k_1k_2$; i\) $(\Delta_1/\Delta_2)_4 = -(\Delta_2/\Delta_1)_4$: then $h^+(k) = 2\cdot h(k) \equiv 4 \bmod 8$, $N{\varepsilon}= +1$, and $K$ is totally complex; ii\) $(\Delta_1/\Delta_2)_4 = (\Delta_2/\Delta_1)_4 = -1$: then $h^+(k) = h(k) \equiv 4 \bmod 8$, $N{\varepsilon}= -1$, and $K$ is totally real. iii\) $(\Delta_1/\Delta_2)_4 = (\Delta_2/\Delta_1)_4 = +1$: then $h^+(k) \equiv 0 \bmod 8$, and $K$ is totally real. Here $(\Delta_1/\Delta_2)_4$ denotes the rational biquadratic residue symbol (multiplicative in both numerator and denominator). Notice that $(p/8)_4 = +1$ for primes $p \equiv 1 \bmod 16$ and $(p/8)_4 = -1$ for primes $p \equiv 9 \bmod 16$. Moreover, $({\varepsilon}_1/p_2)$ is the quadratic residue character of ${\varepsilon}_1 \bmod {{\mathfrak p}}$ (if $p_2 \equiv 1 \bmod 4$), where ${{\mathfrak p}}$ is a prime ideal in $k_1$ above $p_2$; for $\Delta_2 = 8$ and $\Delta_1 \equiv 1 \bmod 8$, the symbol $({\varepsilon}_1/8)$ is defined by $({\varepsilon}_1/8) = (-1)^{T/4}$, where ${\varepsilon}_1 = T+U\sqrt{\Delta_1}$. Let $p = \Delta_1$ and $q = \Delta_2 \equiv 1 \bmod 4$ be positive prime discriminants, and assume that $\Delta_2$ is fixed; then $$\begin{array}{lcccl} 4 | h^+(k) & \iff & (\Delta_1/\Delta_2) = 1 & \iff & p \in {{\operatorname{Spl}}}(\Omega_4^+(\Delta_2)/{\mathbb Q}) \\ 4 | h(k) & \iff & (\Delta_1/\Delta_2)_4 = (\Delta_2/\Delta_1)_4 & \iff & p \in {{\operatorname{Spl}}}(\Omega_4(\Delta_2)/{\mathbb Q}) \\ 8 | h^+(k) & \iff & (\Delta_1/\Delta_2)_4 = (\Delta_2/\Delta_1)_4 = 1 & \iff & p \in {{\operatorname{Spl}}}(\Omega_8^+(\Delta_2)/{\mathbb Q}) \end{array}$$ Here, the [*governing fields*]{} $\Omega_j(\Delta_2)$ are defined by $$\begin{aligned} \Omega_4^+(\Delta_2) & = \ {\mathbb Q}(i,\sqrt{\Delta_2}\,), \\ \Omega_4(\Delta_2) & = \ \Omega_4^+(\sqrt{{\varepsilon}_2}\,) \ = \ {\mathbb Q}(i,\sqrt{\Delta_2},\sqrt{{\varepsilon}_2}\,), \\ \Omega_8^+(\Delta_2) & = \Omega_4(\sqrt[4]{\Delta_2}\,) \ = \ {\mathbb Q}(i,\root {4\,} \of {\Delta_2},\sqrt{{\varepsilon}_2}\,). \end{aligned}$$ The reason for studying governing fields comes from the fact that sets of primes splitting in a normal extension have Dirichlet densities.
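Scholz’s conditions are easily evaluated numerically. The sketch below (a plain illustration with our own helper names; it only treats pairs of odd positive prime discriminants, so the prime discriminant $8$ is excluded) classifies a pair of primes $p \equiv q \equiv 1 \bmod 4$ according to the cases listed above:

```python
def quartic_symbol(a, p):
    """Rational quartic residue symbol (a/p)_4 = a^((p-1)/4) mod p read as +-1;
    only used here when p = 1 mod 4 and a is a quadratic residue mod p."""
    t = pow(a, (p - 1) // 4, p)
    return 1 if t == 1 else -1

def scholz_case(p, q):
    """Classify the pair of positive prime discriminants (p, q), p, q = 1 mod 4."""
    if pow(p, (q - 1) // 2, q) != 1:                 # (p/q) = -1
        return "case 1: N(eps) = -1, h+ = h = 2 mod 4"
    s1, s2 = quartic_symbol(p, q), quartic_symbol(q, p)
    if s1 != s2:
        return "case 2 i): N(eps) = +1, h+ = 2h = 4 mod 8"
    if s1 == -1:
        return "case 2 ii): N(eps) = -1, h+ = h = 4 mod 8"
    return "case 2 iii): 8 | h+"

print(scholz_case(5, 13))   # case 1, since (5/13) = -1
print(scholz_case(5, 29))   # case 2 ii); indeed eps = 12 + sqrt(145) has norm -1
```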
The existence of fields governing the property $8|h^+(k)$ allows us to conclude that there are infinitely many such fields. Governing fields for the property $8|h(k)$ or $16|h^+(k)$ are not known and conjectured not to exist. Nevertheless the primes $\Delta_1 = p$ such that $8 \mid h(k)$ (or $16\mid h(k)$ etc.) appear to have exactly the Dirichlet density one would expect if the corresponding governing fields existed. Governing fields were introduced by Cohn and Lagarias [@CL81; @CL83] (see also Cohn’s book [@Coh85]) and studied by Morton [@Mor82a; @Mor82b; @Mor83; @Mor90; @Mor90a] and Stevenhagen [@Ste88; @Ste89; @Ste93]. A typical result is Let $p \equiv 1 \bmod 4$ and $r \equiv 3 \bmod 4$ be primes and consider the quadratic number field $k = {\mathbb Q}(\sqrt{-rp}\,)$. Then $8 \mid h(k) \iff (-r/p)_4 = +1$. More discussions on unramified cyclic quartic extensions of quadratic number fields can be found in Herz [@Her57], Vaughan [@Vau85], and Williams & Liu [@WL94]. Another discussion of Rédei’s construction was given in Zink’s dissertation [@Zin74]. The class field theoretical approach was also used by Rédei [@Red43; @Red53] (see Gerasim [@Geras]), as well as Furuta [@Fur], Morton [@Mor90] and Benjamin, Lemmermeyer & Snyder [@BLS]. The problem for values divisible by squares was treated (along the lines of Dirichlet) by Perott [@Pero] and taken up again by Rédei [@RedPl] and, via class field theory, by Jensen [@Jens1; @Jens2; @Jens3] and Bülow [@Bue]. Brown [@Bro] proved a very special case of Scholz’s results using the theory of binary quadratic forms; see also Kaplan [@Kap1]. Buell [@Buell] gave a list of known criteria. Despujols [@Des] showed that the norm of the fundamental unit is $(-1)^{t-r}$, where $t$ is the number of ramified primes, and $r$ the number of ambiguous ideal classes containing an ambiguous ideal. Graphs of Quadratic Discriminants ================================= In this section we will explain the graph theoretical description of classical results on the $4$-rank of class groups (in the strict sense) of real quadratic number fields, and of solvability criteria of the negative Pell equation. The connection with graph theory was first described by Lagarias [@Lag] and used later by Cremona & Odoni [@CO]. Similar constructions were used by Vazzana [@Vaz1; @Vaz2] for studying $K_2$ of the ring of integers ${\mathcal O}_k$, as well as by Heath-Brown [@HB] and later by Feng [@Feng], Li & Tian [@LT], and Zhao [@Zh1; @Zh2] for describing the $2$-Selmer group of elliptic curves $Y^2 = X^3 - d^2X$. The Language of Graphs {#the-language-of-graphs .unnumbered} ---------------------- A (nondirected) graph consists of a set $V$ of vertices and a subset $E \subseteq V \times V$ whose elements are called edges. The degree of a vertex $d_i$ of a graph is the number of edges $(d_i,d_j) \in V$ adjacent to $d_i$. A graph is called Eulerian if all vertices have even degree; graphs are Eulerian if and only if there is a path through the graph passing each edge exactly once. A tree is a connected graph containing no cycles (closed paths involving at least three vertices inside a graph). A subgraph of a graph $\gamma$ is called a spanning tree of $\gamma$ if it is a tree and if it contains all vertices. Graphs of Quadratic Fields {#graphs-of-quadratic-fields .unnumbered} -------------------------- Let $d$ be the discriminant of a quadratic number field. Then $d$ can be factored uniquely into prime discriminants: $d = d_1 \cdots d_n$. 
Here the $d_i$ are discriminants of quadratic fields in which only one prime $p_i$ is ramified. Let $d$ be a discriminant of a real quadratic number field such that all the $d_i$ are positive (equivalently, $d$ is the sum of two integral squares). This implies, by quadratic reciprocity, that $(d_i/p_j) = (d_j/p_i)$ for all $1 \le i, j \le n$. To any discriminant $d$ as above we associate a graph $\gamma(d)$ as follows: - $V = \{d_1, \ldots, d_n\}$; - $E = \{(d_i,d_j): (d_i/p_j) = (d_j/p_i) = -1\}$. Every factorization $d = \Delta_1 \Delta_2$ of $d$ into two discriminants $\Delta_1, \Delta_2$ of quadratic fields corresponds to a bipartitioning $\{A_1, A_2\}$ of the vertices by putting $A_1 = \{d_i: d_i \mid \Delta_1\}$ and $A_2 = \{d_i: d_i \mid \Delta_2\}$. Clearly we have $A_1 \cup A_2 = V$ and $A_1 \cap A_2 = \varnothing$, and if the factorization is nontrivial, we also have $A_1, A_2 \ne \varnothing$. To each such bipartition, we associate a subgraph $\gamma(\Delta_1,\Delta_2)$ of $\gamma(d)$ by deleting all edges between vertices in $V_1$, and all edges between vertices in $V_2$; thus $\gamma(\Delta_1,\Delta_2)$ has vertices $V = V_1 \cup V_2$ and edges $E_{1,2} = \{(d_i,d_j) \in E: i \in V_1, j \in V_2\}$. The factorization $d = \Delta_1\Delta_2$ is a $C_4$-decomposition if and only if $\gamma(\Delta_1,\Delta_2)$ is Eulerian. The graph $\gamma(\Delta_1,\Delta_2)$ is Eulerian if and only if for each $d_i \mid \Delta_1$ there is an even number of $d_j \mid \Delta_2$ such that $(d_j/p_i) = -1$. This is equivalent to $(\Delta_2/p_i) = +1$. The claim follows. We call $\{A_1, A_2\}$ an Eulerian Vertex Decomposition (EVD) of $\gamma(d)$ if the subgraph $\gamma(\Delta_1,\Delta_2)$ is Eulerian. Since the number of independent $C_4$-decompositions equals the $4$-rank $e_4$ of the class group of $k = {\mathbb Q}(\sqrt{d}\,)$ in the strict sense, we see The number of EVDs of $\gamma(d)$ is $2^{e_4}$, where $e_4$ is the $4$-rank of the class group of $k = {\mathbb Q}(\sqrt{d}\,)$ in the strict sense. The Rédei matrix can be interpreted as the adjacency matrix of a graph $\Gamma(d)$; if $V = V_1 \cup V_2$ and $V_1 \cap V_2 = \varnothing$, the graph with the same vertices as $\Gamma(d)$ and the edges within $V_1$ and $V_2$ deleted coincides with $\gamma(V_1,V_2)$. The graph $\gamma(d)$ is said to be odd if it has the following property: for every bipartitioning $\{A_1, A_2\}$ of $V$, there is an $a_1 \in A_1$ that is joined to an odd number of $a_2 \in A_2$, or vice versa. \[graph removed. maybe one day latex will be able to deal with jpg, gif, ps or pdf files\] The graph $\gamma(5 \cdot 13 \cdot 17)$ is odd, the graph $\gamma(5 \cdot 29 \cdot 41)$ is not. Observe that the negative Pell equation is solvable in the first, but not in the second case. Let the discriminant $d$ of a quadratic number field be a sum of two squares. If $\gamma(d)$ is an odd graph, then $N {\varepsilon}= -1$. Let $(x,y)$ be the solution of $X^2 - dY^2 = 1$ with minimal $y > 0$. Then $x$ is odd, hence $x+1 = 2fr^2$, $x-1 = 2gs^2$, $fg = d$, $1 = fr^2 - gs^2$. If $g = 1$, then $N{\varepsilon}= -1$, and $f = 1$ contradicts the minimality of $y$. If $f, g > 1$ then we claim that $1 = fr^2 - gs^2$ is not solvable in integers. Let $d = d_1 \cdots d_n$, $A = \{d_i: d_i \mid f\}$ and $B= \{d_i: d_i \mid g\}$. Then $A \cup B = V = \{d_1, \ldots, d_n\}$, $A \cap B = \varnothing$. Since $\gamma(d)$ is odd, we may assume that there is a $d_i \in A$ that is adjacent to an odd number of $d_j \in B$.
This implies $(g/p_i) = -1$, contradicting the solvability of $1 = fr^2 - gs^2$. Lagarias observed that a congruence proved by Pumplün [@Pum] could be interpreted as follows: Let $d$ be the discriminant of a quadratic number field. Then $$h^+(d) \equiv \sum_T \prod_{(d_i,d_j) \in T} \Big(1 - \Big(\frac{d_i}{p_j}\Big)\Big) \equiv 2^{n-1} \kappa_d \bmod 2^n,$$ where $T$ runs over all spanning trees of the complete graph with $n$ vertices, and where $\kappa_d$ is the number of spanning trees of $\gamma(d)$. This implies that ${{\operatorname{Cl}}}_2^+$ is elementary abelian if and only if $\gamma(d)$ is an odd graph. This result is implicitly contained in Trotter [@Tro]. Directed Graphs {#directed-graphs .unnumbered} --------------- If $d = p_1 \cdots p_n$ is a product of primes $p_i \equiv 3 \bmod 4$, then the graph with vertices $d_i = -p_i$ and adjacency matrix $A = (a_{ij})$ defined by $$\Big(\frac{p_j}{p_i}\Big) = \begin{cases} (-1)^{a_{ij}} & \text{if}\ i \ne j \\ (-1)^{n+1} (\frac{d/p_i}{p_i}) & \text{if}\ i = j \end{cases}$$ is a directed graph (actually a tournament graph since each edge has a unique direction) studied by Kingan [@Kin]. From Rédei’s results, Kingan deduced the following facts (see also Sueyoshi [@Sue]): If $n$ is even, then $r_4(d) = n-1-{{\operatorname{rank}}\,}A$ and $r_4(-d) = n-{{\operatorname{rank}}\,}A$ or $n-1-{{\operatorname{rank}}\,}A$. If $n$ is odd, then $r_4(d) = n-1-{{\operatorname{rank}}\,}A$ or $r_4(d) = n-2-{{\operatorname{rank}}\,}A$, and $r_4(-d) = n-2-{{\operatorname{rank}}\,}A$. Define $c_i$ via $(-1)^{c_i} = (2/p_i)$ and put $v = (c_1, \ldots, c_n)^T$. Then in the cases where the rank formula is ambiguous, the greater value is attained if $v \in {{\operatorname{im}}\,}(A-I)$. Kohno & Nakahara [@KN] and Kohno, Kitamura, & Nakahara [@KKN] used oriented graphs to describe Morton’s results about governing fields and the computation of the $2$-part of the class group of quadratic fields. Parts of the theory of Rédei and Reichardt has been extended to function fields in one variable over finite fields; Ji [@Ji1] discussed decompositions of the second kind over function fields, and in [@Ji2] he proved results of Trotter [@Tro] in this case, using the graph theoretic language discussed above. Density Problems {#density-problems .unnumbered} ---------------- In this section we will address the question for how many $\Delta$ the negative Pell equation $X^2 - \Delta Y^2 = -1$ is solvable. It was already noticed by Brahmagupta (see Whitford [@Whit]) that the solvability implies that $\Delta$ must be a sum of two squares. Consider therefore the set ${\mathcal D}$ of quadratic discriminants not divisible by any prime $\equiv 3 \bmod 4$, and let ${\mathcal D}(-1)$ denote the subset of all discriminants in ${\mathcal D}$ for which the negative Pell equation is solvable. The problem is then to determine whether the limit $$\label{ESt} \lim_{x \to \infty} \frac{\# \{\Delta \in {\mathcal D}(-1): \Delta \le x\}} {\# \{\Delta \in {\mathcal D}: \Delta \le x\}}$$ exists, and if it does, to find it. Such questions were first asked by Nagell [@Nag] and Rédei [@RedMW; @Redas]; using criteria for the solvability of the negative Pell equation, Rédei could prove that $$\liminf_{x \to \infty} \frac{\# \{\Delta \in {\mathcal D}(-1): \Delta \le x\}} {\# \{\Delta \in {\mathcal D}: \Delta \le x\}} > \alpha: = \prod_{j=1}^\infty (1 - 2^{1-2j}) = 0.419422\ldots,$$ a result later proved again by Cremona & Odoni [@CO]. 
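The odd-graph criterion used above can also be checked mechanically for small discriminants. The following sketch (restricted, for simplicity, to discriminants that are products of distinct primes $\equiv 1 \bmod 4$, so that the prime discriminants are the primes themselves; the prime discriminant $8$ would need separate treatment) reproduces the two examples given earlier:

```python
from itertools import combinations

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def gamma_edges(primes):
    """Edges of gamma(d): {p_i, p_j} is an edge iff (p_i/p_j) = -1
    (symmetric for primes = 1 mod 4, by quadratic reciprocity)."""
    return {frozenset(e) for e in combinations(primes, 2)
            if legendre(e[0], e[1]) == -1}

def is_odd_graph(primes):
    """Brute-force check of the odd-graph property: every bipartition
    {A1, A2} of the vertices has a vertex in one part joined to an odd
    number of vertices in the other part."""
    primes = list(primes)
    edges = gamma_edges(primes)
    joined = lambda v, part: sum(frozenset((v, w)) in edges for w in part)
    for r in range(1, len(primes)):
        for A1 in combinations(primes, r):
            A2 = [p for p in primes if p not in A1]
            if not (any(joined(a, A2) % 2 for a in A1) or
                    any(joined(b, A1) % 2 for b in A2)):
                return False
    return True

print(is_odd_graph([5, 13, 17]))   # True:  gamma(5*13*17) is odd
print(is_odd_graph([5, 29, 41]))   # False: gamma(5*29*41) is not odd
```

The first output is consistent, via the proposition above, with the solvability of $X^2 - 1105\,Y^2 = -1$; in the second case the criterion gives no information, and indeed the negative Pell equation for $5 \cdot 29 \cdot 41$ is not solvable.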
Stevenhagen [@Ste93c] gave heuristic reasons why the density in (\[ESt\]) should equal $1 - \alpha = 0.580577\ldots$. Related questions concerning the density of quadratic fields whose class groups have given $4$-rank were studied by Gerth [@Ger1; @Ger2] and Costa & Gerth [@CG]. Observe that the negative Pell equation is just one among Legendre’s equations; we might similarly ask for the density of discriminants $d \equiv 1 \bmod 8$ for which $2x^2 - dy^2 = 1$ is solvable, among all discriminants $d \equiv 1 \bmod 8$ whose prime factors are $\equiv \pm 1 \bmod 8$. Selmer groups {#selmer-groups .unnumbered} ------------- The graph theoretic language introduced for studying the density of discriminants for which the negative Pell equation is solvable was also employed for computing the size of Selmer groups of elliptic curves with a rational point of order $2$. See Heath-Brown [@HB], Feng [@Feng], Li & Tian [@LT], and Zhao [@Zh1; @Zh2]. [99]{} F. Arndt, [*Disquisitiones nonnullae de fractionibus continuis*]{}, Diss. Sundia 1845, 32pp F. Arndt, [*Bemerkungen über die Verwandlung der irrationalen Quadratwurzel in einen Kettenbruch*]{}, J. Reine Angew. Math. [**31**]{} (1846), 343–358 R.M. Avanzi, U.M. Zannier, [*Genus one curves defined by separated variable polynomials and a polynomial Pell equation*]{}, Acta Arith. [**99**]{} (2001), no. 3, 227–256 T. Azuhata, [*On the fundamental units and the class numbers of real quadratic fields*]{} Nagoya Math. J. [**95**]{} (1984), 125–135 T. Azuhata, [*On the fundamental units and the class numbers of real quadratic fields. II*]{}, Tokyo J. Math. [**10**]{} (1987), no. 2, 259–270 E. Barbeau, [*Pell’s Equation*]{}, Springer Verlag 2003 E. Benjamin, F. Lemmermeyer, C. Snyder, [*Real quadratic number fields with Abelian Gal($k^2/k$)*]{}, J. Number Theory [**73**]{} (1998), 182–194 L. Bernstein, [*Fundamental units and cycles in the period of real quadratic number fields. I*]{}, Pacific J. Math. [**63**]{} (1976), 37–61 L. Bernstein, [*Fundamental units and cycles in the period of real quadratic number fields. II*]{}, Pacific J. Math. [**63**]{} (1976), no. 1, 63–78 T.G. Berry, [*On periodicity of continued fractions in hyperelliptic function fields*]{}, Arch. Math. (Basel) [**55**]{} (1990), no. 3, 259–266 J. Bloom, [*On the $4$-rank of the strict class group of a quadratic number field*]{}, Selected topics on ternary forms and norms (Sem. Number Theory, California Inst. Tech., Pasadena, Calif., 1974/75), Paper No. 8, 4 pp. California Inst. Tech., Pasadena, Calif., 1976 A. Boutin, [*Résolution complète de l’équation $x^2 - (Am^2 + Bm + C)y^2 = 1$ où $A, B, C$ sont des entiers, par une infinité des polynômes en $m$*]{}, L’Interméd. Math. [**9**]{} (1902), 60 L. Bouvier, [*Sur le 2-groupe des classes de certains corps biquadratiques*]{}, Thèse 3$^e$ cycle, Grenoble L. Bouvier, [*Sur le 2-groupe des classes au sens restreint de certaines extensions biquadratiques de ${\mathbb Q}$*]{}, C. R. Acad. Sci. Paris [**272**]{} (1971), 193–196 E. Brown, [*The class number and fundamental unit of $ Q(\sqrt{2p})$ for $p\equiv 1 \bmod 16$ a prime*]{}, J. Number Theory [**16**]{} (1983), no. 1, 95–99 D. Buell, [*Binary Quadratic Forms*]{}, Springer Verlag 1989 T. Bülow, [*Power residue criteria for quadratic units and the negative Pell equation*]{}, Canad. Math. Bull. [**24**]{} (2002), no. 2, 55–60 J. Carroll, [*The Redei-Reichardt theorem*]{}, Selected topics on ternary forms and norms (Sem. Number Theory, California Inst. Tech., Pasadena, Calif., 1974/75), Paper No. 
7, 7 pp. California Inst. Tech., Pasadena, Calif., 1976 E. Catalan, [*Rectification et addition à la note sur un problème d’analyse indéterminée*]{}, Atti dell’Accad. Pont. Nuovi Lincei [**20**]{} (1867), 1ff; 77ff H. Cohn, [*Introduction to the Construction of Class Fields*]{}, Cambridge 1985 H. Cohn, J. Lagarias, [*Is there a density for the set of primes $p$ such that the class number of ${\mathbb Q}(\sqrt{-p}\,)$ is divisible by $16$?*]{}, Colloqu. Math. Soc. Bolyai [**34**]{} (1981), 257–280 H. Cohn, J. Lagarias, [*On the existence of fields governing the $2$-invariants of the class group of ${\mathbb Q}(\sqrt{dp}\,)$ as $p$ varies*]{}, Math. Comp. [**41**]{} (1983), 711–730 A. Costa, F. Gerth, [*Densities for $4$-class ranks of totally complex quadratic extensions of real quadratic fields*]{}, J. Number Theory [**54**]{} (1995), no. 2, 274–286 J.E. Cremona, R.W.K. Odoni, [*Some density results for negative Pell equations; an application of graph theory*]{}, J. Lond. Math. Soc. (2) [**39**]{} (1989), 16–28 P. Damey, J.-J. Payan, [*Existence et construction des extensions galoisiennes et non-abéliennes de degré 8 d’un corps de caractéristique differente de 2*]{}, J. Reine Angew. Math. [**244**]{} (1970), 37–54 G. Degert, [*Über die Bestimmung der Grundeinheit gewisser reell-quadratischer Zahlkörper*]{}, Abh. Math. Sem. Univ. Hamburg [**22**]{} (1958), 92–97 P. Despujols, [*Norme de l’unité fondamentale du corps quadratique absolu*]{}, C. R. Acad. Sci. Paris [**221**]{} (1945), 684–685 L.E. Dickson, [*History of the Theory of Numbers*]{}, vol I (1920); vol II (1920); vol III (1923); Chelsea reprint 1952 L.E. Dickson, [*Studies in the Theory of numbers*]{}, Chicago 1930 G.P.L. Dirichlet, [*Einige neue Sätze über unbestimmte Gleichungen*]{}, Abh. Kön. Akad. Wiss. Berlin 1834, 649–664; Gesammelte Werke I, 221–236 L. Euler, [*De usu novi algorithmi in problemate Pelliano solvendo*]{}, Novi Acad. Sci. Petropol. [**11**]{} (1765) 1767, 28–66; Opera Omnia I-3, 73–111 L. Euler, [*Nova subsidia pro resolutione formulae $axx + 1 = yy$*]{}, Sept. 23, 1773; Opusc. anal. [**1**]{} (1783), 310; Comm. Arith. Coll. [**II**]{}, 35–43; Opera Omnia I-4, 91–104 A. Faisant, [*L’équation diophantienne du second degré*]{}, Hermann 1991 K. Feng, [*Non-congruent numbers, odd graphs and the Birch-Swinnerton-Dyer conjecture*]{}, Acta Arith. [**75**]{} (1996), 71–83 A. Fröhlich, [*The generalization of a theorem of L. Rédei’s*]{}, Quart. J. Math. (2) [**5**]{} (1954), 130–140 Y. Furuta, [*Norm of units of quadratic fields*]{}, J. Math. Soc. Japan [**11**]{} (1959), 139–145 I.-Kh. I. Gerasim, [*On the genesis of Rédei’s theory of the equation $x^2-Dy^2 = -1$*]{} (Russian), Istor.-Mat. Issled. No. [**32-33**]{} (1990), 199–211 F. Gerth, [*The $4$-class ranks of quadratic fields*]{}, Invent. Math. [**77**]{} (1984), no. 3, 489–515 F. Gerth, [*The $4$-class ranks of quadratic extensions of certain real quadratic fields*]{}, J. Number Theory [**33**]{} (1989), no. 1, 18–31 G. Gras, [*Sur les $l$-classes d’idéaux dans les extensions cycliques relatives de degré premier $l$*]{} I, II, Ann. Inst. Fourier [**23**]{} (1973), 1–48; ibid. [**23**]{} (1973), 1–44 G. Gras, [*Sur la norme du groupe des unités d’extensions quadratiques relatives*]{}, Acta Arith. [**61**]{} (1992), 307–317 F. Halter-Koch, [*Über den $4$-Rang der Klassengruppe quadratischer Zahlkörper*]{}, J. Number Theory [**19**]{} (1984), 219–227 F. Halter-Koch, [*Über Pellsche Gleichungen und Kettenbrüche*]{}, Arch. Math. [**49**]{} (1987), 29–37 D.S. 
Hart, [*A new method for solving equations of the form $x^2 - Ay^2 = 1$*]{}, Educat. Times [**28**]{} (1878), 29 H. Hasse, [*Über mehrklassige, aber eingeschlechtige reell-quadratische Zahlkörper*]{}, Elem. Math. [**20**]{} (1965), 49–59 F. Hazama, [*Pell equations for polynomials*]{}, Indag. Math. [**8**]{} (1997), 387–397 D.R. Heath-Brown, [*The size of Selmer groups for the congruent number problem*]{}, Invent. Math. [**111**]{} (1993), 171–195 C. S. Herz [*Construction of Class Fields*]{}, Seminar on complex multiplication (Chowla et al. eds.) (1957), Lecture Notes Math. 21, Springer Verlag E. Inaba, [*Über die Struktur der $l$-Klassengruppe zyklischer Zahlkörper vom Primzahlgrad $l$*]{}, J. Fac. Sci. Imp. Univ. Tokyo. Sect. I. [**4**]{} (1940), 61–115 S. Iyanaga, [*Sur les classes d’idéaux dans les corps quadratiques*]{}, Actual. scient. et industr. 1935, Nr. 197 (Exposés math. VIII), 15 p. (1935) Ch. U. Jensen, [*On the solvability of a certain class of non-Pellian equations*]{}, Math. Scand. [**10**]{} (1962), 71–84 Ch. U. Jensen, [*On the Diophantine equation $\xi^2-2m^2\eta^2 = -1$*]{}, Math. Scand. [**11**]{} (1962), 58–62 Ch. U. Jensen, [*Über eine Klasse nicht-Pellscher Gleichungen*]{}, J. Reine Angew. Math. [**209**]{} (1962), 36–38 C.G. Ji, [*Norms of fundamental units in real quadratic function fields*]{}, J. Nanjing Norm. Univ. Nat. Sci. Ed. [**18**]{} (1995), no. 4, 7–12 C.G. Ji, [*The norms of fundamental units in real quadratic function fields*]{} J. Math. (Wuhan) [**17**]{} (1997), no. 2, 173–178 E. de Jonquières, [*Formules générales donnant des valeurs de $D$ pour lesquelles l’équation $t^2 - Du^2 = -1$ est résoluble en nombres entiers*]{}, C. R. Acad. Sci. Paris [**126**]{} (1898), 1837 P. Kaplan, [*Sur le $2$-groupe des classes d’idéaux des corps quadratiques*]{}, J. Reine Angew. Math. [**283/284**]{} (1976), 313–363 P. Kaplan, [*À propos des équations antipelliennes*]{}, Enseign. Math. (2) [**29**]{} (1983), 323–327 S. Katayama, [*On fundamental units of real quadratic fields with norm $-1$*]{}, Proc. Japan Acad. [**67**]{} (1991), 343–345 S. Katayama, [*On fundamental units of real quadratic fields with norm $+1$*]{}, Proc. Japan Acad. [**68**]{} (1992), 18–20 R.J. Kingan, [*Tournaments and ideal class groups*]{}, Canad. Math. Bull. [**38**]{} (1995), no. 3, 330–333 H. Kisilevsky, [*The Rédei-Reichardt theorem—a new proof*]{}, Selected topics on ternary forms and norms (Sem. Number Theory, California Inst. Tech., Pasadena, Calif., 1974/75), Paper No. 6, 4 pp. California Inst. Tech., Pasadena, Calif., 1976 Y. Kohno, T. Nakahara, [*Oriented graphs of $2$-class group constructions of quadratic fields*]{} (Japanese), Combinatorial structure in mathematical models (Kyoto, 1993), RIMS Kokyuroku [**853**]{}, (1993), 133–147 Y. Kohno, S. Kitamura, T. Nakahara, [*$2$-rank component evaluation for class groups of quadratic fields using graphs*]{} (Japanese), Optimal combinatorial structures on discrete mathematical models (Kyoto, 1992) Surikaisekikenkyusho Kokyuroku No. [**820**]{} (1993), 1–15 H. Konen, [*Geschichte der Gleichung $t^2 -Du^2 =1$*]{}, Leipzig (1901), 132 pp M. Kutsuna, [*On the fundamental units of real quadratic fields*]{}, Proc. Japan Acad. [**50**]{} (1974), 580–583 J.C. Lagarias, [*On the computational complexity of determining the solvability or unsolvability of the equation $X^2-DY^2=-1$*]{}, Trans. Amer. Math. Soc. [**260**]{} (1980), no. 2, 485–508 J.C. Lagarias, [*On determining the $4$-rank of the ideal class group of a quadratic field*]{} J. 
Number Theory [**12**]{} (1980), 191–196 A.M. Legendre, [*Théorie des Nombres*]{}, third edition 1830 F. Lemmermeyer, [*Reciprocity Laws. From Euler to Eisenstein*]{}, Springer Verlag 2000 F. Lemmermeyer, [*Higher Descent on Pell Conics II. Two Centuries of Missed Opportunities*]{}, preprint 2003 F. Lemmermeyer, [*Higher Descent on Pell Conics III. The First $2$-Descent*]{}, preprint 2003 H. Lenstra, [*Solving the Pell equation*]{}, Notices AMS [**49**]{} (2002), 182–192 C. Levesque, [*Continued fraction expansions and fundamental units*]{}, J. Math. Phys. Sci. [**22**]{} (1988), no. 1, 11–44 C. Levesque, G. Rhin, [*A few classes of periodic continued fractions*]{}, Utilitas Math. [**30**]{} (1986), 79–107 D. Li, Y. Tian, [*On the Birch-Swinnerton-Dyer conjecture of elliptic curves $E_D: y^2 = x^3 - D^2x$*]{}, Acta Math. Sin. [**16**]{} (2000), no. 2, 229–236 D.J. Madden, [*Constructing families of long continued fractions*]{} Pac. J. Math. [**198**]{} (2001), 123–147 N. Malebranche, cf. C. Henry, Bull. Bibl. Storia Sc. Mat. Fis. [**12**]{} (1879), 696–698 E. Malo, [*Solution de l’équation $x^2 - Dy^2 = -1$*]{}, L’Interméd. Math. [**13**]{} (1906), 246 J. Mc Laughlin, [*Polynomial solutions of Pell’s equation and fundamental units in real quadratic fields*]{}, J. London Math. Soc. (2) [**67**]{} (2003), 16–28 J. Mc Laughlin, [*Multi-variable polynomial solutions to Pell’s equation and fundamental units in real quadratic fields*]{}, Pacific J. Math. [**210**]{} (2003), 335–349 D.A. Mitkin, [*On some diophantine equations connected with Pellian equation*]{}, Proc. Int. Conf. in honour of J. Kubilius; New Trends in Probab. Stat. [**4**]{} (1997), 27–32 R.A. Mollin, [*Polynomial solutions for Pell’s equation revisited*]{}, Indian J. Pure Appl. Math. [**28**]{} (1997), 429–438 R. Mollin, [*Polynomials of Pellian type and continued fractions*]{}, Serdica Math. J. [**27**]{} (2001), 317–342 C. Moreau, [*Solution de la question 1055*]{}, Nouv. Ann. (2) [**12**]{} (1873) 330–331 P. Morton, [*On Rédei’s theory of the Pell equation*]{}, J. Reine Angew. Math. [**307/308**]{} (1979), 373–398 P. Morton, [*Density results for the $2$-classgroups and fundamental units of real quadratic fields*]{}, Studia Sci. Math. Hungar.[**17**]{} (1982), no. 1-4, 21–43 P. Morton, [*Density result for the $2$-classgroups of imaginary quadratic fields*]{}, J. Reine Angew. Math. [**332**]{} (1982), 156–187 P. Morton, [*The quadratic number fields with cyclic $2$-classgroups*]{}, Pac. J. Math. [**108**]{} (1983), 165–175 P. Morton, [*Governing fields for the $2$-class group of ${\mathbb Q}(\sqrt{-q_1q_2p}\,)$ and a related reciprocity law*]{}, Acta Arith. [**55**]{} (1990), 267–290 P. Morton, [*On the nonexistence of abelian conditions governing solvability of the $-1$ Pell equation*]{}, J. Reine Angew. Math. [**405**]{} (1990), 147–155 T. Nagell, [*Über die Lösbarkeit der Gleichung $x^2-Dy^2=-1$*]{}, Ark. Mat. Astron. Fys. B [**23**]{}, No.6 (1933), 1–5 T. Nagell, [*On a special class of Diophantine equations of the second degree*]{}, Ark. Mat. [**3**]{} (1954), 51–65 M.B. Nathanson, [*Polynomial Pell’s equations*]{}, Proc. Amer. Math. Soc. [**56**]{} (1976), 89–92 M. Newman, [*A note on an equation related to the Pell equation*]{}, Amer. Math. Monthly (1977), 365–366 B. Oriat, [*Rélations entre les 2-groupes des classes d’idéaux des extensions quadratiques $k(\sqrt{d}\,)$ et $k(\sqrt{-d}\,)$*]{}, Ann. Inst. Fourier [**27**]{} (1977), No.2, 37–59 B. 
Oriat, [*Rélations entre les 2-groupes des classes d’ideaux de $k(\sqrt{d}\,)$ et $k(\sqrt{-d}\,)$*]{}, Astérisque [**41-42**]{} (1977), 247–249 B. Oriat, [*Rélation entre les 2-groupes des classes d’idéaux au sens ordinaire et restreint de certains corps de nombres*]{}, Bull. Soc. Math. Fr. [**104**]{} (1976), 301–307 G. Pall, [*Discriminantal divisors of binary quadratic forms*]{}, J. Number Theory [**1**]{} (1969), 525–533 O. Perron, [*Die Lehre von den Kettenbrüchen*]{}, Teubner 1913 K. Petr, [*Über die Pellsche Gleichung*]{}, Rozpravy [**35**]{} (1926), 7pp K. Petr, [*On Pell’s equation*]{} (Czech), Casopis [**56**]{} (1927), 57–66 J. Perott, [*Sur l’équation $t^2 - Du^2 = -1$. Premier mémoire*]{}, J. Reine Angew. Math. [**102**]{} (1887), 185–225 A. van der Poorten, H.C. Williams, [*On certain continued fraction expansions of fixed period length*]{}, Acta Arith. [**89**]{} (1999), no. 1, 23–35 D. Pumplün, [*Über die Klassenzahl und die Grundeinheit des reellquadratischen Zahlkörpers*]{}, J. Reine Angew. Math. [**230**]{} (1968), 177–210 A.M.S. Ramasamy, [*Polynomial solutions for the Pell’s equation*]{}, Indian J. Pure Appl. Math. [**25**]{} (1994), no. 6, 577–581 L. Rédei, [*Die Anzahl der durch $4$ teilbaren Invarianten der Klassengruppe eines beliebigen quadratischen Zahlkörpers*]{}, Math. Naturwiss. Anz. Ungar. Akad. d. Wiss. [**49**]{} (1932), 338–363 L. Rédei, [*Über die Pellsche Gleichung $t^2-du^2=-1$*]{}, J. Reine Angew. Math. [**173**]{} (1935), 193–221; transl. from Mat. Termeszett. Ertes. [**54**]{} (1936), 1–44 L. Rédei, [*Über einige Mittelwertfragen im quadratischen Zahlkörper*]{}, Journ. Reine Angew. Math. [**174**]{} (1935), 15–55 L. Rédei, [*Ein asymptotisches Verhalten der absoluten Klassengruppe des quadratischen Zahlkörpers und die Pellsche Gleichung*]{}, Jahresbericht D. M. V. [**45**]{} (1935), 78 kursiv L. Rédei, [*Über die $D$-Zerfällungen zweiter Art*]{} (Hungarian; German summary), Math.-nat. Anz. Ungar. Akad. Wiss. [**56**]{} (1937), 89–125 L. Rédei, [*Über den geraden Teil der Ringklassengruppe quadratischer Zahlkörper, die Pellsche Gleichung und die diophantische Gleichung $rx^2+sy^2=z^{2^n}$ I, II, III*]{}, Math. Naturwiss. Anz. Ungar. Akad. d. Wiss. [**62**]{} (1943), 13–34, 35–47, 48–62 L. Rédei, [*Die $2$-Ringklassengruppe des quadratischen Zahlkörpers und die Theorie der Pellschen Gleichung*]{}, Acta Math. Acad. Sci. Hungaricae [**4**]{} (1953), 31–87 L. Rédei, H. Reichardt, [*Die Anzahl der durch $4$ teilbaren Invarianten der Klassengruppe eines beliebigen quadratischen Zahlkörpers*]{}, J. Reine Angew. Math. [**170**]{} (1933), 69–74 G. Ricalde, Interméd. Math. [**8**]{} (1901), 256 C. Richaud, [*Énoncés de quelques théorèmes sur la possibilité de l’équation $x^2 - Ny^2 = -1$ en nombres entiers*]{}, J. Math. Pures Appl. (2) [**9**]{} (1864), 384–388 C. Richaud, [*Démonstrations de quelques théorèmes concernant la résolution en nombres entiers de l’équation $x^2 - Ny^2 = -1$*]{}, J. Math. Pures Appl. (2) [**10**]{} (1865), 235–292 C. Richaud, [*Sur la résolution des équations $x^2 - Ay^2 = \pm 1$*]{}, Atti Accad. Pont. Nuovi Lincei [**19**]{} (1865), 177–182 C. Richaud, [*Démonstrations de quelques théorèmes concernant la résolution en nombres entiers de l’équation $x^2 - Ny^2 = -1$*]{}, J. Math. Pures Appl. (2) [**11**]{} (1866), 145–176 S. Roberts, [*On forms of numbers determined by continued fractions*]{}, Proc. London Math. Soc. [**10**]{} (1878/79), 29–41 A. Scholz, [*Über die Lösbarkeit der Gleichung $t^2-Du^2 =-4$*]{}, Math. Z. 
[**39**]{} (1934), 95–111 G. Speckman, [*Über die Auflösung der Pell’schen Gleichung*]{}, Archiv Math. Phys. (2) [**13**]{} (1895), 330 M.A. Stern, [*Theorie der Kettenbrüche und ihre Anwendung*]{}, J. Reine Angew. Math. [**11**]{} (1834), 311–350 P. Stevenhagen, [*Class groups and governing fields*]{}, Ph. D. thesis, Berkeley 1988 P. Stevenhagen, [*Ray class groups and governing fields*]{}, Théorie des nombres, Années 1988/89, Publ. Math. Fac. Sci. Besançon (1989) P. Stevenhagen, [*Rédei-matrices and applications*]{}, Number theory (Paris, 1992–1993), 245–259, London Math. Soc. Lecture Note Ser., 215 P. Stevenhagen, [*Divisibility by $2$-powers of certain quadratic class numbers*]{}, J. Number Theory [**43**]{} (1993), 1–19 P. Stevenhagen, [*The number of real quadratic fields having units of negative norm*]{}, Exp. Math. [**2**]{} (1993), 121–136 Y. Sueyoshi, [*Comparison of the $4$-ranks of the narrow ideal class groups of the quadratic fields ${\mathbb Q}(\sqrt{m}\,)$ and ${\mathbb Q}(\sqrt{-m}\,)$*]{} (Japanese), Algebraic number theory and Fermat’s problem (Kyoto, 1995), Surikaisekikenkyusho Kokyuroku No. [**971**]{} (1996), 134–144 Y. Sueyoshi, [*On a comparison of the $4$-ranks of the narrow ideal class groups of ${\mathbb Q}(\sqrt{m}\,)$ and ${\mathbb Q}(\sqrt{-m}\,)$*]{}, Kyushu J. Math. [**51**]{} (1997), 261–272 Y. Sueyoshi, [*Relations betwen the narrow $4$-class ranks of quadratic number fields*]{}, Adv. Stud. Contemp. Math. [**2**]{} (2000), 47–58 Y. Sueyoshi, [*On Rédei matrices with minimal rank*]{}, Far East J. Math. Sci. (FJMS) [**3**]{} (2001), no. 1, 121–128 A. Takaku, S.-I. Yoshimoto, [*Fundamental unit of the real quadratic field ${\mathbb Q}(\sqrt{v(v\sp 3+1)(v\sp 6+3v\sp 3+3)})$*]{}, Ryukyu Math. J. [**6**]{} (1993), 57–67 F. Tano, [*Sur quelques théorèmes de Dirichlet*]{}, J. Reine Angew. Math. [**105**]{} (1889), 160–169 M. von Thielmann, [*Zur Pellschen Gleichung*]{}, Math. Ann. [**95**]{} (1926), 635–640 K. Tomita, [*Explicit representation of fundamental units of some real quadratic fields*]{}, Proc. Japan Acad. [**71**]{} (1995), 41–43 K. Tomita, [*Explicit representation of fundamental units of some real quadratic fields. II*]{}, J. Number Theory [**63**]{} (1997), no. 2, 275–285 H. F. Trotter, [*On the norms of units in quadratic fields*]{}, Proc. Amer. Math. Soc. [**22**]{} (1969), 198–201 T. Uehara, [*On the $4$-rank of the narrow ideal class group of a quadratic field*]{}, J. Number Theory [**31**]{} (1989), 167–1731 Th. P. Vaughan, [*The construction of unramified cyclic quartic extensions of ${\mathbb Q}(\sqrt{-m}\,)$*]{}, Math. Comp. [**45**]{} (1985), 233–242 A. Vazzana, [*On the $2$-primary part of $K_2$ of rings of integers in certain quadratic number fields*]{}, Acta Arith. [**80**]{} (1997), 225–235 A. Vazzana, [*Elementary abelian $2$-primary parts of $K_2 {\mathcal O}$ and related graphs in certain quadratic number fields*]{}, Acta Arith. [**81**]{} (1997), No.3, 253–264 A. Walfisz, [*Pell’s Equation*]{} (Russian), Tbilisi 1952; 90 pp G. Walsh, [*The Pell equation and powerful numbers*]{}, M. Sc. thesis, Univ. Calgary 1988 G. Walsh, [*On a question of Kaplansky*]{}, Amer. Math. Monthly [**109**]{} (2002), no. 7, 660–661 W.A. Webb, H. Yokota, [*Polynomial Pell’s equation*]{}, Proc. Amer. Math. Soc. [**131**]{} (2003), 993–1006 E.E. Whitford, [*The Pell equation*]{}, New York 1912, 193 pp H.C. Williams, [*Some generalizations of the $S_n$ sequence of Shanks*]{}, Acta Arith. [**69**]{} (1995), no. 3, 199–215 H.C. 
Williams, [*Solving the Pell equation*]{}, Proc. Millenial Conference on Number Theory (Urbana 2000), Peters 2002, 397–435 K. S. Williams, D. Liu, [*Representation of primes by the principal form of negative discriminant $\Delta$ when $h(\Delta)$ is $4$*]{}, Tamkang J. Math. [**25**]{} (1994), 321–334 Y. Yamamoto, [*Real quadratic number fields with large funamental units*]{}, Osaka J. Math. [**8**]{} (1971), 261–270 H. Yokoi, [*On real quadratic fields containing units with norm $-1$*]{}, Nagoya Math. J. [**33**]{} (1968), 139–152 H. Yokoi, On the fundamental unit of real quadratic fields with norm $1$. J. Number Theory [**2**]{} (1970), 106–115 Ch. Zhao, [*A criterion for elliptic curves with lowest $2$-power in $L(1)$*]{}, Math. Proc. Cambridge Philos. Soc. [**121**]{} (1997), no. 3, 385–400 Ch. Zhao, [*A criterion for elliptic curves with second lowest 2-power in $L(1)$*]{}, Math. Proc. Cambridge Philos. Soc. [**131**]{} (2001), no. 3, 385–404 E. W. Zink, [*Über die Klassengruppe einer absolut zyklischen Erweiterung*]{}, Diss. Humboldt Univ. Berlin (1974) [^1]: Theorems on the solvability of the equations of the form $Mx^2 - Ny^2 = \pm 1$ or $\pm 2$. [^2]: Using the solvability of $Mx^2 - Ny^2 = 1$ for primes $M \equiv N \equiv 3 \bmod 4$, Legendre later proved a special case of the quadratic reciprocity law, namely that $(\frac{p}q) = - (\frac{q}p)$ for such primes. [^3]: Given an arbitrary nonsquare number $A$ it is always possible to decompose this number into two factors $M$ and $N$ such that one of the two equations $Mx^2 - Ny^2 = \pm 1$, $Mx^2 - Ny^2 = \pm 2$ is satisfied, the sign of the second number being conveniently taken. [^4]: We start with a short presentation of Legendre’s method. Let $A$ denote a given positive number without quadratic factors, i.e., whose prime factors are all pairwise distinct, and let $p$ and $q$ be the smallest values (except $p = 1$ and $q = 0$) satisfying the equation (\[ED1\]), which is known to be always solvable. Writing (\[ED1\]) in the form $(p+1)(p-1) = Aq^2$ and observing that $p+1$ and $p-1$ are relatively prime or have the common factor $2$ according as $p$ is even or odd, it can be seen that the equation (\[ED1\]) in the first case implies the following: $$p+1 = Mr^2, \quad p-1 = Ns^2, \quad A = MN, \ q = rs,$$ and similarly in the second case: $$p+1 = 2Mr^2, \quad p-1 = 2Ns^2, \quad A = MN, \ q = 2rs,$$ where $M, N$ and therefore $r, s$ are completely determined by $p$. In fact, $M$, $N$ are, in the first case, the greatest common divisors of $A$, $p+1$ and $A$, $p-1$, in the second case of $A$, $\frac{p+1}2$ and $A$, $\frac{p-1}2$. From these equations we deduce (\[ED2\]). If we do not have a solution of (\[ED1\]), thus if $p$ is not known, then we only know that one of these equations must have a solution, and since under this assumption $M$ and $N$ are not given explicitly, each of the equations (\[ED2\]) contains several equations which we get by letting $M$ run through all factors of $A$ (including $1$ and $A$) and putting $N = \frac{A}{M}$.
--- abstract: 'Dynamic allocation of resources to the *best* link in large multiuser networks offers considerable improvement in spectral efficiency. This gain, often referred to as *multiuser diversity gain*, can be cast as double-logarithmic growth of the network throughput with the number of users. In this paper we consider large cognitive networks granted concurrent spectrum access with license-holding users. The primary network affords to share its under-utilized spectrum bands with the secondary users. We assess the optimal multiuser diversity gain in the cognitive networks by quantifying how the sum-rate throughput of the network scales with the number of secondary users. For this purpose we look at the optimal pairing of spectrum bands and secondary users, which is supervised by a central entity fully aware of the instantaneous channel conditions, and show that the throughput of the cognitive network scales double-logarithmically with the number of secondary users ($N$) and linearly with the number of available spectrum bands ($M$), i.e., $M\log\log N$. We then propose a *distributed* spectrum allocation scheme, which does not necessitate a central controller or any information exchange between different secondary users and still obeys the optimal throughput scaling law. This scheme requires that *some* secondary transmitter-receiver pairs exchange $\log M$ information bits among themselves. We also show that the aggregate amount of information exchange between secondary transmitter-receiver pairs is [*asymptotically*]{} equal to $M\log M$. Finally, we show that our distributed scheme guarantees fairness among the secondary users, meaning that they are equally likely to get access to an available spectrum band.' author: - Ali Tajer  - ' Xiaodong Wang [^1]' bibliography: - 'IEEEabrv.bib' - 'CR\_MUD.bib' title: Multiuser Diversity Gain in Cognitive Networks --- [**keywords:**]{} Cognitive radio, distributed, fairness, multiuser diversity, spectrum allocation. Introduction {#sec:intro} ============ Dense multiuser networks offer significant spectral efficiency improvement by dynamically identifying and allocating the communication resources to the *best* link. The improvements thus attained are often referred to as *multiuser diversity gain* and rest on the basis of opportunistically allocating all the resources to the most reliable link. The performance of such a resource allocation scheme relies on the peak, rather than average, channel conditions and improves as the number of users increases, as it becomes more likely to have a user with an instantaneously strong link. The notion of opportunistic communication and multiuser diversity was first introduced [@Knopp:ICC95] for uplink transmissions, and further developed in [@Tse:ISIT97; @Viswanath:IT02; @Sanayei:IT07] for downlink transmissions. The analysis of multiuser diversity gain in downlink multiple-input multiple-output (MIMO) channels is provided in [@Sharif:IT05; @Sharif:COM07]. In all these transmission schemes, the sum-rate capacity exhibits a double-logarithmic growth with the number of users. The recent advances in secondary spectrum leasing [@FCC] and cognitive networks [@Mitola:thesis] suggest accommodating unlicensed users (secondary users or cognitive radios) within the license-holding networks and allowing them to access under-utilized spectrum bands. Among different spectrum sharing schemes, [*underlaid*]{} spectrum access [@Goldsmith:IEEE09] has received significant attention.
This scheme allows for simultaneous spectrum access by the primary and secondary users, provided that the power of the secondary users is controlled such that they impose only limited interference on the primary users. In this paper we consider opportunistic underlaid spectrum access by secondary users and assess the multiuser diversity gain by analyzing the sum-rate throughput scaling of the cognitive network. Such an analysis for cognitive networks differs from those of the primary networks studied in [@Knopp:ICC95; @Tse:ISIT97; @Viswanath:IT02; @Sharif:IT05; @Sharif:COM07] in two respects. First, the transmissions in the cognitive network are contaminated by the interference induced by the primary users. In the presence of such interference, opportunistic communication cannot rely on merely finding the strongest secondary link; the effect of the interference must be accounted for as well. Secondly, and more importantly, the uplink/downlink transmissions in the networks referenced above require feedback from the users to the base station, and it is the base station that dynamically decides which user(s) should receive the resources. Cognitive networks, in contrast, are often assumed to lack any infrastructure or central entity, and spectrum allocation should be carried out in a distributed way. To address these two issues, we first examine only the effect of interference and assume that the cognitive network has a central decision-making entity, fully aware of all cognitive users’ instantaneous channel realizations. This result, providing the optimal scaling factor, presents an upper bound on the throughput yielded by any distributed spectrum allocation scheme. In the next step, we offer a *distributed* algorithm where the secondary users decide about accessing a channel merely based on their own perception of the instantaneous network conditions. Our analyses reveal that, interestingly, in both the centralized and distributed setups, the sum-rate throughput scales double-logarithmically with the number of users, which is the optimal growth and is the same as that of centralized primary networks. Therefore, the interference from the primary network incurs no loss in the multiuser diversity gain of the cognitive network. We also examine how fairness is maintained in our distributed scheme. Generally, in opportunistic communication schemes there exists a conflict between fairness and multiuser diversity gain, as the network tends to reserve the resources for the most reliable links, which leads the network to be dominated by the users with strong links. We show, however, that our distributed scheme ensures fairness among the secondary users by providing them with the same opportunity to access an available spectrum band. The remaining part of the paper is organized as follows. In Section \[sec:descriptions\] we provide the system model as well as the statement of the problem. Sections \[sec:centralized\] and \[sec:distributed\] discuss the sum-rate throughput scaling laws in centralized and distributed cognitive networks, respectively. Our distributed algorithm requires some information exchange between each cognitive transmitter and its designated receiver. The amount of such information is quantified in Section \[sec:information\]. As we are considering an opportunistic type of spectrum access, it is also crucial to examine fairness among the users. The discussion on fairness is given in Section \[sec:fairnes\].
Some remarks on the implementation of the distributed spectrum access algorithm are provided in Section \[sec:discussions\] and Section \[sec:conclusion\] concludes the paper. In order to enhance the flow of the material, we have confined most of the proofs to the appendices. System Descriptions {#sec:descriptions} =================== System Model {#sec:model} ------------ We consider a *decentralized* cognitive network consisting of $N$ secondary transmitter-receiver pairs coexisting with the primary transmitters via *underlaid* [@Goldsmith:IEEE09] spectrum access. Therefore, the primary and secondary users can coexist simultaneously on the same spectrum band. The primary network affords to accommodate $1\leq M\ll N$ secondary users and allows them to access the non-overlapping spectrum bands $B_1,\dots, B_{M}$ such that each band is allocated to *exactly one* secondary transmitter-receiver pair. We assume that the secondary transmitters and receivers are paired up [*a priori*]{} such that each secondary transmitter knows its designated receiver and vice versa. We also assume that each secondary transmitter and receiver is potentially capable of operating on each of the $M$ spectrum bands, a feature facilitated by having appropriate reconfigurable hardware. We assume quasi-static flat fading channels and denote the channel from the $j^{th}$ primary transmitter to the $i^{th}$ secondary receiver in the $m^{th}$ spectrum band ($B_m$) by $h^m_{i,j}\in\mathbb{C}$ and denote the channel between the $i^{th}$ secondary transmitter-receiver pair in the $m^{th}$ spectrum band ($B_m$) by $g^m_i\in\mathbb{C}$. Let $x^p_i(t)$ and $x^s_i(t)$ represent the signals transmitted by the $i^{th}$ primary transmitter and the $i^{th}$ secondary transmitter, respectively. We assume that there might be a group of active primary users on each spectrum band $B_m$ and define the set $\mathcal{B}_m$ such that it contains the indices of such users. If the $n^{th}$ secondary pair transmits on $B_m$, then the received signal at the $n^{th}$ secondary receiver is given by $$\label{eq:model1} y^m_n=\sqrt{\eta_n}g^m_nx^s_n+\sum_{j\in\mathcal{B}_m}\sqrt{\gamma_{n,j}}h^m_{n,j}x^p_j+z^m_n,$$ where $z^m_n\sim\mathcal{CN}(0,N_0)$ is the additive white Gaussian noise at the $n^{th}$ receiver. In a non-homogeneous network, the users experience different path-loss and shadowing effects, which we account for by incorporating the terms $\{\gamma_{i,j}\}$ and $\{\eta_i\}$. Also, we assume that the primary and secondary transmitters satisfy the average power constraints $P_p$ and $P_s$, respectively, i.e., $\bbe[|x^p_i|^2]\leq P_p$ and $\bbe[|x^s_i|^2]\leq P_s$, and that the channel coefficients $\{h^m_{i,j}\}_{i,j,m}$ and $\{g^m_i\}_{i,m}$ are i.i.d. and distributed as complex Gaussian $\mathcal{CN}(0,1)$. Each secondary receiver treats all undesired signals (interference from the primary users) as Gaussian interferers. Therefore, the signal-to-interference-plus-noise ratio ($\sinr$) of the $n^{th}$ secondary pair on the spectrum band $B_m$ is given by $$\begin{aligned} \label{eq:sinr} \nonumber \sinr_{m,n}&=& \frac{\eta_n\bbe[|g^m_nx^s_n|^2]}{N_0+\sum_{j\in\mathcal{B}_m}\gamma_{n,j}\bbe[|x^p_jh^m_{n,j}|^2]}\\ &=& \frac{P_s\eta_n|g^m_n|^2}{N_0+P_p\sum_{j\in\mathcal{B}_m}\gamma_{n,j}|h^m_{n,j}|^2}.\end{aligned}$$ We define the transmission signal-to-noise ratio ($\snr$) by $\rho\dff\frac{P_s}{N_0}$.
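For concreteness, the following minimal Python sketch draws one realization of the fading coefficients and evaluates the $\sinr$ matrix of (\[eq:sinr\]). The network sizes, power levels, and path-loss terms below are illustrative placeholders rather than values used elsewhere in the paper; the only modeling fact exploited is that $|g^m_n|^2$ and $|h^m_{n,j}|^2$ are unit-mean exponential random variables for $\mathcal{CN}(0,1)$ fading.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and constants (placeholders, not the paper's simulation values).
N, M, J = 50, 4, 6                          # secondary pairs, spectrum bands, primary transmitters
P_s, P_p, N0 = 1.0, 1.0, 0.1                # power constraints and noise level
eta = rng.uniform(0.5, 1.5, size=N)         # secondary-link path-loss/shadowing terms eta_n
gamma = rng.uniform(0.5, 1.5, size=(N, J))  # primary-to-secondary cross terms gamma_{n,j}
B = [rng.choice(J, size=rng.integers(1, 4), replace=False) for _ in range(M)]  # active primaries per band

def draw_sinr():
    """One realization of the M x N matrix of SINR_{m,n} from the model above."""
    # For CN(0,1) fading, |g|^2 and |h|^2 are unit-mean exponential random variables.
    g2 = rng.exponential(1.0, size=(M, N))
    h2 = rng.exponential(1.0, size=(M, N, J))
    sinr = np.empty((M, N))
    for m in range(M):
        interference = P_p * (gamma[:, B[m]] * h2[m][:, B[m]]).sum(axis=1)
        sinr[m] = P_s * eta * g2[m] / (N0 + interference)
    return sinr

print(draw_sinr().shape)  # (M, N)
```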
Throughout the paper we say that $a_N$ and $b_N$ are *asymptotically* equal, denoted by $a_N\doteq b_N$, if $\lim_{N\rightarrow\infty}\frac{a_N}{b_N}=1$, and define $\dotlt$ and $\dotgt$ accordingly. We also define the set of secondary user indices by $\mathcal{N}=\{1,\dots,N\}$. All the rates in the paper are in bits/sec/Hz and $\log$ refers to the logarithm in base 2. Problem Statement {#sec:definition} ----------------- Our goal is to assess the multiuser diversity gain of cognitive networks. For this purpose we identify $M$ secondary transmitter-receiver pairs out of the $N$ available ones and assign one spectrum band $B_m$ to each of them, such that the cognitive network throughput is maximized. We assume that all spectrum bands $B_m$ are of the same bandwidth. Therefore, the maximum throughput is given by $$\label{eq:Rmax} R_{\max}=\bbe\left[\max_{A\subset\mathcal{N},\;|A|=M} \sum_{m=1}^{M}\log\left(1+\sinr_{m,{A_m}}\right)\right],$$ where $A_m$ denotes the $m^{th}$ element of the set $A$, for $m=1,\dots,M$, and the maximization is taken over all *ordered* subsets of $\mathcal{N}$. In order to find the optimal multiuser diversity gain in the cognitive network we first consider a centralized setup. We assume that there exists a central decision-making entity in the cognitive network, which is fully and instantaneously aware of the channel conditions of all secondary users. The central node solves the problem cast in (\[eq:Rmax\]) by an exhaustive search for pairing up $M$ secondary users with the $M$ available channels. For such secondary user-channel pairs we analyze how the sum-rate of the cognitive network scales as the number of cognitive users ($N$) increases. Such a centralized setup imposes extensive information exchange[^2], which can be prohibitive for large network sizes. Next, motivated by alleviating the amount of information exchange imposed by the centralized setup and noting that our cognitive network is *ad-hoc* in nature and lacks a central decision-making entity, we propose a decentralized spectrum allocation scheme. In the distributed scheme each secondary user decides about taking over a channel solely based on its own perception of the network realization. We prove that the proposed distributed scheme retains the same throughput scaling law as in the centralized setup, i.e., is asymptotically optimal. Centralized Spectrum Allocation {#sec:centralized} =============================== The central decision-making unit has access to all $\{\sinr_{m,n}\}$ and performs an exhaustive search over all possible user-channel (spectrum band) combinations in order to find the one that maximizes the sum-rate throughput given in (\[eq:Rmax\]). In order to find the throughput scaling, we establish lower and upper bounds on $R_{\max}$, denoted by $R^l_{\max}$ and $R^u_{\max}$, respectively, and show that these bounds are asymptotically equal, i.e., $R^l_{\max}\doteq R^u_{\max}$, which in turn provides the optimal throughput scaling law of the cognitive network. We define the most favorable user of the $m^{th}$ spectrum band as the user with the largest $\sinr$ on this band, i.e., $$\label{eq:favorable} n^*_m\dff\arg\max_{1\leq n\leq N}\sinr_{m,n}.$$ In general, it might so happen that one user is the most favorable user for two different spectrum bands, i.e., $n^*_m=n^*_{m'}$ while $m\neq m'$, and as a result these two spectrum bands cannot be allocated to their most favorable users simultaneously (we have assumed that each user may get access to only one spectrum band).
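To make the optimization in (\[eq:Rmax\]) concrete, the sketch below performs the exhaustive search over ordered user subsets for one channel realization and compares it with the simpler rule that hands each band to its most favorable user while ignoring possible collisions. Here `draw_sinr` is the hypothetical helper from the previous sketch, and the user set is truncated only because the number of ordered subsets grows roughly as $N^M$.

```python
from itertools import permutations
import numpy as np

def centralized_rate(sinr):
    """Exhaustive search over ordered subsets: best assignment of the bands to distinct users."""
    n_bands, n_users = sinr.shape
    best = 0.0
    for users in permutations(range(n_users), n_bands):   # ordered subsets A with |A| = M
        rate = sum(np.log2(1.0 + sinr[m, u]) for m, u in enumerate(users))
        best = max(best, rate)
    return best

def most_favorable_rate(sinr):
    """Rate if every band were served by its own most favorable user (collisions ignored)."""
    return float(np.log2(1.0 + sinr.max(axis=1)).sum())

sinr = draw_sinr()[:, :12]   # brute force is only feasible for a handful of users
print(centralized_rate(sinr), most_favorable_rate(sinr))
```

The second quantity corresponds to the upper bound $R^u_{\max}$ derived next; it can exceed the exhaustive-search value only when two bands share the same most favorable user.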
Let us define $\mathcal{D}$ as the event that different spectrum bands have distinct most favorable users i.e., no single user is the most favorable user for two distinct spectrum bands. Note that pairing the secondary users and the spectrum bands conditioned on the event $\mathcal{D}$ is equivalent to allocating each channel to its most favorable user, i.e., $$\begin{aligned} \nonumber &\bbe\left[\max_{A\subset\mathcal{N},\;|A|=M}\sum_{m=1}^{M}\log\left(1+\sinr_{m,{A_m}}\right)\bigg |\;\mathcal{D}\right]\\ \label{eq:D1}&\hspace{1.2 in}=\bbe\left[\sum_{m=1}^{M}\log \left(1+\sinr_{m,n^*_m}\right)\right].\end{aligned}$$ On the other hand, under event $\mathcal{\bar D}$, at least one spectrum band will not be allocated its most favorable user and therefore we have $$\begin{aligned} \nonumber &\bbe\left[\max_{A\subset\mathcal{N},\;|A|=M}\sum_{m=1}^{M}\log\left(1+\sinr_{m,{A_m}}\right)\bigg |\;\mathcal{\bar D}\right]\\ \label{eq:D2}&\hspace{1.2 in}\leq \bbe\left[\sum_{m=1}^{M}\log \left(1+\sinr_{m,n^*_m}\right)\right].\end{aligned}$$ Equations (\[eq:D1\]) and (\[eq:D2\]) give rise to $$\begin{aligned} \label{eq:RUmax} \nonumber R_{\max}&=\bbe\left[\max_{A\subset\mathcal{N},\;|A|=M}\sum_{m=1}^{M}\log \left(1+\sinr_{m,{A_m}}\right) \;\bigg |\;\mathcal{D}\right]P(\mathcal{D}) \\ \nonumber &+\bbe\left[\max_{A\subset\mathcal{N},\;|A|=M}\sum_{m=1}^{M}\log \left(1+\sinr_{m,{A_m}}\right)\;\bigg |\;\mathcal{\bar D}\right]P(\mathcal{\bar D})\\ &\leq \bbe\left[\sum_{m=1}^{M}\log \left(1+\sinr_{m,n^*_m}\right)\right]\dff R^u_{\max}.\end{aligned}$$ Also it can be readily shown that $$\begin{aligned} \nonumber R_{\max}&\geq\bbe\left[\max_{A\subset\mathcal{N},\;|A|=M}\sum_{m=1}^{M}\log \left(1+\sinr_{m,{A_m}}\right)\;\bigg|\;\mathcal{D}\right]P(\mathcal{D})\\ \label{eq:RLmax}&=R^u_{\max}P(\mathcal{D})\dff R^l_{\max}.\end{aligned}$$ \[lemma:D\] $R^l_{\max}$ and $R^u_{\max}$ are asymptotically equal, i.e., $R^l_{\max}\doteq R^u_{\max}$. See Appendix \[app:lemma:D\]. Now, we find how $R_{\max}^u$ scales as $N$ increases. Note that the $\sinr$s are statistically independent for all users and spectrum bands. The reason is that $\sinr_{m,n}$ given in (\[eq:sinr\]) inherits its randomness from the randomness of $g^m_n$ (fading coefficient of the channel between the $n^{th}$ secondary pair on the $m^{th}$ band) and $\{h^m_{n,j}\}_j$ (the fading coefficient of the channels from the $j^{th}$ primary user to the $n^{th}$ secondary receiver on the $m^{th}$ band). Since for any two different pairs of $(m,n)\neq(m',n')$, the fading coefficients $g^m_n$ and $g^{m'}_{n'}$ refer to fading in different locations or in different spectrum bands, therefore they are statistically independent. Similarly it can be argued that $h^m_{n,j}$ and $h^{m'}_{n',j}$ are also statistically independent for $(m,n)\neq(m',n')$. As a result, all the random ingredients of $\sinr_{m,n}$ and $\sinr_{m',n'}$ for $(m,n)\neq(m',n')$ are independent which in turn justifies the independence of the $\sinr$s. Nevertheless, $\sinr$s are not identically distributed since different users experience different path-losses and shadowing effects. Hence, for more mathematical tractability we build two other sets whose elements provide lower and upper bounds on $\sinr_{m,n}$ and are i.i.d. 
For this purpose we define $$\begin{aligned} &\gamma_{\max}\dff\max_{i,j}\left\{\frac{\gamma_{i,j}}{\eta_i}\right\}, \;\;\; &&\eta_{\max}=\max_i\eta_i,\\ \mbox{and}\quad\quad & \gamma_{\min}\dff\min_{i,j}\left\{\frac{\gamma_{i,j}}{\eta_i}\right\},\;\;\;&&\eta_{\min}=\min_i\eta_i.\end{aligned}$$ For $m=1,\dots, M$ we also define the sets $\mathcal{S}_l(m)=\{S_l(m,n)\}_{n=1}^N$ and $\mathcal{S}_u(m)=\{S_u(m,n)\}_{n=1}^N$ such that for $n=1, \dots, N$ $$\begin{aligned} \label{eq:Sl}S_l(m,n)&\dff\frac{|g^m_n|^2}{\frac{1}{\rho\eta_{\min}}+ \frac{P_p}{P_s}\gamma_{\max}\sum_{j\in\mathcal{B}_m}|h^m_{n,j}|^2},\\ \label{eq:Su}\mbox{and}\;\;\;S_u(m,n)&\dff\frac{|g^m_n|^2}{\frac{1}{\rho\eta_{\max}}+ \frac{P_p}{P_s}\gamma_{\min}\sum_{j\in\mathcal{B}_m}|h^m_{n,j}|^2}.\end{aligned}$$ It can be readily verified that $S_l(m,n)\leq\sinr_{m,n}\leq S_u(m,n)$. We use the notations $\mathcal{S}^{(i)}_l(m)$ and $\mathcal{S}^{(i)}_u(m)$ to refer to the $i^{th}$ largest elements of the sets $\mathcal{S}_l(m)$ and $\mathcal{S}_u(m)$, respectively, and use $\sinr^{(i)}_m$ to denote the $i^{th}$ largest element of $\{\sinr_{m,n}\}_{n=1}^N$. In the following lemma we show how these ordered elements are related. \[lemma:order\] For any spectrum band $B_m$ and any $i=1,\dots, N$ we have $\mathcal{S}^{(i)}_l(m)\leq \sinr^{(i)}_m\leq \mathcal{S}^{(i)}_u(m).$ See Appendix \[app:lemma:order\]. Now, by recalling the definition of $R_{\max}^u$ given in (\[eq:RUmax\]) and noting that $\sinr_{m,n^*_m}=\sinr^{(1)}_m$ and by invoking the result of Lemma \[lemma:order\] we get $$\begin{aligned} \label{eq:RUmax_bounds1} &R_{\max}^u\geq \bbe\left[\sum_{m=1}^{M}\log \left(1+\mathcal{S}^{(1)}_l(m)\right)\right], \\ \label{eq:RUmax_bounds2}\mbox{and}\;\;\;&R_{\max}^u\leq \bbe\left[\sum_{m=1}^{M}\log \left(1+\mathcal{S}^{(1)}_u(m)\right)\right].\end{aligned}$$ In order to further simplify the bounds on $R_{\max}^u$ given in (\[eq:RUmax\_bounds1\])-(\[eq:RUmax\_bounds2\]), in the following lemma we provide the cumulative density functions (CDF) of $S_l(m,n)$ and $S_u(m,n)$ (\[eq:Sl\]) and (\[eq:Su\]). \[lemma:CDF\] The elements of $\mathcal{S}_l(m)$ and $\mathcal{S}_u(m)$ are i.i.d. and their CDFs are $$\begin{aligned} \label{eq:CDFl}S_l(m,n) &\sim F_l(x;m)\dff 1-\frac{e^{-x/\rho\eta_{\min}}}{\left(\frac{P_p}{P_s}\gamma_{\max}x+1\right)^{K_m}},\\ \label{eq:CDFu}\mbox{and}\;\; S_u(m,n) &\sim F_u(x;m)\dff 1-\frac{e^{-x/\rho\eta_{\max}}}{\left(\frac{P_p}{P_s}\gamma_{\min}x+1\right)^{K_m}},\end{aligned}$$ where $K_m\dff |\mathcal{B}_m|$. See Appendix \[app:lemma:CDF\]. We denote the $i^{th}$ order statistics of the statistical samples $\mathcal{S}_l(m)$ and $\mathcal{S}_u(m)$ with parent distributions given in (\[eq:CDFl\])-(\[eq:CDFu\]) by $\mathcal{S}^{(i)}_l(m)$ and $\mathcal{S}^{(i)}_u(m)$, respectively. 
By denoting the CDF of $\mathcal{S}^{(i)}_l(m)$ by $F^{(i)}_l(x;m)$ and that of $\mathcal{S}^{(i)}_u(m)$ by $F^{(i)}_u(x;m)$, for $i=1,\dots, N$ we have  [@Arnold:Book] $$\begin{aligned} \label{eq:CDF:Lj} F_l^{(i)}(x;m) &= \sum_{j=0}^{i-1}{N\choose j}\Big(F_l(x;m)\Big)^{N-j}\Big(1-F_l(x;m)\Big)^j,\\ \label{eq:CDF:Uj}F_u^{(i)}(x;m) &= \sum_{j=0}^{i-1}{N\choose j}\Big(F_u(x;m)\Big)^{N-j}\Big(1-F_u(x;m)\Big)^j.\end{aligned}$$ By invoking the above definitions, (\[eq:RUmax\_bounds1\]) and (\[eq:RUmax\_bounds2\]) can be re-written as $$\begin{aligned} \label{eq:RUmax_bounds2_1}&R_{\max}^u\geq\sum_{m=1}^{M}\int_0^\infty\log(1+x)\;dF_l^{(1)}(x;m),\\ \label{eq:RUmax_bounds2_2}\mbox{and}\;\;&R_{\max}^u\leq\sum_{m=1}^{M}\int_0^\infty\log(1+x)\;dF_u^{(1)}(x;m).\end{aligned}$$ We also define $$\label{eq:G} G(x)\dff1-e^{-x},$$ and let $G^{(i)}(x)$ denote the CDF of the $i^{th}$ order statistic of a statistical sample with $N$ members and with parent distribution $G(x)$. By using this definition we offer the following lemma which is a key step in finding how $R^u_{\max}$ scales with increasing $N$. \[lemma:G\] For the distributions $F_l^{(1)}(x;m)$, $F_u^{(1)}(x;m)$ and $G^{(1)}(x)$ we have $$\begin{aligned} \nonumber\int_0^\infty\log(1+x)\;&dF_u^{(1)}(x;m) \\ \label{eq:lemma:G1}&\leq\int_0^\infty\log(1+\rho\eta_{\max} x)\;dG^{(1)}(x),\\ \nonumber\mbox{and}\;\;\int_0^\infty\log(1+x)\;&dF_l^{(1)}(x;m) \\ \nonumber&\geq\int_0^\infty\log(1+\rho\eta_{\min}x)\;dG^{(1)}(x)\\ \label{eq:lemma:G2}&-\log\bigg[1+\frac{K_mP_p}{P_s}\gamma_{\max}\rho\eta_{\min}\bigg]. \end{aligned}$$ By using the definitions of $F_l^{(1)}(x;m)$ and $F_u^{(1)}(x;m)$ given in (\[eq:CDF:Lj\])-(\[eq:CDF:Uj\]) and using the result of Lemma \[lemma:CDF\] we get $$\begin{aligned} \nonumber F_l^{(1)}&(x;m) =\Big(F_l(x;m)\Big)^N \\ \nonumber &=\Bigg[1-\exp\bigg[-\frac{x}{\rho\eta_{\min}}-K_m\underset{\leq\frac{P_p}{P_s}\gamma_{\max}(x+1)} {\underbrace{\ln\left(\frac{P_p}{P_s}\gamma_{\max}x+1\right)}}\bigg]\Bigg]^N\\ \nonumber &\leq \Bigg[1-\exp\bigg[-\frac{x}{\rho\eta_{\min}}-\frac{K_mP_p}{P_s}\gamma_{\max}(x+1) \bigg]\Bigg]^N\\ \nonumber&= \Bigg[G\bigg(\frac{x}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\;\gamma_{\max}(x+1)\bigg)\Bigg]^N\\ \label{eq:FG1}&=G^{(1)}\bigg(\frac{x}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\;\gamma_{\max}(x+1)\bigg).\end{aligned}$$ Now, by using (\[eq:FG1\]) and by looking at the solutions $x$ and $x'$ of the equations $$\begin{aligned} u & = F_l^{(1)}(x;m),\\ \mbox{and}\quad u & = G^{(1)}\bigg(\frac{x}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\;\gamma_{\max}(x+1)\bigg).\end{aligned}$$ we find that $x\geq x'$, or equivalently $$\Big(F_l^{(1)}\Big)^{-1}(u;m)\geq\frac{\Big(G^{(1)}\Big)^{-1}(u)- \frac{K_mP_p}{P_s}\gamma_{\max}}{\frac{1}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\gamma_{\max}},$$ which after some simple manipulations leads to $$\begin{aligned} \nonumber\log\bigg[\Big(F_l^{(1)}\Big)^{-1}(u;m)+1\bigg]&\geq \log\bigg[\rho\eta_{\min}\Big(G^{(1)}\Big)^{-1}(u)+1\bigg]\\ &-\log\bigg[1+\frac{K_mP_p}{P_s}\gamma_{\max}\rho\eta_{\min}\bigg].\end{aligned}$$ Therefore, for the lower bound on $R^u_{\max}$ given in (\[eq:RUmax\_bounds2\_1\]) we have $$\begin{aligned} \int_0^\infty\log(1+x)\;&dF_l^{(1)}(x;m)\\ &= \int_0^1\log\bigg[1+\Big(F_l^{(1)}\Big)^{-1}(u;m)\bigg]du \\ &\geq \int_0^1\log\bigg[\rho\eta_{\min}\Big(G^{(1)}\Big)^{-1}(u)+1\bigg]du\\ &-\log\bigg[1+\frac{K_mP_p}{P_s}\gamma_{\max}\rho\eta_{\min}\bigg],\end{aligned}$$ which is the desired inequality in (\[eq:lemma:G2\]). 
Now, note that $F_u(x;m)\geq G(\frac{x}{\rho\eta_{\max}})$ or equivalently, $$\label{eq:lemma:G4} \Big(F_u^{(1)}\Big)^{-1}(u;m)\leq \rho\eta_{\max}\Big(G^{(1)}\Big)^{-1}(u).$$ Therefore, $$\begin{aligned} \int_0^\infty\log(1+x)\;&dF_u^{(1)}(x;m) \\ &= \int_0^1\log\bigg[1+\Big(F_u^{(1)}\Big)^{-1}(u;m)\bigg]du \\ &\leq \int_0^1\log\bigg[1+\rho\eta_{\max}\Big(G^{(1)}\Big)^{-1}(u)\bigg]du\\ &=\int_0^\infty\log(1+\rho\eta_{\max}x)\;dG^{(1)}(x),\end{aligned}$$ which establishes the inequality in (\[eq:lemma:G1\]) and completes the proof. Next, by using the result of the following lemma, we establish the scaling law of $R^u_{\max}$. \[lemma:exp\_scaling\] For a family of exponentially distributed random variables of size $N$ and parent distribution $G(x)$ (CDF) and for any positive real number $a\in\mathbb{R}_+$ we have $$\int_0^\infty\log(1+a x)dG^{(1)}(x)\doteq\log\log N+\log a.$$ See Appendix \[app:lemma:exp\_scaling\]. Now, by recalling the bounds provided in (\[eq:RUmax\_bounds2\_1\]) and (\[eq:RUmax\_bounds2\_2\]) and taking into account the results of Lemmas \[lemma:D\], \[lemma:G\] and \[lemma:exp\_scaling\] we find the optimal throughput scaling law of cognitive networks. \[th:centralized\] In a centralized cognitive network with $N$ secondary transmitter-receiver pairs and $M$ available spectrum bands, by optimal user-channel assignments, the sum-rate throughput of the network scales as $$R_{\max}\doteq M\log\log N.$$ By invoking the results of Lemmas \[lemma:G\] and \[lemma:exp\_scaling\] on the lower and upper bounds on $R^u_{\max}$ given in (\[eq:RUmax\_bounds2\_1\]) and (\[eq:RUmax\_bounds2\_2\]) we find $$\begin{aligned} &R^u_{\max}\;\dotgt M\log\log N-M\log\bigg[\frac{1}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\gamma_{\max}\bigg],\\ \mbox{and}\quad&R^u_{\max} \dotlt M\log\log N+M\log(\rho\eta_{\max}),\end{aligned}$$ or equivalently, $$\begin{aligned} \lim_{N\rightarrow\infty}\frac{R^u_{\max}}{M\log\log N}&\geq 1-\lim_{N\rightarrow\infty}\frac{\log\bigg[1+\frac{K_mP_p}{P_s} \gamma_{\max}\rho\eta_{\min}\bigg]}{\log\log N}\\ &=1,\end{aligned}$$ and $$\begin{aligned} \lim_{N\rightarrow\infty}\frac{R^u_{\max}}{M\log\log N}\leq \;1+\lim_{N\rightarrow\infty}\frac{\log(\rho\eta_{\max})}{\log\log N}=1,\end{aligned}$$ which confirms that $R^u_{\max}\doteq M\log\log N$. This result, along with what stated in Lemma \[lemma:D\] concludes that $R^l_{\max}\doteq R^u_{\max}\doteq M\log\log N$, which establishes the proof of the theorem. So far, we have assumed that there exists a decision-making center that has full knowledge of all instantaneous channel realizations, i.e., $\{h^m_{i,j}\}$, $\{\gamma_{i,j}\}$, and $\{\eta_i\}$. Also there is no complexity constraints in order to enable exhausting all the possible user-channel assignments and choosing the one which maximizes the sum-throughput of the network. The assumptions made in this section, while not being practical, are useful in shedding light on the sum-throughput limit of such cognitive networks. The results provided in this section can be exploited as the benchmark to quantify the efficiency of our distributed algorithm proposed in the following section. Distributed Spectrum Allocation {#sec:distributed} =============================== In this section we offer our distributed algorithm, where each user independently of others, makes decision regarding taking over transmission on any specific spectrum band. 
We analyze the achievable sum-throughput of the cognitive network when this distributed algorithm is utilized and show that it is asymptotically optimal. Distributed Algorithm {#sec:algorithm} --------------------- In order to refrain from exhaustively searching for the best user-channel matches, we consider assigning the available spectrum bands only to secondary users whose links meet a pre-determined minimum strength. The distributed algorithm involves two major steps: normalizing the $\sinr$s and comparing them with given thresholds. The underlying motivation for normalizing the $\sinr$s is to balance fairness among the secondary users, in the sense that they get equal opportunities for accessing the spectrum. It may happen that some transmitter-receiver pairs have very strong links while others have very weak ones. This becomes even more likely when we have a large number of secondary pairs. In such scenarios, if spectrum allocation is carried out merely based on the links’ strengths, the strong users will dominate the network and the weak users will hardly ever have an opportunity to access it. Hence, to maintain fairness, instead of comparing the links’ strengths (or $\sinr$s), we compare normalized $\sinr$s. However, it should be noted that the normalization factors have to be designed carefully such that we do not sacrifice achieving the optimal scaling in favor of achieving fairness. In other words, the objective is to achieve the optimal scaling and fairness simultaneously. In the next step, the normalized $\sinr$s are compared against a pre-determined threshold level and only the users with sufficiently strong links that satisfy the threshold condition will take part in the competition for accessing the spectrum. Specifically, to each user $n=1,\dots, N$ and channel $m=1,\dots, M$, we assign a minimum acceptable level of $\sinr$, denoted by $\lambda(m,n)$, which is defined as follows. Let $T(x;m,n)$ denote the CDF of $\sinr_{m,n}$ given in (\[eq:sinr\]). $\lambda(m,n)$ is set such that we have $$\label{eq:lambda} T\Big(\lambda(m,n);m,n\Big)=1-\frac{1}{N}.$$ Note that for any given $m$ and $n$, $T(x;m,n)$ is a non-decreasing function mapping $[0,+\infty)$ to $[0,1]$, which ensures that there always exists a unique solution for $\lambda(m,n)$. Also note that as $\sinr_{m,n}$ depends only on the incoming channels to the $n^{th}$ secondary receiver on the $m^{th}$ spectrum band, $\lambda(m,n)$ can be computed locally at the $n^{th}$ secondary receiver and does not impose any information exchange between the secondary users. It is assumed that the secondary users are aware of the number of secondary pairs $N$ in the cognitive network. Now, each user $n$ computes $\sinr_{1,n},\dots,\sinr_{M,n}$, normalizes them by dividing them by $\lambda(1,n),\dots,\lambda(M,n)$, respectively, and identifies the channel with the largest normalized $\sinr$, denoting its index by $m^\dag_n$, i.e., $$\label{eq:m_hat} m^\dag_n\dff\arg\max_{m\in\{1,\dots,M\}}\left\{\frac{\sinr_{m,n}}{\lambda(m,n)}\right\}.$$ In the next step, the $n^{th}$ user compares $\sinr_{m^\dag_n,n}$ against $\lambda(m^\dag_n,n)$ and, if $\sinr_{m^\dag_n,n}\geq \lambda(m^\dag_n,n)$, deems itself a candidate for accessing the channel indexed by $m^\dag_n$.
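A minimal sketch of this per-user decision rule is given below, reusing the hypothetical `draw_sinr`, `N`, and `M` from the earlier sketch. Since no closed-form CDF is assumed, each threshold $\lambda(m,n)$ is approximated by the empirical $(1-1/N)$-quantile of Monte Carlo draws of $\sinr_{m,n}$, after which user $n$ selects the band with the largest normalized $\sinr$ and tests it against the corresponding threshold.

```python
import numpy as np

def estimate_thresholds(num_draws=5000):
    """Approximate lambda(m,n) by an empirical (1 - 1/N)-quantile of SINR_{m,n}."""
    samples = np.stack([draw_sinr() for _ in range(num_draws)])   # shape (num_draws, M, N)
    return np.quantile(samples, 1.0 - 1.0 / N, axis=0)            # shape (M, N)

def candidacy(sinr, lam):
    """For each user n: its best band index m_n^dagger and whether n is a candidate for it."""
    normalized = sinr / lam                        # normalized SINRs, shape (M, N)
    m_dag = normalized.argmax(axis=0)              # band with the largest normalized SINR per user
    is_candidate = normalized[m_dag, np.arange(N)] >= 1.0
    return m_dag, is_candidate

lam = estimate_thresholds()
m_dag, ok = candidacy(draw_sinr(), lam)
print(np.count_nonzero(ok), "of", N, "users contend for some band")
```

With $\lambda(m,n)$ chosen as the $(1-1/N)$-quantile, only on the order of $M$ users in total pass the test, in line with the analysis of the candidate sets below, which keeps the subsequent contention lightweight.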
Based on the definition in (\[eq:m\_hat\]) we define the mutually disjoint sets $\mathcal{H}_m$ for $m=1,\dots,M$, such that $\mathcal{H}_m$ contains the indices of the users deemed as candidates for taking over the $m^{th}$ channel, i.e., $$\mathcal{H}_m\dff\left\{n\;\Big |\; \arg\max_{m'}\frac{\sinr_{m',n}}{\lambda(m',n)}=m\;\&\;\frac{\sinr_{m,n}}{\lambda(m,n)}\geq 1 \right\}.$$ Finally, a user with its index in $\mathcal{H}_m$ is *randomly* selected to utilize the $m^{th}$ channel. This can be facilitated in a distributed way via any contention-based random medium access method, e.g., Aloha, carrier sense multiple access, etc. As soon as one user takes a channel, the other users will no longer try to access that channel. In the following section, we analyze the sum-rate throughput scaling factor of the proposed algorithm. Sum-Rate Throughput Scaling {#sec:throughput} --------------------------- We denote the sum-rate throughput by $R_{\rm sum}$ and denote the throughput of the $m^{th}$ channel by $R_m$. Note that the construction of $\mathcal{H}_m$ guarantees that no single user will be regarded as a candidate for more than one channel, and also we have $R_{\rm sum}=\sum_{m=1}^{M}R_m$. By defining $R_{m\med \mathcal{H}_m}$ as the throughput achieved for the $m^{th}$ channel conditioned on having the users with indices in $\mathcal{H}_m$ be candidates for taking over $B_m$ we have $$\label{eq:Rm1} R_m=\sum_{\mathcal{H}_m\subseteq \mathcal{N},\ \mathcal{H}_m\neq \emptyset}R_{m\med \mathcal{H}_m}P(\mathcal{H}_m).$$ On the other hand, by noting that one member of $\mathcal{H}_m$ will be *randomly* picked for accessing $B_m$ we get $$\label{eq:R_Hm} R_{m\med \mathcal{H}_m}=\frac{1}{|\mathcal{H}_m|}\sum_{i\in\mathcal{H}_m} \bbe\bigg[\log\Big(1+\sinr_{m,i}\Big)\;\Big |\;\mathcal{H}_m\bigg].$$ From (\[eq:Rm1\]) and (\[eq:R\_Hm\]) for any $\mathcal{H}_m\neq\emptyset$ we get $$\label{eq:Rm2} R_m=\sum_{\mathcal{H}_m\subseteq \mathcal{N}} \frac{P(\mathcal{H}_m)}{|\mathcal{H}_m|}\sum_{i\in\mathcal{H}_m} \bbe\bigg[\log\Big(1+\sinr_{m,i}\Big)\;\Big |\;\mathcal{H}_m\bigg].$$ As shown in Appendix \[app:lower\] we have $$\begin{aligned} \nonumber \sum_{i\in\mathcal{H}_m}\bbe\bigg[\log\Big(1+&\sinr_{m,i}\Big)\;\bigg|\; \mathcal{H}_m\bigg]\\ \label{eq:Rm_lower} \geq & \sum_{i=1}^{|\mathcal{H}_m|}\bbe\bigg[\log\Big(1+\sinr_m^{(i)}\Big)\;\bigg].\end{aligned}$$ Therefore, (\[eq:Rm2\]) and (\[eq:Rm\_lower\]) together give rise to the following lower bound on $R_m$ $$\begin{aligned} \label{eq:Rm3} \nonumber R_m &\geq\sum_{\mathcal{H}_m\subseteq \mathcal{N},\ \mathcal{H}_m\neq \emptyset}\frac{P(\mathcal{H}_m)}{|\mathcal{H}_m|} \sum_{i=1}^{|\mathcal{H}_m|}\bbe\bigg[\log\Big(1+\sinr_m^{(i)}\Big)\;\bigg]\\ \nonumber & = \sum_{n=1}^N\sum_{|{\cal H}_m|=n}\frac{P(\mathcal{H}_m)}{|\mathcal{H}_m|} \sum_{i=1}^{|\mathcal{H}_m|}\bbe\bigg[\log\Big(1+\sinr_m^{(i)}\Big)\;\bigg]\\ \nonumber & = \sum_{n=1}^N \sum_{i=1}^{n}\bbe\bigg[\log\Big(1+\sinr_m^{(i)}\Big)\;\bigg]\sum_{|{\cal H}_m|=n}\frac{P(\mathcal{H}_m)}{n}\\ \nonumber & = \sum_{n=1}^N \sum_{i=1}^{n}\bbe\bigg[\log\Big(1+\sinr_m^{(i)}\Big)\;\bigg]\frac{P(|\mathcal{H}_m|=n)}{n}\\ & = \sum_{i=1}^{N}\bbe\bigg[\log\Big(1+\sinr_m^{(i)}\Big)\;\bigg]\sum_{n=i}^N \frac{P(|\mathcal{H}_m|=n)}{n}.\end{aligned}$$ By further defining $$\label{eq:q1} Q^m_i\dff \sum_{n=i}^N\frac{1}{n}P\Big(|\mathcal{H}_m|=n\Big),$$ and $$\label{eq:q2} R_m^l\dff\sum_{i=1}^N Q^m_i \bbe\bigg[\log\Big(1+\sinr_m^{(i)}\Big)\;\bigg],$$ we can re-write (\[eq:Rm3\]) as $R_m\geq R^l_m$.
If we also define $Q_0=P\Big(|\mathcal{H}_m|=0\Big)$ we get $$\begin{aligned} \sum_{i=0}^NQ^m_i&=P\Big(|\mathcal{H}_m|=0\Big)+\sum_{i=1}^N \sum_{n=i}^N\frac{1}{n}P\Big(|\mathcal{H}_m|=n\Big)\\ &= \sum_{n=0}^NP\Big(|\mathcal{H}_m|=n\Big)=1,\end{aligned}$$ which suggests that $\{Q^m_i\}_{i=0}^N$ is a valid probability mass function (pmf). In the sequel, we concentrate on finding the scaling behavior of $R^l_m$. By using the definitions of $\mathcal{S}_l(m)$ and $\mathcal{S}_u(m)$ and exploiting Lemma \[lemma:order\], from (\[eq:q2\]) we have $$\begin{aligned} \label{eq:Rl_bounds1_1}R^l_m\geq \sum_{i=1}^N Q^m_i \bbe\bigg[\log\Big(1+\mathcal{S}_l^{(i)}(m)\Big)\;\bigg],\\ \label{eq:Rl_bounds1_2}\mbox{and}\;\;\;R^l_m\leq \sum_{i=1}^N Q^m_i \bbe\bigg[\log\Big(1+\mathcal{S}_u^{(i)}(m)\Big)\;\bigg].\end{aligned}$$ By recalling that the CDFs of $\mathcal{S}^{(i)}_l(m)$ and $\mathcal{S}^{(i)}_u(m)$ are $F^{(i)}_l(x;m)$ and $F^{(i)}_u(x;m)$ provided in (\[eq:CDF:Lj\]) and (\[eq:CDF:Uj\]), respectively, (\[eq:Rl\_bounds1\_1\]) and (\[eq:Rl\_bounds1\_2\]) can be stated as $$\begin{aligned} \label{eq:Rl_bounds2_1} R^l_m\geq \sum_{i=1}^NQ^m_i\int_0^1\log(1+x)\;dF^{(i)}_l(x;m)\\ \label{eq:Rl_bounds2_2} \mbox{and}\;\;\;R^l_m\leq \sum_{i=1}^NQ^m_i\int_0^1\log(1+x)\;dF^{(i)}_u(x;m),\end{aligned}$$ Next, for the given set of $\{Q^m_i\}$ we define $$\begin{aligned} \label{eq:CDFN1}F^N_l(x;m)&\dff\sum_{i=1}^NQ^m_iF^{(i)}_l(x;m),\\ \label{eq:CDFN2}F^N_u(x;m)&\dff\sum_{i=1}^NQ^m_iF^{(i)}_u(x;m),\\ \label{eq:CDFN3}\mbox{and}\;\;\;G^N(x)&\dff\sum_{i=1}^NQ^m_iG^{(i)}(x).\end{aligned}$$ Since $\{Q^m_i\}$ is a valid pmf, $F^N_l(x;m)$, $F^N_u(x;m)$, and $G^N(x)$ can be cast as valid CDFs. Therefore, (\[eq:Rl\_bounds2\_1\]) and (\[eq:Rl\_bounds2\_2\]) give rise to $$\label{eq:Rl_bounds2} \int_0^1\log(1+x)\;dF^N_l(x;m)\leq R^l_m \leq\int_0^1\log(1+x)\;dF^N_u(x;m).$$ The two subsequent lemmas are key in finding how $R^l_m$ scales with increasing $N$. \[lemma:f\] For a real variable $x\in[0,1]$ and integer variables $N$ and $i$, $0\leq i\leq N-1$, the function $$f(x,i)\dff\sum_{j=0}^i{N\choose j}x^{N-j}(1-x)^j,$$ is increasing in $x$. See Appendix \[app:lemma:f\]. 
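As a quick numerical sanity check on Lemma \[lemma:f\] (illustrative only, not part of the proof), the snippet below evaluates the partial binomial sum $f(x,i)$ on a grid and verifies that it is non-decreasing in $x$ for several values of $i$.

```python
import numpy as np
from math import comb

def f(x, i, N=50):
    """Partial binomial sum f(x, i) of the lemma above, for an illustrative N."""
    return sum(comb(N, j) * x ** (N - j) * (1.0 - x) ** j for j in range(i + 1))

xs = np.linspace(0.0, 1.0, 201)
for i in (0, 5, 25, 49):
    vals = np.array([f(x, i) for x in xs])
    assert np.all(np.diff(vals) >= -1e-12), f"monotonicity violated for i = {i}"
print("f(x, i) is non-decreasing in x for all tested i")
```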
\[lemma:G2\] For the distributions $F_l^N(x)$, $F_u^N(x)$ and $G^N(x)$ we have $$\begin{aligned} \label{eq:lemma:G2_1}\int_0^\infty\log(1+x)\;dF_u^N(x;m) &\leq \int_0^\infty\log(1+\rho\eta_{\max} x)\;dG^N(x),\end{aligned}$$ and $$\begin{aligned} \nonumber \int_0^\infty\log(1+x)\;dF^N_l(x;m) &\geq \int_0^\infty\log(1+\rho\eta_{\min}x)\;dG^N(x)\\ \label{eq:lemma:G2_2}&- \log\bigg[1+\frac{K_mP_p}{P_s}\gamma_{\max}\rho\eta_{\min}\bigg].\end{aligned}$$ By using the definition of $f(x,j)$ provided in Lemma \[lemma:f\] and recalling (\[eq:CDF:Lj\])-(\[eq:CDF:Uj\]) we have $$\begin{aligned} F_l^{(i)}(x;m)&=f\Big(F_l(x;m),i-1\Big),\\ F_u^{(i)}(x;m)&=f\Big(F_u(x;m),i-1\Big),\\ \mbox{and}\;\;\;G^{(i)}(x)&=f\Big(G(x),i-1\Big).\end{aligned}$$ By following the same lines as in (\[eq:FG1\]) we also have $$\begin{aligned} F_l(x;m) \leq G\bigg(\frac{x}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\;\gamma_{\max}(x+1)\bigg),\end{aligned}$$ and consequently by applying Lemma \[lemma:f\] and using the definition in (\[eq:CDFN1\])-(\[eq:CDFN3\]) we have $$\begin{aligned} \nonumber F^N_l&(x;m)\\ \nonumber &=\sum_{i=1}^NQ^m_iF^{(i)}_l(x;m)=\sum_{i=1}^NQ^m_if\Big(F_l(x;m),i-1\Big)\\ \nonumber &\leq \sum_{i=1}^NQ^m_if\Bigg(G\bigg(\frac{x}{\rho\eta_{\min}}+ \frac{K_mP_p}{P_s}\;\gamma_{\max}(x+1)\bigg),i-1\Bigg)\\ \nonumber&=\sum_{i=1}^NQ^m_iG^{(i)}\bigg(\frac{x}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\;\gamma_{\max}(x+1)\bigg)\\ \nonumber &=G^N\bigg(\frac{x}{\rho\eta_{\min}}+ \frac{K_mP_p}{P_s}\;\gamma_{\max}(x+1)\bigg)\end{aligned}$$ By following a similar approach as in Lemma \[lemma:G\], and (\[eq:FG1\])-(\[eq:lemma:G4\]) the inequality in (\[eq:lemma:G2\_2\]) can be established. Proof of (\[eq:lemma:G2\_1\]) follows a similar line of argument. \[lemma:exp\_scaling2\] For a family of exponentially distributed random variables of size $N$ and parent distribution $G(x)$ (CDF) and for any set of $\{Q^m_i\}_{i=1}^N$ such that $\sum_{i=0}^NQ^m_i=1$, if the condition $$\label{eq:Q1} \lim_{N\rightarrow\infty}\frac{\sum_{i=1}^NiQ^m_i}{N}=0,$$ is satisfied, then for any positive real number $a\in\mathbb{R}_+$ we have $$\int_0^\infty\log(1+a x)dG^N(x)\doteq\log\log N+\log a.$$ See Appendix \[app:lemma:exp\_scaling2\]. By using the results of the Lemmas \[lemma:G2\] and \[lemma:exp\_scaling2\] we offer the main result of the distributed algorithm in the following theorem \[th:distributed\] The sum-rate throughput of the cognitive network by exploiting the proposed distributed algorithm scales as $$R_{\rm sum}\doteq M\log\log N.$$ We start by demonstrating that the set $\{Q^m_i\}$ as defined in (\[eq:q1\]) fulfils the condition (\[eq:Q1\]) of Lemma \[lemma:exp\_scaling2\]. From (\[eq:q1\]) we have $$\begin{aligned} \label{eq:Q2} \nonumber\sum_{i=1}^NiQ^m_i&=\sum_{i=1}^Ni\sum_{n=i}^N\frac{1}{n}P\Big(|\mathcal{H}_m|=n\Big)\\ \nonumber &= \sum_{n=1}^N\frac{1}{n}P\Big(|\mathcal{H}_m|=n\Big)\sum_{i=1}^ni\\ \nonumber &= \sum_{n=1}^N\frac{n+1}{2}P\Big(|\mathcal{H}_m|=n\Big)\\ \nonumber&= \frac{1}{2}\sum_{n=1}^NnP\Big(|\mathcal{H}_m|=n\Big)\\ &+ \frac{1}{2}\underset{=1}{\underbrace{\sum_{n=0}^NP\Big(|\mathcal{H}_m|=n\Big)}}-\frac{1}{2}P\Big(|\mathcal{H}_m|=0\Big).\end{aligned}$$ Note that $|\mathcal{H}_m|$ has *compound* binomial distribution with parameters $\{p(m,n)\}_{n=1}^N$ [@Johnson], where $p(m,n)$ denotes the probability that the $m^{th}$ channel is allocated to the $n^{th}$ user. 
Therefore, according to the properties of compound binomial distributions we have [@Johnson] $$\label{eq:Q3} \sum_{n=1}^NnP\Big(|\mathcal{H}_m|=n\Big)=\bbe\left[|\mathcal{H}_m|\right]=\sum_{n=1}^Np(m,n).$$ From (\[eq:Q2\]) and (\[eq:Q3\]) we get $$\sum_{i=1}^NiQ^m_i\leq \frac{1}{2}\left(\sum_{n=1}^Np(m,n)+1\right).$$ On the other hand, the probability that any specific user $n$ can be a candidate for taking over *any* of the $M$ channels is $$\begin{aligned} \nonumber \omega(n)&\dff P\left(\max_m\left\{\frac{\sinr_{m,n}}{\lambda(m,n)}\right\}\geq1\right)\\ &= 1-\prod_{m=1}^{M}P\Big(\sinr_{m,n}\leq \lambda(m,n)\Big)\\ \nonumber &=1-\prod_{m=1}^{M}\underset{=1-1/N}{\underbrace{T\Big(\lambda(m,n);m,n\Big)}} =1-\left(1-\frac{1}{N}\right)^{M}.\end{aligned}$$ Therefore, since $\sum_{m=1}^Mp(m,n)=\omega(n)$, for all $m,n$ we have $p(m,n)\leq \omega(n)$. Hence, $$\sum_{i=1}^NiQ^m_i\leq \frac{1}{2}\left(N\omega(n)+1\right).$$ On the other hand, $$\lim_{N\rightarrow\infty}N\omega(n)= \lim_{N\rightarrow\infty}\frac{1-\left(1-\frac{1}{N}\right)^{M}}{\frac{1}{N}}=M.$$ Therefore, $\sum_{i=1}^NiQ^m_i\leq \frac{1}{2}(M+1)$ and the set $\{Q^m_i\}$ satisfies the condition in Lemma \[lemma:exp\_scaling2\]. Consequently, Lemmas \[lemma:G2\] and \[lemma:exp\_scaling2\] together establish the following $$\begin{aligned} \int_0^\infty\log(1+x)\;dF_u^N(x;m) &\dotlt \log\log N+\log(\rho\eta_{\max}), \\ \nonumber\mbox{and}\;\;\int_0^\infty\log(1+x)\;dF^N_l(x;m) &\dotgt \log\log N\\ - \log\bigg[\frac{1}{\rho\eta_{\min}}&+\frac{K_mP_p}{P_s}\gamma_{\max}\bigg].\end{aligned}$$ The two inequalities above, in conjunction with (\[eq:Rl\_bounds2\]) and noting that $R_{\rm sum}=\sum_{m=1}^{M}R_m$, provide $$\begin{aligned} \nonumber M\log\log N-&M\log\bigg[\frac{1}{\rho\eta_{\min}}+\frac{K_mP_p}{P_s}\gamma_{\max}\bigg]\\ \nonumber&\dotlt \; R_{\rm sum}\\ &\dotlt\; M\log\log N+M\log(\rho\eta_{\max}),\end{aligned}$$ which establishes the desired result. Simulation Results ------------------ ![Sum-rate throughput versus the number of secondary users for $M=1,\dots, 4$, $\rho=10$ dB, and the number of primary users $K_m=4$.[]{data-label="fig:sumrate"}](sumrate.eps "fig:"){width="3.7"}\ The simulation results in Fig. \[fig:sumrate\] demonstrate the sum-rate throughput achieved under the centralized setup given in (\[eq:Rmax\]) and the distributed setup given in Section \[sec:algorithm\]. We consider a primary network consisting of 4 users and look at the throughput scaling for the cases in which there exist $M=1,\dots,4$ available spectrum bands to be utilized by the secondary users. We set all path-loss terms $\{\eta_i\}$ and $\{\gamma_{i,j}\}$ equal to 1 and find the sum-rate throughput as the number of secondary users increases. As shown in Fig. \[fig:sumrate\], as the number of secondary users increases, the sum-rate throughputs achieved by the centralized and distributed schemes exhibit the same scaling factor. Note that what Theorems 1 and 2 convey is that the ratio of $R_{\max}$ and $R_{\rm sum}$ in the centralized and distributed setups, respectively, approaches 1 as $N\rightarrow\infty$, i.e., $$\lim_{N\rightarrow\infty}\frac{R_{\max}}{R_{\rm sum}}=1,$$ which does not necessarily mean that $R_{\max}$ and $R_{\rm sum}$ have to coincide. As a matter of fact, as observed in Fig.
1, there is a gap between $R_{\max}$ and $R_{\rm sum}$ which, according to the results of Theorems 1 and 2, must be diminishing relative to $R_{\max}$ and $R_{\rm sum}$, such that we obtain the asymptotic equality $R_{\max}\;\doteq\;R_{\rm sum}\;\doteq\; M\log\log N$. This gap accounts for the cost incurred for enabling distributed processing in the distributed spectrum access algorithm. The throughput achieved under the distributed setup is uniformly less than that of the centralized setup. This is justified by recalling that the centralized scheme finds the best secondary user for each available spectrum band, whereas the distributed network finds all the secondary users whose quality of communication on a specific channel satisfies a constraint ($\lambda(m,n)$), and among all such secondary users one is randomly selected to access the spectrum band. This does not necessarily find the best user for each available spectrum band and, as a result, leads to some degradation in the sum-rate throughput. ![$\lambda(m,n)$ versus the number of secondary users for $M=4$ available spectrum bands and $K_m=4$ primary users.[]{data-label="fig:threshold1"}](threshold.eps "fig:"){width="3.7"}\ Finding the metric $\lambda(m,n)$ as defined in (\[eq:lambda\]) is the heart of the distributed spectrum allocation algorithm. As it is not mathematically tractable to formulate the CDF of $\sinr_{m,n}$ (note that $F_l(x;m)$ and $F_u(x;m)$ are only the CDFs of the lower and upper bounds on $\sinr_{m,n}$), we are not able to find a closed-form expression for $\lambda(m,n)$. However, by solving (\[eq:lambda\]) numerically (a simple quantile-based sketch is given at the end of this discussion), we provide the following two figures, which shed light on how $\lambda(m,n)$ varies with the other network parameters, i.e., the primary and cognitive network sizes as well as the number of available spectrum bands. Figure \[fig:threshold1\] demonstrates the dependence of $\lambda(m,n)$ on the transmission $\snr$, denoted by $\rho$. It is seen that $\lambda(m,n)$ monotonically increases with $\rho$. Intuitively, as $\rho$ increases, the users are expected to have more reliable communication, and as a result the algorithm will impose more stringent conditions on the secondary users for considering themselves as candidates for accessing any specific spectrum band. More stringent conditions translate to higher values of $\lambda(m,n)$ such that the condition in (\[eq:lambda\]) is satisfied. The numerical evaluations provided in Fig. \[fig:threshold2\] show that $\lambda(m,n)$ increases as the size of the primary network decreases. Again as in Fig. \[fig:threshold1\], this is justified by noting that a smaller number of primary users leads to less interference from the primary network to the cognitive network and thereby more reliable secondary links. Thus, decreasing the primary network size again requires more stringent conditions to be satisfied for a secondary user to be deemed a candidate for taking over a spectrum band, which in turn results in an increase in $\lambda(m,n)$. It is noteworthy that the choices of the thresholds given in (\[eq:lambda\]) have been [*heuristic*]{} choices that satisfy all the desired properties (optimal scaling as well as fairness and limited information exchange as discussed in Section \[sec:properties\]).
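For concreteness, solving (\[eq:lambda\]) numerically amounts to computing the $(1-1/N)$-quantile of the CDF of $\sinr_{m,n}$. The sketch below estimates this quantile by Monte Carlo for an assumed fading model patterned on the bounds $S_l$ and $S_u$ (Rayleigh-faded secondary link, $K_m$ Rayleigh-faded primary interferers); it is only an illustration of the procedure, not the exact model used to generate the figures, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lambda_threshold(N, K_m, rho=10.0, eta=1.0, pp_over_ps=1.0, gamma=1.0,
                     samples=200_000):
    """Monte Carlo estimate of lambda(m, n), i.e. the (1 - 1/N)-quantile
    of an assumed SINR distribution (illustrative model only)."""
    Y = rng.exponential(1.0, samples)     # |g^m_n|^2, secondary link gain
    Z = rng.gamma(K_m, 1.0, samples)      # sum of K_m primary interference gains
    sinr = Y / (1.0 / (rho * eta) + pp_over_ps * gamma * Z)
    return np.quantile(sinr, 1.0 - 1.0 / N)

for N in (10, 100, 1000):
    print(N, lambda_threshold(N, K_m=4))  # threshold grows slowly with N
```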
Nevertheless, we cannot prove that these threshold choices are the only ones possible, and it [*might*]{} be possible to find other threshold settings that satisfy all these conditions and yet do not depend on the size of the primary network. Hence, while the scaling of the sum-rate throughput does not depend on the size of the primary network, the choices of the thresholds for achieving this scaling in the distributed algorithm do depend on the size of the primary network. ![$\lambda(m,n)$ versus the number of secondary users for different primary network sizes $K_m=1,\dots, 4$ and $\rho=10$ dB.[]{data-label="fig:threshold2"}](threshold2.eps "fig:"){width="3.7"}\ Information Exchange and Fairness {#sec:properties} ================================= Information Exchange {#sec:information} -------------------- In the distributed algorithm we assume that the $n^{th}$ secondary *receiver* measures $\{\sinr_{m,n}\}_{m=1}^{M}$ corresponding to different spectrum bands, selects the largest one, whose index is denoted by $m_n^\dag$, and compares it against the pre-determined quality metric $\lambda(m^\dag_n,n)$. If $\sinr_{m^\dag_n,n}\geq \lambda(m^\dag_n,n)$, this secondary receiver should notify its designated secondary transmitter to participate in a contention-based competition for taking over channel $m^\dag_n$. Such notification requires transmitting $\log M$ *information bits* from the secondary receiver to its respective secondary transmitter. Although not all of the secondary pairs will be involved in such information exchange, it is imperative to analyze the aggregate amount of such information for large networks ($N\rightarrow\infty$). In the following theorem we demonstrate that for the choice of $\lambda(m,n)$ provided in (\[eq:lambda\]), the asymptotic average amount of information exchange is a constant independent of $N$ and therefore does not harm the sum-rate throughput of the cognitive network. \[th:information\] In the cognitive network with distributed spectrum access, when $\lambda(m,n)$ satisfies $$T\Big(\lambda(m,n);m,n\Big)=1-\frac{1}{N},$$ the average aggregate amount of information exchange between secondary transmitter-receiver pairs is asymptotically equal to $M\log M$. As stated earlier, the probability that a user satisfies the $\lambda(m,n)$ constraint is $\omega(n)=1-(1-1/N)^{M}$. Therefore, the average aggregate amount of information exchange, denoted by $R_{\rm ie}$, is $$\begin{aligned} \nonumber R_{\rm ie} &= \lim_{N\rightarrow\infty}N\omega(n)\log M\\ \label{eq:IE1} &=\log M \lim_{N\rightarrow\infty}\frac{1-(1-\frac{1}{N})^{M}}{\frac{1}N}\\ \label{eq:IE2}&= \log M \lim_{N\rightarrow\infty}\frac{-\frac{1}{N^2}M(1-\frac{1}{N})^{M-1}}{-\frac{1}{N^2}}= M\log M,\end{aligned}$$ where for the transition from (\[eq:IE1\]) to (\[eq:IE2\]) we have used L’Hopital’s rule. Fairness {#sec:fairnes} -------- In general, opportunistic user selection might lead to a situation in which the network is dominated by secondary pairs whose receivers are far from the primary users, and therefore see less interference from them, or by pairs whose transmitter and receiver are closely located and enjoy a good communication channel. Despite this, we show that in our network, by appropriately choosing $\{\lambda(m,n)\}$, we can provide an equal opportunity for all users to access the available spectrum bands.
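Both the information-exchange result above and the fairness property discussed next rest on the single quantity $\omega(n)=1-(1-1/N)^{M}$, which is the same for every user. A short numerical check (the values of $M$ and $N$ below are illustrative assumptions, with information bits counted in base 2) shows how quickly $N\omega(n)\log M$ approaches its limit $M\log M$:

```python
from math import log2

M = 4
for N in (10, 100, 1_000, 10_000):
    omega = 1 - (1 - 1 / N) ** M       # candidate probability; identical for every user n
    print(N, N * omega * log2(M))      # -> M * log2(M) = 8.0 as N grows
```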
Such equal access can be made possible by enforcing more stringent conditions (higher $\lambda(m,n)$) on the users benefitting from smaller path-loss and shadowing effects. In the following theorem we show that with the choice of $\lambda(m,n)$ provided in (\[eq:lambda\]) all users have equal opportunity to access a channel. \[th:fairness\] In the cognitive network with distributed spectrum allocation, when $\lambda(m,n)$ satisfies $$T\Big(\lambda(m,n);m,n\Big)=1-\frac{1}{N},$$ all users have the same probability of being allocated a channel. As shown earlier, the probability that user $n$ satisfies the $\sinr$ constraint $\lambda(m,n)$ is $\omega(n)=1-(1-1/N)^{M}$, which is the same for all users. Discussions {#sec:discussions} =========== Impact on the Primary Network ----------------------------- In cognitive networks with [*underlaid*]{} spectrum access, the secondary and primary users may coexist. Therefore, in order to protect the primary users, the secondary users must adjust their transmission power such that they operate within the tolerable noise level of the primary users and thereby do not harm their communication. Hence, it is imperative to investigate whether such power adjustments affect the achievable throughput scaling in the centralized and distributed setups. According to Theorems \[th:centralized\] and \[th:distributed\], the sum-rate throughput of the cognitive network scales as $M\log\log N$, which does not depend on the $\snr$ or the transmission power of the secondary users. Therefore, irrespective of the transmission policy and power control mechanism (i.e., for any arbitrary $\snr$ or transmission power), the secondary users achieve the scaling law of $M\log\log N$. Hence, deploying any power management mechanism of interest along with the proposed spectrum access algorithms does not harm the optimal scaling. Distributed Algorithm {#distributed-algorithm} --------------------- Implementing the distributed spectrum access protocol involves two major steps. The first is the random selection of a user from among the candidates for taking over a specific spectrum band. For randomly selecting a user out of the set of users in ${\cal H}_m$ to access the $m^{th}$ spectrum band, one distributed approach is to equip all the users with backoff timers. When a user learns that it is a candidate for accessing the $m^{th}$ spectrum band, it runs its backoff timer from a random initial value. The first cognitive pair whose backoff timer goes off takes over the channel and can notify the rest of the network with a beacon message (a toy sketch of this contention step is given at the end of this subsection). The second step is that the distributed algorithm requires some secondary receivers to transmit $\log M$ information bits to their respective transmitters. Transmitting $\log M$ information bits requires only a very low-rate communication link. An appropriate approach for such a rate is to deploy ultra-wideband (UWB) communication between a secondary transmitter and receiver pair. This allows the secondary users to communicate the low-rate information bits well below the noise level of the primary users. It is noteworthy that cognitive radios are often assumed to be equipped with wideband filters which enable them to transmit and receive over a wide range of the frequency spectrum. This feature of cognitive radios provides an appropriate context for implementing UWB communication.
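A minimal sketch of the backoff-based contention step described above is given here. It only illustrates the random-selection logic; collisions, propagation delays, and beacon overhead are ignored, and all names and values are hypothetical.

```python
import random

def contend(candidates, max_backoff=1.0):
    """Pick the winner of a backoff-timer contention among the candidate
    secondary users (the set H_m) for one spectrum band.  Each candidate
    draws an independent random backoff; the earliest timer wins and would,
    in a real system, announce itself with a beacon message."""
    if not candidates:
        return None                    # nobody claims the band
    timers = {user: random.uniform(0.0, max_backoff) for user in candidates}
    return min(timers, key=timers.get)

print(contend({3, 7, 12, 25}))         # e.g. user 12 takes over the channel
```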
Networks of Limited Size ------------------------ Practical networks do not have a large enough number of users to fully capture the multiuser diversity gain (double-logarithmic growth of capacity with the number of users). Due to this degradation in the multiuser diversity gain, the network cannot support the throughput expected in theory. Hence, for practical networks we have only [*upper bounds*]{} on the actual sum-rate throughputs. Knowing such upper bounds helps to gain insight into what to expect from cognitive networks at the design stage. Although the results hold analytically only for $N\rightarrow\infty$, from the simulation results in Fig. 1 we observe that as few as $N=50$ secondary users (which is around the point at which the throughput starts to increase steadily) are enough to start observing the multiuser diversity gain. This is not far from the size of practical networks. Conclusions {#sec:conclusion} =========== In this paper we investigated the multiuser diversity gain in cognitive networks. We first obtained the optimal gain achieved in a network with a central authority and showed that the gain achieved in such cognitive networks is similar to that of interference-free networks, i.e., the network throughput scales double-logarithmically with the number of users. Then we proposed a distributed spectrum access scheme which is proven to achieve the optimal throughput scaling factor. This scheme imposes the exchange of $\log M$ information bits for some cognitive transmitter-receiver pairs, and no information exchange for the others. The other specifications of the distributed algorithm are that the network-wide average aggregate amount of information bits it requires is asymptotically equal to $M\log M$, and that it ensures fairness among the secondary users. Proof of Lemma \[lemma:D\] {#app:lemma:D} ========================== We equivalently show that $\lim_{N\rightarrow\infty}P(\mathcal{D})=1$. An intuitive justification is that if we put the $\sinr$s in an $M\times N$ array, and locate the maximum element of each row, event ${\cal D}$ occurs when no two such maxima are located in the same column. Therefore, as the number of columns increases, in the asymptote of very large values of $N$, the event $\cal D$ must occur with probability 1. For a set $C\subseteq\mathcal{M}\dff\{1,\dots,M\}$ such that $|C|\geq 2$ we define $P(C)$ as the probability that the spectrum bands with indices in $C$ have the same most favorable user. Therefore, we get $$\begin{aligned} \nonumber P(C)&\dff& P\left\{\forall \;m,m'\in C,\;n^*_{m}=n^*_{m'}\right\}\\ \nonumber &=&\sum_{n=1}^NP\left\{\forall \;m\in C,\;n^*_{m}=n\;\Big|\; n^*_m=n\right\}\\ \nonumber && \hspace{.5in}\times{P(n^*_m=n)}\\ \nonumber &=&\sum_{n=1}^N\prod_{m\in C}P\Big(\sinr_{m,n}\geq\sinr_{m,n'},\;\forall n'\neq n\Big)\\ \label{eq:lemma:D_proof1} && \hspace{.5in}\times{P(n^*_m=n)}\\ \nonumber &=&\sum_{n=1}^N\prod_{m\in C}\prod_{n\neq n'}\underset{\dff\;q(m,n,n')}{\underbrace{P\Big(\sinr_{m,n}\geq\sinr_{m,n'}\Big)}}\\ \nonumber&&\hspace{.5in}\times{P(n^*_m=n)}\\ \label{eq:lemma:D_proof2} &\leq& N(q_{\max})^{|C|(N-1)},\end{aligned}$$ where $q_{\max}=\max_{m,n,n'} q(m,n,n')$, and (\[eq:lemma:D\_proof1\]) and (\[eq:lemma:D\_proof2\]) hold due to the statistical independence of the elements in $\{\sinr_{m,n}\}$.
Therefore we have $$\begin{aligned} P(\mathcal{D}) &=& 1-\sum_{C\subseteq\mathcal{M}, |C|\geq 2}P(C) \\ &=&1-\sum_{m=2}^{M}\sum_{C\subseteq\mathcal{M},\;{|C|=m}}P(C)\\ & \geq &1-\sum_{m=2}^{M}\underset{\rightarrow 0\;\mbox{as}\;N\rightarrow\infty}{\underbrace{{M\choose m }N(q_{\max})^{m(N-1)}}}\\ &\doteq&1,\end{aligned}$$ which completes the proof. Note that $q_{\max}$ is a function of $\{\eta_i\}$ and $\{\lambda_{i,j}\}$ and does not depend on $N$. Proof of Lemma \[lemma:order\] {#app:lemma:order} ============================== We first show that for any $i=1,\dots, N$, $\mathcal{S}_l^{(i)}(m)\leq \sinr^{(i)}_m$. For $i=1$ we have $$\sinr^{(1)}_m=\max_n\sinr_{m,n}\geq\max_nS_l(m,n)=\mathcal{S}_l^{(1)}(m).$$ For any $i=2,\dots,N$, from the definition of $\sinr^{(i)}_m$ and $\mathcal{S}^{(i)}_m$ it can be deduced that each of the $(N-i+1)$ terms $\sinr^{(i)}_m, \dots, \sinr^{(N)}_m$ is greater than one corresponding element in the set $\mathcal{S}_l(m)$. Therefore, there cannot be more than $(i-1)$ elements in $\mathcal{S}_l(m)$ which are all greater than $\sinr^{(i)}_m, \dots, \sinr^{(N)}_m$. Now, if $\mathcal{S}_l^{(i)}(m)> \sinr^{(i)}_m$, then all the $i$ terms $\mathcal{S}_l^{(1)}(m)$, $\mathcal{S}_l^{(2)}(m), \dots, \mathcal{S}_l^{(i)}(m)$ would be greater than all the $(N-i+1)$ terms $\sinr^{(i)}_m,\dots, \sinr^{(N)}_m$. Therefore, we would have found $i$ elements in $\mathcal{S}_l(m)$ that are all greater than $\sinr^{(i)}_m,\dots, \sinr^{(N)}_m$, and this contradicts what we found earlier. Hence, we should have $\mathcal{S}_l^{(i)}(m)\leq \sinr^{(i)}_m$. By following the same lines, we can show that also for $i=1,\dots, N$, we always have $\sinr_m^{(i)}\leq\mathcal{S}_u^{(i)}(m)$, which concludes the proof of the lemma. Proof of Lemma \[lemma:CDF\] {#app:lemma:CDF} ============================ Let $Y\dff|g^m_n|^2$, which has an exponential distribution with unit variance. Also define $Z\dff\sum_{j=1}^{K_m}|h^m_{n,j}|^2$, which is the summation of $K_m$ independent exponentially distributed random variables each with unit variance, and therefore has a ${\rm Gamma}(K_m,1)$ distribution. By denoting the probability density functions (PDFs) of $Y$ and $Z$ by $$\begin{aligned} f_Y(y) &=& e^{-y}, \\ \mbox{and}\;\;\;f_Z(z) &=& \frac{z^{K_m-1}\;e^{-z}}{(K_m-1)!},\end{aligned}$$ the PDF of $S_l(m,n)=\frac{Y}{1/\rho\eta_{\min}+P_p/P_s\gamma_{\max}Z}$, denoted by $f_S(x)$, is $$\begin{aligned} f_S(x) &= \int_0^{\infty}f_{S\med Z}(x\med z)f_Z(z)dz \\ &= \int_0^{\infty}\left(\frac{1}{\rho\eta_{\min}}+ \frac{P_p\gamma_{\max}}{P_s}z\right)e^{\left(-\frac{x}{\rho\eta_{\min}}- \frac{P_p\gamma_{\max}}{P_s}zx\right)}\\ &\hspace{1in}\times\frac{z^{K_m-1}\;e^{-z}}{(K_m-1)!}\;dz\\ &=\frac{e^{-x/\rho\eta_{\min}}}{\left(\frac{P_p}{P_s}\gamma_{\max}x+1\right)^{K_m+1}}\\ &\times \left[\frac{\frac{P_p}{P_s}\gamma_{\max}x+1}{\rho\eta_{\min}}+K_m\frac{P_p}{P_s}\gamma_{\max}\right],\end{aligned}$$ where the last step holds as $\int_0^\infty e^{-u}u^M\,du=M!$. Therefore, the CDF is $$F_l(x;m)= 1-\frac{e^{-x/\rho\eta_{\min}}}{\left(\frac{P_p}{P_s}\gamma_{\max}x+1\right)^{K_m}}.$$ $F_u(x;m)$ can be found by following the same lines. Proof of Lemma \[lemma:exp\_scaling\] {#app:lemma:exp_scaling} ===================================== We start by citing the following theorem.
\[th:limit\] *[@Sanayei:WCOM07 Theorem 4]* Let $\{X_n\}_{n=1}^N$ be a family of positive random variables with finite mean $\mu_N$ and variance $\sigma^2_N$, also $\mu_N\rightarrow\infty$ and $\frac{\sigma_N}{\mu_N}\rightarrow 0$ as $N\rightarrow\infty$. Then, for all $\alpha>0$ we have $$\bbe\Big[\log(1+\alpha X_N)\Big]\doteq \log\Big(1+\alpha\bbe[X_N]\Big).$$ Consider the set of random variables $\{Y_1,\dots,Y_N\}$ where $Y_i\sim G(y)$ and define $X_i\dff Y^{(N-i+1)}$, where $Y^{(i)}$ is the $i^{th}$ order statistic of the set $\{Y_1,\dots,Y_N\}$; hence, $X_N\sim G^{(1)}(x)$. Therefore, as provided in [@Arnold:Book Sec. 4.6] $$\begin{aligned} \mu_N &\dff& \bbe[X_N]=\sum_{n=1}^N\frac{1}{n}\\ \sigma^2_N &\dff& \bbe[|X_N-\mu_N|^2]=\sum_{n=1}^N\frac{1}{n^2},\end{aligned}$$ which confirms that for finite $N$, $\mu_N$ is also finite. Also as shown in [@Sanayei:IT07], $\mu_N\doteq\log N$ and $\sigma^2_N\doteq\frac{\pi^2}{6}$, from which it is concluded that $\frac{\sigma_N}{\mu_N}\rightarrow 0$ as $N\rightarrow\infty$. Therefore the conditions of the above theorem are satisfied and we have $$\begin{aligned} \int_0^{\infty}\log(1+\alpha x)G^{(1)}dx&=\bbe\Big[\log(1+\alpha X_N)\Big]\\ &\doteq\log\Big(1+\alpha\bbe[X_N]\Big)\\ &=\log(1+\alpha\log N)\\ &\doteq\log\log N+\log a,\end{aligned}$$ which is the desired result. {#app:lower} By using the following lemma, we further find a lower bound on $R_m$ which will be more mathematically tractable. \[lemma:condition\] For a continuous random variable $X$, increasing function $g(\cdot)$ and real values $b\geq a$ $$\bbe\Big[g(X) \med X\geq b \Big]\geq \bbe\Big[g(X) \med X\geq a\Big].$$ See Appendix \[app:lemma:condition\]. By recalling the definition of $\sinr_m^{(i)}$ we have $$\begin{aligned} \nonumber &\sum_{i\in\mathcal{H}_m}\bbe\bigg[\log\Big(1+\sinr_{m,i}\Big)\;\bigg|\; \mathcal{H}_m\bigg]\\ \nonumber=& \sum_{i\in\mathcal{H}_m}\bbe\bigg[\log\Big(1+\sinr_{m,i}\Big)\;\bigg|\; \sinr_{m,i}\geq\lambda(m,i)\;\\ \label{eq:Rm_lower0}&\hspace{.9 in};\forall m'\neq m: \frac{\sinr_{m,i}}{\lambda(m,i)}\geq \frac{\sinr_{m',i}}{\lambda(m',i)}\bigg]\\ \label{eq:Rm_lower1} \geq& \sum_{i\in\mathcal{H}_m}\bbe\bigg[\log\Big(1+\sinr_{m,i}\Big)\;\bigg|\; \sinr_{m,i}\geq\lambda(m,i)\bigg]\\ \label{eq:Rm_lower2} \geq &\sum_{i\in\mathcal{H}_m}\bbe\bigg[\log\Big(1+\sinr_{m,i}\Big)\;\bigg|\; \sinr_{m,i}\geq\min_{j}\lambda(m,j)\bigg]\\ \nonumber=& \sum_{i\in\mathcal{H}_m}\bbe\bigg[\log\Big(1+\sinr_{m,i}\Big)\;\bigg|\; \sinr_{m,i}\geq\min_{j}\lambda(m,j)\\ \label{eq:Rm_lower3}&\hspace{.8 in}; \forall l\notin\mathcal{H}_m: \sinr_{l,m}<\min_{j}\lambda(m,j) \bigg]\\ \label{eq:Rm_lower4}=& \sum_{j=1}^{|\mathcal{H}_m|}\bbe\bigg[\log\Big(1+\sinr_m^{(j)}\Big)\;\bigg|\; \sinr_m^{(j)}\geq\min_{i}\lambda(m,i)\bigg]\\ \label{eq:Rm_lower5} \geq & \sum_{j=1}^{|\mathcal{H}_m|}\bbe\bigg[\log\Big(1+\sinr_m^{(j)}\Big)\;\bigg],\end{aligned}$$ where (\[eq:Rm\_lower0\]) is obtained by replacing $\mathcal{H}_m$ by an equivalent representation. Transition from (\[eq:Rm\_lower0\]) to (\[eq:Rm\_lower1\]) holds by applying Lemma \[lemma:condition\] for $b=\sinr_{m',i}\cdot\frac{\lambda(m,i)}{\lambda(m,i')}$ and $a=0$ for all $m'\neq m$. Transition to (\[eq:Rm\_lower2\]) is again justified by using Lemma \[lemma:condition\]. Due to the statistical independence of $\sinr_{m,i}$ and $\sinr_{m,l}$ for $m\in\mathcal{H}_m$ and $l\notin\mathcal{H}_m$ the additional constraints imposed in (\[eq:Rm\_lower3\]) do not result in any changes. 
The conditions in (\[eq:Rm\_lower3\]) are equivalent to having the $|\mathcal{H}_m|$ largest components of $\{\sinr_{m,n}\}_{n=1}^N$ be greater than $\min_{1\leq i\leq N}\lambda(i)$, which is mathematically stated in (\[eq:Rm\_lower4\]). Finally, (\[eq:Rm\_lower5\]) holds by applying Lemma \[lemma:condition\] one more time. Proof of Lemma \[lemma:f\] {#app:lemma:f} ========================== By the expansion of $\Big(x+(1-x)\Big)^N$ we have $$\begin{aligned} f(x,i)&=& 1-\sum_{j=i+1}^N{N\choose j}x^{N-j}(1-x)^j\\ &=& 1-\sum_{j=i+1}^N{N\choose N-j}x^{N-j}(1-x)^j\\ &=& 1-\sum_{k=0}^{N-(i+1)}{N\choose k}(1-x)^{N-k}x^k\\ &=&1-f(1-x,N-i-1),\end{aligned}$$ where it can be concluded that $f'(u,i)\big|_{u=x}=f'(u,N-i-1)\big|_{u=1-x}$. So it is sufficient to show that $f'(x,i)\geq 0$ for $x\leq \frac{1}{2}$ and for all $i=1,\dots, N-1$. For this purpose we consider two cases of $i\leq\lfloor\frac{N}{2}\rfloor$ and $i>\lfloor\frac{N}{2}\rfloor$.\ $$\begin{aligned} \nonumber f'(x,i) &=& \sum_{j=0}^{i}{N\choose j}(N-j)x^{N-j-1}(1-x)^j\\\ \nonumber &-&{N\choose j}jx^{N-j}(1-x)^{j-1}\\ \nonumber &=& \sum_{j=0}^{i}{N\choose j}x^{N-j-1}(1-x)^{j-1}\Big[N(1-x)-j\Big],\end{aligned}$$ where since $0\leq j\leq i$ it can be shown that for $x\leq \frac{1}{2}$ $$\begin{aligned} \label{eq:lemma:f1} \nonumber N(1-x)-j&\geq N(1-x)-i\\ &\geq \nonumber N(1-x)-\frac{N}{2}\\ \nonumber&=\frac{N}{2}(1-2x)\\ &\geq 0.\end{aligned}$$\ Define $a_j=1-\frac{1}{2}\delta(\lfloor\frac{N}{2}\rfloor-j)$, where $\delta(\cdot)$ is the Dirac delta function. Therefore, we get $$\begin{aligned} f(x,i)&=f(x,N-i-1)\\ &+\sum_{j=N-i}^{\lfloor\frac{N}{2}\rfloor}a_j{N\choose j}\Big[x^{N-j}(1-x)^j+x^j(1-x)^{N-j}\Big].\end{aligned}$$ For $x\leq \frac{1}{2}$ we get $$\begin{aligned} \nonumber f'(x,i) &= f'(x,N-i-1)\\ \nonumber &+\sum_{j=N-i}^{\lfloor\frac{N}{2}\rfloor}a_j{N\choose j}\bigg\{ x^{N-j-1}(1-x)^{j-1}\Big[N-j-Nx\Big]\\ \nonumber &+\underset{\geq x^{N-j-1}(1-x)^{j-1}}{\underbrace{x^{j-1}(1-x)^{N-j-1}}}\Big[j-Nx\Big]\bigg\}\\ \nonumber &\geq f'(x,\underset{\leq \lfloor\frac{N}{2}\rfloor}{\underbrace{N-i-1}})\\ \nonumber & + \sum_{j=N-i}^{\lfloor\frac{N}{2}\rfloor}a_j{N\choose j} x^{N-j-1}(1-x)^{j-1}\underset{\geq 0}{\underbrace{\Big[N-2Nx\Big]}}\\ \label{eq:lemma:f2}&\geq 0.\end{aligned}$$ From (\[eq:lemma:f1\]) and (\[eq:lemma:f2\]) it is concluded that for $x\leq \frac{1}{2}$, $f(x,i)$ is an increasing function of $x$, which completes the proof. Proof of Lemma \[lemma:exp\_scaling2\] {#app:lemma:exp_scaling2} ====================================== This proof follows the same spirit as the analysis provided in [@Sanayei:IT07]. However, due to some differences in our setting, we provided an independent treatment. For any given number of users $N$, we define a random variable $X_N$, distributed as $X_N\sim G^N(x)$ and also for $j=1,\dots, N$ we define $$\begin{aligned} \mu_{(j)} &\dff& \int_0^{\infty}x\;dG^{(j)}(x),\\ \mbox{and}\;\;\;\sigma^2_{(j)} &\dff& \int_0^{\infty}\Big(x-\mu_{(i)}\Big)^2\;dG^{(j)}(x),\\ \mbox{and}\;\;\;\mu_N &\dff& \bbe[X_N]=\int_0^{\infty}x\;dG^N(x)\\ &=&\sum_{j=1}^NQ_j \int_0^{\infty}x\;dG^{(j)}(x)=\sum_{j=1}^NQ_j\mu_{(j)}.\end{aligned}$$ As given in [@Arnold:Book Sec. 
4.6] and discussed in details in [@Sanayei:IT07], for ordered exponentially distributed random variables $X_N$ we have $$\label{eq:sigma_n} \sigma^2_N<2+2\mu_{(1)}\Big(\mu_{(1)}-\mu_{N}\Big),$$ $$\label{eq:mu_1} \mbox{and}\;\;\;\log N+\zeta+\frac{1}{2(N+1)}\leq\mu_{(1)}\leq\log N+\zeta+\frac{1}{2N},$$ and therefore, $$\mu_{(1)}\doteq \log N,$$ where $\zeta\approx0.577$ is the Euler-Mascheroni constant. Also $$\mu_{(1)}-\log\bigg(\sum_{j=1}^NjQ_j\bigg)-\zeta-0.5\leq\mu_N\leq\mu_{(1)}.$$ By taking into account the constraint in (\[eq:Q1\]), as $N\rightarrow\infty$ $$\label{eq:mu_n} 1-\frac{\log\bigg(\sum_{j=1}^NjQ_j\bigg)-\zeta-0.5}{\log N}\leq\mu_N\leq 1.$$ Equations (\[eq:mu\_1\]) and (\[eq:mu\_n\]) together show that $$\label{eq:mu_n2} \mu_{(1)}\doteq\mu_N\doteq\log N,$$ which also implies that $\mu_N\rightarrow\infty$. Taking into account (\[eq:sigma\_n\])  and (\[eq:mu\_n2\]) we also conclude that $\lim_{N\rightarrow\infty}\frac{\sigma_N}{\mu_N}=0$ and therefore the conditions of Theorem \[th:limit\] are met. Hence, from Theorem \[th:limit\] $$\begin{aligned} \int_0^{\infty}\log(1+ax)\;dG^N(x)&=&\bbe\Big[\log(1+a X_N)\Big]\\ &\doteq& \log\Big(1+a\bbe[X_N]\Big)\\ &=&\log\Big(1+a\mu_N\Big)\\ &\doteq&\log\log N+\log(a).\end{aligned}$$ Proof of Lemma \[lemma:condition\] {#app:lemma:condition} ================================== $$\begin{aligned} \bbe\Big[g(X) \med X\geq b \Big] &= \int_b^{\infty}g(x)f_{X|X\geq b}(x)\;dx \\ &= \frac{1}{\pr(X\geq b)}\int_b^{\infty}g(x)f_X(x)\;dx \\ &\geq\bigg[1-\frac{\pr(X\geq b)}{\pr(X\geq a)}\bigg]g(b)\\ &+\frac{1}{\pr(X\geq a)}\int_b^{\infty}g(x)f_X(x)dx \\ &=\frac{g(b)}{\pr(X\geq a)}\;\pr(a\leq X\leq b) \\ &+\frac{1}{\pr(X\geq a)}\int_b^{\infty}g(x)f_X(x)dx\\ &\geq \frac{1}{\pr(X\geq a)}\int_a^bg(x)f_X(x)\;dx\\ &+\frac{1}{\pr(X\geq a)}\int_b^{\infty}g(x)f_X(x)dx\\ &=\int_a^{\infty}g(x)f_{X|X\geq a}(x)\;dx\\ &=\bbe\Big[g(X) \med X\geq a \Big].\end{aligned}$$ [^1]: The authors are with the Department of Electrical Engineering, Columbia University, New York, NY 10027 (email:{tajer, wangx}@ee.columbia.edu). [^2]: $M$ real numbers per user.
--- abstract: '[ We give a general review of recent developments in the theory of vortices in superfluids and superconductors, discussing why the dynamics of vortices is important, and why some key results are still controversial. We discuss work that we have done on the dynamics of quantized vortices in a superfluid. Despite the fact that this problem has been recognized as important for forty years, there is still a lot of controversy about the forces on and masses of quantized vortices. We think that one can get unambiguous answers by considering a broken symmetry state that consists of one vortex in an infinite ideal system. We argue for a Magnus force that is proportional to the superfluid density, and we find that the effective mass density of a vortex in a neutral superfluid is divergent at low frequencies. We have generalized some of the results for a neutral superfluid to a charged system.]{}' address: | $^1$University of Washington, Seattle, WA, USA\ e-mail thouless@phys.washington.edu\ $^2$Umeå University, Sweden\ $^3$University of Texas, Austin, TX, USA\ $^4$Simon Fraser University, Burnaby, Canada\ $^5$[Present address:]{} University of Georgia, Athens, GA, USA\ $^6$[Present address:]{} University of Florida, Gainesville, FL, USA author: - 'D. J. THOULESS$^1$, Ping AO$^{1,2}$, Qian NIU$^{3}$, M. R. GELLER$^{1,4,5}$ and C. WEXLER$^{1,6}$' title: QUANTIZED VORTICES IN SUPERFLUIDS AND SUPERCONDUCTORS --- \#1\#2\#3\#4[[\#1]{} [**\#2**]{}, \#3 (\#4)]{} Introduction ============ From the very beginning it has been realised that quantized vortices play an important part in the behavior of superfluids [@onsager49]. Both in neutral superfluids and in superconductors it is the vortices that provide a mechanism for the decay of superfluid currents in a ring. The circulation, for a neutral superfluid, or the trapped flux, for a superconducting ring, is quantized, and the current can only decay by a change of the quantum number by an integer, which can occur by the passage of a vortex (or quantized flux line) across the ring, from one edge to the other. In superconductors in a high magnetic field, the motion of flux lines is the main mechanism for electrical resistance. At high temperatures the movement of vortices is a thermally activated process, but at low enough temperatures the dominant mechanism must be by quantum tunneling. It is therefore important to understand the dynamics of vortices, in order to be able to evaluate the dissipative processes that occur in neutral superfluids and in superconductors. Despite the obvious importance of the problem, the theory has been in a most unsatisfactory state. There are many conflicting results in the literature. We did not realise the extent of this disagreement, and it was initially a surprise to us that a result we obtained would be dismissed by one knowledgeable critic as too obvious to be worth discussing, and by another equally eminent critic as well known to be wrong; this has happened to us several times. There are real problems here, connected with questions of suitable boundary conditions, and there is often a question of whether two different calculations are finding the same result by two different ways, or if they are finding two different contributions which must be added. To our surprise, we found, about two years ago, that we could get an exact result for one of the two parameters that determine the transverse force on a moving vortex, using only general properties of superfluid order [@tan96]. 
The second parameter was determined by a straightforward thermodynamic argument a little over a year later [@wexler97]. The first of these results still seems to be controversial [@volovik96], and some elements of the argument deserve closer scrutiny than they have received so far, but it is our belief that it should be possible to construct a firmly founded theory on the basis that we have tried to establish. Electrons in magnetic fields and vortices ========================================= There are strong analogies between the behavior of electrons in strong magnetic fields and of vortices in superfluids. These analogies enable us to make use of some of the insights that have been obtained in the study of the quantum Hall effect to understand problems connected with vortex dynamics. In both cases there is a transverse force proportional to velocity. The Lorentz force for electrons is proportional to the vector product of the electron velocity and the magnetic field, ${\bf F}_L=-e{\bf v}\times{\bf B}$. This can be represented by a term $eBx\dot y$ in the Lagrangian. The Magnus force acting on a vortex is proportional to the vector product of the velocity of the vortex relative to the fluid and a vector directed along the vortex core. Each of these forces leads to a path-dependent but speed independent term in the action. In a quantum theory the phase is equal to the action divided by $\hbar$, this corresponds to a Berry phase, a phase which depends on the path, but not on the rate at which the path is traversed [@haldanewu85; @ao93]. In both cases there is considerable arbitrariness in the value of this phase. For the electron this is due to the arbitrariness of the vector potential which is used to represent the magnetic field, while for the vortex there is a similar arbitrariness in the way the transverse force is represented in a Lagrangian. In either case the change in action or phase when the electron or vortex traverses a [*closed*]{} path is well determined. For the electron the phase change on a closed path is equal to $2\pi$ times the number of flux quanta enclosed by the path. For the vortex the phase change is equal to $2\pi$ times the average number of atoms enclosed by the surface swept out by the closed path of the vortex. The dominant correction to quantization of the Hall conductance comes from tunneling or activated transport between states on the two edges of a quantum Hall bar. The dominant mechanism for decay of supercurrents is tunneling or activated transport of vortices across the system. In many real systems there is a tangle of pre-existing vortices frozen in when the system is cooled below the critical temperature, and these can serve as sources for the vortices that cross the system. Under ideal conditions, and some modern experiments on helium approach such ideal conditions [@avenel94; @packard95], there are no vortices in equilibrium, and a vortex loop must be created from nothing in the interior, or a line must be created at the boundary (with its image this constitutes a loop), and cross the system to be annihilated at the opposite boundary. Electrons in a magnetic field have a fast cyclotron motion around the guiding center. Canonical variables can be chosen as the two pairs $v_x,v_y$, rescaled by $m^2/eB$, which give the fast motion, and the guiding center coordinates $X,Y$, rescaled by $eB$, which give the slow motion. Vortices also have such a fast cyclotron-like motion, in which the vortex core circles around the center of the flow it induces. 
In addition, since the vortex is a string rather than a point, it has the low frequency Thomson modes, circularly polarized modes of oscillation analogous to the modes of oscillation of a stretched string. For the vortex the two coordinates $X,Y$ of the position of the vortex in the plane perpendicular to the core are also conjugate variables, as is manifest from the classical theory of vortex motion which can be found in Lamb’s [*Hydrodynamics*]{}, and which Lamb credits to an 1880’s book on Mechanics by Kirchhoff. The most important difference is that we think we understand the Schrödinger equation for electrons, whereas a vortex in a superfluid is a complicated many-body entity. The relation of the collective variables describing the vortex motion to the single-particle variables describing the superfluid is not obvious. General features of vortices in superfluids =========================================== A vortex is a composite object in a many-body system. Its motion may be described by collective variables, but its structure depends on all the single-particle variables of the superfluid, and the relation between these single-particle variables and the collective variables is, as usual, obscure. Feynman [@feynman55] proposed describing a vortex by taking the ground state wave function, symmetric in all the single particle variables for a boson superfluid, and multiplying it by a factor of the form \_[j=1]{}\^N e\^[i\_j]{}f(r\_j),where $\theta_j$ is the azimuthal angle made by the particle $j$ with the vortex core, $r_j$ is its distance from the core, and $f$ is some real function which is close to unity everywhere except where $r_j$ is of the order of the radius of the vortex core, and which goes to zero at $r_j=0$ in order to prevent large kinetic energy contributions from the rapid variation of phase close to the vortex core. A similar description of the core is obtained for the Ginzburg–Pitaevskii equations for the order parameter near the critical temperature [@ginzburgpita58], or from the Gross–Pitaevskii nonlinear Schrödinger equation for the condensate of the dilute Bose gas at zero temperature [@pitaevskii61; @gross61]. In these theories the vortex is described by mean-field-like equations, so that the position of the singularity at the vortex core has a sharp value, although we know that the two components of its position in the plane are conjugate variables. Somehow we should be able to construct a quantized version of the theory taking account of this result of the Magnus force. In a strongly type II (Shubnikov) superconductor the situation is somewhat similar, except that the current circulating round the vortex core generates a magnetic field parallel to the core, which in turn generates a vector potential that reduces the current, so that a total of one quantum of flux $h/2e$ is associated with the vortex, or flux line, and no current is associated with the change of the phase angle at large distances. In a type I (Pippard) superconductor the character of the singularity is mainly trapped flux, but the singly quantized flux line is not stable in a uniform magnetic field, and it is thermodynamically favorable for the flux lines to aggregate and form a region of normal metal. In either case Landau–Ginzburg theory can be used to describe the vortex. In classical incompressible fluid mechanics the hydrodynamic mass of a vortex is of order of mass of fluid displaced, but it depends in detail on the core structure. 
Since vortices in low temperature superfluid helium are measured to have a rather small vortex core radius, smaller than the average interatomic spacing, this mass density is relatively small, and is taken to be zero in some calculations. In recent work Duan and Leggett showed that the inertial mass of a vortex in a superconductor is finite[@duanleggett92], but Duan argued that the mass density of a vortex in a neutral superfluid is infinite[@duan94]. He originally described this as a result of the quantum nature of the fluid, and we found this very hard to accept. Actually it is true of all compressible fluids, but the divergence is logarithmic in the frequency of the motion, with quite a small coefficient, as Demircan, Ao and Niu have pointed out[@demircan96]. Under realistic circumstances, such as in the free cyclotron motion of the vortex, or in vortex tunneling, the logarithm may be quite small, and this term relatively unimportant. In liquid helium at relatively high temperatures, close to the critical temperature, the largest force on a moving vortex, or on a vortex that is held still while the fluid streams past it, is likely to be a drag force due to the scattering by the vortex of the excitations that make up the normal fluid. At lower temperatures the transverse (Magnus) force should dominate, but the understanding of the Magnus force is complicated by the existence of the two components of the fluid, which may affect the vortex very differently. As we have discussed already, the Magnus force has important implications for the Berry phase, and for the quantum uncertainty of the position of the vortex. For a superconductor the situation is far more complicated, since not only is there the magnetic field due to the motion of the electrons to be considered, but the effects of disorder in the positive background are vital. Disorder makes the [*conductivity*]{} of the normal metal finite, and produces a drag force on vortices even at rather low temperatures, but also, if the disorder is on a large enough scale, pins the vortices and reduces the flux flow [*resistivity*]{}. In our work we have concentrated on understanding an isolated vortex in an ideal, uniform, infinite superfluid. Our aim has been to understand the parameters that come into the dynamics of a vortex when its velocity relative to the background fluid is small — the effective mass, the transverse component of the force, and the longitudinal (dissipative) component of the force. This is clearly not a program for a complete understanding of vortex dynamics, since, even if it were completely successful, we might still be concerned with strongly nonlinear regions in realistic situations, such as those found when quantum tunneling appears to be observed. Particularly for the transverse force, we think we have clean and precise results that are — inevitably— in conflict with widely accepted theories. In our work the vortex is controlled by some pinning potential that can be manipulated from outside. The pinning potential can be rather weak, or a macroscopic wire, so long as it has cylindrical symmetry. For quantities such as the effective mass and the longitudinal force the nature of the pinning potential has an effect on the answer, and we may need to consider some suitable limiting process to make the strength of the potential tend to zero, but for the transverse force we find that the answer is independent of the form or strength of the pinning potential. 
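As a rough sense of scale for the superfluid Magnus force per unit length, $\rho_s\kappa_0 v$, that the next section argues for, the following sketch evaluates it with illustrative helium-II values; the numerical values below are our own assumptions for illustration and are not taken from any calculation in this paper.

```python
# Order-of-magnitude estimate of the superfluid Magnus force per unit length,
# |F_t| ~ rho_s * kappa_0 * v  (all numerical values are illustrative assumptions).
h = 6.626e-34               # Planck constant [J s]
m_he4 = 6.646e-27           # mass of a 4He atom [kg]
kappa_0 = h / m_he4         # quantum of circulation h/M [m^2/s], ~1.0e-7
rho_s = 145.0               # superfluid density of He II at low temperature [kg/m^3]
v = 1e-2                    # vortex velocity relative to the superfluid [m/s]
print(rho_s * kappa_0 * v)  # force per unit length [N/m], ~1.4e-7
```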
The Magnus force in neutral superfluids ======================================= There is no agreement about what the forces acting on a vortex in a neutral superfluid are. The simplest quantity to calculate should be the component of force perpendicular to the motion of the vortex relative to the substrate, the analog of the Magnus force for classical fluids, yet two recently quoted forms look quite different. In Donnelly’s book on [*Quantized Vortices in Helium II*]{} [@donnelly91] he quotes the force per unit length as \_t= [**K**]{},\[eq:donnelly\]where ${\bf K}$ is a vector along the vortex line whose magnitude is the quantum of circulation $h/M$, ${\bf v}_V$, ${\bf v}_s$ and ${\bf v}_n$ are the velocities of the vortex, the superfluid component and the normal fluid component, and $\rho_s$ and $\rho_n$ are the superfluid and normal fluid densities; $\sigma$ is a coefficient whose value is not exactly determined. Volovik, however, in a number of recent papers [@volovik93; @volovik95], quotes the form \_t= [**K**]{},\[eq:volovik\] where the term with coefficient $C_F$ occurs only for fermion superfluids, such as the $B$ phase of superfluid $^3$He, and is due to spectral flow of the low energy states in the vortex core. The first term in each of these expressions is referred to as the [**Magnus force**]{}, the term proportional to $\rho_n$ as the [**Iordanskii force**]{}, so the use of these two terms is quite different for the two authors. Donnelly’s term proportional to $\sigma$ comes from phonon or roton scattering by the vortex, and it is only if this is equal to zero that the two expressions are in agreement for the case of bosons. Whatever the form of this force, Galilean invariance tells us that there are only two parameters to be determined. If we know the coefficients of ${\bf v}_V$ and ${\bf v}_s$, the coefficient of ${\bf v}_n$ must be equal to minus the sum of the other two coefficients. We argue, in the rest of this section, that the only transverse force has the form \_t= \_s[**K**]{}([**v**]{}\_V-[**v**]{}\_s) , by determining separately the coefficients of $v_s$ and $v_V$. Wexler [@wexler97] has given a thermodynamic argument to show that the coefficient of ${\bf K}\times {\bf v}_s$ is indeed $-\rho_s$. This result seems to be uncontroversial, and is in agreement with both Donnelly and Volovik. The argument is essentially a thermodynamic argument, which considers a reversible change of the circulation in a ring by moving a vortex slowly across the system under equilibrium conditions. Consider a macroscopic ring, such as the one shown in fig. 1, with average radius $R$, width (difference between outer radius and inner radius) $L_y$ and height $L_z$. For simplicity we assume $L_y<<R$, but this is not essential, and the result is independent of the shape of the ring. Initially there are $n$ quanta of circulation trapped in the ring, giving superfluid velocity $v_s=n\kappa_0/2\pi R$, and the normal fluid velocity is zero, since the boundaries of the ring are stationary. A pinning potential is used to insert adiabatically one vortex, which is created at the outer boundary, moved slowly across the system under constant temperature conditions, and annihilated at the inner boundary. The effect of this extra vortex is to increase the circulation from $n$ units to $n+1$, increasing the superfluid velocity by $\delta v_s=\kappa_0/2\pi R$. 
This increases the free energy by F= (2RL\_yL\_z)\_s v\_sv\_s=\_s\_0 v\_sL\_yL\_z,since superfluid density is defined in terms of the free energy change when the superfluid velocity is changed. This must be compared with the work done in moving the vortex of length $L_z$ a distance $L_y$ isothermally across the ring, which is W= F\_tL\_yL\_z.Comparison of these two shows that the magnitude of the transverse force per unit length, under conditions in which $v_n$ and $v_V$ are both negligible, is |F\_t|=\_s\_0 v\_s.More careful analysis gives the sign and direction of this force as \_t= -\_s[**K**]{}\_s.This argument determines the coefficient of $v_s$ in the transverse force. To determine the coefficient of $v_V$, Thouless, Ao and Niu [@tan96] consider an infinite system with superfluid and normal fluid asymptotically at rest (${\bf v}_n=0={\bf v}_s$) in the presence of a single vortex which is constrained to move by moving the pinning potential. For simplicity we describe the two-dimensional problem of a vortex in a superfluid film, but the three-dimensional generalization is straightforward. Also we restrict this discussion to the ground state of the vortex, but the generalization to a thermal equilibrium state is straightforward. The reaction force on the pinning potential is calculated to lowest order in the vortex velocity ${\bf v}_V$. This can be studied as a time-dependent perturbation problem, but this can be transformed into a steady state problem, with the perturbation due to motion of the vortex written as $i{\bf v}_V\cdot{\bf grad}_0$. The force in the $y$ direction on a vortex moving with speed $v_V$ in the $x$ direction can then be written as F\_y=iv\_V+ [comp. conj.,]{} where $\cal P$ projects off the ground state of the vortex. Since ${\partial V/\partial x_0}$ is the commutator of $H$ with the partial derivative $\partial/\partial x_0$, the denominator cancels with the $H$ in the denominator, and so the expression is equal to the Berry phase form F\_y= -iv\_V+iv\_V . Since the Hamiltonian consists of kinetic energy, a translation invariant interaction between the particles of the system, and the interaction with the pinning center, which depends on the difference between the pinning center coordinates and the particle coordinates, the derivatives $\partial/\partial x_0$, $\partial/\partial y_0$, can be replaced by the total particle momentum operators $-\sum\partial/\partial x_j$, $-\sum\partial/\partial y_j$. This gives the force as a commutator of components, $P_x,P_y$ of the total momentum, F\_y=-iv\_V .At first sight one might think that the two different components of momentum commute, but this depends on boundary conditions, since the momentum operators are differential operators. Actually this expression is the integral of a curl, and can be evaluated by Stokes’ theorem to get F\_y= d[**r**]{}=\_sd[**r**]{},where the integral is taken over a loop at a large distance from the vortex core. This gives the force in terms of the circulation of momentum density (mass current density) at large distances from the vortex. Our result that the transverse force is equal to $v_V$ times the line integral of the mass current is independent of the nature or size of the pinning potential. The general form of this is \_t= \_s[**K**]{}\_s\_V +\_n[**K**]{}\_n\_V,where ${\bf K}_n$ represents the normal fluid circulation. 
In equilibrium the circulation of the normal fluid around a stationary vortex is zero, since circulation of the normal fluid gives rise to viscous dissipation of energy, which in turn leads to growth of the area of the normal fluid vortex core. If there is any nonequilibrium normal fluid circulation, it is not obvious that it should be quantized, or that the motion of the normal fluid vortex should be correlated with the motion of the superfluid vortex. If only the superfluid participates in the circulation round the vortex core, which seems to us to be the most reasonable assumption, this gives \_t= \_s[**K**]{}\_V. In combination with the Wexler result for the coefficient of ${\bf v}_s$, the total transverse force on a vortex is \_t= \_s[**K**]{}\_s([**v**]{}\_V-[**v\_s**]{}) . Only the superfluid Magnus force exists unless the normal component participates in the circulation of the superfluid to some extent. This disagrees with Donnelly’s eq. (\[eq:donnelly\]) unless the phonon-scattering term proportional to $\sigma$ cancels with the Iordanskii term proportional to $\rho_n$, and disagrees with Volovik’s eq.(\[eq:volovik\]) unless his coefficient $C_F$ is equal to $\rho_n$ even for bosons. The most striking feature of this is that the force is independent of the normal fluid velocity. It agrees with the obvious generalization of the classical Magnus force argument to two-fluid dynamics. This argument considers the force– momentum balance in a large cylinder surrounding a vortex which is held stationary while the fluid flows past it. Bernoulli pressure on the cylinder, and momentum flux across the boundary of the cylinder balance with the force on the vortex. In a two-fluid generalization of this there are separate contributions from the product of superfluid circulation with superfluid velocity, and from the product of normal fluid velocity with normal fluid velocity. Since we have only had to consider global properties involving momentum conservation and conditions at a long distance from the vortex core, and have not needed to make any detailed consideration of conditions at the core of the vortex, we believe that our arguments are valid for a fermion superfluid described by a single complex order parameter. The $B$ phase of $^3$He does not quite meet this condition, but the order parameter is an essentially isotropic combination of a $P$-wave orbital state and a triplet spin, so this should behave in much the same way. Volovik [@volovik93; @volovik95] has argued that spectral flow of the unpaired states in the vortex core of a fermion superfluid leads to a contribution to the transverse force that cancels most of the Magnus force, but Stone [@stone96] has examined this argument more closely, and does not find that this mechanism is operative unless there is a background to take momentum from these excitations. We do not think that there is such a canceling contribution in a homogeneous fermion superfluid. Forces due to phonon scattering =============================== The result obtained in the previous section, that the coefficient of the vortex velocity in the transverse force is equal and opposite to the coefficient of the superfluid velocity leads to the surprising conclusion that normal fluid flow does not affect the force on the vortex, unless there is also normal fluid circulation round the vortex. This is surprising, because Pitaevskii [@pitaevskii??] 
and Iordanskii [@iordanskii66] argued that the asymmetrical scattering of rotons or phonons by vortices should lead to a transverse force when the vortex moves relative to the normal fluid component. In the low temperature limit an explicit calculation of the phonon–vortex scattering can be made, and the literature quotes a transverse force proportional to $\rho_n{\bf K}\times ({\bf v}_V-{\bf v}_n)$. There are two problems with this result: 1. The derivation assumes that the phonons interact only with the vortex, but in our argument we assume that the phonons, which make up the normal fluid, must be in equilibrium with one another. 2. In papers from Cleary (1968) [@cleary68] to Sonin (1997) [@sonin97] the expression for the transverse force, which is proportional to 4[T\^3\^2 c\^3]{}( v\_V- v\_n)\_m\_m\_[m+1]{} (\_[m+1]{}-\_m), has been rewritten as ( v\_V- v\_n)\_m2(\_[m+1]{}-\_m). This would be fine, except that $\delta_m$ does not tend to zero. If one substitutes the formula \_m(m)\_0 T/c\^2,which is correct to lowest order in temperature $T$, into the original formula, a result is obtained which is (at least) cubic in $\kappa_0$ and of sixth power in $T$, or 3/2 power in $\rho_n$. The second expression is obtained from the first by canceling two divergent series, and this gives the quoted expression which is linear in $\kappa_0$ and linear in $\rho_n$; we cannot see any justification for a term of this magnitude. Superconductivity ================= The situation for the transverse force on a vortex in a superconductor is even more confused than the situation for a neutral superfluid. In the 1960s, Bardeen and Stephen [@bardeen65] argued for a very small Magnus force, but an analysis by Nozières and Vinen [@nozieres66] of an idealized model of a superconductor gave the full value of the Magnus force suggested by classical hydrodynamics. Wexler’s argument [@wexler97] for the coefficient of $v_s$ can be applied to the case of a superconductor. When the substrate velocity and the vortex velocity zero in the presence of a superfluid electron velocity, this argument gives the expected result that there is a Lorentz force on the vortex equal to the integral of $e\rho_e{\bf v}_s\times{\bf B}$, where $\rho_e$ is the conduction electron density. To find the coefficient of $v_V$, Geller, Wexler and Thouless [@gwt98] have adapted the arguments of sec. 4 to the very idealized model of a charged system with a uniform positive background, which is essentially the situation considered by Nozières and Vinen [@nozieres66], although they, unlike us, also had to assume that the superconductor was extreme type II. This is not completely straightforward, even though we have taken the uniform positive background so that we can continue to use momentum conservation, because any choice of the gauge field which is used to describe magnetic effects breaks the explicit translation invariance, and makes the implicit translation invariance obscure. Rather than introduce a gauge field, we can write the electromagnetic interactions in terms of a Coulomb interaction between electrons and between electrons and positive background, together with an instantaneous current–current interaction between the electrons. Darwin showed that this is correct up to second order in electron velocity, apart from a relativistic variation of the mass with velocity which is unimportant for this problem. We also need a Galilean invariant attractive interaction between the charges to produce a paired superconducting state. 
This gives a Hamiltonian with explicit translation invariance, so the arguments of sec. 4 can be taken over. The result for the coefficient of $v_V$ is formally unchanged, and is the line integral of the canonical momentum density on a loop which surrounds the flux line at a distance which is large compared with the penetration length. This is actually a surprising result, as at these distances there is no magnetic field or current density produced by the vortex line, and the integral is related to the Aharonov–Bohm effect rather than to any classical quantity. Since the integral is equal to the trapped magnetic flux, the transverse force can be written as $${\bf F}_t= e\rho_e \int({\bf v}_V-{\bf v}_e)\times{\bf B}\;d^3r,$$ so that the transverse force depends only on the motion of the vortex relative to the electrons. It can be rewritten, in a form that makes its physical origin more transparent, as $${\bf F}_t= e\rho_e \int({\bf v}_p-{\bf v}_e)\times{\bf B}\;d^3r + e\rho_e \int({\bf v}_V-{\bf v}_p)\times{\bf B}\;d^3r.$$ The first term is the Lorentz force given by the interaction of the electric current, which is a Galilean invariant, with the magnetic field. The second is a Magnus force that acts on the positive substrate moving with velocity ${\bf v}_p$ relative to the vortex. The moving vortex generates a dipolar electric charge distribution, which in turn produces a dipolar elastic stress on the positive substrate, and this leads to a net force on the positive substrate. A similar analysis was carried out by Nozières and Vinen [@nozieres66], and their results were essentially the same. Recent measurements by Zhu, Brandstrom and Sundqvist [@zhu97] support a fairly large value of the Magnus force. Conclusions =========== We have succeeded in determining the transverse force on a vortex in a neutral superfluid under assumptions that are both general and reasonably realistic. Like most other exact results in quantum many-body theory, these results are related to general conservation laws, and apply, in a slightly different form, to classical systems as well. Our generalization to superconductors is far from realistic, since it relies on the uniformity of the positive substrate. With a uniform positive substrate an electron gas has infinite conductivity, even in the absence of a pairing interaction, so our results can only form a first step towards a plausible theory of the Magnus force in superconductors. We may be able to extend the results from a uniform substrate to an ideal periodic substrate, but even that is quite inadequate for the description of a real metal. We have to be able to take the next step of considering disorder, but there is no chance that we will be able to get exact results in that case. It would also be interesting to generalize these results to finite systems, nonzero frequency of the vortex motion, and a finite density of vortex lines. Another line that we are pursuing is the connection between the Magnus force and the quantization of the vortex line. We know that there is an intimate connection between the strength of the Lorentz force on an electron and the density of degenerate levels of an electron in a magnetic field, and there are good reasons to think that there is a similar connection between the strength of the Magnus force on a vortex and the density of degenerate levels for a vortex. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported in part by NSF Grant No. DMR 95-28345. [99]{} Onsager L 1949. Nuovo Cimento [**6**]{}, Suppl. 2, 249. Thouless DJ, Ao P and Niu Q 1996.
Thouless DJ, Ao P and Niu Q 1996. Phys. Rev. Lett., 3758.
Wexler C 1997. Phys. Rev. Lett. [**79**]{}, 1321.
Volovik GE 1996. Phys. Rev. Lett. [**77**]{}, 4687.
Haldane FDM and Wu YS 1985. Phys. Rev. Lett. [**55**]{}, 2887.
Thouless DJ, Ao P and Niu Q 1993. Physica A [**200**]{}, 42.
Varoquaux E and Avenel O 1994. Physica B [**197**]{}, 306.
Steinhauer J, Schwab K, Mukharsky Y, Davis JC and Packard RE 1995. Phys. Rev. Lett. [**74**]{}, 5056.
Feynman RP 1955. In [*Progress in Low Temperature Physics 1,*]{} ed. C. J. Gorter (North-Holland, Amsterdam), pp. 17-53.
Ginzburg VL and Pitaevskii LP 1958. Zhur. Eksp. Teor. Fiz. [**34**]{}, 1240 \[translation in Soviet Phys. JETP [**7**]{}, 858\].
Pitaevskii LP 1961. Zhur. Eksp. Teor. Fiz., 454 \[translation in Soviet Phys. JETP [**13**]{}, 451\].
Gross EP 1961. Nuovo Cimento [**20**]{}, 454.
Duan JM and Leggett AJ 1992. Phys. Rev. Lett. [**68**]{}, 1216.
Duan JM 1994. Phys. Rev. B [**49**]{}, 12381.
Demircan E, Ao P and Niu Q 1996. Phys. Rev. B [**54**]{}, 10027.
Donnelly RJ 1991. [*Quantized vortices in helium II*]{} (Cambridge University Press).
Volovik GE 1993. Sov. Phys. JETP [**77**]{}, 435.
Volovik GE 1995. JETP Lett. [**62**]{}, 65.
Stone M 1996. Phys. Rev. B [**54**]{}, 13222.
Pitaevskii LP 1958. Zhur. Eksp. Teor. Fiz., 1271 \[translation in Soviet Phys. JETP [**8**]{}, 888\].
Iordanskii SV 1965. Zhur. Eksp. Teor. Fiz., 225 \[translation in Sov. Phys. JETP [**22**]{}, 160\].
Cleary RM 1968. Phys. Rev. [**175**]{}, 587.
Sonin EB 1997. Phys. Rev. B [**55**]{}, 485.
Bardeen J and Stephen MJ 1965. Phys. Rev. [**140**]{}, A1197.
Nozières P and Vinen WF 1966. Phil. Mag. [**14**]{}, 667.
Geller MG, Wexler C and Thouless DJ, in preparation.
Zhu XM, Brandstrom E and Sundqvist B 1997. Phys. Rev. Lett. [**78**]{}, 122.
---
abstract: 'In this work we provide an updated description of the [*Vertical Current Approximation Nonlinear Force-Free Field (VCA-NLFFF)*]{} code, which is designed to measure the evolution of the potential, nonpotential, and free energies, as well as the magnetic energies dissipated during solar flares. This code provides a complementary and alternative method to existing traditional NLFFF codes. The chief advantages of the VCA-NLFFF code over traditional NLFFF codes are the circumvention of the unrealistic assumption of a force-free photosphere in the magnetic field extrapolation method, the capability to minimize the misalignment angles between observed coronal loops (or chromospheric fibril structures) and theoretical model field lines, as well as computational speed. In performance tests of the VCA-NLFFF code, by comparing with the NLFFF code of Wiegelmann (2004), we find agreement in the potential, nonpotential, and free energy within a factor of $\lapprox 1.3$, but the Wiegelmann code yields on average a factor of 2 lower flare energies. The VCA-NLFFF code is found to detect decreases in flare energies in most X, M, and C-class flares. The successful detection of energy decreases during a variety of flares with the VCA-NLFFF code indicates that current-driven twisting and untwisting of the magnetic field is an adequate model to quantify the storage of magnetic energies in active regions and their dissipation during flares. The VCA-NLFFF code is also publicly available in the [*Solar SoftWare (SSW)*]{}.'
author:
- 'Markus J. Aschwanden$^1$'
title: ' The Vertical Current Approximation Nonlinear Force-Free Field Code - Description, Performance Tests, and Measurements of Magnetic Energies Dissipated in Solar Flares'
---

INTRODUCTION
==============

The measurement of the magnetic field in the solar corona is one of the major challenges in solar physics, while measuring the photospheric field is a long-standing industry. Some researchers state that the coronal field cannot be measured (directly), but we take the standpoint here that a successful modeling method that matches the observed coronal loop geometries actually equates to a real measurement of the coronal magnetic field. The knowledge of the coronal magnetic field is paramount in many problems in solar physics, such as coronal seismology, coronal heating, magnetic energy storage, the solar wind, magnetic instabilities, magnetic reconnection, and magnetic energy dissipation in solar flares and coronal mass ejections, which further ties into the global energetics of particle acceleration and propagation in the coronal and heliospheric plasma. While a potential field model is a suitable tool to explain the approximate geometry of coronal loops, a more important capability is the measurement of the nonpotential or free energy, which can be liberated in the solar corona, trigger magnetic instabilities, and drive eruptive phenomena on the Sun and stars. Traditional methods compute the magnetic field in the solar corona by potential-field extrapolation of the photospheric line-of-sight component $B_{z}(x,y)$ of the magnetic field, or by force-free extrapolation of the photospheric 3D vector field ${\bf B}(x,y)$. Although these methods have been widely and frequently used in the solar physics community during the last three decades, inconsistencies with the observed geometry of coronal loops have been noticed recently (Sandman et al. 2009; DeRosa et al.
2009), since coronal loops are supposed to accurately trace out the magnetic field in a low plasma-$\beta$ corona (Gary 2001). Misalignment angles between theoretical [*nonlinear force-free field (NLFFF)*]{} solutions and observed loop directions amount to $\mu \approx 24^\circ-44^\circ$ for both potential and nonpotential field models (DeRosa et al. 2009). Several studies have been carried out to pin down the uncertainties of NLFFF codes, investigating insufficient fields-of-view, the influence of the spatial resolution, insufficient constraints at the boundaries of the computation box, and the violation of the force-free assumption in the lower chromosphere (Metcalf et al. 2008; DeRosa et al. 2009, 2015), but a solution that reconciles theoretical magnetic field models with the observed geometry of coronal loops has not been achieved yet and requires a new approach. There are thus two different types of NLFFF codes. The first type is the traditional NLFFF code that uses the 3D vector field ${\bf B}(x,y)=[B_x(x,y), B_y(x,y), B_z(x,y)]$ from a vector magnetograph as input for the photospheric boundary and uses an extrapolation scheme to compute magnetic field lines at coronal heights that are consistent with the photospheric boundary condition and fulfill the divergence-free and force-free conditions. Examples and comparisons of such recent NLFFF codes are given in Metcalf et al. (2008) and DeRosa et al. (2009, 2015), which include the optimization method (Wheatland et al. 2000; Wiegelmann 2004; Wiegelmann et al. 2006, 2008; Wiegelmann and Inhester 2010), the magneto-frictional method (Valori et al. 2007, 2010), the Grad-Rubin method (Wheatland 2007; Amari et al. 2006), the conservation-element/solution-element space-time scheme (CESE-MHD-NLFFF: Jiang and Feng 2013), and other methods. The second type is an alternative NLFFF code, which uses only the line-of-sight magnetogram $B_z(x,y)$ to constrain the potential field, while forward-fitting of an analytical approximation of a special NLFFF solution in terms of vertical currents to (automatically traced) coronal (or chromospheric) loop coordinates $[x(s),y(s)]$ is carried out in order to determine the nonlinear force-free $\alpha$-parameters for a number of unipolar (subphotospheric) magnetic sources. The theory of the vertical-current approximation was originally derived in Aschwanden (2013a), while the numerical code (called the VCA-NLFFF code here) has been continuously developed and improved in a number of previous studies (Aschwanden and Sandman 2010; Sandman and Aschwanden 2011; Aschwanden et al. 2012, 2014a, 2014b; Aschwanden 2013a, 2013b, 2013c, 2015; Aschwanden and Malanushenko 2013). Because these recent developments have now reached an unprecedented level of accuracy in the determination of nonpotential magnetic energies, it is timely to provide a comprehensive description and performance tests of the latest version of the VCA-NLFFF method. A related forward-fitting code, using a quasi-Grad-Rubin method to match an NLFFF solution to observed coronal loops, has been pioneered independently (Malanushenko et al. 2009, 2011, 2012, 2014). In this study we provide a short analytical description of the VCA-NLFFF method.
We start with a description of the organization of the code (Section 2), and proceed with the three major parts of the code: (i) the determination of the potential field (Section 3), (ii) the automated tracing of coronal and chromospheric curvi-linear structures (Section 4), and (iii) the forward-fitting of the VCA-NLFFF model to the observed loop geometries (Section 5). Besides brief analytical descriptions, we provide extensive performance tests using recently published [*Atmospheric Imaging Assembly (AIA)*]{} (Lemen et al. 2012) and [*Helioseismic and Magnetic Imager (HMI)*]{} (Scherrer et al. 2012) data from the [*Solar Dynamics Observatory (SDO)*]{} (Pesnell et al. 2011), with particular emphasis on the determination of the time evolution of the free (magnetic) energy in active regions and the dissipation of magnetic energies during X-class flares. Discussions and conclusions are provided in Sections 6 and 7.

ORGANIZATION OF THE VCA-NLFFF CODE
====================================

In contrast to traditional NLFFF codes, such as the optimization method (Wheatland et al. 2000; Wiegelmann and Inhester 2010), the magneto-frictional method (Valori et al. 2007, 2010), or the Grad-Rubin method (Wheatland 2007; Amari et al. 2006), which all are designed to match [*photospheric*]{} data, the VCA-NLFFF code developed here is designed to match [*coronal*]{} as well as [*chromospheric*]{} data, a capability that does not exist in traditional NLFFF codes. The VCA-NLFFF code has a modular architecture, which can be grouped into three major sections: (1) the decomposition of a magnetic line-of-sight map into a number of magnetic charges; (2) the automated recognition of curvi-linear features in coronal (EUV) and chromospheric (UV) images; and (3) the forward-fitting of a nonpotential magnetic field model to the observed curvi-linear patterns of coronal loops or chromospheric fibrils, which we will describe in turn. A flow chart of the various modules of the VCA-NLFFF code is depicted in Fig. 1.

Time Grid
-----------

The initial input for a series of runs is specified by a starting time $t_{start}$ (in the format of universal time, UT), a desired time interval for the entire duration ($t_{dur}$) of magnetic field computations, a time cadence ($t_{cad}$), a heliographic position in longitude $l(t=t_{start})$ and latitude $b(t=t_{start})$ for the center of the chosen field-of-view specified at the starting time, and a desired field-of-view (FOV) size for a rectangular subimage (in cartesian coordinates) in which a solution of the magnetic field is computed. The subimage should be chosen inside the solar disk, because foreshortening near the limb hampers any type of magnetic modeling. The time input information defines a time series, $$t_i = t_{start} + i \times t_{cad} \ , i=1,...,n_t \ ,$$ where the number $n_t$ of time frames amounts to, $$n_t={t_{dur} \over t_{cad}} \ ,$$ for which magnetic solutions are computed. For flare events we typically use a cadence of $t_{cad}$=6 minutes (down to 1 min) and a duration $t_{dur} = \Delta t_{flare} + 2 t_{margin}$ that corresponds to the flare duration $\Delta t_{flare}$ augmented with a margin of $t_{margin}=1.0$ hr before and after the flare event. For the study of the time evolution of an active region, for instance, we used a total duration of 5 days with a cadence of $t_{cad}=6$ minutes, which requires $n_t = 5 \times 24 \times 10 = 1200$ time steps (Fig. 2).
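To make the bookkeeping of Eqs. (1)-(2) concrete, the following minimal Python sketch generates such a catalog of time steps for a flare run (the operational VCA-NLFFF code is an IDL/SSW implementation; the function and variable names here are illustrative only):

```python
from datetime import datetime, timedelta

def time_grid(t_start, t_dur_hrs, t_cad_min):
    """Time steps t_i = t_start + i * t_cad, i = 1,...,n_t (Eqs. 1-2)."""
    n_t = int(round(t_dur_hrs * 60.0 / t_cad_min))   # n_t = t_dur / t_cad
    return [t_start + timedelta(minutes=i * t_cad_min) for i in range(1, n_t + 1)]

# Example: a 2.0-hr flare interval padded by +/- 1.0 hr margins,
# sampled with a 6-minute cadence, i.e., n_t = 40 time steps.
t_start = datetime(2014, 3, 29, 16, 40)
times = time_grid(t_start, t_dur_hrs=2.0 + 2 * 1.0, t_cad_min=6.0)
print(len(times), times[0], times[-1])
```

For the 5-day active region study quoted above, the same call with $t_{dur}=120$ hrs and a 6-minute cadence reproduces the $n_t=1200$ time steps.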
The architecture of the VCA-NLFFF code entails sequential processing of the three major tasks to compute nonpotential field solutions for each single time step, but an arbitrary number of time steps can be simultaneously processed in parallel, since the code treats all data from each time step independently. The efficient mode of parallel processing allows us to compute results in real time, in principle for any time cadence, if a sufficient number of parallel runs is organized. Heliographic Coordinates -------------------------- The heliographic position of the center of the chosen FOV subimages is automatically updated from the change in heliographic longitude according to the differential rotation, $$l(t) = l(t_{start}) + {(t-t_{start}) \over t_{syn}} \times 360^\circ \ ,$$ with the synodic period being $t_{syn}(b=0) = 27.2753$ days. The absolute accuracy of the time-dependent heliographic coordinates is not critical in the accuracy and continuity of the time-dependent free energy $E_{free}(t)$, as long as the chosen field-of-view has weak magnetic fields at the boundaries. The relative accuracy among different wavelengths is given by the instrument cadence, which is $\Delta t_{cad}=12$ s for AIA data from SDO. Based on this input, the code generates first a catalog with $n_t$ time entries that contain the times $t_i$ and the time-dependent heliographic positions $[l_i,b_i]=[l(t_i),b(t_i)]$ for each time step. All computations are carried out in a cartesian coordinate system $[x,y,z]$ with the origin of the coordinate system at Sun center and $z$ being aligned with the observer’s line-of-sight. All solar images are required to have the solar North-South direction co-aligned with the $y$-axis (which is provided in the level-1.5 data). The heliographic positions $[l,b]$ are transformed into cartesian coordinates $[x,y]$ by taking the sinusoidal (annual) variation of the latitude $b_0(t)$ of the Sun center into account, $$b_0(t) = 7.24^{\circ} \sin{\left[ 2 \pi (t-t_0) \right]} \ ,$$ where $t_0$ is the time when the tilt angle of the solar axis is zero ($b_0=0$), when the solar equator coincides with the East-West axis, which occurs on June 6. From this we can calculate the cartesian coordinates $[x_0, y_0]$ of a target with heliographic coordinates $[l, b]$ by, $$x_0 = \sin{( l {\pi \over 180^\circ})} \cos{(b {\pi \over 180^\circ})} \ ,$$ $$y_0 = \sin{\left[ ( b-b_0 ) {\pi \over 180^\circ} \right]} \ .$$ The left and right-hand side of a chosen square field-of-view with length $FOV$ has then the cartesian coordinates $x_1=x_0-FOV/2$ and $x_2=x_0+FOV/2$, while the bottom and top boundaries are $y_1=y_0-FOV/2)$ and $y_2=y_0+FOV/2$. The relationship between the cartesian and heliographic coordinate systems is illustrated in Fig. 2 for a time sequence of EUV subimages that follow the track of NOAA active region 11158. Wavelength Selection ---------------------- The data input requires a minimum of a line-of-sight magnetogram and at least one EUV (or UV) image that shows curvi-linear features, such as coronal loops or chromospheric fibrils. However, an arbitrary large number of EUV (or UV) images can be supplied for each time step. The present version of the code deals with HMI/SDO magnetograms and EUV and UV images from AIA/SDO (94, 131, 171, 193, 211, 304, 335, 1600 ), from the [*Interface Region Imaging Spectrograph (IRIS)*]{} (De Pontieu et al. 
2014) slit-jaw images (1400, 2796, 2832 ), from the [*Interferometric Bidimensional Spectrometer (IBIS)*]{} (Cavallini 2006) (8542 ), or from the [*Rapid Oscillations in the Solar Atmosphere (ROSA)*]{} (Jess et al. 2010) instrument (6563 ). In the present code version, up to 8 different wavelengths from the same instrument are processed in a single run. We will subdivide the wavelengths chosen in each instrument further by coronal (94, 131, 171, 193, 211, 335 ) and chromospheric contributions (304, 1400, 1600, 1400, 2796, 2832, 6563, 8542 ). A sample of such a multi-wavelength data set used in magnetic modeling with the VCA-NLFFF code is depicted in Fig. 3, revealing loop structures in coronal wavelengths as well as fibrils and moss in chromospheric wavelengths, which pose an intricate problem for automated tracing of magnetic field-aligned curvi-linear features. The data input in the VCA-NLFFF code is organized in two ways, either by reading the data FITS files from a specified directory on the local computer, or by searching level-1.5 data from local disk cache, or remotely from the official SDO and IRIS data archives (at Stanford University and Lockheed Martin). AIA/SDO, HMI/SDO, and IRIS level-1.5 data are available in form of FITS files with all necessary pointing information, while other data (e.g., from IBIS and ROSA) need to be prepared with a minimal FITS header that contains the pointing information (specified with the FITS descriptors NAXISi, CDELT1i, CRVALi, CRPIXi, and CROTAi, i=1,2; see Thompson 2006). Performance Test Data ----------------------- The validity of the results obtained with the VCA-NLFFF code can most rigorously be tested with independent data from other NLFFF codes. The most suitable data set containing published results of potential, nonpotential, and free energies has been computed with the weighted optimization NLFFF method (Wiegelmann 2004; Wiegelmann et al. 2006, 2008; Wiegelmann and Inhester 2010) by Xudong Sun for active region AR 11158 during 5 days (2011 February 12-17) with a cadence of 12 minutes, yielding 600 time steps for NLFFF comparisons (Aschwanden, Sun, and Liu 2014b). The corresponding VCA-NLFFF solutions are calculated for a time cadence of 6 minutes, yielding 1200 time steps (Fig. 2). We will use a second test data set of 11 GOES X-class flares with measurements of potential, nonpotential, and free energies, computed by Xudong Sun using Wiegelmann’s NLFFF method, with a 12-minute cadence during each flare time interval, providing 119 time steps for NLFFF comparisons (Aschwanden, Xu, and Jing 2014a). A third test data set is used here from the 2014 March 29 GOES X-class X1.0 flare, the first X-class flare that was observed by IRIS, where we have simultaneous coverage with AIA and IRIS, so that magnetic energies based on wavelengths from the coronal as well as chromospheric regime can be compared, which was the topic of a previous study (Aschwanden 2015). In particular we will use this test data set for a parametric study of the control parameters operating in the VCA-NLFFF code (Section 5.6), as listed in Table 1. A list of the observing times, time ranges, cadences, number of time steps, GOES class, heliographic position, NOAA active region number, and observing instruments of the three test data sets is compiled in Table 2. 
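As a brief illustration of the minimal pointing information required when preparing non-SDO/IRIS data for the code (Section 2.3), the following Python sketch writes an image to a FITS file with the pointing keywords listed there; it uses astropy.io.fits rather than the SSW/IDL routines of the actual pipeline, and all numerical values are placeholders only:

```python
import numpy as np
from astropy.io import fits

image = np.zeros((512, 512), dtype=np.float32)   # placeholder image array

hdu = fits.PrimaryHDU(image)
hdr = hdu.header                 # NAXIS1/NAXIS2 are filled in automatically
hdr['CDELT1'] = 0.1              # plate scale in arcsec/pixel (placeholder)
hdr['CDELT2'] = 0.1
hdr['CRPIX1'] = 256.5            # reference pixel (image center)
hdr['CRPIX2'] = 256.5
hdr['CRVAL1'] = -350.0           # solar (x, y) of reference pixel in arcsec
hdr['CRVAL2'] = 250.0
hdr['CROTA2'] = 0.0              # roll angle; solar North aligned with y-axis
hdr['DATE-OBS'] = '2014-03-29T17:40:00'   # observation time (optional extra)

hdu.writeto('ibis_8542_prepped.fits', overwrite=True)
```

Level-1.5 AIA, HMI, and IRIS files already contain this pointing information, so no such preparation step is needed for them.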
THE POTENTIAL FIELD ===================== Analytical Description ------------------------ The simplest representation of a magnetic potential field that fulfills Maxwell’s divergence-free condition ($\nabla \cdot {\bf B}=0$) is a unipolar magnetic charge $m$ that is buried below the solar surface (Aschwanden 2013a), defining a magnetic field ${\bf B}_m({\bf x})$ that falls off with the square of the distance $r_m$, $${\bf B}_m({\bf x}) = B_m \left({d_m \over r_m}\right)^2 {{\bf r}_m \over r_m} \ ,$$ where $B_m$ is the magnetic field strength at the solar surface above a buried magnetic charge, $(x_m, y_m, z_m)$ is its subphotospheric position, $d_m$ is the depth of the magnetic charge, $$d_m = 1-\sqrt{x_m^2+y_m^2+z_m^2} \ ,$$ and ${\bf r}_m=[x-x_m, y-y_m, z-z_m]$ is the vector between an arbitrary location ${\bf x}=(x,y,z)$ in the solar corona and the location $(x_m, y_m, z_m)$ of the buried charge. We define a cartesian coordinate system $(x,y,z)$ with the origin at the Sun center and the direction $z$ chosen along the line-of-sight from Earth to Sun center. The distance $r_m$ from the magnetic charge is $$r_m = \sqrt{(x-x_m)^2+(y-y_m)^2+(z-z_m)^2} \ .$$ The absolute value of the magnetic field $B_m(r_m)$ is a function of the radial distance $r_m$ (with $B_m$ and $d_m$ being constants for a given magnetic charge), $$B_m(r_m) = B_m \left({d_m \over r_m}\right)^2 \ .$$ We can generalize from a single magnetic charge to an arbitrary number $n_{mag}$ of magnetic charges and represent the general magnetic field with a superposition of $n_{mag}$ buried magnetic charges, so that the potential field can be represented by the superposition of $n_{mag}$ fields ${\bf B}_m$ from each magnetic charge $m=1,...,n_{mag}$, $${\bf B}({\bf x}) = \sum_{m=1}^{n_{mag}} {\bf B}_m({\bf x}) = \sum_{m=1}^{n_{mag}} B_m \left({d_m \over r_m}\right)^2 {{\bf r_m} \over r_m} \ .$$ An example of a unipolar charge with a radial magnetic field is a single sunspot (Fig. 4, top panels), while a dipole field requires two magnetic charges with opposite magnetic polarity (Fig. 4, second row). Typically, the magnetic field of an active region can be represented by a superposition of $n_{mag} \approx 20-100$ unipolar magnetic sources, depending on the topological complexity of the magnetic field. Examples of magnetic models with $n_{mag}=10$ and $n_{mag}=100$ components are shown in Fig. 4 also (Fig. 4, third and bottom row panels). A numerical algorithm that deconvolves a line-of-sight magnetogram $B_z(x,y,z_{phot})$ has to solve the task of inverting the four observables $(B_z, \rho_m, z_m, w_m)$ into the four model parameters $(B_m, x_m, y_m, z_m)$ for each magnetic charge, where $B_z$ is the observed line-of-sight component of the magnetic field at the photosphere, $\rho_m$ is the apparent distance of a magnetic source from Sun center, $z_m=\sqrt{1-\rho_m^2}$ is the distance from the plane-of-sky (through Sun center), $w_m$ is the FWHM of the magnetic source, $B_m$ is the magnetic field of the magnetic charge $m$ at the solar surface, and $(x_m, y_m, z_m)$ are the 3D coordinates of a buried magnetic charge. The geometric parameters are defined in Fig. 5. An analytical derivation of the inversion of an observed line-of-sight magnetogram into the model parameters of the VCA-NLFFF code is described in Appendix A of Aschwanden et al. (2012). 
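A minimal numerical transcription of Eqs. (7)-(11) is sketched below in Python (the operational code is written in IDL within SSW; the function name and the example charge values are illustrative only). It evaluates the potential field of a set of buried unipolar charges at an arbitrary location, with all lengths in units of the solar radius:

```python
import numpy as np

def potential_field(x, charges):
    """Superposition of unipolar magnetic charges, Eqs. (7)-(11).

    x       : (3,) array, location in solar radii (origin at Sun center)
    charges : list of (B_m, x_m, y_m, z_m), with B_m in Gauss
    Returns the potential field vector B(x) in Gauss.
    """
    B = np.zeros(3)
    for B_m, x_m, y_m, z_m in charges:
        d_m = 1.0 - np.sqrt(x_m**2 + y_m**2 + z_m**2)   # depth of buried charge (Eq. 8)
        r_vec = x - np.array([x_m, y_m, z_m])           # vector from charge to x
        r_m = np.linalg.norm(r_vec)                     # distance (Eq. 9)
        B += B_m * (d_m / r_m)**2 * r_vec / r_m         # Eq. (7), summed as in Eq. (11)
    return B

# Example: a simple dipole represented by two opposite-polarity charges
charges = [(+1500.0, 0.05, 0.00, 0.997),
           (-1500.0, -0.05, 0.00, 0.997)]
print(potential_field(np.array([0.0, 0.0, 1.05]), charges))
```

The dipole example corresponds to the two-charge configuration in the second row of Fig. 4; an active region would typically be represented by $n_{mag} \approx 20-100$ such charges.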
Starting from an approximate initial guess of the aspect angle $\alpha$, since $tan{(\alpha)} =(\rho_m/z_m) \approx (\rho_p/z_p)$, we obtain an accurate value by iterating the following sequence of equations a few times, $$\begin{array}{ll} \alpha &\approx \arctan({\rho_p / z_p}) \\ \beta_p &=\arctan{\left[ \left( \sqrt{9 + 8 \tan^2 \alpha}-3 \right) / 4\ \tan{\alpha} \right]} \\ B_m &={ B_z / [\cos^2{\beta_p} \ \cos{(\alpha-\beta_p)}]} \\ \beta_2 &=\arccos{\left[ \left( (\cos{\beta_p})^3 / 2 \right)^{1/3} \right]} \\ d_m &={w / \left[ \tan{\beta_2}\ \cos{\alpha} \ (1-0.1\alpha) \right]} \\ r_m &=(1-d_m) \\ \rho_m &=\rho_p - d_m {\sin{(\alpha-\beta_p)} / \cos{\beta_p} } \\ z_m &=\sqrt{r_m^2-\rho_m^2} \\ x_m &=\rho_m \ \cos{\gamma} \\ y_m &=\rho_m \ \sin{\gamma} \\ \alpha &=\arctan({\rho_m / z_m}) \\ \end{array}$$ This decomposition procedure of a line-of-sight magnetogram $B_z(x,y)$ into a finite number $n_{mag}$ of unipolar magnetic charges, each one parameterized with 4 parameters $(B_m, x_m, y_m, z_m)$, allows us to compute the 3D potential field vectors ${\bf B}({\bf x})$ at any location of a 3D computation box above the photosphere (with $r=\sqrt{(x^2+y^2+z^2)} > 1$ solar radius), where the line-of-sight component $B_z(x,y)$ corresponding to the magnetogram is just one special component at the curved solar surface, while the transverse components $B_x(x,y)$ and $B_y(x,y)$ are defined by the same potential model with Eq. (11). This algorithm is able to deconvolve magnetograms out to longitudes of $l \lapprox 80^\circ$ with an accuracy of $q_e=E_{model}/E_{obs}=1.000 \pm 0.024$ in the conservation of the potential energy for a dipolar configuration (see Fig. 21 in Aschwanden et al. 2014a), but we limit the application to $l \lapprox 45^\circ$ for general magnetograms. The numerical VCA-NLFFF code contains 4 control parameters (Table 1): the number of magnetic charges $n_{mag}$, the width $w_{mag}$ of a local map where the magnetic source components are deconvolved, the depth range $d_{mag}$ in which magnetic charges are buried, and the degradation scale $n_{rebin}$ of magnetogram smoothing. While the number $n_{mag}$ of magnetic charges is a free parameter that can be selected by the user, the other control parameters were optimized for robustness of results using HMI magnetograms (with a pixel size of $0.5\arcsec$), and are set to the constants $w_{mag}=3$ pixels, $d_{mag}=20$ pixels, and $n_{rebin}=3$ pixels. The robustness of the results as a function of these control parameters is also shown in the parameter study in Fig. 15 of Aschwanden et al. (2012). Performance Test of Potential Energy -------------------------------------- In a first performance test we compare the potential energies $E_{pot}$ that have been computed simultaneously with the weighted optimization NLFFF code (Wiegelmann 2004; Wiegelmann et al. 2006, 2008; Wiegelmann and Inhester 2010), which we briefly call W-NLFFF in the following, and with our VCA-NLFFF code. 
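For readers who wish to reproduce the source decomposition that enters this comparison, the iterative inversion sequence of Eq. (12) can be transcribed compactly. The following Python sketch is illustrative only (the input values are placeholders, and the operational implementation is the IDL/SSW code); it converts one observed peak, characterized by $(B_z, \rho_p, w, \gamma)$, into the model parameters $(B_m, x_m, y_m, z_m)$:

```python
import numpy as np

def invert_source(B_z, rho_p, w, gamma, n_iter=5):
    """Iterative inversion of one magnetic source, following Eq. (12).

    B_z   : observed line-of-sight field at the peak (Gauss)
    rho_p : apparent distance of the peak from Sun center (solar radii)
    w     : FWHM of the source (solar radii)
    gamma : position angle of the peak in the plane of sky (rad), as in Eq. (12)
    """
    z_p = np.sqrt(1.0 - rho_p**2)
    alpha = np.arctan(rho_p / z_p)                      # initial aspect angle
    for _ in range(n_iter):
        beta_p = np.arctan((np.sqrt(9.0 + 8.0 * np.tan(alpha)**2) - 3.0)
                           / (4.0 * np.tan(alpha)))
        B_m = B_z / (np.cos(beta_p)**2 * np.cos(alpha - beta_p))
        beta_2 = np.arccos((np.cos(beta_p)**3 / 2.0)**(1.0 / 3.0))
        d_m = w / (np.tan(beta_2) * np.cos(alpha) * (1.0 - 0.1 * alpha))
        r_m = 1.0 - d_m
        rho_m = rho_p - d_m * np.sin(alpha - beta_p) / np.cos(beta_p)
        z_m = np.sqrt(r_m**2 - rho_m**2)
        x_m = rho_m * np.cos(gamma)
        y_m = rho_m * np.sin(gamma)
        alpha = np.arctan(rho_m / z_m)                  # updated aspect angle
    return B_m, x_m, y_m, z_m

print(invert_source(B_z=800.0, rho_p=0.4, w=0.02, gamma=0.5))
```

Note that the sequence assumes a source away from disk center ($\rho_p > 0$); after a few iterations the aspect angle $\alpha$ converges and the four parameters of the charge are fixed.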
The potential energy of an active region or flare is defined here as the volume integral of the magnetic potential energy integrated over the entire 3D computation box, with the volume defined by the chosen FOV in $x$ and $y$-direction, i.e., $[x_1,x_2]$ and $[y_1,y_2]$, while the $z$-range covers the height range bounded by the photosphere, $z_1(x,y)=\sqrt{(1-x^2-y^2)}$, and a curved surface at a height of $h_{max} =0.2$ solar radii, i.e., $z_2(x,y)=\sqrt{(1+h_{max})^2-x^2-y^2}$, $$E_{pot} = \int_{x_1}^{x_2} \int_{y_1}^{y_2} \int_{z_1(x,y)}^{z_2(x,y)} {B_{pot}^2(x,y,z) \over 8\pi} \ dx\ dy\ dz\ \ .$$ Note that the total magnetic energy density $B^2/8\pi=(B_x^2+B_y^2+B_z^2)/8\pi$ includes both the line-of-sight component $B_z$ and the two transverse components $B_x$ and $B_y$. The transverse components are much less accurately known than the line-of-sight component, by a factor of about 20 for HMI (Hoeksema et al. 2014). The observed transverse components enter the energy estimate with the W-NLFFF method, while they are self-consistently determined from the potential field model with the VCA-NLFFF method, and thus have an accuracy similar to that of the line-of-sight component in the VCA-NLFFF model. The total potential (or nonpotential) energy is found in a range of $E_{pot} \approx 10^{32} - 10^{33}$ erg for the active region NOAA 11158, and increases to $E_{pot} \approx 10^{33} - 10^{34}$ erg for X-class flares. In Fig. 6 we show the potential energies $E_{pot}^{VCA}$ measured with the VCA-NLFFF code versus the potential energies $E_{pot}^W$ measured with the W-NLFFF code. Both codes integrate nearly over the same volume (although the W-NLFFF code does not take the sphericity of the solar surface into account), but the exact boundaries are not critical since most of the potential energy comes from the central sunspot in the FOV area of an active region or flare. The scatter plots in Fig. 6 show a potential energy ratio of $q_{E,pot}=1.22 \pm 0.39$ for 600 measurements of AR 11158 during 5 days observed on 2011 Feb 12-17 (Aschwanden, Sun, and Liu 2014b), and $q_{E,pot}=0.76 \pm 0.18$ for 119 measurements of 11 X-class flares observed during 2011-2014 (Aschwanden, Sun, and Liu 2014b). Thus the two NLFFF methods agree on average within $\approx 25\%$ for the total potential energy. The accuracy is similar for the nonpotential energies (Fig. 6, middle panels), and for the free energy (Fig. 6, bottom panels). This accuracy is similar to the differences of 12%-24% in the potential energy that was found among different NLFFF codes for NOAA active region 10978 (DeRosa et al. 2015). There are three effects that most strongly influence the accuracy of the potential field measurement, namely the spatial resolution of the magnetogram, the finite number of magnetic source components in the magnetic model, and the asymmetry of sunspots. The spatial resolution of an HMI magnetogram is given by the pixel size of $0.5\arcsec$, which is further downgraded to $n_{rebin}=3$ pixels in our VCA-NLFFF code, which prevents a fragmentation into too many small magnetic elements that do not significantly contribute to the total energy.
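The volume integral of Eq. (13) can be discretized straightforwardly. The following Python sketch is illustrative only; it uses a simple Riemann sum and a single-charge placeholder field in place of the full source decomposition, and integrates $B^2/8\pi$ between the photospheric lower boundary $z_1(x,y)$ and the curved upper boundary $z_2(x,y)$:

```python
import numpy as np

def B_single_charge(x, B_m=1500.0, src=(0.05, 0.0, 0.997)):
    """Potential field of a single buried charge, Eq. (7) (placeholder model)."""
    d_m = 1.0 - np.linalg.norm(src)
    r_vec = x - np.array(src)
    r_m = np.linalg.norm(r_vec)
    return B_m * (d_m / r_m)**2 * r_vec / r_m

def potential_energy(x1, x2, y1, y2, h_max=0.2, n=40):
    """Riemann-sum version of Eq. (13), lengths in solar radii, field in Gauss."""
    xs, ys = np.linspace(x1, x2, n), np.linspace(y1, y2, n)
    dx, dy = (x2 - x1) / n, (y2 - y1) / n
    E = 0.0
    for x in xs:
        for y in ys:
            rho2 = x**2 + y**2
            z1 = np.sqrt(max(1.0 - rho2, 0.0))        # photospheric lower boundary
            z2 = np.sqrt((1.0 + h_max)**2 - rho2)     # curved upper boundary
            dz = (z2 - z1) / n
            for z in np.linspace(z1, z2, n):
                B = B_single_charge(np.array([x, y, z]))
                E += np.dot(B, B) / (8.0 * np.pi) * dx * dy * dz
    return E   # in G^2 R_sun^3; multiply by R_sun^3 (in cm^3) to obtain erg

print(potential_energy(-0.1, 0.2, -0.15, 0.15))
```

Since $E_{pot}$ is computed from the decomposed source model rather than from the magnetogram pixels directly, any degradation introduced by the rebinning of the magnetogram propagates into the energy estimate.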
We can measure this effect by calculating the ratio $q_{E,rebin}$ of potential energies from both the magnetogram with the original full resolution of $0.5\arcsec$ and the rebinned magnetogram with $1.5\arcsec$ used in the decomposition of magnetic sources, $$q_{E,rebin} = {\int \int B_{z,rebin}^2(x,y) \ dx\ dy \over \int \int B_{z,full}^2(x,y) \ dx\ dy } \ .$$ We find a ratio of $q_{E,rebin}=0.95\pm0.02$ for the potential energies due to rebinning, for the 1195 time steps of AR 11158 (Fig. 7, top left panel), and $q_{E,rebin}=0.95\pm0.06$ for 11 X-class flares (Fig. 7, top right panel). Thus the degradation of the magnetogram introduces only an underestimate of $\approx 5\%$ in the potential or nonpotential energy. Similarly, we can investigate the effect of the finite number $n_{mag}$ of magnetic source components, by calculating the potential energies from the line-of-sight magnetogram, and by comparing with the potential energies obtained from the model (with $n_{mag} \approx 30$ source components), $$q_{E,model} = {\int \int B_{z,model}^2(x,y) \ dx\ dy \over \int \int B_{z,obs}^2(x,y) \ dx\ dy } \ .$$ We find a ratio of $q_{E,model}=0.87\pm0.13$ for the potential energies, using a model with 30 magnetic sources, for the 1195 time steps of AR 11158 (Fig. 7, bottom left panel), and $q_{E,model}=0.68\pm0.12$ for 11 X-class flares (Fig. 7, bottom right panel). Thus the model representation with $\approx 30$ magnetic source components leads to an average underestimate of about 13% for magnetic energies in active regions, and up to 32% for the largest flares. The optimization NLFFF code of Wiegelmann is found to have a similar degree of uncertainty, based on the ratio of the total potential energy between the model and observed data, i.e., $q_{E,model} =1.12-1.24$ (see ratio $E/E_0$ in Table 2 of DeRosa et al. 2015). Further testing revealed that the highest accuracy is not necessarily controlled by the number of magnetic source components, although this is true as a statistical trend. Ultimate accuracy can be achieved when the model parameterization matches the observed magnetic field distribution, which is fulfilled to the highest degree for sunspots or magnetic sources with spherical symmetry, due to the spherical symmetry of the model definition of vertical currents in the VCA-NLFFF method (Eq. 11). In contrast, asymmetric sunspots require a deconvolution into secondary source components, which generally fit the tails of an asymmetric magnetic field distribution less accurately than the primary central source at a local peak in the magnetogram. AUTOMATED LOOP TRACING ======================== General Description --------------------- The second major task of the VCA-NLFFF code is the automated tracing of loop coordinates $[x(s), y(s)]$ (as a function of the loop length coordinate $s$) in a coronal or chromospheric image, observed in EUV, UV, optical, or H$\alpha$ wavelengths. The underlying principle of this task corresponds to convert a 2-D brightness image $(x,y)$ into a set of 1-D curvi-linear structures $[x(s),y(s)]$. 
Although there exist a number of software codes that aim to perform the task of automated pattern recognition (e.g., see image segmentation methods in Gonzalez and Woods 2008), it is our experience that none of the standard methods yields satisfactory results for solar data, and thus we developed a customized code that is optimized for automated detection of curvi-linear features with relatively large curvature radii (compared with the width of a loop structure) observed in solar high-resolution images. In an initial study, five different numerical codes, designed for automated tracing of coronal loops in [*Transition Region And Coronal Explorer (TRACE)*]{} images, were quantitatively compared (Aschwanden et al. 2008), including: i) the oriented-connectivity method (OCM), ii) the dynamic aperture-based loop segmentation method, iii) the unbiased detection of curvi-linear structures code, iv) the oriented-direction method, and v) the ridge detection by automated scaling method. One scientific result of this study was that the size distribution of automatically detected loops follows a cumulative powerlaw distribution $N(>L) \propto L^{-\beta}$ with $\beta \approx 2.0-3.2$, which indicates a scale-free process that determines the distribution function of coronal loop segments. One of the original five codes was developed further, a prototype based on the method of [*Oriented Coronal CUrved Loop Tracing (OCCULT-1)*]{}, which approached an accuracy matching the visual perception of “hand-traced” loops (Aschwanden 2010). An improved code (OCCULT-2) includes a second-order extrapolation technique (Fig. 8) for tracing of curvi-linear features (Aschwanden et al. 2013a), permitting extended applications to AIA images, to chromospheric H$\alpha$ images, as well as applications to images in biophysics. While AIA/SDO images have a pixel size of $0.6\arcsec$, the automated loop tracing was also extended to higher spatial resolution, to images with pixel sizes corresponding to $0.16\arcsec$ (IRIS) and $0.1\arcsec$ (IBIS, ROSA) (Aschwanden, Reardon, and Jess 2016). The analytical description of the OCCULT-2 code is given in Appendix A.1 of Aschwanden, De Pontieu, and Katrukha (2013a). The IDL source code is available in the SolarSoftWare (SSW), see the IDL procedure [*LOOPTRACING$\_$AUTO4.PRO*]{}, and a tutorial is available at the website [*http://www.lmsal.com/$\sim$aschwand*]{} [*/software/tracing/tracing$\_$tutorial1.html*]{}. We briefly summarize the numerical algorithm and the control parameters that can affect the results (see also Table 1). The first step is the background subtraction, which can be quantified by a minimum level in the original intensity image ($q_{thresh,1}$), as well as by a minimum level in the bipass-filtered image ($q_{thresh,2}$). A bipass-filtered image is then created from a lowpass-filtered image $I_{low}(x,y)$ (i.e., smoothing with a boxcar of $n_{sm1}$ pixels, which suppresses random data noise) and a more strongly smoothed image $I_{high}(x,y)$ (i.e., smoothing with a boxcar of $n_{sm2}=n_{sm1}+2$ pixels, which retains only the large-scale background), while the bipass-filtered image $\Delta I(x,y)$ is the difference between the two, $$\Delta I(x,y) = I_{low}(x,y) - I_{high}(x,y) \ ,$$ so that curvi-linear structures narrower than $n_{sm2}$ pixels stand out as positive ridges. We find that $n_{sm1}=1$ and $n_{sm2}=n_{sm1}+2=3$ pixels yield the best results for most AIA images (Aschwanden et al. 2013a). The subtraction of the strongly smoothed image eliminates large-scale variations in the background, while the smoothing with the small boxcar $n_{sm1}$ eliminates random data noise.
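A minimal Python sketch of this bipass filter, using scipy's uniform (boxcar) filter in place of the IDL smoothing of the actual OCCULT-2 routine, is given below (the synthetic test image is illustrative only):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bipass_filter(image, n_sm1=1, n_sm2=None):
    """Bipass-filtered image Delta I = I_low - I_high (Eq. 16).

    I_low  : image smoothed with a boxcar of n_sm1 pixels (suppresses pixel noise)
    I_high : image smoothed with a boxcar of n_sm2 = n_sm1 + 2 pixels (background)
    """
    if n_sm2 is None:
        n_sm2 = n_sm1 + 2
    i_low = uniform_filter(image, size=n_sm1) if n_sm1 > 1 else image
    i_high = uniform_filter(image, size=n_sm2)
    return i_low - i_high

# Example on synthetic data: a faint thin "loop" on a smooth sloping background
img = np.random.normal(0.0, 0.1, (128, 128)) + np.linspace(0, 5, 128)[None, :]
img[64, 20:100] += 1.0                      # thin bright ridge
delta_i = bipass_filter(img, n_sm1=1)
print(delta_i[64, 60] > delta_i[60, 60])    # the ridge stands out as positive flux
```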
For noisy images, a somewhat higher value is recommended, such as $n_{sm1}=3$ and $n_{sm2}=n_{sm1}+2=5$. The OCCULT-2 algorithm traces individual curvi-linear structures by first finding the location ($x_1, y_1$) of the absolute intensity maximum in the image $\Delta I(x,y)$, then measures the direction from the first derivative $dy/dx(x=x_1, y=y_1)$ of the ridge that passes through the flux maximum location ($x_1, y_1$), as well as the curvature radius from the second derivative $d^2y/dx^2(x=x_1, y=y_1)$ (see geometric diagram in Fig. 8), and traces the direction of the ridge pixel by pixel, until the end of the segment traced in the forward direction reaches a negative flux (in the bipass-filtered image). The tracing is then also carried out in the backward direction, in order to find the other end of the loop segment, yielding the coordinates $[x(s), y(s)]$ of the entire traced loop as a function of the loop length coordinate $s$. The pixels that are located within a half width ($w_{half}=n_{sm2}/2-1$) of the ridge coordinates $[x(s), y(s)]$ are then erased and the residual image serves as input for tracing of the second structure, starting with the next flux maximum ($x_2, y_2$), and repeating the same steps to find the coordinates of the second brightest loop. The algorithm has 9 control parameters (Table 1), which include the maximum number of traced structures $n_{struc}$ per wavelength, the smoothing boxcars $n_{sm1}$ and $n_{sm2}=n_{sm1}+2$ of the bipass filter, the minimum accepted loop length $l_{min}$, the minimum allowed curvature radius $r_{min}$, the field line step $\Delta s$ along the (projected) loop coordinate, the flux threshold $q_{thresh,1}$ (in units of the median value of positive fluxes in the original image), the filter flux threshold $q_{thresh,2}$ (in units of the median value of the positive bipass-filtered fluxes), and a maximum proximity distance $d_{prox}$ from the location of the next-located magnetic source. An additional parameter in the original code is $n_{gap}$, which allowed segments with negative bipass-filtered fluxes to be skipped, but it is set to $n_{gap}=0$ in the current version of the code.

Numerical Examples
--------------------

Examples of bipass-filtered images of all AIA and IRIS (slit-jaw) wavelengths are shown in Fig. 9, which correspond to the observed original images shown in Fig. 3, while the corresponding automated loop tracings are shown in Fig. 10, sampled for a minimum loop length of $l_{min} \ge 5$ pixels. A few special features of the loop tracing code applied to solar data include the elimination of curvi-linear artifacts resulting from the boundaries of image portions with saturated fluxes or pixel bleeding (occurring in CCDs, see AIA 171 and 193 in Fig. 9c and 9d), as well as the elimination of strictly horizontal and vertical features that result from edges of incomplete image data, vignetting, or slit markers (especially in slit-jaw images from IRIS, see Fig. 9). Scanning through thousands of images we find many other curvi-linear features that appear not to be aligned with the magnetic field, which are eliminated by restricting the maximum allowed misalignment angle $\mu_2$ iteratively to smaller values in the forward-fitting algorithm of the VCA-NLFFF code.

THE NONLINEAR FORCE-FREE FIELD
================================

Analytical Description
------------------------

A non-potential field can be constructed by introducing a vertical current above each magnetic charge, which introduces a helical twist about the vertical axis (e.g., Fig.
4, top right panel). There is an exact analytical solution for a straight uniformly twisted flux tube (e.g., Priest 1982), which can be generalized to the 3-D spherical coordinates of a vertical flux tube that expands in cross-section according to the divergence-free and force-free condition and is accurate to second-order of the force-free $\alpha$-parameter (Aschwanden 2013a). This vertical-current approximation can be expressed by a radial potential field component $B_r(r, \theta)$ and an azimuthal non-potential field component $B_{\varphi}(r, \theta)$ in spherical coordinates $(r, \varphi, \vartheta)$, $$B_r(r, \theta) = B_0 \left({d^2 \over r^2}\right) {1 \over (1 + b^2 r^2 \sin^2{\theta})} \ ,$$ $$B_\varphi(r, \theta) = B_0 \left({d^2 \over r^2}\right) {b r \sin{\theta} \over (1 + b^2 r^2 \sin^2{\theta})} \ ,$$ $$B_\theta(r, \theta) \approx 0 \ ,$$ $$\alpha(r, \theta) \approx {2 b \cos{\theta} \over (1 + b^2 r^2 \sin^2{\theta})} \ .$$ $$b = {2 \pi n_{twist} \over l} \ ,$$ where $\alpha(r, \theta)$ generally known as force-free $\alpha$-parameter, is related to the parameter $b$ that expresses the number $n_{twist}$ of helical turns over the loop length $l$. The non-potential field of each magnetic charge $m=1,...,n_{mag}$ can be described with this approximation, and the associated field components $B_r$ and $B_{\varphi}$ have to be transformed into a common cartesian coordinate system ${\bf B}_m^{np}({\bf x})$, and can then be added in linear superposition, $${\bf B}^{np}({\bf x}) = \sum_{m=1}^{n_{mag}} {\bf B}_m^{np} ({\bf x}) \ ,$$ which still fulfills the divergence-freeness and force-freeness to second-order accuracy (Aschwanden 2013a). This way we have a space-filling non-potential field solution that is parameterized by five variables $(B_m$, $x_m$, $y_m$, $z_m$, $\alpha_m$) for each magnetic charge $m=1,..,n_{mag}$, whereof the first four variables are already determined from the potential-field solution. From this parameterization we obtain directly a positively defined expression for the free energy $E_{free}$ (Aschwanden 2013b), which is the difference between the non-potential field $E_{NP}$ and the potential field energy $E_P$ integrated over the 3-D computation box, $$E_{free} = E_{NP} - E_P = \int {1 \over 8\pi} B_{\varphi}({\bf x})^2 \ q_{iso} \ dV \ .$$ where $q_{iso}=(\pi/2)^2 \approx 2.5$ is a correction factor that generalizes the vertical twist orientation to isotropy (Aschwanden et al. 2014a). The main task of our VCA-NLFFF code is then to optimize the non-potential field parameters $\alpha_m, m=1,...,n_{mag}$ by forward-fitting to observed loop coordinates $[x(s), y(s)]$, which is accomplished by minimizing the misalignment angles $\mu_{i,j}$ between the theoretical magnetic field model ${\bf B}^{theo}={\bf B}^{np}$ and the observed loop directions ${\bf B}^{obs}$, $$\mu_3 = \cos^{-1}\left( { ({\bf B}^{\rm theo} \cdot {\bf B}^{\rm obs}) \over |{\bf B}^{\rm theo}| \cdot |{\bf B}^{\rm obs}|} \right) \ ,$$ where the 3-D misalignment angles $\mu_{i,j}$ are measured for $i=1,...,n_{loop}$ loops at a number of $j=1,...,n_{seg}$ segments along each loop. The optimization criterion minimizes the median of all $\mu_{i,j}$ values. Numerical Code ---------------- The numerical implementation of the VCA-NLFFF code has been gradually improved over time, including significant changes that are different from earlier numeric code versions (Aschwanden and Sandman 2010; Sandman and Aschwanden 2011; Aschwanden et al. 
2012, 2014a, 2014b; Aschwanden 2013b,c, 2015; Aschwanden and Malanushenko 2013). The VCA-NLFFF code starts with the parameters of the magnetic charges that were obtained from the potential field fit to the line-of-sight magnetogram and adds an additional force-free $\alpha$-parameter for each magnetic charge, so that we have a parameterization of $(x_m, y_m, z_m, B_m, \alpha_m)$, for $m=1,...,n_{mag}$. On the other hand, we have the input of loop coordinates $[x_{ij}, y_{ij}]$ from $i=1,...,n_{loop}$ loops measured at $j=1,...,n_{seg}$ segments, where $n_{seg}$ is interpolated to a fixed number of $n_{seg}=9$ segments, regardless of how long the loops are, so that each loop or loop segment has the same weight in the fitting. Of course, the third coordinates $z_{ij}$, i.e., the line-of-sight coordinates of each loop, are not known a priori, but lower and upper boundaries are given by the photospheric height $h_{min}=0$ and by the upper boundary of the computation box at a chosen height of $h_{max}=0.2$ solar radii. In the recent versions of the code, an approximate geometry of the height dependence is fitted to each loop segment. This approximate geometry encompasses a circular segment that extends over an arbitrary height range $0 < [h_1, h_2] < h_{max}$, has a variable orientation of the loop plane, a variable range of curvature radii, and covers a variable angular range of a full circle. The loop segment can appear as a half circle, a concave or convex circular segment, or even as a straight line in the extreme limit. A set of such circular geometries is visualized in Fig. 11. Alternative geometries used in the parameterization of the height dependence are Bezier spline functions (Gary et al. 2014). Fitting the 2D projections of this set of variable circular geometries in height to the 2D projections of the observed loop coordinates $[x_{ij}, y_{ij}]$ then yields the best-fit line-of-sight coordinates $z_{ij}$ for each loop segment, as well as the vector components of the loop directions at each location $(i,j)$, $${\bf v}_{ij} = [(x_{i,j+1}-x_{i,j-1}), (y_{i,j+1}-y_{i,j-1}), (z_{i,j+1}-z_{i,j-1}) ] \ .$$ The code then calculates the misalignment angle $\mu_2$ in 2D (in the $x$-$y$ plane) and the 3D misalignment angle $\mu_3$ from the scalar product (Eq. 24) between the observed loop direction ${\bf v}_{ij}$ and the magnetic field vector ${\bf B}_{ij}^{np}$, either of the potential field (Eq. 11) or of a trial nonpotential magnetic field (Eqs. 17, 18, 22) based on the set of variables $\alpha_m, m=1,...,n_{mag}$. Thus, in this first iteration step of the forward-fitting procedure we obtain a set of 3D misalignment angles $\mu_{3,ij}$ for each loop segment, from which we define an optimization parameter by taking the median of all misalignment angles, $$\mu_3 = {\rm Median}(\mu_{3,ij}) \ , i=1,...,n_{loop}, \ j=1,...,n_{seg} \ .$$ In the second half of an iteration procedure we optimize the global misalignment angle $\mu_3$ with the minimization procedure of Powell’s method in multiple dimensions (Press et al.
1986, p.294), which calculates in each iteration cycle all gradients $(\partial \mu/\partial \alpha_m), m=1,...,n_{mag}$ of each magnetic charge parameter $\alpha_m$, and improves the next iteration value by $$\alpha_m^{new} = \alpha_m^{old} - \Delta \alpha_0 {(\partial \mu / \partial \alpha_m) \over {\rm max}[(\partial \mu/\partial \alpha_m)]} \ ,$$ which optimizes the misalignment angles by $$\mu^{new} = \mu^{old} + \Delta \alpha_0 (\partial \mu/\partial \alpha_m) \ ,$$ where $\Delta \alpha_0=1.0 \ R_{\odot}^{-1}$ is the maximum increment of change in $\alpha_m$ during each iteration step. After the first iteration cycle is completed, a second (and, if needed, subsequent) cycle is performed, in each one first optimizing the altitudes to obtain an improved third coordinate $z_{ij}$, and then optimizing the $\alpha_m$. The final result of an NLFFF solution is contained in a set of coefficients $(x_m, y_m, z_m, B_m, \alpha_m), m=1,...,n_{mag}$, from which a volume-filling NLFFF solution ${\bf B}_{np}=[B_x(x,y,z), B_y(x,y,z), B_z(x,y,z)]$ can be computed in the entire computation box. Individual field lines can be calculated from any starting point $(x,y,z)$ by sequential extrapolation of the local B-field vectors in both directions, until the field line hits a boundary of the computation box. One of the biggest challenges is the elimination of “false” loop tracings, which occur due to insufficient spatial resolution, over-crossing structures, moss structures (Berger et al. 1999; De Pontieu et al. 1999), data noise, and instrumental effects (image edges, vignetting, pixel bleeding, saturation, entrance filter mask, etc.). While we attempted to identify such irregularities in earlier code versions, we find it more efficient to eliminate “false” structures iteratively based on their excessive misalignment angle. In the present code, “false” tracings are automatically eliminated in each iteration step if they exceed an unacceptably large value of the misalignment angle. In the latest version of the code we set a final limit of $\mu_0 \le 20^\circ$ for the 2D misalignment angle, $\mu_2 \le \mu_0$, which is gradually approached after a sufficient number of iteration steps to ensure convergence. We set a minimum number of $n_{itmin}=40$ iteration steps, during which the maximum acceptable misalignment angle limit $\mu_0$ is linearly reduced from $\mu_0=90^\circ$ to the final limit of $\mu_0=20^\circ$. The maximum number of iterations is limited to $n_{itmax}=100$.

Performance Test : Active Region NOAA 11158
---------------------------------------------

The most extensive data set of potential, nonpotential, and free energies, computed with both a traditional NLFFF code and our alternative VCA-NLFFF code, is available from NOAA active region 11158, observed during the 5 days of 2011 Feb 12-17 (Aschwanden, Sun, and Liu 2014b). We already compared the potential energies obtained with both codes in Fig. 6. A time-dependent comparison of the energies obtained with both codes is shown in Fig. 12. Interestingly, the agreement between the VCA-NLFFF code and the W-NLFFF code is quite good (within $\lapprox 20\%$) during all 5 days, for the potential (Fig. 12b), the nonpotential (Fig. 12c), as well as for the free energy (Fig. 12d), which represents a large improvement over previous studies (see Fig. 8 in Aschwanden, Sun and Liu 2014b), where the free energy obtained with VCA-NLFFF is substantially lower than the value from W-NLFFF during all days (Feb 13-17) except for the first day when the energies have the lowest values.
The good agreement indicates a higher degree of fidelity for both the VCA-NLFFF and the W-NLFFF code, at least in the temporal average. The short-term fluctuations are much larger when modeled with the VCA-NLFFF code, and at this point we cannot discern whether the VCA-NLFFF code produces a larger amount of random errors, or whether the W-NLFFF code produces too much temporal smoothing due to the preprocessing procedure. In Fig. 13 we show the expanded time profile plot that corresponds to Fig. 9 in Aschwanden, Sun and Liu 2014b). The displayed time profile of the free energy $E_{free}(t)$ (blue curve in Fig. 13) represents the 3-point median values (smoothed with a boxcar of 3 pixels), which eliminates single-bin spikes (with a cadence of 6 min here). The time profile $E_{free}(t)$ exhibits much less random fluctuations than the previous results (Fig. 10 in Aschwanden, Sun and Liu 2014b), which indicates that the remaining fluctuations are more likely to be real changes in the free energy. Indeed, most of the GOES flares show a corresponding dip or decrease in the free energy, although the detailed timing is not always strictly simultaneous. In contrast, the time profile of the free energy $E_{free}(t)$ determined with W-NLFFF shows almost no temporal fluctuations, and only very small changes or decreases after a flare, if at all. We suspect that the preprocessing procedure of the W-NLFFF code over-smoothes changes in the free energy, and thus underestimates the dissipated energy in a flare. It was already previously noticed that the W-NLFFF code yields about an order of magnitude lower energy decreases during flares than the VCA-NLFFF code (see Fig. 11 in Aschwanden, Sun, and Liu 2014b). During the observing time interval of 2011 Feb 12-17, a total of 36 GOES C, M, and X-class flares were identified in the NOAA flare catalog. For most of these 36 events we see a significant decrease of free energy with the VCA-NLFFF method. We show 9 examples with the most significant energy decreases in Fig. 14, for both the VCA-NLFFF and the W-NLFFF method. We show the slightly smoothed (3-point median) evolutionary curves, from which the energy decrease $\Delta E_{free}=E_{free}(t_2)-E_{free}(t_1)$ is measured between the maximum energy before, and the minimum energy after the flare peak time, allowing for a time margin of $\pm 0.5$ hrs, i.e., $t_{start}-0.5 < t_1 < t_{peak}$ and $t_{peak} < t_2 < t_{end}+0.5$. The free energy $E_{free}(t)$ agrees well between the VCA-NLFFF and the W-NLFFF code, as mentioned before (Fig. 12d), but the energy decreases during flares are much more pronounced with the VCA-NLFFF code than with the W-NLFFF code, which may indicate that the W-NLFFF code suffers from over-smoothing in the preprocessing procedure. In Fig. 15 we compile the decreases of the free energy $-\Delta E_{free}$ during flares as a function of the free energy $E_{free}$ before the flare and find that in the average 21% of the free energy is dissipated during flares according to the VCA-NLFFF code, or 11% according to the W-NLFFF code. This is similar to the earlier study (Fig. 11 in Aschwanden, Sun, and Liu 2014b). Thus there is a discrepancy of a factor of $\approx 2$ between the VCA-NLFFF and the W-NLFFF code. Performance Test : X-Class Flares ----------------------------------- In Fig. 
16a and 16b we present the results of the free energy evolution $E_{free}(t)$ for 11 X-class flare events, which includes all X-class events that occurred in the first 3 years of the SDO mission (Aschwanden, Xu, and Jing 2014a). For each flare we show the GOES light curve, the GOES time derivative (which is a proxy for the hard X-ray emission according to the Neupert effect), and the evolution of the free energy according to both the VCA-NLFFF and the W-NLFFF codes. Since the X-class flares are the most energetic events, we expect that they exhibit most clearly a step function from a high preflare value of the free energy to a lower postflare level after the flare. Such a step function is most clearly seen with the W-NLFFF code for flare events \#12, \#66, \#67, and \#147 (red diamonds in Figs. 16 and 17), and with the VCA-NLFFF code for the events \#67 and \#384 (blue curve in Figs. 16 and 17). The flare event \#67 exhibits the best agreement between the W-NLFFF and VCA-NLFFF code, displaying not only a large step function, but also good agreement between the levels of free energy before and after the flare. No significant energy decrease during the flare time interval is detected for event \#148 with the W-NLFFF code, or for event \#147 with the VCA-NLFFF code. Generally, the VCA-NLFFF code reveals a larger step of free energy decrease than the W-NLFFF code (Fig. 15). Thus we can conclude from the performance tests of the VCA-NLFFF code: (1) A significant decrease in the free energy during X-class flares is detected in 10 out of the 11 cases; (2) the maximum of the energy decrease of free energy occurs within the impulsive flare phase (when hard X-rays or the GOES time derivative culminate), and (3) the energy decreases detected with the VCA-NLFFF code are a factor of $\approx 2$ larger than detected with the W-NLFFF code. In Figs. 18 and 19 we show the results of the VCA-NLFFF solution for the flare events \#67 (X1.8 flare on 2011 Sept 7), \#384 (X1.2 flare on 2014 Jan 7), and \#592 (X1.9 flare on 2014 Mar 29). We show two representations of the magnetic field solution, one by selecting field lines that intersect with the midpoints of the automatically traced loops (Fig. 18), and the other one by a regular grid of field lines with footpoint magnetic field strengths of $B > 100$ G (Fig. 19). Since these events show the most consistent evolution of the free energy decrease with both the VCA-NLFFF and the W-NLFFF code, they should convey most clearly the topology change from a helically twisted nonpotential field before the flare to a relaxed near-potential field after the flare. Fig 18a or 18b shows the NLFFF solution at 22:02 UT, just when the free energy reaches the highest value ($E_{free}=165 \times 10^{30}$ erg) at the start of the flare, which indeed reveals a highly twisted field around the leading sunspot. Fig. 18b shows the NLFFF solution at 23:08 UT, just after the impulsive flare phase when the free energy drops to the lowest value ($E_{free}=14 \times 10^{30}$ erg), which indeed exhibits an untwisted, open field above the sunspot, while a postflare arcade grows in the eastern part of the sunspot, where obviously most of the magnetic reconnection process during the flare took place. The open-field configuration above the sunspot is a consequence of an erupting CME. 
This example shows an unambiguous magnetic energy decrease by $\approx 90\%$ of the free energy ($\Delta E_{free} =-151 \times 10^{30}$ erg), which is dissipated during a magnetic reconnection process and the launch of a CME. Also the second case shown in Figs. 18 and 19 (flare \#384, X1.2 class, 2014 Jan 7) exhibits an untwisting of the magnetic field above the sunspot during the flare, most clearly seen as a helical twist in the clock-wise direction (Fig. 19c) that de-rotates clock-wise to an almost radial potential field after the flare (Fig. 19d). The detection of loop structures appears to be spotty in Fig. 18c and 18d, but the model picks up sufficient field directions around the leading sunspot to measure the untwisting of the sunspot field and the associated magnetic energy decrease.

Performance Test : AIA versus IRIS
------------------------------------

The first X-class flare observed with IRIS occurred on 2014 Mar 29, 17:40 UT, which has already been modeled with a previous version of the VCA-NLFFF code, showing a similar amount of energy decrease in coronal data from AIA (94, 131, 171, 193, 211, 335 Å), chromospheric data from AIA (304, 1600 Å), and in chromospheric data from IRIS (1400, 2796 Å). This result delivered the first evidence that both coronal as well as chromospheric features (loops or fibrils) can be used to constrain a nonpotential magnetic field model of an active region or flare (Aschwanden 2015). However, the early version of the VCA-NLFFF code was not very sensitive to the faint coronal and chromospheric features during the preflare phase, so that substantially less free energy was detected during the preflare phase than is found with the improved VCA-NLFFF code. This lack of coronal and chromospheric structures in the preflare phase was interpreted in terms of a coronal illumination effect, conveyed by the filling of coronal loops through the chromospheric evaporation process. However, since the new VCA-NLFFF code is sufficiently sensitive to detect nonpotential structures in the preflare phase, an explanation in terms of a coronal illumination effect is not needed anymore. In Fig. 20 we show new results of the evolution of the free energy $E_{free}(t)$ for the 2014 March 29 flare, independently modeled with the VCA-NLFFF code in three different wavelength domains and with two independent instruments (AIA and IRIS). The representation of the results in Fig. 20 can be compared with Fig. 3 in Aschwanden (2015). We detect about the same free energy at the flare peak, $E_{free}(t_{peak}) \approx 40 \times 10^{30}$ erg versus $E_{free}(t_{peak})=(45 \pm 2) \times 10^{30}$ erg earlier (Fig. 3 in Aschwanden 2015). We also find about the same decrease in the free energy during the flare, $\Delta E_{free} \approx 30 \times 10^{30}$ erg versus $\Delta E_{free}=(29 \pm 3) \times 10^{30}$ erg earlier (Fig. 3 in Aschwanden 2015). An improvement over the previous study, however, is that the preflare level of the free energy is now significantly higher than the postflare level, as expected in the simplest scenario of magnetic energy dissipation. Therefore, the present VCA-NLFFF code can be considered superior to earlier (less sensitive) versions. The change in the magnetic configuration during the flare shows an untwisting in the counter-clockwise direction (Figs. 19e and 19f), while the sigmoidal bundle of field lines in the North-East sector of the sunspot (Fig. 19e) evolves into a near-potential dipolar postflare loop arcade (see yellow loop tracings in Fig. 18f).
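As a practical illustration of how the flare-associated energy decreases quoted in this section are extracted from the $E_{free}(t)$ time profiles (the maximum of the 3-point-median-smoothed curve before the flare peak minus the minimum after it, within a margin of $\pm 0.5$ hrs around the GOES start and end times, as described in Section 5.3), the following Python sketch performs this measurement; the function name and the synthetic profile are illustrative only:

```python
import numpy as np

def free_energy_decrease(t, e_free, t_start, t_peak, t_end, margin=0.5):
    """Energy decrease Delta E_free = E_free(t2) - E_free(t1), with
    t_start - margin < t1 < t_peak and t_peak < t2 < t_end + margin
    (times in hours, energies in 1e30 erg)."""
    e_smooth = np.array([np.median(e_free[max(0, i - 1):i + 2])
                         for i in range(len(e_free))])   # 3-point median filter
    pre = (t > t_start - margin) & (t < t_peak)
    post = (t > t_peak) & (t < t_end + margin)
    e1 = e_smooth[pre].max()       # maximum free energy before the flare peak
    e2 = e_smooth[post].min()      # minimum free energy after the flare peak
    return e2 - e1

# Example with a synthetic step-like E_free(t) profile (6-minute cadence)
t = np.arange(0.0, 4.0, 0.1)                      # hours
e = np.where(t < 2.0, 160.0, 20.0) + np.random.normal(0, 5, t.size)
print(free_energy_decrease(t, e, t_start=1.8, t_peak=2.0, t_end=2.5))
```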
Performance Test: Parametric Study
----------------------------------

Finally we perform a parametric study in order to investigate the robustness and sensitivity of the VCA-NLFFF solutions to the control parameters of the numerical code. A list of 24 control parameters is given in Table 1, which comprises four groups: one specifying the data selection (instrument, spatial resolution, wavelength, and field-of-view choice), a second one with 4 control parameters for the potential field deconvolution, a third one with 9 control parameters for the automated loop tracing, and a fourth one with 7 control parameters for the forward-fitting. In order to assess the robustness of the VCA-NLFFF code, we investigate how a variation of the control parameters affects the results of the evolution of the free energy $E_{free}(t)$ (Figs. 21, 22, 23). Specifically we show how the time resolution of the data or cadence (Fig. 21), the wavelength choice (Fig. 22), and the variation of 12 tunable control parameters change the time-dependent free energy values (Fig. 23) of the X-flare observed on 2014 March 29 with both AIA and IRIS, of which various data are shown in Figs. 3, 4, 9, 10, 18e, 18f, 19e, 19f, and 20. Varying the time resolution or cadence from $t_{cad}=6$ min to 3, 2, and 1 min (Figs. 21, 23a) we find that the smoothed time profile $E_{free}(t)$ (using the median values from time intervals that are equivalent to the cadence) is invariant and thus yields exactly the same free energy decrease associated with the flare at any value of the time resolution. This is not trivial, because a factor of 6 more information is used at a cadence of 1 min than at a cadence of 6 min. If the free energies determined with the VCA-NLFFF code were entirely due to noise, the energy drop between the preflare and postflare time intervals would vary arbitrarily, rather than being invariant. Investigating the evolution of the free energy at a cadence of 1 min, however, appears to reveal some coherent quasi-periodic fluctuations with an approximate period of $P \approx 3$ min (Fig. 21 bottom left), which could be associated with the helioseismic global p-mode oscillations (a property that will be examined elsewhere). In Fig. 21 (middle column) we also show the time evolution of the number of detected and fitted loops, which seems to be roughly constant during this flare and is not correlated with the step-like (dissipative) decrease in free energy. We also show the time evolution of the misalignment angle $\mu_2(t) \approx 10^\circ$ (Fig. 21, right column), which is essentially constant and is not affected by the flare evolution, although the flare area coverage of automatically traced loops varies substantially during the flare (Figs. 18e and 18f). The choice of wavelengths could crucially affect the accuracy of the inferred free energy, because each wavelength covers only a limited part of the flare temperature range, and some wavelengths are only sensitive to chromospheric temperatures. In Fig. 22 we perform the experiment of determining the evolution of the free energy $E_{free}(t)$ for each of the 8 AIA and 3 IRIS wavelengths separately. Interestingly, the decrease in the free energy is detected in almost all wavelengths independently, which provides a strong argument for the robustness of the code.
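To make the cadence-invariance test described earlier in this subsection concrete, the following minimal sketch (an illustration only; the synthetic step profile and all variable names are our assumptions, not part of the VCA-NLFFF code) rebins a 1-min free-energy time series to coarser cadences with median values and shows that the preflare-to-postflare step is essentially unchanged:

```python
import numpy as np

# Synthetic E_free(t) with a step-like decrease at the flare plus noise
# (purely illustrative; units of 1e30 erg, 1-min sampling).
rng = np.random.default_rng(0)
t = np.arange(156)                               # time in minutes
e_free = np.where(t < 60, 40.0, 10.0) + rng.normal(0.0, 2.0, t.size)

def median_rebin(values, n):
    """Median over consecutive blocks of n samples (cadence-equivalent smoothing)."""
    m = values.size // n
    return np.median(values[:m * n].reshape(m, n), axis=1)

for cadence in (1, 2, 3, 6):                     # cadence in minutes
    smoothed = median_rebin(e_free, cadence)
    k = max(1, 30 // cadence)                    # ~30 min of pre-/postflare data
    step = np.median(smoothed[:k]) - np.median(smoothed[-k:])
    print(f"cadence {cadence} min: preflare-postflare step = {step:.1f}e30 erg")
```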
The largest energy decrease ($\Delta E_{free}=-(37 \pm 10) \times 10^{30}$ erg) is detected in the AIA 211 Å wavelength, while the smallest amount is found with IRIS 2832 Å, which is a chromospheric line. Combining the wavelengths pair-wise, we find a consistent energy decrease in each wavelength pair (Fig. 23b). Consequently, our strategy is to combine all coronal AIA wavelengths (94, 131, 171, 193, 211, 335 Å) as a default, because the joint wavelength response complements hot and cool temperatures in flares and active region areas. Since the number of loops above some minimum length value $l_{min}$ falls off like a power-law distribution function, i.e., $N(>L) \propto L^{-2}$ (Aschwanden et al. 2008), the number of loops that constrain an NLFFF solution increases drastically towards smaller values of $l_{min}$ on the one hand, while the ambiguity between true loops and loop-unrelated curved features also increases towards smaller values on the other hand. The test shown in Fig. 23c exemplifies that a value of $l_{min}\approx 4$ or 5 is the optimum, while higher values of $l_{min}=6$ or 7 lead to slight underestimates of the free energy. Allowing a large range of 2D misalignment angles $\mu_0 < 45^\circ$ clearly hampers the convergence of the VCA-NLFFF code, which implies that data noise dominates the fit and thus reduces the signal of nonpotential field energies and free energies. The test shown in Fig. 23d demonstrates that the free energy increases systematically when the misalignment limit is reduced from $\mu_0 < 40^\circ$ to $\mu_0 < 10^\circ$. However, if we push the limit to small values, the number of fittable features decreases, which has a diminishing effect on the accuracy of a nonpotential field solution. Therefore we choose a compromise of $\mu_0 < 20^\circ$. The number of iterations in the forward-fitting algorithm dictates how quickly misaligned features are eliminated, say for an acceptable limit of $\mu_0 < 20^\circ$. Reducing the number of iterations thus changes the observational constraints of fittable loops faster than the code can converge, and can thus inhibit convergence of the code. The test shown in Fig. 23e clearly demonstrates that the free energy is only fully retrieved for a larger number of iterations, say $n_{iter,min} \gtrsim 40$. The test shown in Fig. 23f indicates that smoothing of the EUV loop image with $n_{smo}=1, 2$, or 3 reduces the estimate of the free energy because faint and thin loop structures are eliminated, which reduces the observational constraints and the accuracy of the forward-fitting code. We expect that increasing the number of magnetic sources increases the accuracy of the free energy. The test shown in Fig. 23g, however, shows the opposite, probably because too many small magnetic sources counter-balance the opposite polarities of closely-spaced pairs of magnetic sources, and thus diminish the total nonpotential or free energy. We thus choose a relatively small number of $n_{mag}=30$ magnetic sources, which also speeds up the computation time of a VCA-NLFFF solution. The tests shown in Figs. 23h and 23i indicate that the threshold for detecting loop structures is not critical, as long as we detect a sufficient number of structures. We therefore choose the lowest threshold values of $q_{thresh,1}=0$ and $q_{thresh,2}=0$. In a previous version of the VCA-NLFFF code we used a distance limit $d_{prox}$ of a loop position to the next location of a magnetic source to eliminate “false” loop detections.
The test shown in Fig. 23l demonstrates that large distance limits of $d_{prox}=4$ or 10 source depths (which is about equivalent to the apparent full width of a magnetic source at the solar surface) do not affect the accuracy of the free energy estimate, while very short limits of $d_{prox}=1$ or 2 eliminate too many relevant structures and thus lead to underestimates of the free energy. The lower limit of the curvature radius of automatically detected structures is important to exclude “false” curvi-linear structures that occur through the coagulation of random structures. The test in Fig. 23m with curvature radius limits of $r_{min}=4, 6, 8,$ and 10 pixels demonstrates an invariant free energy profile, and thus no sensitivity of the free energy to the curvature radius limit. If the field-of-view is too small, the free energy is retrieved only partially. The test shown in Fig. 23n indeed indicates a steady increase of the free energy from $FOV=0.08$ to 0.14 solar radii. On the other hand, an upper limit of the FOV is given by the distance to the next neighbouring active region. In summary, our parametric tests are performed on 12 parameters, each one with 4 variations, for a time profile of the free energy with 13 time steps, yielding a total of 624 VCA-NLFFF solutions. The test results shown in Fig. 23 show at a glance that all of the 48 time profiles clearly exhibit the step-wise decrease of the free energy during the flare, corroborating the robustness of our VCA-NLFFF code. Even the magnitude of the energy decrease agrees within $\approx 20\%$ for each parameter combination. An estimate of the uncertainty in the determination of the free energy can be made from the number of loops $n_{loop}$ that constrain a solution, which according to Poisson statistics is expected to be $$\sigma_{E,free} = {E_{free} \over \sqrt{n_{loop}}} \ .$$ In other words, if only one single loop is used in fitting the VCA-NLFFF magnetic field model, we have an error of 100%, while the error drops to $\approx 10\%$ for $n_{loop}\approx 100-200$, which is a typical value (Fig. 21, middle column). However, this error estimate is a lower limit, applicable to the ideal case when a sufficient number of loops is available in the locations of strong magnetic fields. If there is an avoidance of loops near sunspots, no significant amount of free energy is detected, leading to a much larger systematic error than the statistical error due to Poisson statistics of the number of loops.

DISCUSSION
==========

The main purpose of the VCA-NLFFF code is the measurement of the coronal magnetic field and its time evolution in active regions and during flares, based on our vertical-current approximation model. The following discussion mostly focuses on the example of active region NOAA 11158 and its famous X2.2 flare in the context of previous studies.

Magnetic Field Changes in Active Region NOAA 11158
--------------------------------------------------

One of the most studied examples is the evolution of [**active region NOAA 11158**]{}, which is the subject of a number of recent papers, focussing on the magnetic evolution of this active region (Sun et al. 2012a, 2012b, 2015; Liu and Schuck 2012; Jing et al. 2012; Vemareddy et al. 2012a, 2012b, 2013, 2015; Petrie 2012; Tziotziou et al. 2013; Inoue et al. 2013, 2014; Liu et al. 2013; Jiang and Feng 2013; Dalmasse et al. 2013; Aschwanden et al. 2014b; Song et al. 2013; Zhao et al. 2014; Tarr et al. 2013; Toriumi et al. 2014; Sorriso-Valvo et al. 2015; Jain et al. 2015; Guerra et al. 2015; Kazachenko et al.
2015; Li and Liu 2015; Chintzoglou and Zhang 2013; Zhang et al. 2014; Gary et al. 2014), or specifically on the [**X2.2 flare that occurred on 2011 February 15**]{} in this active region (Schrijver et al. 2011; Wang et al. 2012; Aschwanden et al. 2013b; Jiang et al. 2012; Gosain 2012; Maurya et al. 2012; Petrie 2013; Alvarado-Gomez et al. 2012; Young et al. 2013; Malanushenko et al. 2014; Wang et al. 2014; Beauregard et al. 2012; Inoue et al. 2015; Wang et al. 2013; Shen et al. 2013; Jing et al. 2015; Raja et al. 2014), or on the [**M6.6 flare that occurred on 2011 February 13**]{} in the same active region (Liu et al. 2012, 2013; Toriumi et al. 2013). Comparing the workings of the VCA-NLFFF code with the theoretical concepts and observational results that have been published on AR 11158, we have to be aware that the vertical-current approximation used in the VCA-NLFFF code implies a slow helical twisting of the sunspot-dominated magnetic field during the energy storage phase, and sporadic episodes of reconnection-driven untwisting during flare times. Therefore, this concept is equivalent to the concept of sunspot rotation (Jiang et al. 2012; Vemareddy et al. 2012a, 2012b, 2013, 2015; Li and Liu 2015). Sunspot rotation forms sigmoid structures naturally, as is observed in AR 11158 (Schrijver et al. 2011; Sun et al. 2012a; Jiang et al. 2012; Young et al. 2013; Jiang and Feng 2013; Aschwanden et al. 2014b). Strongly twisted magnetic field lines ranging from half-turn to one-turn twists were found to build up just before the M6.6 and X2.2 flares and to disappear afterwards, which is believed to be a key process in the production of flares (Inoue et al. 2013; Liu et al. 2013). The vortex in the source field suggests that the sunspot rotation leads to an increase of the non-potentiality (Song et al. 2013). In addition, the energy storage in active region 11158 is also supplied by fast flux emergence and strong shearing motion that led to a quadrupolar ($\delta$-type) sunspot complex (Sun et al. 2012a, 2012b; Jiang et al. 2012; Liu and Schuck 2012; Toriumi et al. 2014). The upward propagation of both the magnetic and current helicities, synchronous with the magnetic flux emergence, contributes to the gradual energy build-up for the X2.2 flare (Jing et al. 2012; Tziotziou et al. 2013). The amount of nonpotential field energy stored in AR 11158 before the X2.2 GOES-class flare was calculated to be $E_{np}=2.6 \times 10^{32}$ erg (Sun et al. 2012a), $E_{np}=5.6 \times 10^{32}$ erg (Tarr et al. 2013), $E_{np}=1.0 \times 10^{32}$ erg (COR-NLFFF; Aschwanden et al. 2014b), $E_{np}=2.6 \times 10^{32}$ erg (W-NLFFF; Aschwanden et al. 2014b), $E_{np}=10.6 \times 10^{32}$ erg (Kazachenko et al. 2015), $E_{np}=2.8 \times 10^{32}$ erg (W-NLFFF: Fig. 16), and $E_{np}=1.8 \times 10^{32}$ erg (VCA-NLFFF: Fig. 16), values which vary within an order of magnitude. The amount of dissipated energy during the X2.2 flare was calculated to be $\Delta E_{free}=-1.7 \times 10^{32}$ erg (Tarr et al. 2013), $\Delta E_{free}=-1.0 \times 10^{32}$ erg (Malanushenko et al. 2014), $\Delta E_{free}=-0.3 \times 10^{32}$ erg (Sun et al. 2015), $\Delta E_{free}=-0.6 \times 10^{32}$ erg (COR-NLFFF; Aschwanden et al. 2014b), $\Delta E_{free}=-0.6 \times 10^{32}$ erg (W-NLFFF; Aschwanden et al. 2014b), $\Delta E_{free}=-1.0 \times 10^{32}$ erg (W-NLFFF: Fig. 16), and $\Delta E_{free}=-0.3 \times 10^{32}$ erg (VCA-NLFFF: Fig. 16), values which agree within a factor of $\approx 6$.
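The scatter quoted above can be verified at a glance (a minimal sketch added for illustration; the numbers are the published values listed in the preceding paragraph, in units of $10^{32}$ erg):

```python
# Published nonpotential energies of AR 11158 before the X2.2 flare (1e32 erg)
e_np = [2.6, 5.6, 1.0, 2.6, 10.6, 2.8, 1.8]
# Published free-energy decreases during the X2.2 flare (absolute values, 1e32 erg)
de_free = [1.7, 1.0, 0.3, 0.6, 0.6, 1.0, 0.3]

print(f"E_np spread:         max/min = {max(e_np) / min(e_np):.1f}")       # ~10.6 -> about an order of magnitude
print(f"Delta E_free spread: max/min = {max(de_free) / min(de_free):.1f}") # ~5.7  -> a factor of ~6
```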
Untwisting of helical fields (as assumed in the VCA-NLFFF model) may not be the full explanation of the magnetic field evolution during flares. Since the VCA concept assumes twisting around vertically oriented axes, untwisting would reduce the azimuthal field component $B_{\varphi}$, which corresponds to a reduction of the horizontal field components $B_x$ and $B_y$ (for a location near disk center), as is the case for the X2.2 flare on 2011 February 15 in NOAA 11158. In contrast, however, the response of the photospheric field to the flare was found to become more horizontal after the eruption, as expected from the tether-cutting reconnection model (Wang et al. 2012; Liu et al. 2012, 2013; Inoue et al. 2014, 2015), or from the [*coronal implosion scenario*]{} (Gosain 2012; Wang et al. 2014). On the other hand, the reduction of magnetic twist was explained by a large, abrupt, downward vertical Lorentz-force change (Petrie 2012, 2013). In summary, twisting and untwisting of magnetic field lines, the basic concept of our vertical-current approximation model in describing the magnetic evolution, plays a leading role in many of the theoretical and observationally inferred flare models of the X2.2 flare in active region NOAA 11158, and thus justifies the application of the VCA-NLFFF code to flares in general, although the underlying analytical formulation encapsulates only one particular family of solutions among all possible nonlinear force-free field models.

The Pros and Cons of NLFFF Codes
--------------------------------

Having performed non-potential magnetic field modeling with chromospheric and coronal data, we can now assess the feasibility of NLFFF modeling in a new light, contrasting the W-NLFFF code with our VCA-NLFFF code. Standard W-NLFFF codes have the following features: (1) they use photospheric vector magnetograph data (where the transverse field components have a much larger degree of noise than the line-of-sight component); (2) they use the assumption of a force-free photosphere; (3) they have no mechanism to fit the magnetic field solution to observed chromospheric or coronal features; (4) they use a preprocessing method to make the photospheric boundary condition more force-free; (5) they are designed to converge to a divergence-free and force-free solution as accurately as possible; (6) they map the sphericity of the Sun onto a plane-parallel boundary; and (7) they are computationally expensive (with computation times on the order of hours or days). In contrast, the VCA-NLFFF code can be characterized in the following way: (1) it uses only the line-of-sight magnetic field component and avoids the noisy transverse components; (2) it does not use the assumption of a force-free photosphere; (3) it has the capability to minimize the difference between the theoretical magnetic field model and observed curvi-linear features in coronal and/or chromospheric images; (4) it does not modify the photospheric boundary condition with a preprocessing method; (5) it fulfills the divergence-free and force-free conditions with second-order accuracy; (6) it takes the sphericity of the Sun fully into account; and (7) it is computationally relatively fast (with computation times on the order of minutes). Comparing these seven characteristics, it appears that the VCA-NLFFF code has a superior design in six out of the seven criteria.
The only criterion in which the W-NLFFF codes have a superior performance is item (5), since they are designed to optimize divergence-freeness, force-freeness, and the constancy of the $\alpha$-parameter along each field line, while the VCA-NLFFF code uses an approximate analytical solution that is accurate to second order in $\alpha$ only. However, the reduced accuracy of the VCA-NLFFF code is outweighed by the biggest shortcoming of the W-NLFFF code, i.e., its inability to optimize the match between model and observations. The two types of NLFFF codes are truly complementary, and one might use the VCA-NLFFF code as a first guess that can quickly be calculated before a more accurate NLFFF solution with the W-NLFFF code is attempted. Perhaps it could also provide improved boundary conditions for the W-NLFFF code, exploiting both chromospheric and coronal constraints.

Open Problems: Questions and Answers
------------------------------------

Although the benchmark test of the VCA-NLFFF code presented here is the most comprehensive performance test carried out to date, and demonstrates encouraging results, there are still a number of open problems that should be pursued in future studies. These open problems concern fundamental limitations of the VCA-NLFFF method, such as: (1) the suitability of the magnetic model; (2) the fidelity of automated feature tracking; (3) the sensitivity to nonpotential field components; (4) the ambiguity of nonpotential field solutions; and (5) the usefulness in practical applications, in particular for forecasting the magnetic evolution of active regions and flares. We discuss these five open issues briefly. (1) The magnetic field model used in the VCA-NLFFF code is based on buried magnetic charges and helically twisted field lines (or flux tubes) with an azimuthal field component $B_{\varphi}$ that is caused by vertical currents at the photospheric boundaries where the unipolar magnetic charges are buried. There are alternative nonlinear force-free magnetic field models, such as that of a sheared arcade (e.g., see the textbooks of Priest 1982; Sturrock 1994; Aschwanden 2004), for which an analytical approximation could be derived and fitted to observed loop geometries. In principle, the suitability of such alternative models could be tested by building forward-fitting codes that are equivalent to the VCA-NLFFF code, and by comparing the best-fit misalignment angles between the models and the data. More generally, a force-free minimization code such as the W-NLFFF code could be designed that fits coronal loops without using the transverse components at the photospheric boundary. Such a concept was pioneered by Malanushenko et al. (2009, 2011, 2012, 2014), but the major drawbacks of that code, which uses a quasi Grad-Rubin method, are the lack of an automated loop-tracing capability and the prohibitively long computation times. (2) The fidelity of automated feature tracking depends on the complexity and noise level of the EUV and UV images used, and on whether a sufficient number of curvi-linear features can be detected in the data. The usage of AIA images has demonstrated that the biggest challenge is the elimination of “false” loop tracings, which occur due to insufficient spatial resolution, over-crossing structures, moss structures (Berger et al. 1999; De Pontieu et al. 1999), data noise, and instrumental effects (image edges, vignetting, pixel bleeding, saturation, entrance filter mask, etc.).
The usage of IRIS and IBIS images, which have 4-6 times higher spatial resolution than AIA, provides a much larger number of resolved curvi-linear features that are helpful in the reconstruction of the magnetic field, even when they display chromospheric features (fibrils) rather than coronal structures (active region loops and post-flare loops) (Aschwanden, Reardon, and Jess 2016). Future versions of the VCA-NLFFF code may combine the most suitable features in coronal and chromospheric wavelengths. (3) The sensitivity to nonpotential field components depends on the availability of detectable curvi-linear features in the penumbral regions of sunspots, where the largest magnetic field strengths occur outside the umbra, while the umbra is generally void of loops and fibrils. The detectability of penumbral structures works best for heliographic locations near the Sun center, while regions far away from the Sun center generally cause confusion between loops in the foreground and moss structures in the background plages. The accuracy of the total free energy of an active region or flare is dominated by the magnetic field structures near the leading or dominant sunspot, because of the highly nonlinear $B^2$-dependence of the free energy on the twisted field. Thus the sensitivity of the VCA-NLFFF code to the free energy is determined by the detectability of structures near the dominant sunspot, in contrast to W-NLFFF codes, where the sensitivity of the free energy is limited by the noise and uncertainty of the transverse field components $B_x$ and $B_y$ in the dominant sunspot region. (4) The VCA-NLFFF code typically has $n_{mag}=20-100$ magnetic source components, and thus the numerical convergence of the VCA-NLFFF code towards a best-fit solution is in principle not unique, given the uncertainties of automatically traced curvi-linear features. However, we have demonstrated in the parametric study (Section 5.6) that our results are extremely robust when the various control parameters of the VCA-NLFFF code are varied, which implies a stable convergence to the same solution. What we can say about the convergence ambiguity is that the solution is dominated by the strongest magnetic sources, which implies near-unambiguity for the strongest sources (such as the dominant sunspot), while the degree of ambiguity increases progressively for weaker magnetic sources. Since we are mostly interested in the total (volume-integrated) free energy, which is also dominated by the strongest sources due to the $B^2$-dependence, the value of the free energy $E_{free}(t)$ is nearly unambiguous at any instant of time. (5) What is the usefulness of the VCA-NLFFF code for practical applications? The main product of the VCA-NLFFF code is the free energy and its time evolution, $E_{free}(t)$, which also includes estimates of the dissipated magnetic energy during flares, i.e., $E_{diss}=E_{free}(t_{end})-E_{free}(t_{start})$. In principle, other W-NLFFF codes can provide the same information, but their uncertainty is not known, because the inferred values may be affected by the violation of the force-freeness in the photosphere and lower chromosphere, the mismatch between the NLFFF model magnetic field and the observed loops, and the smoothing of the preprocessing technique. Ideally, if the evolution of the free energy can be obtained with either method, this parameter is one of the most relevant quantities for flare forecasting.
A machine-learning algorithm, the [*support vector machine (SVM)*]{}, has been applied to an HMI database of 2071 active regions, and it was found that the total photospheric magnetic free energy density is the third-best predictor for flares (out of 25 tested quantities), after the total unsigned current helicity and the total magnitude of the Lorentz force (Bobra and Couvidat 2015).

CONCLUSIONS
===========

In this study we present an updated description and performance tests of the vertical-current approximation nonlinear force-free field (VCA-NLFFF) code, which has been continuously developed over the last five years and is designed to calculate the free energy $E_{free}(t)$ and its time evolution in active regions and flares. This code provides a complementary and alternative method to existing traditional NLFFF codes, such as the W-NLFFF code (Wiegelmann 2004). The VCA-NLFFF method requires the input of a line-of-sight magnetogram and automated curvi-linear tracing of EUV or UV images in an arbitrary number of wavelengths and instruments. In contrast, the W-NLFFF code requires the input of 3D magnetic field vectors at the photospheric boundary, but has no capability to match observed features in the corona or chromosphere. The performance tests presented here include comparisons of the potential, non-potential, and free energies, and of the flare-dissipated magnetic energy, between the VCA-NLFFF and the W-NLFFF code. We summarize the major conclusions:

1. [The chief advantages of the VCA-NLFFF code over the W-NLFFF code are the circumvention of the unrealistic assumption of a force-free photosphere in the magnetic field extrapolation method, the capability to minimize the misalignment angles between observed coronal loops (or chromospheric fibril structures) and theoretical model field lines, as well as computational speed.]{}

2. [Comparing 600 W-NLFFF solutions from active region NOAA 11158 and 119 W-NLFFF solutions from 11 X-class flares with VCA-NLFFF solutions, we find agreement in the potential, non-potential, and free energies within a factor of $\approx 1.2$, which compares favorably with the range of free energies that have been obtained with other NLFFF codes, which scatter by about an order of magnitude for published values of the X2.2 flare in NOAA 11158.]{}

3. [The time evolution of the free energy $E_{free}(t)$ in 11 X-class flares modeled with both NLFFF codes yields a significant decrease of the free energy during the flare time interval in 10 out of the 11 cases, but the energy amount determined with the W-NLFFF code is statistically a factor of 2 lower, probably because of over-smoothing in the preprocessing technique. In addition we tested the VCA-NLFFF code for 36 C, M, and X-class flares in AR 11158 and detected a significant energy decrease in most cases.]{}

4. [The amount of magnetic energy decrease during flares agrees within a factor of $\approx 2-3$ between the two NLFFF codes. We suspect that the VCA-NLFFF code fails to detect the full amount of magnetic energy decrease in cases with insufficient loop coverage in penumbral regions. In contrast, the W-NLFFF code may fail to detect the full amount of magnetic energy decrease as a consequence of over-smoothing in the preprocessing procedure and the violation of the force-free condition in the photosphere.]{}
5. [Both the VCA-NLFFF and the W-NLFFF codes are able to measure the magnetic energy evolution in active regions and the magnetic energy dissipation in flares, but each code has different systematic errors, and thus the two types of codes are truly complementary. The present absolute error in the determination of changes in the free energy during large flares, due to systematic errors, is about a factor of 2, based on the discrepancy between the two compared codes.]{}

The obtained results are encouraging enough to justify further development of the VCA-NLFFF code. Future studies may focus on a deeper understanding of the systematic errors of various NLFFF codes, which will narrow down the accuracy of free energies and flare-dissipated energies. The depth of the buried magnetic charges inferred from the VCA-NLFFF code may give us deeper insights into the solar dynamo and local helioseismology results, an aspect that we have not touched on here. Correlation studies of time series of the free energy may reveal new methods to improve flare forecasting in real time. The VCA-NLFFF code is also publicly available in the [*Solar SoftWare (SSW)*]{} library, encoded in the [*Interactive Data Language (IDL)*]{}; see the website [*http://www.lmsal.com/$\sim$aschwand/software/*]{}.

The author is indebted to Bart De Pontieu, Mark DeRosa, Anna Malanushenko, Carolus Schrijver, Alberto Sainz-Dalda, Ada Ortiz, Jorrit Leenarts, Kevin Reardon, and Dave Jess for helpful discussions. Part of the work was supported by the NASA contracts NNG04EA00C of the SDO/AIA instrument and NNG09FA40C of the IRIS mission.

Table 1: Control parameters of the VCA-NLFFF code.

\begin{tabular}{lll}
Data selection: & Instruments & HMI; AIA; IRIS; IBIS; ROSA\\
 & Spatial pixel size & $0.5\arcsec$; $0.6\arcsec$; $0.16\arcsec$; $0.1\arcsec$; $0.1\arcsec$\\
 & Wavelengths & 6173; [94, 131, 171, 193, 211, 304, 335, 1600];\\
 & & [1400, 2796, 2832]; 8542; 6563\\
 & Field-of-view & $FOV = 0.1,...,0.4 R_{\odot}$\\
Magnetic sources: & Number of magnetic sources & $n_{mag} = 30$\\
 & Width of fitted local maps & $w_{mag}=3$ pixels\\
 & Depth range of buried charges & $d_{mag} = 20$ pixels\\
 & Rebinned pixel size & $\Delta x_{mag}=3$ pixels = $1.5\arcsec$\\
Loop tracing: & Maximum of traced structures & $n_{struc}=1000$\\
 & Lowpass filter & $n_{sm1} = 1$ pixel\\
 & Highpass filter & $n_{sm2} = n_{sm1}+2 = 3$ pixels\\
 & Minimum loop length & $l_{min} = 5$ pixels\\
 & Minimum loop curvature radius & $r_{min} = 8$ pixels\\
 & Field line step & $\Delta s=0.002 R_{\odot}$\\
 & Threshold positive flux & $q_{thresh,1} = 0$\\
 & Threshold positive filter flux & $q_{thresh,2} = 0$\\
 & Proximity to magnetic sources & $d_{prox}=10$ source depths\\
Forward-Fitting: & Misalignment angle limit & $\mu_0 = 20^\circ$\\
 & Minimum number of iterations & $n_{iter,min}= 40$\\
 & Maximum number of iterations & $n_{iter,max}= 100$\\
 & Number of loop segment positions & $n_{seg}=9$\\
 & Maximum altitude & $h_{max}=0.2 R_{\odot}$\\
 & $\alpha$-parameter increment & $\Delta \alpha_0=1.0 \ R_{\odot}^{-1}$\\
 & Isotropic current correction & $q_{iso}=(\pi/2)^2\approx 2.5$\\
\end{tabular}

Table 2: Observed events and time intervals analyzed in this study.

\begin{tabular}{rlllrllll}
10-15 & 2011 Feb 12-17 & 5 days & 12, 6 min & 600, 1200 & C1.0-X2.2 & S20E27-W38 & 11158 & SDO\\
 & & & & & & & & \\
12 & 2011 Feb 15, 00:44 & 2.4 hrs & 12, 6 min & 12, 24 & X2.2 & S21W12 & 11158 & SDO\\
37 & 2011 Mar 09, 22:13 & 2.3 hrs & 12, 6 min & 11, 23 & X1.5 & N10W11 & 11166 & SDO\\
66 & 2011 Sep 06, 21:12 & 2.3 hrs & 12, 6 min & 11, 23 & X2.1 & N16W15 & 11283 & SDO\\
67 & 2011 Sep 07, 21:32 & 2.3 hrs & 12, 6 min & 6, 12 & X1.8 & N16W30 & 11283 & SDO\\
147 & 2012 Mar 06, 23:02 & 2.7 hrs & 12, 6 min & 13, 27 & X5.4 & N18E31 & 11429 & SDO\\
148 & 2012 Mar 07, 00:05 & 2.4 hrs & 12, 6 min & 12, 24 & X5.4 & N18E29 & 11430 & SDO\\
220 & 2012 Jul 12, 14:37 & 3.9 hrs & 12, 6 min & 19, 39 & X1.4 & S15W05 & 11520 & SDO\\
344 & 2013 Nov 05, 21:07 & 2.2 hrs & 12, 6 min & 11, 22 & X3.3 & S08E42 & 11890 & SDO\\
349 & 2013 Nov 08, 03:20 & 2.2 hrs & 12, 6 min & 11, 22 & X1.1 & S11E11 & 11890 & SDO\\
351 & 2013 Nov 10, 04:08 & 2.2 hrs & 12, 6 min & 11, 22 & X1.1 & S11W17 & 11890 & SDO\\
384 & 2014 Jan 07, 17:04 & 3.0 hrs & 12, 6 min & 15, 30 & X1.2 & S12E02 & 11944 & SDO\\
 & & & & & & & & \\
592 & 2014 Mar 29, 17:05 & 1.3 hrs & ..., 6 min & ..., 13 & X1.0 & N10W33 & 12017 & SDO,IRIS\\
\end{tabular}
--- abstract: 'Medical researchers are coming to appreciate that many diseases are in fact complex, heterogeneous syndromes composed of subpopulations that express different variants of a related complication. Time series data extracted from individual electronic health records (EHR) offer an exciting new way to study subtle differences in the way these diseases progress over time. In this paper, we focus on answering two questions that can be asked using these databases of time series. First, we want to understand whether there are individuals with similar *disease trajectories* and whether there are a small number of *degrees of freedom* that account for differences in trajectories across the population. Second, we want to understand how important clinical outcomes are associated with disease trajectories. To answer these questions, we propose the Disease Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional representations of sparse and irregularly sampled time series. We propose a stochastic variational inference algorithm for learning the DTM that allows the model to scale to large modern medical datasets. To demonstrate the DTM, we analyze data collected on patients with the complex autoimmune disease scleroderma. We find that the DTM learns meaningful representations of disease trajectories and that these representations are significantly associated with important clinical outcomes.' author: - | Peter Schulam\ Dept. of Computer Science\ Johns Hopkins University\ Baltimore, MD 21218\ `pschulam@cs.jhu.edu`\ Raman Arora\ Dept. of Computer Science\ Johns Hopkins University\ Baltimore, MD 21218\ `arora@cs.jhu.edu`\ title: Disease Trajectory Maps ---
--- abstract: 'Motivated by a previous work of Zheng and the second named author, we study pinching constants of compact Kähler manifolds with positive holomorphic sectional curvature. In particular we prove a gap theorem following the work of Petersen and Tao on Riemannian manifolds with almost quarter-pinched sectional curvature.' address: - 'Xiaodong Cao. Department of Mathematics, 310 Malott Hall, Cornell University, Ithaca, NY 14853-4201, USA.' - 'Bo Yang. Department of Mathematics, 310 Malott Hall, Cornell University, Ithaca, NY 14853-4201, USA.' author: - Xiaodong Cao - Bo Yang date: 'This is the version which the authors submitted to a journal for consideration for publication in June 2017. The reference has not been updated since then' title: A note on the almost one half holomorphic pinching --- [^1] [^2] The theorem =========== Let $(M, J, g)$ be a complex manifold with a Kähler metric $g$, one can define the *holomorphic sectional curvature* ($H$) of any $J$-invariant real $2$-plane $\pi=\operatorname{Span}\{X, JX\}$ by $$H(\pi)=\frac{R(X, JX, JX, X)}{||X||^4}.$$ It is the Riemannian sectional curvature restricted on any $J$-invariant real $2$-plane (p165 [@KN]). In terms of complex coordinates, it is equivalent to write $$H(\pi)=\frac{R(V, \overline{V}, V, \overline{V})}{||V||^4}$$ where $V=X-\sqrt{-1}JX \in T^{1,0}(M)$. In this note we study pinching constants of compact Kähler manifolds with positive holomorphic sectional curvature ($H>0$). The goal is to prove the following rigidity result on a Kähler manifold with the almost one half pinching. \[almost half\] For any integer $n \geq 2$, there exists a positive constant $\epsilon(n)$ such that any compact Kähler manifold with $\frac{1}{2}-\epsilon(n) \leq H \leq 1$ of dimension $n$ is biholomorphic to any of the following 1. $\mathbb{CP}^{n}$, 2. $\mathbb{CP}^{k} \times \mathbb{CP}^{n-k}$, 3. An irreducible rank $2$ compact Hermitian symmetric space of dimension $n$. Before we discuss the proof, let us review some background on compact Kähler manifolds with $H>0$. The condition $H>0$ is less understood and seems mysterious. For example, $H>0$ does not imply positive Ricci curvature, though it leads to positive scalar curvature. Essentially one has to work on a fourth order tensor from the viewpoint of linear algebra, while usually the stronger notion of holomorphic bisectional curvature leads to bilinear forms. Naturally one may wonder if there is a characterization of such an interesting class of Kähler manifolds. In particular, Yau ([@Yau1] and [@Yau2]) asked if the positivity of holomorphic sectional curvature can be used to characterize the rationality of algebraic manifolds. For example, is such a manifold a rational variety? There is much progress on Kähler surfaces with $H>0$. In 1975 Hitchin [@Hitchin] proved that any compact Kähler surface with $H>0$ must be a rational surface and conversely he constructed examples of such metrics on any Hirzebruch surface $M_{2, k}=\mathbb{P}(H^{k}\oplus 1_{\mathbb{CP}^1})$. It remains an interesting question to find out if Kähler metrics of $H>0$ exist on other rational surfaces. In higher dimensions, much less is known on $H>0$ except recent important works of Heier-Wong (see [@HeierWong2015] for example). One of their results states that any projective manifold which admits a Kähler metric with $H>0$ must be rationally connected. It could be possible that any Kähler manifold with $H>0$ is in fact projective, again it is an open question. 
We also remark that some generalization of Hitchin’s construction of Kähler metrics of $H>0$ in higher dimensions has been obtained in [@AHZ]. If Yau’s conjecture is true, then how do we study the complexities of rational varieties which admit Kähler metrics with $H>0$? A naive thought is that the global and local holomorphic pinching constants of $H$ should give a stratification among all such rational varieties. Here the local holomorphic pinching constant of a Kähler manifold $(M, J, g)$ of $H>0$ is the maximum of all $\lambda \in (0,1]$ such that $0<\lambda H(\pi') \leq H(\pi)$ for any $J$-invariant real $2$-planes $\pi, \pi' \subset T_p(M)$ at any $p \in M$, while the global holomorphic pinching constant is the maximum of all $\lambda \in (0,1]$ such that there exists a positive constant $C$ so that $\lambda C \leq H(p,\pi) \leq C$ holds for any $p \in M$ and any $J$-invariant real 2-plane $\pi \subset T_p(M)$. Obviously the global holomorphic pinching constant is no larger than the local one, and there are examples of Kähler metrics with different global and local holomorphic pinching constants on Hirzebruch manifolds ([@YZ]). In a previous work of Zheng and the second named author ([@YZ]), we observe the following result, which follows from some pinching equality on $H>0$ due to Berger [@Berger1960] and recent works on nonnegative orthogonal bisectional curvature ([@ChenX], [@GuZhang], and [@Wilking]). \[[@YZ]\] \[half pinching\] Let $(M^n,g)$ be a compact Kähler manifold with $0<\lambda \leq H \leq 1$ in the local sense for some constant $\lambda$; then the following holds: (1) If $\lambda>\frac{1}{2}$, then $M^n$ is biholomorphic to $\mathbb{CP}^{n}$. (2) If $\lambda=\frac{1}{2}$, then $M^n$ satisfies one of the following: 1. $M^n$ is biholomorphic to $\mathbb{CP}^{n}$. 2. $M^n$ is holomorphically isometric to $\mathbb{CP}^k \times \mathbb{CP}^{n-k}$ with a product metric of Fubini-Study metrics. Moreover, each factor must have the same constant $H$. 3. $M^n$ is holomorphically isometric to an irreducible compact Hermitian symmetric space of rank $2$ with its canonical Kähler-Einstein metric. Let us remark that in the case that the Kähler manifold in Proposition \[half pinching\] is projective and endowed with the induced metric from the Fubini-Study metric of the ambient projective space, a complete characterization of such a projective manifold and the corresponding embedding has been proved by Ros [@Ros]. Comparing with Proposition \[half pinching\], we may view Theorem \[almost half\] as a rigidity result on compact Kähler manifolds with almost one-half-pinched $H>0$. For example, Hirzebruch manifolds cannot admit Kähler metrics whose global pinching constants are arbitrarily close to $\frac{1}{2}$. It is very interesting to find the next threshold for holomorphic pinching constants and to prove some characterization of Kähler manifolds with such a threshold pinching constant. Before making any reasonable speculation, it is helpful to understand examples of such holomorphic pinching constants for some canonical Kähler metrics. In this regard, the Kähler-Einstein metric on an irreducible compact Hermitian symmetric space has its holomorphic pinching constant exactly equal to the reciprocal of its rank ([@Chen1977]). The Kähler-Einstein metrics on many simply-connected compact homogeneous Kähler manifolds (Kähler $C$-spaces) also have $H>0$, and it seems very tedious to work carefully with the corresponding Lie algebras to determine these holomorphic pinching constants except in lower dimensions.
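For a concrete illustration of the value $\lambda=\frac{1}{2}$ appearing above (a standard computation added here for the reader's convenience; it is not taken from the cited references), consider $\mathbb{CP}^k \times \mathbb{CP}^{n-k}$ with a product of Fubini-Study metrics, both factors normalized to the same constant holomorphic sectional curvature $c>0$. Since the curvature tensor of a product metric splits, for a nonzero $V=(V_1, V_2) \in T^{1,0}$ one has $$H(V)=\frac{c\left(||V_1||^4+||V_2||^4\right)}{\left(||V_1||^2+||V_2||^2\right)^2},$$ which attains its maximum $c$ when $V$ is tangent to one of the factors and its minimum $\frac{c}{2}$ when $||V_1||=||V_2||$. Hence the holomorphic pinching constant of such a product is exactly $\frac{1}{2}$, consistent with case 2 of Proposition \[half pinching\].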
It was observed in [@YZ] that the flag $3$-manifold, the only Kähler $C$-space in dimension $3$ which is not Hermitian symmetric, has $\frac{1}{4}$-holomorphic pinching for its canonical Kähler-Einstein metric. Note that Alvarez-Chaturvedi-Heier [@ACH] studied pinching constants of Hitchin’s examples of Kähler metrics with $H>0$ on a Hirzebruch surface. However, it remains unknown what the best pinching constant is among all Kähler metrics with $H>0$ on such a surface. We refer the interested reader to [@ACH] and [@YZ] for more discussions.

The proof
=========

The proof is motivated by the work of Petersen-Tao [@PT] on Riemannian manifolds with almost quarter-pinched sectional curvature. Assume that for some complex dimension $n \geq 2$ there exists a sequence of compact Kähler manifolds $(M_k, J_k, g_k)$ ($k \geq 1$) whose holomorphic sectional curvature satisfies $\frac{1}{2}-\frac{1}{4k} \leq H(M_k, g_k) \leq 1$, and none of $(M_k, J_k)$ is biholomorphic to any of the three listed in the conclusion of Theorem \[almost half\]. In the following steps, $c(n)$, possibly different from line to line, denotes a constant which depends only on $n$. **Step 1:** (A uniform lower bound for the maximal existence time of the Kähler-Ricci flow) It is well-known ([@KN] for example) that bounds on the holomorphic sectional curvature lead to bounds on the Riemannian sectional curvature and the full curvature tensor. In particular, for any unit orthogonal vectors $X$ and $Y$ we have $$\begin{aligned} Sec(X, Y)=R(X, Y, Y, X)=&\frac{1}{8}\ \Big[3H(\frac{X+JY}{\sqrt{2}})+3H(\frac{X-JY}{\sqrt{2}}) \\&-H(\frac{X+Y}{\sqrt{2}})-H(\frac{X-Y}{\sqrt{2}}) -H(X)-H(Y) \Big]\end{aligned}$$ From the works of Hamilton and Shi ([@H1], [@H2], and [@Shi], and Cor 7.7 in [@ChowKnopf] for an exposition of these results on compact manifolds), we conclude that there exists a constant $T(n)>0$ such that the Kähler-Ricci flow $(M_k, J_k, g_k(t))$ with the initial metric $g_k$ is well-defined on the time interval $[0, T(n)]$ for any $k \geq 1$. Moreover, we have $|Rm (M_k, g_k(t))|_{g(t)} \leq c(n)$ for some constant $c(n)$ and all $t \in [0, T(n)]$ and $k \geq 1$. **Step 2:** (An improved curvature bound on a smaller time interval) This step is due to Ilmanen, Shi, and Rong (Proposition 2.5 in [@Rong]). Namely, there exist constants $\delta(n)<T(n)$ and $c(n)$ such that for any $t \in [0, \delta(n)]$ $$\begin{aligned} \min_{p, V \subset T_p(M_k)}Sec(M_k, g_k(0), p, V)-c(n)t &\leq Sec(M_k, g_k(t), p, P) \\ &\leq \max_{p, V \subset T_p(M_k)}Sec(M_k, g_k(0), p, V)+c(n)t.\end{aligned}$$ It is straightforward to see that a similar estimate holds for the holomorphic sectional curvature. Indeed, there exist $\delta(n)$ and $c(n)$ such that for any $t \in [0, \delta(n)]$ $$\begin{aligned} \frac{1}{2}-c(n)\, t \leq H(M_k, g_k(t)) \leq 1+c(n) \, t.\end{aligned}$$ **Step 3:** (An injectivity radius bound on $g_k(t_0)$ for some fixed $t_0 \in [0, \delta]$) We observe that Klingenberg’s injectivity radius estimates on even-dimensional Riemannian manifolds with positive sectional curvature (Theorem 5.9 in [@CE] or p178 of [@Petersen] for example) can be adapted to show that $inj (M_k, g_k(t)) \geq c(n)$ for any $t \in [0, \delta_1(n)]$ and some constant $c(n)>0$. Indeed this follows from the claim below: Let $(M^n, g)$ be a compact Kähler manifold with positive holomorphic sectional curvature $H \geq \delta>0$ and $Sec \leq K$ for some constant $K>0$; then the injectivity radius ${inj} (M^n, g) \geq c(K)$ for some constant $c(K)$.
The proof of the above claim goes along the lines of the proof of Theorem 5.9 in [@CE], except that we need to take the variational vector field to be $J \gamma'(t)$, where $\gamma(t)$ is a closed geodesic. This is where we use $H>0$. In fact such a kind of estimate has been proved in [@ChenTian] assuming positive bisectional curvature. **Step 4:** (A lower bound on the orthogonal bisectional curvature of $(M_k, g_k(t))$) This step is motivated by Petersen-Tao [@PT], where they derived a similar lower bound estimate for isotropic curvatures of almost quarter-pinched Riemannian manifolds along the Ricci flow. \[Guzhang bounds\] There exists some constant $\delta_2(n)$ such that for any $t \in [0, \delta_2(n)]$, the orthogonal bisectional curvature of $(M_k, g_k(t))$ has a lower bound $-\frac{1}{k} e^{c(n)t}$ for some constant $c(n)>0$. Note that Berger’s inequality [@Berger1960] (see also Lemma 2.5 in [@YZ] for an exposition) implies that the orthogonal bisectional curvature of $(M_k, g_k(0))$ is bounded from below by $-\frac{1}{4k}$. Now the proof is based on a maximum principle developed in [@H2]. In the setup of orthogonal bisectional curvature, it is proved (in [@ChenX], [@GuZhang], and [@Wilking]) that nonnegative orthogonal bisectional curvature is preserved under the Kähler-Ricci flow. For the sake of convenience, we simply write $(M, J, g(t))$, where $t \in [0, T(n)]$, instead of the sequence $(M_k, J_k, g_k (t))$. Following [@H2], one may use Uhlenbeck’s trick. Consider a fixed complex vector bundle $E \rightarrow M$ isomorphic to $TM \rightarrow M$; with a suitable choice of bundle isomorphisms $\iota_t: E \rightarrow TM$, one obtains a fixed metric $\iota_t^{\ast} g(t)$ on $E$. Now choose a unitary frame $\{e_{\alpha}\}$ on $T^{1, 0}(E)$ which corresponds to an evolving unitary frame on $TM$ via $\iota_t$, and let $R_{\alpha \overline{\alpha} \beta \overline{\beta}}$ denote $R(\iota_t^{\ast} g(t), e_{\alpha}, \overline{e_{\alpha}}, e_{\beta}, \overline{e_{\beta}})$; the evolution equation of the bisectional curvature reads ([@GuZhang] for example) $$\frac{\partial}{\partial t} R_{\alpha \overline{\alpha} \beta \overline{\beta}}=\Delta_{g(t)} R_{\alpha \overline{\alpha} \beta \overline{\beta}}+\sum_{\mu, \nu} (R_{\alpha \overline{\alpha} \mu \overline{\nu}} R_{\beta \overline{\beta} \mu \overline{\nu}} -|R_{\alpha \overline{\mu} \beta \overline{\nu}}|^2+|R_{\alpha \overline{\beta} \mu \overline{\nu}}|^2) \label{evolution}$$ Now assume $m(t)=\min_{U \perp V \in T^{1, 0}(E)} R(U, \overline{U}, V, \overline{V})$, and assume $m(t_0)=R_{\alpha \overline{\alpha} \beta \overline{\beta}}$ for some $t_0$ and some point $p \in M$. Considering the first and the second variations of $R_{\alpha \overline{\alpha} \beta \overline{\beta}}$, following the proof of Proposition 1.1 in [@GuZhang], and using the curvature bounds in Step 1, we conclude from (\[evolution\]) that $\frac{d^{-} m(t)}{d t}|_{t=t_0} \geq c(n) m(t_0) $ whenever $m(t_0)<0$. Therefore there exists some time interval $[0, \delta_2(n)]$ on which either $m(t) \geq 0$ or $\frac{d^{+}(-m(t))}{d t} \leq c(n) (-m(t))$ if $m(t)<0$. Recall that $m(0) \geq -\frac{1}{4k}$; in any case we end up with $m(t) \geq -\frac{1}{k} e^{c(n) t}$ for all $t \in [0, \delta_2(n)]$. **Step 5:** (A contradiction after taking the limit of $(M_k, J_k, g_k(t))$) Let us consider the sequence of Kähler-Ricci flows $(M_k, J_k, g_k(t))$, where $t \in (0, \delta_2(n)]$. From the previous steps we conclude that there exist $\delta_3(n)$ and $c(n)$ such that:
1. $|Rm|_{g_k(t)} (M_k, g_k(t)) \leq c(n)$ for any $k \geq 1$ and $t \in [0, \delta_3(n)]$. 2. $inj (M_k, g_k(t_0)) \geq \frac{1}{c(n)}$ for some $t_0 \in [0, \delta_3(n)]$. 3. $Ric (M_k, g_k(t_0)) \geq \frac{1}{c(n)}$ for any $k$; this follows from Steps 2 and 4. It follows from Hamilton’s compactness theorem for the Ricci flow [@H3] that $(M_k, J_k, g_k(t))$ converges to a compact limiting Kähler-Ricci flow $(M_{\infty}, J_{\infty}, g_{\infty}(t))$, where $t \in (0, \delta_2(n)]$. It follows from Step 4 that $g_{\infty}(t)$ has nonnegative orthogonal bisectional curvature and $H(g_{\infty}(t))>0$ for any $0<t \leq t_0$. Note that all $M_k$ and $M_{\infty}$ are simply-connected ([@Tsukamoto]); it follows from [@ChenX], [@GuZhang], and [@Wilking] that $(M_{\infty}, g_{\infty}(t_0))$ must be of the following form: $$(\mathbb{CP}^{k_1}, g_{k_1}) \times \cdots \times (\mathbb{CP}^{k_r}, g_{k_r}) \times (N^{l_1}, h_{l_1}) \times \cdots \times (N^{l_s}, h_{l_s}), \label{product list}$$ where each of $(\mathbb{CP}^{k_i}, g_{k_i})$ has nonnegative bisectional curvature and each of $(N^{l_i}, h_{l_i})$ is a compact irreducible Hermitian symmetric space of rank $\geq 2$ with its canonical Kähler-Einstein metric. Now consider a time $t_1<t_0$ close to $t=0$; it follows from Step 2 that $g_{\infty}(t_1)$ is close to $\frac{1}{2}$-holomorphic pinching and also has the same decomposition as (\[product list\]). Indeed the decomposition (\[product list\]) is reduced to exactly the list in the conclusion of Proposition \[half pinching\]. To see this one may also apply the formula for pinching constants of a product of metrics with $H>0$ ([@ACH]). However $(M_k, J_k)$ is not biholomorphic to any of the three manifolds listed in the conclusion of Theorem \[almost half\]. Now we have a sequence of Kähler manifolds $(M_{\infty}, \phi_k^{\ast} J_k, \phi_k^{\ast} g_k(t_1))$ converging to $(M_{\infty}, J_{\infty}, g_{\infty}(t_1))$, where $\phi_k: M_{\infty} \rightarrow M_k$ are the diffeomorphisms from Hamilton’s compactness theorem. This is a contradiction since any compact Hermitian symmetric space is infinitesimally rigid, i.e. $H^{1}(M_{\infty}, \Theta_{M_{\infty}})=0$, where $\Theta$ is the sheaf of holomorphic vector fields on $M_{\infty}$ (see Bott [@Bott]). This finishes the proof of Theorem \[almost half\].

A remark
========

Note that Proposition \[half pinching\] works in the case of the local one half pinching; therefore it seems natural to ask: Does Theorem \[almost half\] hold if we replace the global almost one half pinching by the local one? Another optimistic hope is that $H>0$ is preserved along the Kähler-Ricci flow as long as the initial metric has a suitably large holomorphic pinching constant. We refer to [@YZ] for more discussions.

[99]{} Alvarez, A.; Chaturvedi, A.; Heier, G.. *Optimal pinching for the holomorphic sectional curvature of Hitchin’s metrics on Hirzebruch surfaces.* Contemp. Math., 654, 2015, 133-142. Alvarez, A.; Heier, G.; Zheng, F.. *On projectivized vector bundles and positive holomorphic sectional curvature.* arXiv:1606.08347. Berger, M.. *Pincement riemannien et pincement holomorphe.* Ann. Scuola Norm. Sup. Pisa (3), 14, 1960, 151-159. and *Correction d’un article antérieur*. ibid. (3), 16, (1962), 297. Bott, R.. *Homogeneous vector bundles.* Ann. of Math. (2) 66 (1957), 203-248. Cheeger, J.; Ebin, D. G.. *Comparison theorems in Riemannian geometry.* Revised reprint of the 1975 original. AMS Chelsea Publishing, 2008. Chen, B.-y.. *Extrinsic spheres in Kähler manifolds, II.* Michigan Math. J.
24 (1977), no. 1, 97-102. Chen, X. X.. *On Kähler manifolds with positive orthogonal bisectional curvature.* Adv. Math. 215 (2007), no. 2, 427-445. Chen, X. X.; Tian, G.. *Ricci flow on Kähler-Einstein surfaces.* Invent. Math. 147 (2002), no. 3, 487-544. Chow, B.; Knopf, D.. *The Ricci flow: an introduction.* Mathematical Surveys and Monographs, 110. American Mathematical Society, 2004. Gu, H.; Zhang, Z.. *An extension of Mok’s theorem on the generalized Frankel conjecture.* Sci. China Math. 53 (2010), no. 5, 1253-1264. Hamilton, R. S. *Three-manifolds with positive Ricci curvature.* J. Differential Geom. 17 (1982), no. 2, 255-306. Hamilton, R. S. *Four-manifolds with positive curvature operator.* J. Differential Geom. 24 (1986), no. 2, 153-179. Hamilton, R. S. *A compactness property for solutions of the Ricci flow.* Amer. J. Math. 117 (1995), no. 3, 545-572. Heier, G.; Wong, B.. *On projective Kähler manifolds of partially positive curvature and rational connectedness.* arXiv:1509.02149. Hitchin, N.. *On the curvature of rational surfaces.* Differential geometry, Proc. Sympos. Pure Math., Vol. XXVII, Part 2, 65-80. Amer. Math. Soc., Providence, R. I., 1975. Kobayashi, S.; Nomizu, K.. Foundations of differential geometry. Vol. II. Reprint of the 1969 original. John Wiley & Sons, Inc., New York, 1996. Petersen, P. *Riemannian geometry.* Second edition. Graduate Texts in Mathematics, 171. Springer, New York, 2006. Petersen, P.; Tao, T.. *Classification of almost quarter-pinched manifolds.* Proc. Amer. Math. Soc. 137 (2009), no. 7, 2437-2440. Rong, X.. *On the fundamental groups of manifolds of positive sectional curvature.* Ann. of Math. (2) 143 (1996), no. 2, 397-411. Ros, A.. *A characterization of seven compact Kaehler submanifolds by holomorphic pinching.* Ann. of Math. (2) 121 (1985), no. 2, 377-382. Shi, W.-X.. *Deforming the metric on complete Riemannian manifolds.* J. Differential Geom. 30 (1989), no. 1, 223-301. Tsukamoto, Y.. *On Kählerian manifolds with positive holomorphic sectional curvature.* Proc. Japan Acad. 33 (1957), 333-335. Wilking, B.. *A Lie algebraic approach to Ricci flow invariant curvature conditions and Harnack inequalities.* J. Reine Angew. Math. 679 (2013), 223-247. Yang, B.; Zheng, F.. *Hirzebruch manifolds and positive holomorphic sectional curvature.* arXiv:1611.06571. Yau, S.-T.. *A review of complex differential geometry.* Several complex variables and complex geometry, Part 2 (Santa Cruz, CA, 1989), 619-625, Proc. Sympos. Pure Math., 52, Part 2, Amer. Math. Soc., 1991. Yau, S.-T.. *Open problems in geometry.* 1-28, Proc. Sympos. Pure Math., 54, Part 1, Amer. Math. Soc., 1993. [^1]: Research partially supported by a Simons Collaboration Grant [^2]: Research partially supported by an AMS-Simons Travel Grant
--- abstract: 'We discuss the possibility of controlling biological systems, by exciting in the near infrared region [*hybrid*]{} metallic nanotube ropes, dressed with proteins and embedded in the biosystems. If one nanotube, in a double-tube rope, is filled with metallofullerenes and the other is empty, the two tubes change their opposite equilibrium charging during the irradiation. The resulting change of the local electric field can deform proteins attached to the tubes, and change their catalytic properties.' address: 'Department of Chemical Physics, Weizmann Institute of Science, 76100 Rehovot, Israel' author: - Petr Král title: | Control of catalytic activity of proteins [*in vivo*]{} by nanotube ropes\ excited with infrared light ---

Introduction
============

Over billions of years, a fascinating internal and external complexity evolved in myriads of competing biological species [@Kral]. For example, many bacteria and multi-cellular organisms use structure-sensitive proteins to biomineralize nanocrystals [@Bauerlein], which form unique nanodevices. These [*cold*]{}-growth techniques [@growth] could greatly complement growth methods developed by humans. In general, biosystems and artificial nanosystems can coexist and supplement each other in a number of other directions, in particular, during the release of drugs [@Uhrich]. Their coevolution can lead to the formation of new hybrid [*bio-nano*]{} (BIONA) systems, with an unprecedented organizational and functional level. In this Letter, we discuss possible forms of communication between BIONA components. The (direct) “talk" from the nanosystem to the biosystem could be realized by controlling the catalytic activity of its proteins. The (backward) “talk" could be done by the cells, if they change their local micro-environment or emit (electromagnetic) signals. The direct talk, which we explore here in more detail, could be based on electrochemical methods used in bioelectronics [@Willner]. Unfortunately, these techniques require the presence of electrodes [@Mrksich], which cannot be easily applied inside cells. More promising is thus their combination with [*contactless*]{} methods. Optical techniques [@PUMA] are convenient for their selectivity, but the sensitive interior of cells prohibits the use of large optical frequencies that can easily manipulate chemical bonds. We could thus use the fact that cells are transparent (up to 5 mm thick samples) to [*near-infrared*]{} (NIR) radiation (0.74-1.2$\mu$m). The activity of biosystems could be manipulated via artificial nanosystems, embedded in them, that absorb in the NIR region. This approach is followed in photodynamic therapies, where tumor cells are destroyed chemically, via NIR excitation of porphyrin-based molecules with many extended electronic states [@MacDonald]. Similar results can be obtained by heating the system locally with ultrasound or microwave radiation or via NIR-radiation-heated metallic nanoparticles [@Pitsillides]. Silver and gold nanoparticles [@Sun], which can be produced biologically [@Klaus], are naturally excellent candidates for use in bio-control. In order to control tiny cellular sections, one could also think of using nanotubes. Metallic C nanotubes form sensitive detectors in liquid environments [@DRAG], and when dressed with bio-molecules, via structure-selective [@Attached11; @Attached12] or less specific hydrophobic coupling [@Attached21], they can work as sensitive biosensors [@Attached22].
NIR-radiation control of protein activity ========================================= The bio-control could be elegantly realized by a [*hybrid*]{} nanotube rope, heated by NIR radiation, formed of two adjacent metallic C nanotubes, where one is filled with [*metallofullerenes*]{} (a peapod) [@Luzzi] and the other is empty. In peapods, electrons can be transferred to/from the C$_{60}$ fullerenes under electric bias [@Kavan]. In isolated metallofullerenes like Dy@C$_{82}$, several electrons are passed from Dy to C$_{82}$. When these are used in a peapod, the transferred electrons can be passed further to the nanotube [@Roth]. In a double-rope formed by this peapod and a “twin" (empty) nanotube, the latter would absorb part of the excess charge too, so the two tubes would become oppositely charged. This process can be partly [*inverted*]{} at elevated temperatures [@Roth], since the metallofullerenes have levels close to the Fermi level [@DLee; @Cho]. In Fig. \[BIONA1\], we schematically show the NIR-control of protein (enzyme) activity, based on this process. The NIR excitation heats the two nanotubes, so that electrons, released in equilibrium from the fullerenes to the peapod [@Roth] and the twin tube, are transferred [*back*]{}. This transfer is accompanied by recharging of the tubes, and the resulting change of the local electric field causes deformation of proteins [@Gerstein; @Field] that are selectively attached to the nanotubes. Their new conformation can have a largely different catalytic activity [@WillnerB]. The system thus works in the opposite way to some biosensors [@Benson], where antibodies bind to proteins attached to material surfaces, bend them, and thus change the surface electric parameters. We can tune the system by using different nanotubes, fullerenes and their filling, and especially different proteins to be controlled. The attached proteins, which in general could be much bigger than the tubular system, might help to dissolve the hydrophobic nanotubes in water. Modeling of the control ======================= We consider that the system is formed by two metallic (10,10) carbon nanotubes of radius $r_T\approx 0.68$ nm, where [*one*]{} of them is the peapod. In a double-rope [@Stahl; @Mele], their centers are separated by $D_T\approx 1.7$ nm, which determines the tunneling time, $\tau_t \approx 1$ ps, of electrons between the tubes. The fullerenes are separated from one another by $d_F\approx 2$ nm [@Cho], and their charging strongly but locally deforms the electronic bands of the peapod [@DLee]. We can excite the metallic nanotubes at selected NIR frequencies. Their absorption fits the Drude formula for the dielectric function [@Lee] $$\begin{aligned} \varepsilon(\omega)=\varepsilon_{\infty} \Bigl(1-\frac{\omega_p^2}{\omega^2+i\omega/\tau} \Bigr)\, , \label{epsilon}\end{aligned}$$ where $\hbar\omega_p=0.86$ eV is the plasma frequency, $\varepsilon_{\infty} =4.6$ and $\tau=5\times 10^{-15}$ s. The irradiation, needed to induce reabsorption of electrons by the fullerenes, would have to heat the nanotubes by several tens of degrees [@Roth]. This should not be harmful [*in vivo*]{}, if we use, for example, short (few nanometers long) nanotube [*capsules*]{} [@Tomanek], and isolate them in liposomes [@Haran], which can be delivered to the cells by special techniques. In the absence of available data, we assume here that one electron [*per*]{} fullerene is reabsorbed during the irradiation, and $\approx 20\, \%$ of those come from the empty tube. 
This gives the NIR-radiation induced [*recharging*]{} density $\sigma=0.2\,e/d_F\approx 0.1\, e$ nm$^{-1}$. From this $\sigma$, we can calculate the change of the electric field between the two tubes. If we assume that these two are [*ideal*]{} metallic cylinders of length $L$, their capacitance is [@Slater] $(\varepsilon =\varepsilon_0\, \varepsilon_r)$ $$\begin{aligned} C=\frac{\pi\varepsilon L}{{\rm cosh}^{-1}(D_T/2\, r_T)}\, . \ \ \ \label{cap}\end{aligned}$$ Thus, the potential difference between the nanotubes due to the induced charge transfer is $$\begin{aligned} \Delta \varphi=\sigma L/C\approx 0.1\ {\rm V} \, , \label{pot}\end{aligned}$$ where we use the permittivity of water $\varepsilon_r \approx 4.6$. A similar voltage was used, for example, in the manipulation of proteins attached to metals [@WillnerB]. Activation of the attached protein by this NIR-radiation induced potential can be realized by moving a [*charged tip*]{} of one of its domains [@Benson] (see Fig. \[BIONA2\]). We model this process by first evaluating the potential energy of a charge $q$ at the position ${\bf r}=(x,y)$. The two cylinders with charge densities $\sigma$ and $-\sigma$ have their centers at ${\bf r}_1$ and ${\bf r}_2$, respectively. The potential energy of the charge $q$ is formed by the [*direct*]{} Coulombic component [@Slater] $$\begin{aligned} V_C({\bf r})= -\frac{\sigma \, q\, }{2\pi\varepsilon\, } \, \ln\left(\frac{|{\bf r}-{\bf r}_1|}{|{\bf r}-{\bf r}_2|} \right)\, , \label{VC}\end{aligned}$$ which can be either positive or negative, depending on which of the oppositely charged tubes is closer. It also has a negative [*screening*]{} component [@TIS], originating from the reflection of the external charge in the metallic tubes, which close to the surface of both tubes has the form $$\begin{aligned} V_S({\bf r}) \approx -\frac{q^2}{16\pi\varepsilon}\left( \frac{1}{|{\bf r}-{\bf r}_1|-r_T}+ \frac{1}{|{\bf r}-{\bf r}_2|-r_T}\right)\, . \label{VS}\end{aligned}$$ Here, we simply add the screening potentials of the two tubes, thus neglecting multiple reflections. Typically, structural domains in proteins perform [*hinge*]{} or [*shear*]{} motion [@Gerstein; @Field]. These domains are often formed by (rigid) $\alpha$-helices, connected by (flexible) $\beta$-sheets. The structures (conformations) of deformed proteins in nature are usually close in energy, so that they can be flipped over by room-temperature energies $k_B T$ [@Gerstein]. The conformations of the externally controlled proteins should have different catalytic properties and be more energetically distant, so that they are not changed at room temperature. In the present system, where the control of the protein’s motion is realized via dynamical charging of the nanotubes, we can also consider these two generic (hinge and shear) configurations, shown schematically in Fig. \[BIONA2\]. Since the tubes are different and become charged in equilibrium, the proteins should be able to [*distinguish*]{} them and deposit on them [*asymmetrically*]{} (see Figs. \[BIONA1\]-\[BIONA2\]). In the hinge (bend) configuration (left), the trajectory of the controlled protein domain is practically vertical, toward one of the tube centers. In the shear configuration (right), the trajectory of the domain goes approximately parallel to the vector connecting the tubes’ centers of mass. We assume that the balance of internal forces in the protein, in the presence of equilibrium charging of the tubes, adjusts the charged tip to a position ${\bf r}_0=(x_0,y_0)$. 
The dependence of the protein energy close to this position can be considered parabolic $$\begin{aligned} V_R({\bf r}) \approx C_x\, (x-x_0)^2+ C_y\, (y-y_0)^2\, , \label{VR}\end{aligned}$$ where the constants $C_x$, $C_y$ describe the rigidity of the deformed protein [@Field]. Their values should be such that the difference in energies $\Delta_E$ between the two conformations satisfies $k_B T< \Delta_E< 100$ kJ/mol ($\approx 1$ eV), where the last value is the lower energy limit required to deform [*individual*]{} protein domains [@Field]. Discussion of the protein motion ================================ In order to estimate which of the configurations in Fig. \[BIONA2\] can be more easily controlled, we calculate the [*distance*]{} over which the protein domains move during the NIR-radiation induced charge transfer. The tip moves from the equilibrium position ${\bf r}_0$ to a new position ${\bf r}_0^{'}$, given by the local minimum of the total potential $$\begin{aligned} V_T({\bf r})=V_C({\bf r})+V_S({\bf r})+V_R({\bf r})\, . \label{VT}\end{aligned}$$ In Fig. \[BIONA3\] (top), we search for this minimum in the “hinge" configuration, shown in Fig. \[BIONA2\] (left). The tube centers are located at ${\bf r}_{1,2} =(x_{1,2},y_{1,2})$, $x_{1,2}= \mp 1.1\, r_T$, $y_{1,2}=0$. We present the dependence of the potentials $V_C$, $V_S$, $V_R$ and $V_T$ on the $y$ distance from the center of the right nanotube, and assume that a unit charge $q=e$ is at the tip of the domain [@Benson]. The results are calculated for the (effective) charge density $\sigma=0.1$ e/nm (equilibrium) and $\sigma=0$ (irradiation). We position the domain tip at $x_0=x_2$, $y_0=2.5\, r_T$ and use the rigidity constant $C_y=2$ eV/nm$^{2}$. We can see that the [*change*]{} of the $V_C$ potential energy, due to the induced charge transfer, is small, while $V_S$ is rather large. With the above parameters, the $V_R$ potential can [*locally*]{} compensate the steep $V_S$, so that close to the tubes their sum is almost flat. Then, the weak $V_C$ potential can control the position of the local minimum in $V_T$, but the overall motion is quite small (thin and thick solid lines correspond to equilibrium and irradiation, respectively). The magnitude of the motion could be enlarged if we let the $V_T$ potential lose its local minimum in the [*absence*]{} of irradiation. Then the domain position would [*fluctuate*]{} from being adjacent to the tube to being almost at ${\bf r}_0$. Such a large motion can be obtained directly in the “shear" configuration, shown in Fig. \[BIONA2\] (right). Here, the in-plane (of the tubes) components of the screening forces largely cancel each other, and the out-of-plane components are not effective. Thus the system responds more sensitively to the charging given by $V_C$, as we show in Fig. \[BIONA3\] (bottom). We use the parameters $x_0=0$, $y_0=1.7\, r_T$ and $C_x=0.5$ eV/nm$^{2}$, so that $V_R$ can [*flatten*]{} the two-well minima of $V_S$. In equilibrium, the tube charging causes $V_T$ to develop a minimum close to one of the $V_S$ minima. During the irradiation, the charging decreases and the minimum shifts to $x=0$. Since the two conformations are shifted in energy by $\Delta_E \approx 50$ meV, they would not flip into one another at room temperature, where $k_B T <30$ meV. This domain motion is large enough to control the protein (enzyme) activity. It could open or block pockets on the “back side" of the protein, which is not exposed to the nanotube, and change the catalytic strength of the protein. 
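As a rough numeric illustration of these estimates, the short sketch below (Python; it is not part of the original analysis) evaluates the potential difference between the tubes and the total potential $V_T$ along the shear trajectory, using the parameter values quoted above. The detailed landscape of Fig. \[BIONA3\] also depends on modeling details not restated here, so the printed numbers are indicative only.

```python
# Minimal numeric sketch (illustrative only): the potential difference between the
# tubes and the total potential V_T = V_C + V_S + V_R along the "shear" trajectory
# (y = y_0, varying x), with the parameter values quoted in the text.
# k_e = e^2/(4*pi*eps_0) ~ 1.44 eV nm fixes the units (eV, nm, charges in e).
import numpy as np

k_e   = 1.44                      # e^2/(4*pi*eps_0), eV*nm
eps_r = 4.6                       # relative permittivity used in the text
r_T, D_T = 0.68, 1.7              # tube radius and center separation, nm
x1, x2 = -1.1 * r_T, 1.1 * r_T    # tube centers (y = 0), as in Fig. [BIONA3]
q = 1.0                           # charge at the protein-domain tip, units of e
x0, y0, Cx = 0.0, 1.7 * r_T, 0.5  # shear-configuration parameters (Cx in eV/nm^2)

# potential difference of Eq. (pot): Delta_phi = sigma*L/C with C from Eq. (cap)
sigma = 0.1                       # recharging density, e/nm
print(f"Delta phi ~ {sigma * np.arccosh(D_T / (2 * r_T)) * 4 * k_e / eps_r:.2f} V")

def V_T(x, sig):
    """Total potential of Eq. (VT) along y = y0, in eV."""
    d1 = np.hypot(x - x1, y0)
    d2 = np.hypot(x - x2, y0)
    V_C = -2.0 * k_e * sig * q / eps_r * np.log(d1 / d2)                   # Eq. (VC)
    V_S = -k_e * q**2 / (4.0 * eps_r) * (1/(d1 - r_T) + 1/(d2 - r_T))      # Eq. (VS)
    V_R = Cx * (x - x0) ** 2                       # Eq. (VR); the y term is constant here
    return V_C + V_S + V_R

x = np.linspace(-0.6, 0.6, 2001)
for sig in (0.1, 0.0):            # equilibrium charging vs. irradiation
    print(f"sigma = {sig:.1f} e/nm -> V_T minimum at x = {x[np.argmin(V_T(x, sig))]:+.2f} nm")
```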
We have demonstrated that NIR-radiation-excited hybrid nanotube ropes could control the activity of proteins [*in vivo*]{}. Nanotube systems might also [*directly*]{} activate chemical reactions used in phototherapy [@MacDonald], in particular, if special “porphyrin-like" defects are formed in the nanotube walls. [*In vitro*]{}, one could also bias nanotubes externally in order to control biochemical reactions or use them in other applications on the nanoscale [@Baughman]. [**Acknowledgments**]{} The author would like to thank S. Gross and E. J. Mele for valuable discussions. EU COCOMO is acknowledged for support. [00]{} P. Král, J. Theor. Biol. [**212**]{} (2001) 355. E. Baüerlein, Angew. Chem. Int. Ed. [**42**]{} (2002) 614. E. Dujardin and S. Mann, Adv. Eng. Mat. [**4**]{} (2002) 461. K. E. Uhrich, S. M. Cannizzaro, R. S. Langer and K. M. Shakesheff, Chem. Rev. [**99**]{} (1999) 3181. I. Willner and B. Willner, Trends Biotechnol. [**19**]{} (2001) 4222. C. D. Hodneland and M. Mrksich, J. Am. Chem. Soc. [**122**]{} (2000) 4235. P. Král and D. Tománek, Phys. Rev. Lett. [**82**]{} (1999) 5373. For a review, see I. I. MacDonald and T. J. Doughery, J. Porphyrins Phthalocyanines [**5**]{} (2001) 105. C. M. Pitsillides, E. K. Joe, X. Wei, R. R. Anderson and C. P. Lin, Biophysical J. [**84**]{} (2003) 4023. Y. Sun and Y. Xia, Science [**298**]{} (2002) 2176. T. Klaus, R. Joerger, E. Olsson and C.-G. Granqvist, PNAS [**96**]{} (1999) 13611. P. Král and M. Shapiro, Phys. Rev. Lett. [**86**]{} (2001) 134; S. Ghosh, A. K. Sood and N. Kumar, Science [**299**]{} (2003) 1042. B. F. Erlanger, B.-X. Chen, M. Zhun and L. Brus, Nano Letters [**1**]{} (2001) 465; S. Wang [*et al.*]{}, Nature Materials [**2**]{} (2003) 196. C. Richard [*et al.*]{}, Science [**300**]{} (2003) 775. R. J. Chen [*et al.*]{}, PNAS [**100**]{} (2003) 4984. D. J. Hornbaker [*et al.*]{}, Science [**295**]{} (2002) 828; C. L. Kane [*et al.*]{}, Phys. Rev. B [**66**]{} (2002) 235423. L. Kavan, L. Dunsch and H. Kataura, Chem. Phys. Lett. [**361**]{} (2002) 79. P. W. Chiu, S. F. Yang, S. H. Yang, G. Gu and S. Roth, Appl. Phys. Lett. [**79**]{} (2001) 3845; [*ibid*]{} Appl. Phys. A [**76**]{} (2003) 463. D. Lee [*et al.*]{}, Nature [**415**]{} (2002) 1005. Y. Cho, S. Han, G. Kim, H. Lee and J. Ihm, Phys. Rev. Lett. [**90**]{} (2003) 106402. M. Gerstein, A. M. Lesk and C. Chothia, Biochemistry [**33**]{} (1994) 6739. K. Hinsen, A. Thomas and M. J. Field, Proteins: Structure, Function, and Genetics [**34**]{} (1999) 369. V. P.-Yissar [*et al.*]{}, Faraday Discuss. [**116**]{} (2000) 119. D. E. Benson [*et al.*]{}, Science [**293**]{} (2001) 1641. H. Stahl [*et al.*]{}, Phys. Rev. Lett. [**85**]{} (2000) 5186. A. A. Maarouf, C. L. Kane and E. J. Mele, Phys. Rev. B [**61**]{} (2000) 11156; M.-W. Lee and Y.-C. Chen, Jpn. J. Appl. Phys. [**41**]{} (2002) 4663. Y.-K. Kwon, D. Tománek and S. Iijima, Phys. Rev. Lett. [**82**]{} (1999) 1470. E. Rhoades, E. Gussakovsky and G. Haran, PNAS [**100**]{} (2003) 3197. J. C. Slater and N. H. Frank, [*Electromagnetism*]{} (McGraw-Hill, NY 1947). B. E. Granger, P. Král, H. R. Sadeghpour and M. Shapiro, Phys. Rev. Lett. [**89**]{} (2002) 133506. R. H. Baughman [*et al.*]{}, Science [**284**]{} (1999) 1340.
Temporal chaos in discrete one dimensional gravity model of traffic flow\ Elman Mohammed Shahverdiev,[^1]\ Department of Information Science,[^2] Saga University, Saga 840, Japan\ Shin-ichi Tadaki\ Department of Information Science,[^3] Saga University, Saga 840, Japan\ There are mainly two approaches to traffic flow dynamics. At a microscopic level, the system can be described in terms of variables such as the position and velocity of each vehicle (optimal velocity model \[1,2\], cellular automaton model \[3,4\]); at a macroscopic level, important variables include the car density, the average velocity, the rate of traffic flow, and the total number of trips between two zones (mean field theory \[5-11\], origin-destination, or so-called gravity, model \[12,13\]). The gravity model originates from an analogy with Newton’s gravitational law \[13\]. As a rule, traffic flow dynamics is nonlinear. Nowadays it is well known that some deterministic nonlinear dynamical systems, depending on the values of the system’s parameters, exhibit unpredictable, chaotic behavior; see, e.g., \[14-16\] and references therein. The main reason for such behavior is the instability of the nonlinear system. Such an instability has in general been considered undesirable, not only in traffic flow dynamics but also in dynamical systems in mechanics, engineering, etc., because the instability could lead to chaos and its unpredictability. The stability or instability of traffic flow dynamics is a highly valued concept in traffic management and planning.\ Since the pioneering papers \[17,18\] on chaos control theory, the attitude toward chaos has changed dramatically. Nowadays, in some situations chaotic behavior is even considered an advantage. In general, the main idea behind chaos control theory is to modify the nonlinear system’s dynamics so that previously unstable states (fixed points, periodic states, etc.) become stable. In practice, such modifications can be realized by changing the system’s parameters, through some feedback or nonfeedback mechanisms, or even by changing the dynamical variables of the system in an “appropriate” manner at the “due” time (adaptively or nonadaptively), etc. The interest in chaos control theory is due to the application of this phenomenon in secure communication, in the modelling of brain activity and recognition processes, etc. Methods of chaos control may also result in improved performance of chaotic systems. (For the latest comprehensive review of chaos and its control see the Focus Issue \[19\] and references therein; also see \[20-22\].)\ In this Brief Report we report on possible chaotic behavior in a one-dimensional gravity model of traffic flow dynamics. 
The gravity model assumes that the number of trips between zones (origins and destinations) depends on the number produced at and attracted to each zone, and on the travel cost between zones. In the dynamic formulation of the gravity model the travel costs are a function of the number of trips between zones. According to \[12\], the discrete dynamic trip distribution gravity model takes the form $$x_{ij}(t+1)=f(c_{ij}(t)),\hspace*{3cm}(1)$$ where $x_{ij}$ is the relative number of trips from zone $i$ to zone $j$, normalised so that $\sum_{ij} x_{ij}=1$, and $c_{ij}$ is the travel cost from zone $i$ to zone $j$ given the trips $x_{ij}$:\ $$c_{ij}(t)=c_{ij}^{0}(1+\alpha (\frac{x_{ij}}{q_{ij}})^{\gamma}),\hspace*{1cm}(2)$$ where $c_{ij}^{0}$ is the uncongested travel cost, $q_{ij}$ is the relative capacity of the roads between origin and destination, and $\alpha$ and $\gamma$ are constants. $f(c_{ij})$ is a function which relates the number of trips to the travel costs. The following cost function $$f(c_{ij})=c_{ij}^{\mu}\exp(-\beta c_{ij}),\hspace*{3cm}(3)$$ where $\mu$ and $\beta$ are constants, is referred to as the combined cost function and unites the power and exponential forms of cost functions. It is known that for continuous dynamical systems, chaotic behavior requires three or more dynamical variables. For discrete systems, chaos is possible even in one-dimensional systems. According to \[12\], for the unconstrained gravity model (unconstrained in the sense that it cannot guarantee that the number of trips originating from or terminating at a given zone has a predetermined value) and the singly-constrained gravity model (singly-constrained in the sense that it supposes that either the number of trips originating from or the number terminating at a given zone has a predetermined value), the dynamics of the trip distribution model in the one-dimensional case can be written as $$x(t+1)= A f(x(t))=A(c^{(0)})^{\mu}(1+\alpha (\frac{x(t)}{q})^{\gamma})^{\mu}\exp(-\beta c^{(0)}(1+\alpha (\frac {x(t)}{q})^{\gamma})),\hspace*{1cm}(4)$$ where $A$ is a normalizing constant factor; the definitions of the other constants are given above. The authors of \[12\] claim that the one-dimensional gravity model does not exhibit chaotic behavior. We will show that this model can be reduced to a one-dimensional chaotic model known as the exponential map in ecology \[23\]: $$x(n+1)=f(x(n))=x(n)\exp(r(1-x(n))),\hspace*{4cm}(5)$$ where $r$ is the positive control parameter of the chaotic mapping (5).\ Indeed, let us take $\mu =\gamma =1$. Then the mapping (4) can be written in the following form: $$x(t+1)=m_{1}(1+mx(t))\exp(-\beta c^{(0)}mx(t)),\hspace*{5cm}(6)$$ where $m_{1}=Ac^{(0)}$ and $m=\frac{\alpha}{q}$. Further, by the linear transformation of variables $y=1+mx(t)$, the mapping (6) can be related to the mapping $$y(t+1)=m_{2}y(t)\exp(\beta c^{(0)}(1-y(t))),\hspace*{4cm}(7)$$ where $m_{2}=m_{1}\exp (-\beta c^{(0)})$. Comparing (5) and (7), one can see that in the gravity model $\beta c^{(0)}$ can be taken as the control parameter. 
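To illustrate this point numerically, the following minimal sketch (Python; the parameter values are chosen for illustration only) iterates the exponential map (5) and estimates its Lyapunov exponent along the orbit; a positive value signals temporal chaos, which for this map occurs once the control parameter is sufficiently large, apart from periodic windows.

```python
# Minimal sketch (illustrative parameters only): iterate the exponential map (5),
# x(n+1) = x(n) exp(r (1 - x(n))), and estimate its Lyapunov exponent along the
# orbit. In the reduced gravity model (7), the role of r is played by beta*c^(0).
import math

def f(x, r):
    return x * math.exp(r * (1.0 - x))

def lyapunov(r, x0=0.5, n_transient=1000, n_steps=20000):
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = f(x, r)
    acc = 0.0
    for _ in range(n_steps):
        # derivative of the map: f'(x) = exp(r(1-x)) (1 - r x)
        acc += math.log(max(abs(math.exp(r * (1.0 - x)) * (1.0 - r * x)), 1e-300))
        x = f(x, r)
    return acc / n_steps

for r in (1.5, 2.0, 2.7, 3.0):
    print(f"r = beta*c0 = {r:.1f}  ->  Lyapunov exponent ~ {lyapunov(r):+.3f}")
```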
Thus we have shown that temporal chaotic behavior is possible even in the discrete one dimensional gravity model for traffic flow dynamics. Moreover, this chaotic behavior could be controlled by the constant rate harvesting approach developed in \[24\] for unimodal one dimensional mappings, including (5), or by some other methods for one dimensional dynamical systems.\ Acknowledgments\ The author thanks the JSPS for the Fellowship.\ [99]{} W. Leutzbach, Introduction to the Theory of Traffic Flow (Springer-Verlag, Berlin, 1988). M. Bando, K. Hasebe, A. Nakayama, A. Shibata and Y. Sugiyama, Phys. Rev. E [**51**]{}, 1035 (1995). S. Tadaki, Phys. Rev. E [**49**]{}, 1168 (1994). S. Tadaki, Phys. Rev. E [**54**]{}, 2409 (1996). I. Prigogine and R. Herman, Kinetic Theory of Vehicular Traffic (Elsevier, New York, 1971). O. Biham, A. A. Middleton and D. Levine, Phys. Rev. A [**46**]{}, R6124 (1992). M. Fukui and Y. Ishibashi, J. Phys. Soc. Jpn. [**62**]{}, 3841 (1993). D. Helbing, Phys. Rev. E [**51**]{}, 3164 (1995). E. Ben-Naim, P. L. Krapivsky and S. Redner, Phys. Rev. E [**50**]{}, 822 (1994). B. H. Wang and P. M. Hui, J. Phys. Soc. Jpn. [**66**]{}, 1238 (1997). T. Nagatani, Phys. Rev. E [**48**]{}, 3290 (1993). X. Zang and D. F. Jarret, Chaos [**8**]{}, 503 (1998). J. de D. Ortuzar and L. G. Willumsen, Modelling Transport (Wiley, New York, 1990). E. N. Lorenz, J. Atmos. Sci. [**20**]{}, 130 (1963). H. G. Schuster, Deterministic Chaos: An Introduction (Physik-Verlag, Weinheim, 1984). H. Haken, Advanced Synergetics, 3rd ed. (Springer-Verlag, Berlin, 1993). L. M. Pecora and T. L. Carroll, Phys. Rev. Lett. [**64**]{}, 821 (1990). E. A. Ott, C. Grebogi and J. A. Yorke, Phys. Rev. Lett. [**64**]{}, 1196 (1990). CHAOS [**7**]{}, 509 (1997). E. M. Shahverdiev, J. Phys. Soc. Jpn. [**67**]{}, 1912 (1998). E. M. Shahverdiev and L. A. Shelepin, Lebedev Physics Institute Reports (Moscow) N9-10, 131 (1997). E. M. Shahverdiev, ICTP (Trieste, Italy) preprint N.IC98119 (1998). J. M. Smith, Models in Ecology (Cambridge University Press, Cambridge, 1974). S. Gueron, Phys. Rev. E [**57**]{}, 3645 (1998). [^1]: e-mail:shahverdiev@lan.ab.az\ On leave from Institute of Physics, 33, H.Javid avenue, Baku 370143, Azerbaijan [^2]: e-mail:elman@ai.is.saga-u.ac.jp [^3]: e-mail:tadaki@ai.is.saga-u.ac.jp
--- abstract: 'This paper considers precoding for multi-group multicasting with a common message. The multiple antenna base station communicates with $K$ clusters, each with $L$ users. There is a common message destined to all users and a private multicast message for each cluster. We study the weighted sum rate (WSR) maximization problem for two different schemes: (i) the base station transmits the superposition of common and multicast messages, (ii) the base station concatenates the multicast message vector with the common message. We also formulate a second problem, weighted minimum mean square error (WMMSE) minimization, and prove that WSR maximization and WMMSE minimization are equivalent at the optimal solution. Inspired by the WMMSE problem, we suggest a suboptimal algorithm based on alternating optimization. We apply this algorithm to the two transmission schemes, and show that there is a fundamental difference between the two. We compare the results with maximal ratio transmission (MRT) and zero-forcing (ZF) precoding, and investigate the effects of the number of base station antennas, the number of groups and the number of users in a group. Finally, we study imperfect successive interference cancellation (SIC) at the receivers and show that the first transmission scheme is more robust.' author: - Ahmet Zahid Yalcin and Melda Yuksel  bibliography: - 'referencesMult.bib' title: 'Precoder Design For Multi-group Multicasting with a Common Message' --- Common and private data transmission, multicast transmission, multiple input multiple output, physical layer precoding, superposition coding. Introduction ============ With the introduction of the fifth generation (5G) mobile communication systems, mobile communications will diffuse into all areas of daily life. While the fourth generation (4G) is mainly about Internet access, video calls, and cloud computing, 5G will be about smart-health, smart-cities, 4K video, factory automation, self-driving cars, real-time cloud services and more. All these new and different applications have different requirements, such as high data rates, ultra-low latency and ultra-high reliability, or the ability to handle massive radio access. The multiple-input multiple-output (MIMO) technology is one of the main enablers for the high data rates required for enhanced mobile broadband. The capacity of MIMO channels is found in [@Foschini1998_OnLimitsOf], [@telatar_99]. These papers reveal that the number of degrees of freedom in a MIMO channel is limited by the minimum of the numbers of transmit and receive antennas. Then the number of degrees of freedom of a communication link between a multiple antenna base station and a single antenna user simply becomes equal to 1. This degrees-of-freedom limitation can be alleviated by multi-user MIMO (MU-MIMO) transmission [@Vaze12]. In MU-MIMO systems, multiple users simultaneously receive their own unicast data streams [@Goldsmith2006_OnTheOptimality]. The capacity of MU-MIMO channels is achieved by dirty paper coding (DPC) [@Weingarten2006_TheCapacityRegion]. However, DPC is a highly complex, non-linear precoding scheme. Therefore, in the literature, simple linear precoding strategies have been investigated for MU-MIMO transmission [@Sharif2007_AComparisionTimeSharingDPC], [@Christensen2008_WeightedSumRate]. Although sub-optimal in general, superposition coding (SPC) is also used as a simpler alternative to DPC. 
It offers good interference management [@Zhang2011_AUnifiedTreatmentofSPC], [@Vanka2012_SPCStrategies], [@Joudeh2016_SumRateMax] and harvests some of the benefits of DPC in MU-MIMO systems with applications to non-orthogonal multiple access (NOMA) [@ding2014performance; @seo2018high]. While simultaneous multiple unicast data transmission is important, applications such as mobile application updates, mass advertisements and public group communications require multicasting [@multicast_1], [@multicast_2], [@multicast_3]. In multicast transmission, the same data is sent to a group of users. Precoding for multicasting is studied in [@Jindal2006_CapacityLimitsofMultipleAnt], [@Abdelkader2010_MultipleAntennaMulticasting] and [@Chiu2009_TransmitPrecoding]. Precoder design for max-min fairness for multicasting is studied in [@Sidiropoulos2006_TransmitBeamforming]. Unicast and multicast traffic can also coexist in cellular networks. Scheduling unicast and multicast traffic is considered in [@Silva2006_AdaptiveBeamforming] and [@Baek2009_AdaptiveTransmission], while their simultaneous transmission via superposition coding for an enhanced sum rate is studied in [@Baek2009_AdaptiveTransmission]. While [@Baek2009_AdaptiveTransmission] is for a very limited system with two users, [@Yalcin18downlink] studies the precoder design problem for a system in which all users have to decode the common message and a subset of these users have to receive their own unicast messages. In multi-group multicasting there are multiple groups, where all users in the same group seek the same message and different groups receive different messages [@Silva2009_LinearTransmitBeamforming]. Precoding for a guaranteed quality of service with minimum transmission power is studied in [@Gao2005_GroupOrientedBeamforming] and [@Karidipis2008_QoSMaxMinFairTransmit]. Max-min fair transmit precoding for multi-group multicasting under a total power constraint is also studied in [@Karidipis2008_QoSMaxMinFairTransmit]. Weighted fair multi-group multicasting precoding under per-antenna power constraints is investigated in [@Christopoulos2014_WeightedFair]. A rate splitting approach is suggested for max-min fair transmit precoder design in [@Joudeh2017_RateSplittingforMaxMin]. The max-min fair transmit precoding problem for multi-group multicasting in massive MIMO systems is studied in [@Zhou2015_JointMulticast] and [@Sadeghi2018_MMFMassiveMIMO]. Maximum sum rate for multigroup multicast precoding under per-antenna power constraints is studied in [@christopoulos2015multicast]. The authors in [@kaliszan2012multigroup] also study sum rate maximization, yet under a total power constraint, and allow for coding over multiple blocks. Apart from the above discussion on the objective function (sum rate maximization or max-min fairness) or the system topology (unicast/multicast transmission, etc.), another issue is whether successive interference cancellation (SIC) is perfect or not. In general, imperfect channel state information (CSI) and/or hardware impairments result in imperfect SIC. While [@Vanka2012_SPCStrategies; @ding2014performance; @seo2018high; @Yalcin18downlink; @Joudeh2017_RateSplittingforMaxMin; @ekrem2012outer] consider perfect CSI and perfect SIC, [@Joudeh2016_SumRateMax] considers imperfect CSI and perfect SIC, and [@Kim2015_DesgnOfUserClustering; @senel2017optimal] consider imperfect SIC. 
All the above multi-group multicasting papers, as well as the others we have encountered in the literature, consider non-overlapping multicasting groups. As an initial step in finding sum rate optimal precoders for the most general problem with overlapping multicast groups and unicast messages, in this work we study multi-group multicasting with a common message. In this system there are non-overlapping multicast groups, yet all groups are also interested in a common message. In this paper we follow the outline listed below: 1. To be able to send the common message along with the private multicast messages, we define two different transmission schemes: (i) the base station transmits the superposition of the common and multicast messages, (ii) the base station regards the common message as another message and concatenates the multicast message vector with the common message. 2. For these transmission schemes, we study the weighted sum rate (WSR) maximization problem. To be able to propose the iterative algorithm described in the next step, we also define the weighted minimum mean square error (WMMSE) problem. We then write the gradient expressions and the Karush-Kuhn-Tucker (KKT) conditions of the Lagrangian equations for both problems. Then, we prove that these two problems are equivalent at the optimum point. Although we cannot find a closed-form expression, we state the equations that the optimal precoders, receivers and Lagrange multipliers satisfy. 3. The WSR maximization problem is non-convex and the optimal solution is hard to find. Inspired by the equivalence between WMMSE minimization and WSR maximization, we propose a low-complexity iterative algorithm for finding a locally optimum solution. The algorithm is based on alternating optimization, which iterates between the precoders, mean square error (MSE) weights and receiver structures written for the WMMSE problem. We also discuss and display convergence for this algorithm. 4. We apply this algorithm to both of the transmission schemes, and compare the results with maximal ratio transmission (MRT) and zero-forcing (ZF) precoding. 5. We also investigate the effect of imperfect SIC on the achievable WSR. As a result, we learn that there is a fundamental difference between transmitting the common message via the sum of the private multicast message precoders and via a separate precoder. We find that the two schemes favor common and private multicast messages differently. Also, the former transmission scheme is always better than the latter and is more robust against imperfect SIC. To the best of our knowledge, in the literature, there is no other paper which compares these two schemes to reveal their fundamental differences. We present the system model, formulate the WSR and WMMSE problems and prove their equivalence in Section \[sec:systemmodel\]. We propose the iterative algorithm in Section \[sec:precoder\]. We present the simulation results in Section \[sec:simresult\], and conclude the paper in Section \[sec:conclusion\]. System Model {#sec:systemmodel} ============ We consider a single-cell downlink communication system. The base station is equipped with $M$ transmit antennas. The base station communicates with $K$ clusters and there are $L$ single antenna users in each cluster. Each user belongs to only one cluster. The base station has common data $s_c$ for all users, and private multicast data ${s}_{u_k}$ ($k=1,\ldots,K$) destined to each of the $K$ clusters. 
In this work, our aim is to understand the conditions under which superposition coding is beneficial. For this purpose, we study two different signal models. In the first model, the common message is superposed onto the private multicast message vector, and in the second model, the common message is appended to the private multicast message vector. Signal Model 1: : The base station employs a 2-layer superposition coding scheme, in which the base and enhancement layers respectively carry common and private multicast data. The input data vector is denoted as $\mathbf{s}^{(1)} = {[{s}_1,\ldots,{s}_{K}]}^T$ $\in \mathbb{C}^{K \times 1}$, and each input data stream ${s}_k$, $k= 1,\ldots ,K$ is the sum of common and private multicast data, respectively denoted as ${s}_{c}$ and ${s}_{u_k}$. Thus, the input data vector can be written as $\mathbf{s}^{(1)}=\mathbf{s}_c + \mathbf{s}_u$, where $\mathbf{s}_c = {[{s}_c,\ldots,{s}_c]}^T$ $\in \mathbb{C}^{K \times 1}$ and $\mathbf{s}_u = {[{s}_{u_1},\ldots,{s}_{u_{K}}]}^T$ $\in \mathbb{C}^{K \times 1}$. We assume $s_c$ and all $s_{u_k}$ are independent and $\mathbb{E}\{{s}_c{s}_c^{\ast}\} = \alpha$ and $\mathbb{E}\{{s}_{u_k}{s}_{u_k}^{\ast}\} = \bar{\alpha}$. Here, $\bar{\alpha} = 1-\alpha$ and $\alpha$ is the ratio of power allocated to common data. The input data vector $\mathbf{s}^{(1)}$ is linearly processed by a precoder matrix $\mathbf{P}^{(1)} = [\mathbf{p}_1^{(1)},\ldots,\mathbf{p}_{K}^{(1)}]$ $\in \mathbb{C}^{M \times K}$. Each precoding vector $\mathbf{p}_k^{(1)}$ is of size $M \times 1$. Signal Model 2: : The input data vector is defined as $\mathbf{s}^{(2)} = {[s_c,{s}_{u_1},\ldots,{s}_{u_K}]}^T$ $\in \mathbb{C}^{(K+1) \times 1}$, where $s_c$ and ${s}_{u_k},k=1,\ldots,K$, are the same as in the above signal model 1. We assume $s_c$ and all $s_{u_k}$ are independent and $\mathbb{E}\{\mathbf{s}^{(2)}{\mathbf{s}^{(2)}}^H\} = \mathbf{I}$. The input data vector $\mathbf{s}^{(2)}$ is linearly processed by a precoder matrix $\mathbf{P}^{(2)} = [\mathbf{p}_c, \mathbf{p}_1^{(2)},\ldots,\mathbf{p}_{K}^{(2)}]$ $\in \mathbb{C}^{M \times (K+1)}$, where both the precoding vector $\mathbf{p}_k^{(2)}$ for each multicast data and $\mathbf{p}_c$ for common data are of size $M \times 1$. Then, for signal model $m=1,2$, the overall transmit data vector $\mathbf{x}$ $\in \mathbb{C}^{M \times 1}$ at the base station can be written as $$\begin{aligned} \mathbf{x}^{(m)} &= \mathbf{P}^{(m)}\mathbf{s}^{(m)} = \mathbf{p}_A^{(m)} {s}_c + \sum_{k=1}^{K} \mathbf{p}_k^{(m)} {s}_{u_k}.\label{precodedSignal}\end{aligned}$$ Here, $\mathbf{p}_A^{(1)} = \sum_{k=1}^{K}\mathbf{p}_k^{(1)}$ and $\mathbf{p}_A^{(2)} = \mathbf{p}_c$ for signal models 1 and 2, respectively. As user-level precoding is assumed, there is an average total power constraint, $$\begin{aligned} \mathbb{E}\{{\mathbf{x}^{(m)}}^{H}{\mathbf{x}^{(m)}}\}&= B^{(m)} \operatorname{Tr}(\mathbf{p}_A \mathbf{p}_A^H) + C^{(m)} \sum_{k=1}^{K} \operatorname{Tr}(\mathbf{p}_k^{(m)}{\mathbf{p}_k^{(m)}}^H )\leq E_{tx}. \label{pow_const}\end{aligned}$$ In (\[pow\_const\]), $(B^{(1)} ,C^{(1)} ) =(\alpha, \bar{\alpha} )$ and $(B^{(2)} ,C^{(2)} ) = (1,1)$ for signal models $1$ and $2$, respectively. Note that signal model 1 seems to be more restrictive, as $\mathbf{p}_A^{(1)}$ has to be set to the sum of the private multicast data precoders. However, it introduces a new degree of freedom due to the parameter $\alpha$. 
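To make the difference between the two models concrete before describing the precoder design, the short sketch below (Python; the dimensions, the value of $\alpha$ and the random precoders are illustrative choices, not taken from the paper) evaluates the transmit power of (\[pow\_const\]) for both signal models and rescales the precoders to a power budget $E_{tx}$.

```python
# Minimal sketch (illustrative dimensions, random precoders reused for both models):
# form p_A and evaluate the average transmit power of Eq. (pow_const) for signal
# models 1 and 2, then rescale so that the budget E_tx is met with equality.
import numpy as np

rng = np.random.default_rng(0)
M, K, alpha, E_tx = 8, 3, 0.4, 10.0        # antennas, clusters, common-power ratio, budget
P1 = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))   # p_k, k = 1..K
p_c = rng.standard_normal(M) + 1j * rng.standard_normal(M)            # p_c (model 2 only)

def tx_power(model):
    if model == 1:                         # p_A^(1) = sum_k p_k^(1), (B, C) = (alpha, 1-alpha)
        p_A = P1.sum(axis=1)
        return alpha * np.linalg.norm(p_A) ** 2 + (1 - alpha) * np.linalg.norm(P1) ** 2
    else:                                  # p_A^(2) = p_c, (B, C) = (1, 1)
        return np.linalg.norm(p_c) ** 2 + np.linalg.norm(P1) ** 2

for model in (1, 2):
    pw = tx_power(model)
    print(f"model {model}: E[x^H x] = {pw:6.2f}, rescale all precoders by {np.sqrt(E_tx/pw):.3f}")
```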
The iterative precoder design algorithm we propose in Section \[sec:precoder\] designs precoder directions and power levels jointly. The $\alpha$ parameter will serve as an external handle that allows for adjustments in precoder power levels. Because of this difference, in Section \[sec:simresult\], we will observe that the two signal models are fundamentally different. The received signal at the $l$-th user of the $k$-th cluster can be expressed as $$\begin{aligned} {y}_{l,k}^{(m)} &=\mathbf{h}_{l,k} \mathbf{p}_A^{(m)} {s}_c + \mathbf{h}_{l,k} \sum_{i=1}^{K} \mathbf{p}_i^{(m)} {s}_{u_i} + {n}_{l,k}.\label{rec_signal}\end{aligned}$$ In (\[rec\_signal\]), ${\mathbf{h}_{l,k}}$ $\in \mathbb{C}^{1 \times M}$ is the channel gain vector of the $l$-th user of the $k$-th cluster, $l = 1,...,L$, $k = 1,...,K$. The entries in $\mathbf{h}_{l,k}$ denote complex-valued channel gains. The noise component ${n}_{l,k}$ is an independent, circularly symmetric complex Gaussian random variable with zero mean and unit variance. The base station is assumed to know all $\mathbf{h}_{l,k}$, while the $l$-th user in the $k$-th cluster has to be informed about the composite channel gains $\mathbf{h}_{l,k}\mathbf{p}_A^{(m)}$ and $\mathbf{h}_{l,k}\mathbf{p}_k^{(m)}$. This information can be acquired by utilizing standard training techniques. For example, in time division duplex mode, in the uplink, the users can send training data to the base station, and the base station learns all $\mathbf{h}_{l,k}$. Then, the base station can send training data in the downlink phase *twice*. This way, the users can learn the composite channel gains for the common and the private multicast messages respectively in the first and second downlink transmissions. We further assume that these steps are all error free. Achievable Data Rates --------------------- In this system, all users decode the common message. In addition to this, each user subtracts this common message from its received signal to decode its private multicast message using SIC. Then, the achievable rates for the common and private multicast messages at the $l$-th user of the $k$-th cluster are respectively defined as $R_{c,lk}$ and $R_{u,lk}$, $l=1,\ldots,L$, $k=1,\ldots,K$, and are given as $$\begin{aligned} R_{c,lk}^{(m)} &= \log \big( 1 + B^{(m)} {\mathbf{p}_A^{(m)} }^H \mathbf{h}_{l,k}^H {r}_{c,lk}^{(m)^{-1}} \mathbf{h}_{l,k} \mathbf{p}_A^{(m)} \big),\label{rate_cu}\\ R_{u,lk}^{(m)} &= \log \big( {1} + C^{(m)} {\mathbf{p}_k^{{(m)}}}^H \mathbf{h}_{l,k}^H {r}_{u,lk}^{(m)^{-1}} \mathbf{h}_{l,k} \mathbf{p}_k^{(m)}\big). \label{rate_u}\end{aligned}$$ Here ${r}_{c,lk}^{(m)}$ and ${r}_{u,lk}^{(m)}$ are the effective noise variances for common and multicast data at the $l$-th user of the $k$-th cluster for the $m$-th signal model. They can be calculated as $$\begin{aligned} {r}_{c,lk}^{(m)} &= C^{(m)} \mathbf{h}_{l,k} \left(\sum_{i=1}^{K}\mathbf{p}_i^{(m)}{\mathbf{p}_i^{(m)}}^H\right) \mathbf{h}_{l,k}^H + 1,\\ {r}_{u,lk}^{(m)} &= \delta^2 B^{(m)} \mathbf{h}_{l,k} \mathbf{p}_A^{(m)}{\mathbf{p}_A^{(m)} }^H \mathbf{h}_{l,k}^H + C^{(m)} \mathbf{h}_{l,k} \left(\sum_{i=1,i\ne k}^{K}\mathbf{p}_i^{(m)}{\mathbf{p}_i^{(m)}}^H\right) \mathbf{h}_{l,k}^H + 1,\label{r_ulk_imperfectSIC}\end{aligned}$$where $\delta$ denotes the amount of residual self-interference. If SIC is perfect, $\delta=0$. 
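As a quick numerical illustration of these expressions, the sketch below (Python; the dimensions, the i.i.d. Rayleigh channels and the random unit-norm precoders are illustrative assumptions, and logarithms are taken to base 2) evaluates $R_{c,lk}^{(m)}$ and $R_{u,lk}^{(m)}$ for one user per cluster under signal model 2, where $B^{(m)}=C^{(m)}=1$.

```python
# Minimal sketch (illustrative parameters): evaluate the common and multicast rates
# of Eqs. (rate_cu)-(rate_u) with the effective noise variances of
# Eq. (r_ulk_imperfectSIC), for one user per cluster under signal model 2 (B = C = 1).
# delta sets the residual self-interference left after SIC.
import numpy as np

rng = np.random.default_rng(1)
M, K, delta = 8, 3, 0.0
h = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)  # h_{1,k}
P = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))                 # p_1..p_K
p_c = rng.standard_normal(M) + 1j * rng.standard_normal(M)                          # p_A = p_c
P /= np.linalg.norm(P, axis=0); p_c /= np.linalg.norm(p_c)                           # unit norms

for k in range(K):
    g = np.abs(h[k] @ P) ** 2                      # |h p_i|^2 for i = 1..K
    g_c = np.abs(h[k] @ p_c) ** 2                  # |h p_A|^2
    r_c = g.sum() + 1.0                            # effective noise for the common message
    r_u = delta**2 * g_c + g.sum() - g[k] + 1.0    # effective noise for multicast message k
    R_c = np.log2(1.0 + g_c / r_c)
    R_u = np.log2(1.0 + g[k] / r_u)
    print(f"cluster {k}: R_c = {R_c:.2f}, R_u = {R_u:.2f} (bits per channel use)")
```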
Note that, overall the achievable rate for common data is determined by the minimum of all $R_{c,lk}^{(m)} $ and the achievable rate for multicast data for group $k$, $s_{u_k}$, is determined by the minimum of all $R_{u,lk}^{(m)}$ in group $k$. Thus, we also define $$\begin{aligned} R_{c}^{(m)} &= \min_{k=\{1,...,K\}} \min_{l=\{1,...,L\}} R_{c,lk}^{(m)}, \label{eqn:minRc}\\ R_{u,k}^{(m)} &= \min_{l=\{1,...,L\}} R_{u,lk}^{(m)}. \label{eqn:minRu}\end{aligned}$$ The Maximum WSR Problem ----------------------- In this paper, our aim is to find the optimal precoders $\mathbf{P}^{(1)}$ and $\mathbf{P}^{(2)}$ for signal models 1 and 2 respectively such that the WSR is maximized subject to a total power constraint. The WSR of the system can be computed as $$\begin{aligned} T_1 = \sum_{k=1}^{K} a_{k} R_{u,k}^{(m)} + b R_{c}^{(m)}, \label{T_1}\end{aligned}$$where $a_{k}$ and $b$ respectively denote the rate weights that correspond to multicast data at cluster $k$ and common data. The optimization problems $\mathcal{P}_1$ and $\mathcal{P}_2$ are defined as $$\begin{aligned} &\mathcal{P}_{1}: [{\mathbf{p}_1^{(1)SR}},\ldots,{\mathbf{p}_K^{(1)SR}}] = \arg \max_{\mathbf{p}_k^{(1)}} T_1, \quad \text{s.t. } (\ref{pow_const}), \\ &\mathcal{P}_{2}: [\mathbf{p}_c^{SR},\mathbf{p}_1^{(2)SR},\ldots,\mathbf{p}_{K}^{(2)SR}] = \arg \max_{\mathbf{p}_c,\mathbf{p}_k^{(2)}} T_1, \quad \text{s.t. } (\ref{pow_const}). \end{aligned}$$We need to convert these two problems to smooth constrained optimization problems because they are non-convex and difficult to solve. Thus, we introduce and constrain two new auxiliary variables $t_k, k = {1,...,K}$ and $z$ as $$\begin{aligned} t_k &\leq a_k R_{u,lk}^{(m)},\; \forall l, \forall k, \label{SR_tk} \\ z &\leq b R_{c,lk}^{(m)},\; \forall l, \forall k. \label{SR_z}\end{aligned}$$Then, we can reformulate $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ as: $$\begin{aligned} \mathcal{P}'_{1}:[{\mathbf{p}_1^{(1)SR}},\ldots,{\mathbf{p}_K^{(1)SR}}] = \arg \max_{\mathbf{p}_k^{(1)},t_k,z} T_2, \quad \text{s.t. } (\ref{pow_const}), (\ref{SR_tk}), (\ref{SR_z}), \label{max_SR_2}\\ \mathcal{P}'_{2}:[\mathbf{p}_c^{SR},\mathbf{p}_1^{(2)SR},\ldots,\mathbf{p}_{K}^{(2)SR}] = \arg \max_{\mathbf{p}_c,\mathbf{p}_k^{(2)},t_k,z} T_2, \quad \text{s.t. } (\ref{pow_const}), (\ref{SR_tk}), (\ref{SR_z}) \label{max_SR_22},\end{aligned}$$where $T_2 = \sum_{k=1}^{K} t_k + z$. Error Variance Definitions and the Minimum WMMSE Problem -------------------------------------------------------- The one-to-one correspondence between mutual information and minimum mean square error (MMSE) for Gaussian channels is established in [@guo2005mutual], and for MIMO broadcast channels in [@Christensen2008_WeightedSumRate]. To establish such a correspondence in our system setup, we first write the MMSE expressions and then compare them with the achievable rates in (\[rate\_cu\]) and (\[rate\_u\]). The $l$-th user of the $k$-th cluster first processes its received signal ${y}_{l,k}^{(m)}$, with the common data receiver $W_{l,k}^{(m)}$ to form an estimate of $s_c$, denoted as $\hat{{s}}_{c,lk}= {W}_{l,k}^{(m)} {y}_{l,k}^{(m)}$. In the second stage, the $l$-th user of the $k$-th cluster forms an estimate for the multicast message $u_k$ as $\hat{{s}}_{u_{lk}} = {V}_{l,k}^{(m)}\cdot\left(y_{l,k}^{(m)}-\mathbf{h}_{l,k}\mathbf{p}_{A}^{(m)}s_c+\delta \mathbf{h}_{l,k}\mathbf{p}_{A}^{(m)}s_c\right)$. 
In all the following analysis, we will assume perfect SIC, $\delta=0$, and in Section \[sec:simresult\] we will investigate the effect of nonzero $\delta$. MSE expressions of common and multicast data for the $l$-th user of the $k$-th cluster are respectively defined as ${\varepsilon}_{c,lk}^{(m)} = \mathbb{E}\left\{{\left\lVert\hat{{s}}_{c,lk} - {s}_c\right\rVert}^2\right\}$, and ${\varepsilon}_{u,lk}^{(m)} = \mathbb{E}\left\{{\left\lVert\hat{{s}}_{u_{lk}} - {s}_{u_k}\right\rVert}^2\right\}$, and for perfect SIC, their closed form expressions can be written as $$\begin{aligned} {\varepsilon}_{c,lk}^{(m)} &= B^{(m)} \mathbf{h}_{l,k} \mathbf{p}_{A}^{(m)} {\mathbf{p}_{A}^{(m)}}^H \mathbf{h}_{l,k}^H W_{l,k}^{\ast(m)}W_{l,k}^{(m)} + C^{(m)} \sum_{i=1}^{K}\mathbf{h}_{l,k} \mathbf{p}_i^{(m)}{\mathbf{p}_i^{(m)}}^H\mathbf{h}_{l,k}^H W_{l,k}^{\ast(m)}W_{l,k}^{(m)} \nonumber \\ &\quad \: + W_{l,k}^{\ast(m)}W_{l,k}^{(m)} - \mathbf{h}_{l,k}\mathbf{p}_{A}^{(m)}W_{l,k}^{(m)} B^{(m)} -B^{(m)}{\mathbf{p}_{A}^{(m)}}^H\mathbf{h}_{l,k}^H W_{l,k}^{\ast(m)} + B^{(m)}, \label{CMSE_u}\\ {\varepsilon}_{u,lk}^{(m)} &= C^{(m)} \sum_{i=1}^{K}\mathbf{h}_{l,k} \mathbf{p}_i^{(m)} {\mathbf{p}_i^{(m)}}^H \mathbf{h}_{l,k}^H V_{l,k}^{\ast(m)} V_{l,k}^{(m)} + V_{l,k}^{\ast(m)} V_{l,k}^{(m)} - \mathbf{h}_{l,k}\mathbf{p}_{k}^{(m)}V_{l,k}^{(m)}C^{(m)}\nonumber \\ &\quad \: - C^{(m)} {\mathbf{p}_{k}^{(m)}}^H\mathbf{h}_{l,k}^HV_{l,k}^{\ast(m)} + C^{(m)}. \label{UMSE_k}\end{aligned}$$The optimal MMSE receivers for common and multicast data are defined as ${W}_{l,k}^{(m)MMSE} = \arg\min_{{W}_{l,k}} {\varepsilon}_{c,lk}^{(m)}$ and ${V}_{l,k}^{(m)MMSE} = \arg\min_{{V}_{l,k}} {\varepsilon}_{u,lk}^{(m)}$. The closed form expressions for these MMSE receivers are then calculated as $$\begin{aligned} {W}_{l,k}^{(m)MMSE} &= B^{(m)} \mathbf{p}_A^{(m)^H} \mathbf{h}_{l,k}^H \left( B^{(m)} \mathbf{h}_{l,k} \mathbf{p}_A^{(m)} \mathbf{p}_A^{(m)^H} \mathbf{h}_{l,k}^H + r_{c,lk}^{(m)} \right)^{-1},\label{W_rec}\\ V_{l,k}^{(m)MMSE} &= C^{(m)} \mathbf{p}_k^{(m)^H} \mathbf{h}_{l,k}^H \left( C^{(m)} \mathbf{h}_{l,k} \mathbf{p}_k^{(m)}\mathbf{p}_k^{(m)^H} \mathbf{h}_{l,k}^H + r_{u,lk}^{(m)} \right)^{-1}. \label{V_rec_sic}\end{aligned}$$Given that these MMSE receivers in (\[W\_rec\]) and (\[V\_rec\_sic\]) are employed, the resulting error variance expressions in (\[CMSE\_u\]) and (\[UMSE\_k\]) become $$\begin{aligned} {\varepsilon}_{c,lk}^{(m)MMSE} &=\left( {\frac{1}{B^{(m)}}} + \mathbf{p}_A^{(m)^H}\mathbf{h}_{l,k}^H {r}_{c,lk}^{(m)^{-1}} \mathbf{h}_{l,k} \mathbf{p}_A^{(m)} \right)^{-1}, \label{error_cov_common}\\ {\varepsilon}_{u,lk}^{(m)MMSE} &=\left({\frac{1}{C^{(m)}} } + \mathbf{p}_k^{(m)^H}\mathbf{h}_{l,k}^H {r}_{u,lk}^{(m)^{-1}} \mathbf{h}_{l,k} \mathbf{p}_k^{(m)} \right)^{-1}.\label{error_cov_uni}\end{aligned}$$Comparing (\[rate\_cu\]) and (\[rate\_u\]) with (\[error\_cov\_common\]) and (\[error\_cov\_uni\]) we can write $$\begin{aligned} R_{c,lk}^{(m)} &= -\log\left(\frac{\varepsilon_{c,lk}^{(m)MMSE}}{ B^{(m)}}\right), \label{rate_cu_errrorCov}\\ R_{u,lk}^{(m)} &= -\log \left( \frac {\varepsilon_{u,lk}^{(m)MMSE}}{C^{(m)}}\right). 
\label{rate_u_errorCov}\end{aligned}$$ We also define $$\begin{aligned} {\varepsilon}_{c}^{(m)MMSE}&= \max_{k=\{1,...,K\}} \max_{l=\{1,...,L\}} {\varepsilon}_{c,lk}^{(m)MMSE},\\ {\varepsilon}_{u,k}^{(m)MMSE} &= \max_{l=\{1,...,L\}} {\varepsilon}_{u,lk}^{(m)MMSE}.\end{aligned}$$The WMMSE minimization objective function is then $$\begin{aligned} Q_1 = \sum_{k=1}^{K}{v}_{k}^{(m)} {\varepsilon}_{u,k}^{(m)MMSE} + {w}^{(m)}{\varepsilon}_{c}^{(m)MMSE}, \label{eqnQ1}\end{aligned}$$where ${w}^{(m)}$ and ${v}_k^{(m)}$ denote the MMSE weights for common data for all users and multicast data at cluster $k$ respectively. The optimization problems $\mathcal{P}_{3}$ and $\mathcal{P}_{4}$ are defined as $$\begin{aligned} && \mathcal{P}_{3} : [\mathbf{p}_1^{(1)MS},\ldots,\mathbf{p}_{K}^{(1)MS}] =\arg\min_{\mathbf{p}_{k}^{(1)}} Q_1, \quad \text{s.t. } (\ref{pow_const}), \label{eqnP3}\end{aligned}$$ $$\begin{aligned} && \mathcal{P}_{4} : [\mathbf{p}_c^{MS},\mathbf{p}_1^{(2)MS},\ldots,\mathbf{p}_{K}^{(2)MS}] = \arg\min_{\mathbf{p}_{c},\mathbf{p}_{k}^{(2)}} Q_1, \quad \text{s.t. } (\ref{pow_const}) \label{eqnP4}.\end{aligned}$$ To solve these non-convex problems, we reformulate both $\mathcal{P}_{3}$ and $\mathcal{P}_{4}$ as smooth constrained optimization problems by introducing auxiliary variables. Without loss of generality, we choose the auxiliary variables as a function of the independent variables $t_k$ and $z$ defined in (\[SR\_tk\]) and (\[SR\_z\]) such that $$\begin{aligned} {\varepsilon}_{u,lk}^{(m)MMSE} \leq &e^{-t_k/a_k}, \; \forall l, \forall k, \label{MSE_tk} \\ {\varepsilon}_{c,lk}^{(m)MMSE} \leq &e^{-z/b}, \; \forall l, \forall k. \label{MSE_z}\end{aligned}$$ As stated above, this choice does not result in a loss of generality, but is necessary to establish the equivalence between WSR and WMMSE problems. Then, we reformulate $\mathcal{P}_{3}$ and $\mathcal{P}_{4}$ as $$\begin{aligned} \mathcal{P}'_{3} &: [\mathbf{p}_1^{(1)MS},\ldots,\mathbf{p}_{K}^{(1)MS}] = \arg \min_{\mathbf{p}_{k}^{(1)},t_k,z} Q_2, \quad \text{s.t. } (\ref{pow_const}), (\ref{MSE_tk}), (\ref{MSE_z}), \label{min_MSE_2}\\ \mathcal{P}'_{4} &: [\mathbf{p}_c^{MS},\mathbf{p}_1^{(2)MS},\ldots,\mathbf{p}_{K}^{(2)MS}]= \arg \min_{\mathbf{p}_{c},\mathbf{p}_{k}^{(2)},t_k,z} Q_2, \quad \text{s.t. } (\ref{pow_const}), (\ref{MSE_tk}), (\ref{MSE_z}), \label{min_MSE_22}\end{aligned}$$where $$\begin{aligned} Q_2 = \sum_{k=1}^{K}{v}_{k}^{(m)} e^{-t_k/ a_k} + w^{(m)} e^{-z/ b}.\end{aligned}$$ In the following, we first prove that the precoders designed for maximum WSR and minimum WMMSE are equivalent at the optimal point. Then we propose an algorithm for precoder design. Gradient Expressions and KKT Conditions for Maximum WSR ------------------------------------------------------- In this section, we study the gradients for the WSR maximization problem. To investigate the stationary points of the problems $\mathcal{P}'_{1}$ and $\mathcal{P}'_{2}$, we formulate the Lagrangian expression as $$\begin{aligned} f\left(\mathbf{P}^{(m)},t_k,z\right) &= - T_2 + \sum_{k=1}^{K}\sum_{l=1}^{L}\mu_{l,k}^{(m)}(t_k - a_k R_{u,lk}^{(m)}) + \sum_{k=1}^{K}\sum_{l=1}^{L}\eta_{l,k}^{(m)} (z - b R_{c,lk}^{(m)})\nonumber \\ &\;\quad + \lambda^{(m)} \left( B^{(m)} {\left\lVert\mathbf{p}_A^{(m)}\right\rVert}^2 + C^{(m)} \sum_{k=1}^{K}{\left\lVert\mathbf{p}_k^{(m)}\right\rVert}^2 - E_{tx}\right) .\label{lagrangian_SR}\end{aligned}$$Here $\lambda^{(m)}$, $\mu_{l,k}^{(m)}$ and $\eta_{l,k}^{(m)}$ are the Lagrange multipliers. 
We calculate $\nabla_{\mathbf{p}_k^{(m)}} f\left(\mathbf{P}^{(m)},t_k,z\right)$ in Appendix \[derive\_gradient\_f\][^1] only for signal model $1$ ($m=1$) due to the space limitations. We restate the result. $$\begin{aligned} \nabla_{\mathbf{p}_k^{(1)}} f\left(\mathbf{P}^{(1)},t_k,z\right) &= - \sum_{l=1}^{L} \mu_{l,k}^{(1)} a_k \mathbf{h}_{l,k}^H {r}_{u,lk}^{(1)^{-1}} \mathbf{h}_{l,k} \mathbf{p}_k^{(1)} {\varepsilon}_{u,lk}^{(1)MMSE} + \lambda^{(1)}\left(B^{(1)} \mathbf{p}_A^{(1)} + C^{(1)} \mathbf{p}_k^{(1)}\right) \nonumber \\ &\;\quad + \sum_{i=1,i\neq k}^{K} \sum_{l=1}^{L} C^{(1)} \mu_{l,i}^{(1)} a_i \mathbf{h}_{l,i}^H {r}_{u,li}^{(1)^{-1}} \mathbf{h}_{l,i} \mathbf{p}_i^{(1)}{\varepsilon}_{u,li}^{(1)MMSE}\mathbf{p}_i^{(1)^H} \mathbf{h}_{l,i}^H {r}_{u,li}^{(1)^{-1}} \mathbf{h}_{l,i} \mathbf{p}_k^{(1)} \nonumber \\ &\;\quad + \sum_{i=1}^{K}\sum_{l=1}^{L} C^{(1)}\eta_{l,i}^{(1)} b \mathbf{h}_{l,i}^H {r}_{c,li}^{(1)^{-1}} \mathbf{h}_{l,i} \mathbf{p}_A^{(1)} {\varepsilon}_{c,li}^{(1)MMSE} \mathbf{p}_A^{(1)^H} \mathbf{h}_{l,i}^H {r}_{c,li}^{(1)^{-1}} \mathbf{h}_{l,i}\mathbf{p}_k^{(1)} \nonumber \\ &\;\quad - \sum_{i=1}^{K} \sum_{l=1}^{L}\eta_{l,i}^{(1)} b \mathbf{h}_{l,i}^H {r}_{c,li}^{(1)^{-1}}\mathbf{h}_{l,i}\mathbf{p}_A^{(1)} {\varepsilon}_{c,li}^{(1)MMSE}. \label{grad_f}\end{aligned}$$ Gradient Expressions and KKT Conditions for Minimum WMMSE --------------------------------------------------------- To investigate the stationary points of the minimum WMMSE problems $\mathcal{P}'_3$ and $\mathcal{P}'_4$, we formulate the Lagrangian expression as $$\begin{aligned} g(\mathbf{P}^{(m)},t_k,z) &= Q_2 + \sum_{k=1}^{K}\sum_{l=1}^{L} \bar{\mu}_{l,k}^{(m)}v_k^{(m)} (\varepsilon_{u,lk}^{(m)MMSE}- e^{-t_k/a_k}) \nonumber \\ &\;\quad+ \sum_{k=1}^{K}\sum_{l=1}^{L}\bar{\eta}_{l,k}^{(m)} w^{(m)} (\varepsilon_{c,lk}^{(m)MMSE} - e^{-z/b}) \nonumber \\ &\;\quad + \bar{\lambda}^{(m)} \left(B^{(m)} {\left\lVert\mathbf{p}_A^{(m)}\right\rVert}^2 + C^{(m)} \sum_{k=1}^{K} {\left\lVert\mathbf{p}_k^{(m)}\right\rVert}^2 - E_{tx}\right).\label{lagrangian_MSE}\end{aligned}$$Here $\bar{\lambda}^{(m)}$, $\bar{\mu}_{l,k}^{(m)}$ and $\bar{\eta}_{l,k}^{(m)}$ are the Lagrange multipliers. The gradient $\nabla_{\mathbf{p}_k^{(1)} }g(\mathbf{P}^{(1)} ,t_k,z)$ is computed in a similar manner as $\nabla_{\mathbf{p}_k^{(1)} }f(\mathbf{P}^{(1)},t_k,z)$ and is written as in (\[grad\_g\]). 
$$\begin{aligned} \lefteqn{ \nabla_{\mathbf{p}_k^{(1)}} g\left(\mathbf{P}_k^{(1)},t_k,z\right)}\nonumber\\ &=& - \sum_{l=1}^{L} \bar{\mu}_{l,k}^{(1)} \mathbf{h}_{l,k}^H {r}_{u,lk}^{(1)^{-1}} \mathbf{h}_{l,k} \mathbf{p}_k^{(1)} {\varepsilon}_{u,lk}^{(1)MMSE} v_k^{(m)} {\varepsilon}_{u,lk}^{(1)MMSE} + \bar{\lambda}^{(1)}\left(B^{(1)}\mathbf{p}_A^{(1)} + C^{(1)}\mathbf{p}_k^{(1)}\right) \nonumber \\ && - \sum_{i=1}^{K} \sum_{l=1}^{L} \bar{\eta}_{l,i}^{(1)} \mathbf{h}_{l,i}^H {r}_{c,li}^{(1)^{-1}} \mathbf{h}_{l,i} \mathbf{p}_A^{(1)} {\varepsilon}_{c,li}^{(1)MMSE} w^{(m)} {\varepsilon}_{c,li}^{(1)MMSE} \nonumber \\ &&+ \sum_{i=1,i\neq k}^{K} \sum_{l=1}^{L} C^{(1)}\bar{\mu}_{l,i}^{(1)} \mathbf{h}_{l,i}^H {r}_{u,li}^{(1)^{-1}} \mathbf{h}_{l,i} \mathbf{p}_i^{(1)} {\varepsilon}_{u,li}^{(1)MMSE} v_i^{(m)} {\varepsilon}_{u,li}^{(1)MMSE} \mathbf{p}_i^{(1)^H} \mathbf{h}_{l,i}^H {r}_{u,li}^{(1)^{-1}} \mathbf{h}_{l,i} \mathbf{p}_k^{(1)} \nonumber \\ &&+ \sum_{i=1}^{K}\sum_{l=1}^{L} C^{(1)} \bar{\eta}_{l,i}^{(1)} \mathbf{h}_{l,i}^H {r}_{c,li}^{(1)^{-1}} \mathbf{h}_{l,i} \mathbf{p}_A^{(1)} {\varepsilon}_{c,li}^{(1)MMSE} w^{(m)} {\varepsilon}_{c,li}^{(1)MMSE} \mathbf{p}_A^{(1)^H} \mathbf{h}_{l,i}^H {r}_{c,li}^{(1)^{-1}}\mathbf{h}_{l,i}\mathbf{p}_k^{(1)} \label{grad_g} \end{aligned}$$ Equivalence of WSR and WMMSE Problems ------------------------------------- When (\[grad\_f\]) and (\[grad\_g\]) are compared, it is observed that for a given set of precoders $\mathbf{P}^{(m)}$ and corresponding error variances ${\varepsilon}_{u,lk}^{(m)MMSE}$ and ${\varepsilon}_{c,lk}^{(m)MMSE}$, $\nabla_{\mathbf{p}_k^{(m)}} f$ and $\nabla_{\mathbf{p}_k^{(m)}} g$ become equal, if the weights $a_k$, $b$, $v_k$ and $w$ are chosen as $$\begin{aligned} {v}_{k}^{(m)} &= a_{k}{{\varepsilon}_{u,lk}^{(m)MMSE}}^{-1}, \label{MSE_weight1}\end{aligned}$$for all $k$ and $l$ for which $\bar{\mu}_{l,k}^{(m)}>0$, and $$\begin{aligned} {w}^{(m)} &= b {{\varepsilon}_{c,lk}^{(m)MMSE}}^{-1}, \label{MSE_weight2}\end{aligned}$$for all $k$ and $l$ for which $\bar{\eta}_{l,k}^{(m)}>0$. These relations could also be read as ${{\varepsilon}_{u,lk}^{(m)MMSE}} = a_{k}/v_k^{(m)}$, and $ {{\varepsilon}_{c,lk}^{(m)MMSE}} = b/w^{(m)} $. In other words, at the optimal solution, the MMSE values for the multicast messages are the same within a group, the MMSE values for the common message are the same over the entire set of users, and ${{\varepsilon}_{u,lk}^{(m)MMSE}}$ and $ {{\varepsilon}_{c,lk}^{(m)MMSE}}$ are equal to their own boundaries; i.e. $e^{-t_k/a_k}$, and $e^{-z/b}$ respectively. If this is not possible for any subset of users, then the corresponding Lagrange multipliers are zero. This fact also reveals that $\partial_{t_k} f(\mathbf{P}^{(m)},t_k,z)$  and $\partial_{t_k} g(\mathbf{P}^{(m)},t_k,z)$, and $\partial_z f(\mathbf{P}^{(m)},t_k,z)$ and $\partial_z g(\mathbf{P}^{(m)},t_k,z)$ are equivalent. In conclusion, the two problems have different variable names, but to find the optimal solution, we solve for exactly the same set of equations. Thus, the two optimization problems are equivalent with each other at the optimal solution. Iterative Precoder Design {#sec:precoder} ========================= In this section, we suggest a suboptimal and iterative precoder design algorithm to solve the WSR problem. The WSR problem is non-convex, and hard to solve. It requires efficient algorithms that perform well in practice. Although the WSR problem does not result in an intuitive algorithm, its equivalent WMMSE problem does. 
As the WMMSE problem is composed of two parts, the transmit precoders and MSE receivers, one can perform alternating optimization between the two. To do so, we first define a new optimization problem, same as $\mathcal{P}_3$ (or $\mathcal{P}_4$) defined in (\[eqnP3\]) (or (\[eqnP4\])), but instead of defining $Q_1$ in (\[eqnQ1\]) in terms of $\varepsilon_{c,lk}^{(m)MMSE}$ and $\varepsilon_{u,lk}^{(m)MMSE}$, we use $\varepsilon_{c,lk}^{(m)}$ and $\varepsilon_{u,lk}^{(m)}$ defined in (\[CMSE\_u\]) and (\[UMSE\_k\]). In other words, instead of assuming MMSE receivers, we first allow for any receiver structure. $$\begin{aligned} \mathcal{P}_{5} : \arg &\min_{\mathbf{P}^{(m)},t_k,z,W_{l,k}^{(m)},V_{l,k}^{(m)},v_k^{(m)},w^{(m)}} Q_2\\ \quad \text{s.t. } {\varepsilon}_{u,lk}^{(m)} &\leq e^{-t_k/a_k} ,\\ {\varepsilon}_{c,lk}^{(m)} &\leq e^{-z/b} \text{~and~} (\ref{pow_const}).\end{aligned}$$Then, the new Lagrangian objective function becomes $$\begin{aligned} h(\mathbf{P}^{(m)},t_k,z) &= Q_2 + \sum_{k=1}^{K}\sum_{l=1}^{L} \xi_{l,k}^{(m)} v_k^{(m)} (\varepsilon_{u,lk}^{(m)} - e^{-t_k/a_k}) + \sum_{k=1}^{K}\sum_{l=1}^{L} \psi_{l,k}^{(m)} w^{(m)} (\varepsilon_{c,lk}^{(m)} - e^{-z/b}) \nonumber\\ &\quad + \beta^{(m)} \left( B^{(m)} {\left\lVert\mathbf{p}_A^{(m)} \right\rVert}^2 + C^{(m)} \sum_{k=1}^{K} {\left\lVert\mathbf{p}_k^{(m)}\right\rVert}^2 - E_{tx}\right), \label{lagrangian_h}\end{aligned}$$ where $\beta^{(m)}$, $\xi_{l,k}^{(m)}$ and $\psi_{l,k}^{(m)}$ denote the Lagrange multipliers for the $m$-th signal model. Studying the KKT conditions for this problem, similar to the analysis in the previous section, we can state the following theorem. \[theorem\_Pk\] The common data receiver $W_{l,k}^{(m)}$ in (\[W\_rec\_last\]), the multicast data receiver $V_{l,k}^{(m)} $ in (\[V\_rec\_sic\_last\]) and the Lagrange multiplier $\beta^{(m)}$ in (\[lamda\]), transmit precoders $\mathbf{p}_k^{(1)}$ in (\[Pk\_last\]), $\mathbf{p}_k^{(2)}$ in (\[Pk\_last2\]), and $\mathbf{p}_c$ in (\[Pc\_last2\]) satisfy the KKT conditions for the optimization problem defined above with the Lagrangian function $h(\mathbf{P}^{(m)},t_k,z)$ defined in (\[lagrangian\_h\]). 
$$\begin{aligned} {W}_{l,k}^{(m)} &= B^{(m)} \mathbf{p}_A^{(m)^H} \mathbf{h}_{l,k}^H \left(B^{(m)} \mathbf{h}_{l,k} \mathbf{p}_A^{(m)} \mathbf{p}_A^{(m)^H} \mathbf{h}_{l,k}^H + \sum_{i=1}^{K}C^{(m)} \mathbf{h}_{l,k} \mathbf{p}_i^{(m)}\mathbf{p}_i^{(m)^H} \mathbf{h}_{l,k}^H + 1 \right)^{-1},\label{W_rec_last}\\ {V}_{l,k}^{(m)} &= C^{(m)} \mathbf{p}_k^{(m)^H} \mathbf{h}_{l,k}^H \left(\sum_{i=1}^{K} C^{(m)} \mathbf{h}_{l,k} \mathbf{p}_i^{(m)}\mathbf{p}_i^{(m)^H} \mathbf{h}_{l,k}^H + 1 \right)^{-1},\label{V_rec_sic_last}\\ \beta^{(m)} &= \frac{1}{E_{tx}}\sum_{k=1}^{K} \sum_{l=1}^{L}\left[\xi_{l,k}^{(m)} {v}_k^{(m)} {V}_{l,k}^{(m)} {V}_{l,k}^{(m)^{\ast}} + \psi_{l,k}^{(m)} {w}^{(m)} {W}_{l,k}^{(m)} {W}_{l,k}^{(m)^{\ast}} \right], \label{lamda}\\ \mathbf{p}_k^{(1)} &= \bigg(\beta^{(1)} \mathbf{I} + \sum_{i=1}^{K} \sum_{l=1}^{L}\left( \xi_{l,i}^{(1)} v_i^{(1)} C^{(1)} \mathbf{h}_{l,i}^H {V}_{l,i}^{(1)^{\ast}} {V}_{l,i}^{(1)} \mathbf{h}_{l,i} + \psi_{l,i}^{(1)} w^{(1)} \mathbf{h}_{l,i}^H {W}_{l,i}^{(1)^{\ast}} {W}_{l,i}^{(1)} \mathbf{h}_{l,i} \right)\bigg)^{-1} \nonumber\\ &\quad \times \bigg[\sum_{i=1}^{K}\sum_{l=1}^{L}\left(\psi_{l,i}^{(1)} w^{(1)} B^{(1)} \mathbf{h}_{l,i}^H {W}_{l,i}^{(1)^{\ast}} - \psi_{l,i}^{(1)} w^{(1)} B^{(1)} \mathbf{h}_{l,i}^H {W}_{l,i}^{(1)^{\ast}} {W}_{l,i}^{(1)} \mathbf{h}_{l,i} (\mathbf{p}_A^{(1)} - \mathbf{p}_k^{(1)})\right) \nonumber \\ & \quad + \sum_{l=1}^{L}\xi_{l,k}^{(1)} v_k^{(1)} C^{(1)} \mathbf{h}_{l,k}^H {V}_{l,k}^{(1)^{\ast}} - \beta^{(1)} B^{(1)} (\mathbf{p}_A^{(1)} - \mathbf{p}_k^{(1)})\bigg],\label{Pk_last}\\ \mathbf{p}_k^{(2)} &= \bigg(\beta^{(2)} \mathbf{I} + \sum_{i=1}^{K} \sum_{l=1}^{L} \xi_{l,i}^{(2)} v_i^{(2)} \mathbf{h}_{l,i}^H {V}_{l,i}^{(2)^{\ast}} {V}_{l,i}^{(2)} \mathbf{h}_{l,i} + \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i}^{(2)} w^{(2)} \mathbf{h}_{l,i}^H {W}_{l,i}^{(2)^{\ast}} {W}_{l,i}^{(2)} \mathbf{h}_{l,i} \bigg)^{-1} \nonumber \\ & \quad \times \bigg(\sum_{l=1}^{L}\xi_{l,k}^{(2)} v_k^{(2)} \mathbf{h}_{l,k}^H {V}_{l,k}^{(2)^{\ast}} \bigg),\label{Pk_last2}\end{aligned}$$ $$\begin{aligned} \mathbf{p}_c &= \bigg(\beta^{(2)} \mathbf{I} + \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i}^{(2)} w^{(2)} \mathbf{h}_{l,i}^H {W}_{l,i}^{(2)^{\ast}} {W}_{l,i}^{(2)} \mathbf{h}_{l,i} \bigg)^{-1} \bigg( \sum_{i=1}^{K}\sum_{l=1}^{L}\psi_{l,i}^{(2)} w^{(2)} \mathbf{h}_{l,i}^H {W}_{l,i}^{(2)^{\ast}} \bigg).\label{Pc_last2}\end{aligned}$$ The proof is provided in Appendix \[derive\_Pk\]. The receivers $W_{l,k}^{(m)}$ and $V_{l,k}^{(m)}$ in (\[W\_rec\_last\]) and (\[V\_rec\_sic\_last\]) are exactly equal to the MMSE receivers given in (\[W\_rec\]) and (\[V\_rec\_sic\]). \[algo\_1\] input: $m$, $a_k$, $b$, $\epsilon$, $\alpha$, $E_{tx}$, $\Upsilon$ set $n = 0$, $\left[\mathbf{p}_k^{(m)}\right]^{(n)} = \mathbf{p}_k^{init} \: \forall k $, $\nu = \log_2 (KL)/\epsilon $;\ iterate;\ 1. update $n = n + 1$\ 2. compute $\big[{W}_{l,k}^{(m)}\big]^{(n)}$ using (\[W\_rec\_last\])\ 3. compute $\big[{V}_{l,k}^{(m)}\big]^{(n)} \: $ using (\[V\_rec\_sic\_last\])\ 4. compute $\varepsilon_{c,lk}^{(m)}$, $\varepsilon_{u,lk}^{(m)}$ using (\[CMSE\_u\]) and (\[UMSE\_k\])\ 5. compute $w^{(m)}$, $v_k^{(m)}$ using (\[MSE\_weight1\]) and (\[MSE\_weight2\])\ 6. compute $\big[\xi_{l,k}^{(m)}\big]^{(n)}$, $\big[\psi_{l,k}^{(m)}\big]^{(n)}$ using (\[xi\_l\]) and (\[psi\_l\])\ 7. compute $\big[\beta^{(m)}\big]^{(n)}$ using (\[lamda\])\ 8. compute $\big[\mathbf{p}_k^{(m)}\big]^{(n)}$ using (\[Pk\_last\]) or (\[Pk\_last2\]), and $\big[\mathbf{p}_c\big]^{(n)}$ using (\[Pc\_last2\])\ 9. 
scale $\big[\mathbf{p}_k^{(m)}\big]^{(n)}$ such that\ $ B^{(m)} {\left\lVert\big[\mathbf{p}_A^{(m)}\big]^{(n)} \right\rVert}^2 + C^{(m)} \sum_{k=1}^{K_1} {\left\lVert\big[\mathbf{p}_k^{(m)}\big]^{(n)}\right\rVert}^2 = E_{tx} $\ 10. $\mathbf{If}$ $\operatorname{Tr}\big\{\left(\big[\mathbf{P}^{(m)}\big]^{(n)} - \big[\mathbf{P}^{(m)}\big]^{(n-1)}\right) \left(\big[\mathbf{P}^{(m)}\big]^{(n)} -\big[\mathbf{P}^{(m)}\big]^{(n-1)}\right)^H \big\} < \Upsilon $ then\ terminate\ $\mathbf{else}$\ go to Step 1 If it were possible to solve the equations stated in Theorem \[theorem\_Pk\] in closed form, then the optimal solution would be obtained. As this is not possible, the iterative WMMSE algorithm, given in Algorithm \[algo\_1\], iterates between using the receiver structures (\[W\_rec\_last\]), (\[V\_rec\_sic\_last\]), MSE weights (\[MSE\_weight1\]), (\[MSE\_weight2\]) and transmit precoders (\[Pk\_last\]), (\[Pk\_last2\]) and (\[Pc\_last2\]), until convergence to a local optimum; i.e., until the error power between two consecutive precoders is small enough. Calculating the Lagrange multipliers $\xi_{l,k}^{(m)}$ and $\psi_{l,k}^{(m)}$, $\forall l, k$ and $m$ in Algorithm \[algo\_1\] is not trivial. Applying the exponential penalty method for solving min-max problems defined in [@Li2003_ExponentialPenaltyMethod], we update $\xi_{l,k}^{(m)}$ and $\psi_{l,k}^{(m)}$ in each iteration of the algorithm according to $$\begin{aligned} \xi_{l,k}^{(m)} &= \frac{\exp\{\nu (\varepsilon_{u,lk}^{(m)} - e^{-t_k/a_k})\}}{\sum_{l=1}^{L}\exp\{\nu (\varepsilon_{u,lk}^{(m)} - e^{-t_k/a_k})\}},\label{xi_l}\\ \psi_{l,k}^{(m)} &= \frac{\exp\{\nu (\varepsilon_{c,lk}^{(m)} - e^{-z/b})\}}{\sum_{k=1}^{K}\sum_{l=1}^{L}\exp\{\nu(\varepsilon_{c,lk}^{(m)} - e^{-z/b})\}}.\label{psi_l}\end{aligned}$$Here $\nu$ is a constant and as long as $\nu \geq \log(KL)/\epsilon $, the solution is $\epsilon$-optimal. Note that, this choice satisfies the KKT conditions on $\xi_{l,k}^{(m)}$ and $\psi_{l,k}^{(m)}$ since $\sum_{l=1}^{L} \xi_{l,k}^{(m)} = 1$, $\sum_{k=1}^{K}\sum_{l=1}^{L} \psi_{l,k}^{(m)} = 1$ and $\xi_{l,k}^{(m)} \geq 0$, $\psi_{l,k}^{(m)} \geq 0$. Note that, papers such as [@Joudeh2016_SumRateMax] and [@Joudeh2017_RateSplittingforMaxMin] utilize the CVX tool designed for MATLAB [@CVX] and do not need to solve for the Lagrange multipliers or for the optimal structures for the transmit precoders, the common data receivers and the multicast data receivers defined in (\[W\_rec\_last\])-(\[Pc\_last2\]). When these optimal structures are used, the algorithm finishes in approximately 10 minutes, whereas it lasts for hours when the CVX tool is employed. As there is a total power constraint, and each iteration of the algorithm increases the objective function, the proposed WMMSE algorithm converges to a limit value. Due to the non-convexity of the problem, this limit value is not guaranteed to be the global optimum. However, the algorithm employs the precoders and the MMSE receivers stated in Theorem 1 that satisfy the KKT conditions of the WMMSE problem, and results in a locally optimum solution. Following similar steps as in [@Christensen2008_WeightedSumRate Section IV-A] and [@Kaleva2016_DecentralizedSumRateMaximization], one can prove convergence in full detail. Simulation Results {#sec:simresult} ================== In this section we provide simulation results to compare the two new precoders Algorithm \[algo\_1\] generates. We denote these precoders WMMSE1 and WMMSE2 for signal models 1 and 2 respectively.
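Before turning to the numerical results, the following Python sketch illustrates the exponential-penalty multiplier update of (\[xi\_l\])-(\[psi\_l\]) used in step 6 of Algorithm \[algo\_1\]. The MSE values, targets and dimensions are placeholders chosen only to show the mechanics of the update.

```python
import numpy as np

def penalty_multipliers(eps_u, eps_c, t, z, a, b, nu):
    """Multiplier update of eqs. (xi_l) and (psi_l) (illustrative sketch).

    eps_u, eps_c : (L, K) arrays with the current multicast / common-message MSE values
    t, a         : length-K arrays with the variables t_k and the rate weights a_k
    z, b, nu     : scalars (common-message variable, its weight, penalty constant)
    """
    # xi_{l,k}: softmax over the L users of group k of nu * (eps_u - exp(-t_k / a_k)).
    du = nu * (eps_u - np.exp(-t / a)[None, :])
    du -= du.max(axis=0, keepdims=True)          # softmax trick to avoid overflow for large nu
    xi = np.exp(du) / np.exp(du).sum(axis=0, keepdims=True)

    # psi_{l,k}: softmax over all K*L users of nu * (eps_c - exp(-z / b)).
    dc = nu * (eps_c - np.exp(-z / b))
    dc -= dc.max()
    psi = np.exp(dc) / np.exp(dc).sum()
    return xi, psi

# Toy call with L = K = 2 and made-up MSE values; nu >= log(K*L)/epsilon gives epsilon-optimality.
K, L, eps_tol = 2, 2, 1e-3
xi, psi = penalty_multipliers(
    eps_u=np.array([[0.30, 0.22], [0.28, 0.25]]),
    eps_c=np.array([[0.40, 0.33], [0.38, 0.35]]),
    t=np.array([1.2, 1.4]), z=0.9, a=np.array([1.0, 1.0]), b=1.0,
    nu=np.log(K * L) / eps_tol)
print(xi.sum(axis=0), psi.sum())   # each column of xi sums to 1; psi sums to 1 over all (l, k)
```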
In the following simulations we consider algorithm convergence, and sum rate for different settings. In the simulations, the entries in $\mathbf{h}_{l,k}$ are assumed to be circularly symmetric complex Gaussian distributed random variables with zero mean and unit variance, and are independent and identically distributed. The presented results are averaged over $10^3$ channel realizations. Ideal Gaussian codebooks are used for transmission. In the algorithm, the maximum number of iterations is limited to $100$, and both $\epsilon$ and $\Upsilon$ are set to $10^{-3}$. ![Sum-rate convergence performance for $M=K=L=2$. Transmit SNR is set to 15 dB, and $a_k = b = 1$. The optimal $\alpha$ is selected for WMMSE1.[]{data-label="iterSumrate_M2_K2_L2"}](sumRate_vs_iteration.eps){width="4.6in"} ![The sum-rate $\left(\sum_{k=1}^{K}R_{u,k} + R_{c}\right)$ curves for $M=2$, $K=2$, $L=1$, $a_k=1$, $b=1$. The optimal $\alpha$ is selected for WMMSE1, ZF and MRT.[]{data-label="SR_2_2_1_optAlfa"}](DPC_vs_others.eps){width="4.6in"} In the simulation results we will provide comparisons with ZF and MRT precoding. Thus, we first describe the two schemes briefly. ZF precoding: : The ZF precoder aims to cancel all interference at all users. When, the number of users is less then or equal to the number of transmit antennas ($KL \leq M$), interference cancellation is easily achieved by the ZF precoder, $\mathbf{P}^{ZF}$ $\in \mathbb{C}^{M \times KL}$, given by [@Joham2005_LinearTransmitProcessing] $$\begin{aligned} \mathbf{P}^{ZF} &\triangleq \sqrt{\frac{E_{tx}}{\operatorname{Tr}((\mathbf{H}\mathbf{H}^H)^{-1})}}\mathbf{H}^H(\mathbf{H}\mathbf{H}^H)^{-1}, \label{ZF_description}\end{aligned}$$where $\mathbf{H} = [\mathbf{h}_{1,1},\mathbf{h}_{1,2}, \ldots,\mathbf{h}_{1,L},\ldots, \mathbf{h}_{K,1},\mathbf{h}_{K,2}, \ldots, \mathbf{h}_{K,L}]^T$ is the composite channel gain matrix of all users, of size $KL \times M$. However, when $KL > M$, total interference cancellation at all users is no longer feasible. Instead, user selection is necessary before designing the ZF precoders. In the simulations, we assume $K = M$. Thus, we select a single user in each group (denoted by $l_k$) and design the ZF precoder for this set of users only, where $\mathbf{H}' = [\mathbf{h}_{1,l_1}, \ldots, \mathbf{h}_{K, l_K}]^T$ is the composite channel gain matrix of all selected users. In order to determine the selected set of users, we consider all possible $KL$ channel gain matrices and choose the one, which maximizes the determinant $\left| \mathbf{H}'\mathbf{H}'^H \right|$. This method was shown to minimize MMSE in [@Dao2010_UserSelectionAlgo]. We would like to mention that this type of user selection ignores the presence of the common message. The common message is superposed onto the private multicast messages according to signal model 1 during transmission. Note that, this method designs ZF precoders according to a single user in each group. Then the achievable multicast rate for a group is determined by the minimum of all multicast rates achievable by each user in a group as in (\[eqn:minRu\]). Similarly, the achievable common data rate is determined according to (\[eqn:minRc\]). MRT precoding: : MRT aims to achieve the highest signal gain at the receivers, and ignores interference. 
The MRT precoder for cluster $k$ is given in [@Sadeghi2018_MMFMassiveMIMO] as $$\begin{aligned} \mathbf{p}_{k}^{MRT} &= \Gamma \sum_{l=1}^{L} \mathbf{h}_{l,k},\end{aligned}$$ where $\Gamma$ is set to $$\Gamma = \frac{E_{tx}}{\sum_{k=1}^K \left\lVert \sum_{l=1}^L \mathbf{h}_{l,k} \right\rVert^2}$$ to satisfy the total power constraint in (\[pow\_const\]). As in the ZF scheme described above, we assume the common message is superposed onto the private multicast messages. Fig. \[iterSumrate\_M2\_K2\_L2\] displays convergence properties for the two iterative WMMSE algorithms. In the figure, total transmit power, $E_{tx}$, is set to $15$ dB and $a_k = b = 1$ and optimal $\alpha$ is chosen for WMMSE1. The initial precoder matrix, $\mathbf{P}^{init}$, is chosen as the zero forcing precoder, as described above. The figure confirms that both algorithms converge fast. Note that, the parameter $\alpha$ distributes power unequally among common and multicast data. This causes their average received signal-to-noise ratios (SNR) to differ. In order to be able to plot the common and multicast data rates on the same graph, in Figs. \[SR\_2\_2\_1\_optAlfa\]-\[SR\_222\_diffWeights\] and \[SR\_441\_vs\_442\_vs\_444\]-\[SR\_542\_vs\_432\_vs\_423\], the horizontal axis is defined as the total transmit SNR, namely $E_{tx}/\sigma^2$. Here $\sigma^2$ is the noise variance and is assumed to be $1$. Fig. \[SR\_2\_2\_1\_optAlfa\] compares WMMSE1 and WMMSE2 with DPC, ZF and MRT, for $M=K=2$, $L=1$ and $a_k = b=1$. Although this constitutes a limited setting in terms of number of antennas and users, in the literature, capacity region results for MIMO broadcast channels with a common message only exist for this configuration [@ekrem2012outer]. For WMMSE1, ZF and MRT, the optimal $\alpha$ is selected for each protocol for every channel realization. It is observed that both WMMSE1 and WMMSE2 achieve the optimal performance at low SNR, and close to optimal at high SNR. In this setting, ZF removes all undesired interference and is parallel to WMMSE1 and WMMSE2. However, MRT is quite suboptimal as it performs no interference mitigation. ![The common data rate $\left(R_{c}\right)$, the sum multicast data rate $\left(\sum_{k=1}^{K}R_{u,k}\right)$, and the sum-rate $\left(\sum_{k=1}^{K}R_{u,k} + R_{c}\right)$ curves for $M=K=L=2$, $a_k=b=1$.[]{data-label="SR_222"}](SR_222.eps){width="4.6in"} ![The common data rate $\left(R_{c}\right)$, the sum multicast data rate $\left(\sum_{k=1}^{K}R_{u,k}\right)$, and the sum-rate $\left(\sum_{k=1}^{K}R_{u,k} + R_{c}\right)$ curves for $M=K=L=2$ and weights $a_k = 1$ and $b = \{2,4\}$.[]{data-label="SR_222_diffWeights"}](SR_222_diffWeights.eps){width="4.6in"} Fig. \[SR\_222\] shows sum rate curves for $M=K=L=2$, and $a_k = b= 1$. The results for WMMSE1 are presented for both optimal $\alpha$ values and for a fixed $\alpha$ value. It is observed that WMMSE1 for optimal $\alpha$ outperforms all protocols in weighted sum rate. The sum multicast rate of WMMSE1, both for optimal and fixed $\alpha$, is larger than that of WMMSE2. On the other hand, WMMSE2 achieves a significantly larger common data rate, at the expense of sum multicast rate. This is because WMMSE1 mainly designs multicast data precoders and sends the common data via the sum precoder $\mathbf{p}_A^{(1)} = \sum_{k=1}^K \mathbf{p}_k^{(1)}$. However, WMMSE2 assigns a separate precoder to common data $\mathbf{p}_c$.
This way, it can achieve higher values for $R_c$, but the loss in multicast data rates is significant and performs worse than WMMSE1 for optimal $\alpha$ in terms of the total weighted sum rate. However, this is not the case if $\alpha$ is fixed. Note that, there is a tradeoff between WMMSE1 and WMMSE2. The precoder definition of WMMSE2 is more general and includes WMMSE1 definition as a special case. In other words, in WMMSE2, the common message precoder can be arbitrary, whereas in WMMSE1 it is contrained to be the sum of the private multicast data precoders. On the other hand, the WMMSE algorithm calculates the best directions and the power levels for all precoders $\mathbf{p}_k$ and $\mathbf{p}_A^{(1)}$ jointly. It does not perform direction and power optimization steps separately. The WMMSE1 algorithm avoids this problem via superposition. WMMSE1 allows the designer to adjust the precoder power levels separately via the parameter $\alpha$. If $\alpha$ is optimized, this new degrees of freedom (power optimization gain) becomes dominant over a more constrained precoder definition and WMMSE1 performs better than WMMSE2. Finally, we observe that MRT performs better than ZF, both MRT and ZF perform poorly in terms of weighted sum rate. When there are multiple users in a group, neither MRT, nor ZF can manage interference well. ZF cancels interference at the selected user in each group, but the interference at the other users in the group are not necessarily cancelled. MRT, on the other hand, behaves as if the group itself is one single user with an equivalent channel gain $\sum_{l=1}^{L} \mathbf{h}_{l,k}$, which does not provide any individual adaptation. Fig. \[SR\_222\_diffWeights\] shows sum rate curves for $M=K=L=2$, $a_k=1$ and $b = \{2,4\}$. The results for WMMSE1 are presented for optimal $\alpha$ values. It is observed that when the weight for the common message, $b$, increases, the common data rate increases and the sum multicast data rate decreases for both protocols. However, the total data rate does not change. ![Cumulative distribution function of optimal $\alpha$ for different scenarios.[]{data-label="alfaCDF"}](alfaCDF_v2.eps){width="4.6in"} ![The common data rate $\left(R_{c}\right)$, the sum multicast data rate $\left(\sum_{k=1}^{K}R_{u,k}\right)$, and the sum-rate $\left(\sum_{k=1}^{K}R_{u,k} + R_{c}\right)$ curves for $\{M=K=4, L=1\}$, $\{M=K=4, L=2\}$ and $\{M=K=4,L=4\}$ for $a_k=b=1$. For WMMSE1 optimal $\alpha$ values are used.[]{data-label="SR_441_vs_442_vs_444"}](SR_441_vs_442_vs_444_v2.eps){width="4.6in"} ![The sum-rate $\left(\sum_{k=1}^{K}R_{u,k} + R_{c}\right)$ curves for $\{M=10, K=4, L=2\}$, $\{M=5, K=4, L=2\}$, $\{M=4, K=3, L=2\}$ and $\{M=3, K=2, L=2\}$ for $a_k=b=1$. For WMMSE1 optimal $\alpha$ values are used.[]{data-label="SR_542_vs_432_vs_423"}](SR_M_geq_Mplus1.eps){width="4.6in"} Fig. \[alfaCDF\] shows the cumulative distribution function of optimal $\alpha$ for $M=K=L=2$ and for $M=K=4$ and $L=1,2,4$. The $M=K=L=2$ case is shown to accompany Fig. \[SR\_222\] and the curves for $M=K=4$ and $L=1,2,4$ accompany Fig. \[SR\_441\_vs\_442\_vs\_444\]. For $M=K=4$ and $L=1,2,4$, as $L$ increases, optimal $\alpha$ values become larger. When $L$ increases, interference management becomes harder and private multicast data rates decrease (as confirmed in Fig. \[SR\_441\_vs\_442\_vs\_444\]). However, in order to sustain the common data rate, larger $\alpha$ values become beneficial. 
The comparison between $M=K=L=2$ and $M=K=4, L=2$ is also in line with this observation. In the latter scenario, interference management is easier, and thus its cumulative distribution function is to the left $M=K=L=2$. Fig. \[SR\_441\_vs\_442\_vs\_444\] investigates the effect of number of users in a group for $M= K = 4$ and $L=1,2$ or $4$. The figure shows sum rate curves for WMMSE1 for optimal $\alpha$ values. As the number of users in a group increase, both $R_{u,k}$ and $R_{c}$ become the minimum of a larger number of random variables, and become smaller. Thus, the weighted sum rate decreases for increasing number of users. As $L$ increases, interference management at the receivers becomes harder. Moreover, with increasing number of users in a group, the difference between WMMSE1 with optimal $\alpha$ and WMMSE2 becomes more significant. WMMSE1 keeps multicasting rates as high as possible, whereas WMMSE2 prefers increasing the common data rate, which is not sufficient to compensate for the decrease in the sum multicast rate. ![The common data rate $\left(R_{c}\right)$, the sum multicast data rate $\left(\sum_{k=1}^{K}R_{u,k}\right)$, and the sum-rate $\left(\sum_{k=1}^{K}R_{u,k} + R_{c}\right)$ curves for $\{M=K=4,L=2\}$ vs. different $\delta$ values for $a_k=b=1$ and transmit SNR is set to 20 dB. For WMMSE1, both optimal $\alpha$ and fixed $\alpha$ is considered.[]{data-label="SR_442_vs_Delta"}](SR_442_vs_Delta.eps){width="4.6in"} Fig. \[SR\_542\_vs\_432\_vs\_423\] compares WMMSE1 for optimal $\alpha$ and WMMSE2 performance for different number of base station antennas and clusters. In particular, the figure shows weighted sum rate curves for $\{M=10, K=4, L=2\}$, $\{M=5, K=4, L=2\}$, $\{M=4, K=3, L=2\}$ and $\{M=3, K=2, L=2\}$. We observe that WMMSE1 for optimal $\alpha$ performs better than WMMSE2 in all cases. Moreover, the gains become significant if $M>K \times L$. Fig. \[SR\_442\_vs\_Delta\] shows the effect of imperfect SIC on WMMSE1 and WMMSE2 performances. In the figure, $M=K=4$, $L=2$, $a_k=b=1$, transmit SNR is 20 dB and WMMSE1 is displayed for both optimal $\alpha$ and for $\alpha=0.2$. In obtaining the figure, we assume that $\delta$ is unknown at the transmitter and the receivers. Thus the precoders and the receivers designed for perfect SIC are continued to be used. The figure reveals that WMMSE1 (either for optimal or for fixed $\alpha$) is quite robust against imperfect SIC. WMMSE1 with optimal $\alpha$ shows almost no degradation with increasing $\delta$, and WMMSE1 for $\alpha = 0.2$ is better than WMMSE2 for $\delta \leq 0.6$. Note that, residual interference power is related with $\delta^2$, and practically $\delta$ is never as large as 0.6. The figure also emphasizes the fundamental difference between WMMSE1 and WMMSE2 once again. While WMMSE1 has a much higher sum multicast rate than of WMMSE2, WMMSE2 transmits at a higher common data rate than WMMSE1. Finally, we find the algorithmic complexities for WMMSE1 and WMMSE2 in Table \[table\_1\] using the techniques discussed in [@Hunger_Complexity2007]. Note that, Table \[table\_1\] does not show the complexity of searching for the optimal $\alpha$ for WMMSE1. Table \[table\_1\] reveals that both WMMSE1 and WMMSE2 are cubic in $M$. For $M=K=L=2$, WMMSE1 and WMMSE2 respectively require 1113 and 989 operations. For $M=K=4,L=2$, WMMSE1 and WMMSE2 respectively require 7033 and 5929 operations. 
[Table 1: Computational complexity (number of matrix operations) of WMMSE1 and WMMSE2]{data-label="table_1"} Conclusion {#sec:conclusion} ========== In this paper we study multi-group multicasting with a common message in a downlink MIMO broadcast channel. We assume the multicast groups are disjoint and we investigate the precoder design problem for maximum weighted sum rate. We first prove that the weighted sum rate maximization problem is equivalent to the weighted minimum mean square error minimization problem. As both problems are non-convex and highly complex, we suggest a low-complexity, iterative precoder design algorithm inspired by the weighted minimum mean square error minimization problem. The algorithm iterates between receiver design for a given precoder and precoder updates for the given receivers until convergence. We apply this algorithm to two transmission schemes. In the first scheme, the common message and private multicast messages are superposed. In the second scheme, the common message is appended to the private multicast message vector. We show that the first scheme consistently performs better than the second transmission scheme in all settings. We understand that there is a fundamental difference between the two schemes. Although the second scheme is more general by definition, the first scheme introduces the freedom of power adaptation when employed within the proposed algorithm. Secondly, the first scheme favors multicast transmission, and the second scheme puts more emphasis on the common message. Future work includes studying the effect of successive cancellation order, and investigating the performance of the proposed schemes in a massive MIMO setting with imperfect channel state information at the transmitter. {#derive_gradient_f} Due to space limitations, in this appendix, we derive $\nabla_{\mathbf{p}_k^{(m)}}f(\mathbf{P}^{(m)},t_k,z)$ for only $m=1$. For notational convenience, we drop the upper indices $(m)$. The Lagrangian objective function is given by $$\begin{aligned} f(\mathbf{P},t_k,z) &= -\sum_{k=1}^{K}t_{k} - z + \underbrace{\lambda \big(\alpha \operatorname{Tr}(\mathbf{p}_A \mathbf{p}_A^H) + \bar{\alpha} \sum_{k=1}^{K} \operatorname{Tr}(\mathbf{p}_k\mathbf{p}_k^H) - E_{tx}\big)}_\text{C} \nonumber \\ &\:\quad+\underbrace{\sum_{k=1}^{K}\sum_{l=1}^{L}\mu_{l,k}(t_k - a_k R_{u,lk})}_\text{A} + \underbrace{\sum_{k=1}^{K}\sum_{l=1}^{L}\eta_{l,k}(z - b R_{c,lk})}_\text{B}.\label{append_lagrangian_SR} \end{aligned}$$ ### Gradient of A To calculate the gradient of A in (\[append\_lagrangian\_SR\]), we need $\nabla_{\mathbf{p}_k} R_{u,li}$ for both $i=k$ and $i \neq k$ for $l=1,\ldots,L$. Using (\[error\_cov\_uni\]) and (\[rate\_u\_errorCov\]), we have $\nabla_{\mathbf{p}_k}R_{u,lk}= (\nabla_{\mathbf{p}_k}{\varepsilon}_{u,lk}^{-1}) {\varepsilon}_{u,lk}$. Note that the noise variance ${r}_{u,lk}^{-1}$ is independent of $\mathbf{p}_k$, and $\nabla_{\mathbf{X}}(\mathbf{X}^H\mathbf{A}\mathbf{X})= \mathbf{A}\mathbf{X}$ [@moon_stirling_math ch E.3]. Then $$\begin{aligned} \nabla_{[\mathbf{p}_k]_{m}}{\varepsilon}_{u,lk}^{-1} &= \mathbf{e}_m^H \mathbf{h}_{l,k}^H {r}_{u,lk}^{-1} \mathbf{h}_{l,k} \mathbf{p}_k, \label{grad_cov_E_eps_k_mn} \end{aligned}$$where $\mathbf{e}_m$ is the unity column vector with $1$ at the $m$th element and zeros elsewhere, and is of size $M\times 1$.
As $[\nabla_{\mathbf{p}_k}R_{u,lk}]_{m} = \nabla_{[\mathbf{p}_k]_{m}}R_{u,lk}= \mathbf{e}_m^H \mathbf{h}_{l,k}^H {r}_{u,lk}^{-1} \mathbf{h}_{l,k} \mathbf{p}_k {\varepsilon}_{u,lk}$, we have $$\begin{aligned} \nabla_{\mathbf{p}_k} R_{u,lk} &= \mathbf{h}_{l,k}^H {r}_{u,lk}^{-1} \mathbf{h}_{l,k} \mathbf{p}_k {\varepsilon}_{u,lk}\label{grad_R_k}. \end{aligned}$$Next, we compute $\nabla_{\mathbf{p}_k}R_{u,li}$, for $i \neq k$ as $$\begin{aligned} \nabla_{[\mathbf{p}_k]_{m}}R_{u,li} &= \mathbf{p}_i^H\mathbf{h}_{l,i}^H \nabla_{[\mathbf{p}_k]_{m}}({r}_{u,li}^{-1}) \mathbf{h}_{l,i} \mathbf{p}_i {\varepsilon}_{u,li}.\label{grad_cov_E_eps_i} \end{aligned}$$Using $\nabla_{\mathbf{X}} (\mathbf{X}^{-1}) = -\mathbf{X}^{-1}\nabla (\mathbf{X}) \mathbf{X}^{-1}$ [@matrix_cookbook], we can write $$\begin{aligned} \nabla_{[\mathbf{p}_k]_{m}}({r}_{u,li}^{-1})&= -{r}_{u,li}^{-1}\nabla_{[\mathbf{p}_k]_{m}}({r}_{u,li}){r}_{u,li}^{-1}.\label{grad_pkm_invrn2i} \end{aligned}$$Then we compute $$\begin{aligned} \nabla_{[\mathbf{p}_k]_{m}}({r}_{u,li})&= \mathbf{h}_{l,i} \mathbf{p}_k \bar{\alpha} \mathbf{e}_m^H \mathbf{h}_{l,i}^H.\label{grad_pkm_rn2i} \end{aligned}$$By combining (\[grad\_cov\_E\_eps\_i\]), (\[grad\_pkm\_invrn2i\]) and (\[grad\_pkm\_rn2i\]) we have $$\begin{aligned} \nabla_{[\mathbf{p}_k]_{m}}R_{u,li}&= - \bar{\alpha} \mathbf{e}_m^H \mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i} \mathbf{p}_i {\varepsilon}_{u,li}\mathbf{p}_i^H\mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i} \mathbf{p}_k. \label{grad_cov_R_eps_i_mn} \end{aligned}$$Overall, we have $$\begin{aligned} \nabla_{\mathbf{p}_k}R_{u,li} &= - \bar{\alpha} \mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i} \mathbf{p}_i {\varepsilon}_{u,li}\mathbf{p}_i^H \mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i} \mathbf{p}_k . \label{grad_R_i_last} \end{aligned}$$Using (\[grad\_R\_k\]) and (\[grad\_R\_i\_last\]), we conclude that the gradient of A can be written as $$\begin{aligned} \nabla_{\mathbf{p}_k} \mathrm{A} &= - \sum_{l=1}^{L} \mu_{l,k} a_k \mathbf{h}_{l,k}^H {r}_{u,lk}^{-1} \mathbf{h}_{l,k} \mathbf{p}_k {\varepsilon}_{u,lk} \nonumber \\ &\:\quad + \left(\sum_{i=1,i\neq k}^{K} \sum_{l=1}^{L} \bar{\alpha} \mu_{l,k} a_k \mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i} \mathbf{p}_i {\varepsilon}_{u,li}\mathbf{p}_i^H \mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i}\right) \mathbf{p}_k\label{grad_B_last}. \end{aligned}$$ ### Gradient of B Secondly we compute the gradient of B defined in (\[append\_lagrangian\_SR\]). Note that\ $\nabla_{\mathbf{p}_k} R_{c,lk} = (\nabla {\varepsilon}_{c,lk}^{-1}){\varepsilon}_{c,lk}$. 
The gradient $\nabla_{[\mathbf{p}_k]_{m}} {\varepsilon}_{c,lk}^{-1}$ is calculated by applying the chain rule on $\nabla_{[\mathbf{p}_k]_{m}} {\varepsilon}_{c,lk}^{-1}$: $$\begin{aligned} \nabla_{[\mathbf{p}_k]_{m}} {\varepsilon}_{c,lk}^{-1} &= \nabla_{[\mathbf{p}_k]_{m}} \big ({\frac{1}{\alpha}} + \mathbf{p}_A^H\mathbf{h}_{l,k}^H r_{c,lk}^{-1}\mathbf{h}_{l,k} \mathbf{p}_A\big) \nonumber \\ &= \mathbf{p}_A^H\mathbf{h}_{l,k}^H \frac{\partial ({r}_{c,lk}^{-1})}{[\partial\mathbf{p}_k^\ast]_{m}}\mathbf{h}_{l,k} \mathbf{p}_A + \frac{\partial (\mathbf{p}_A^H\mathbf{h}_{l,k}^H)}{[\partial \mathbf{p}_k^\ast]_{m}}{r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_A + \mathbf{p}_A^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\frac{\partial (\mathbf{h}_{l,k}\mathbf{p}_A)}{[\partial\mathbf{p}_k^\ast]_{m}} \nonumber \\ &= -\mathbf{p}_A^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1} \frac{\partial ({r}_{c,lk})}{[\partial\mathbf{p}_k^\ast]_{m}} {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_A + \mathbf{e}_m^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_A + 0. \end{aligned}$$The last term is $0$ since $\frac{\partial\mathbf{p}_k}{\partial \mathbf{p}_k^\ast} = 0$. Now we continue to compute $$\begin{aligned} \nabla_{[\mathbf{p}_k]_m} {r}_{c,lk} &= \frac{\partial \left(\mathbf{h}_{l,k}\left(\sum_{i=1}^{K}\mathbf{p}_i\bar{\alpha} \mathbf{p}_i^H\right)\mathbf{h}_{l,k}^H + 1 \right)}{[\partial \mathbf{p}_k^\ast]_{m}}\nonumber \\ &= \frac{\partial \left(\mathbf{h}_{l,k} \mathbf{p}_k \bar{\alpha}\mathbf{p}_k^H \mathbf{h}_{l,k}^H + \mathbf{h}_{l,k}\left(\sum_{i=1,i\neq k}^{K}\mathbf{p}_i\bar{\alpha}\mathbf{p}_i^H\right)\mathbf{h}_{l,k}^H + 1 \right)}{[\partial\mathbf{p}_k^\ast]_{m}} \nonumber \\ &=\mathbf{h}_{l,k}\mathbf{p}_k \bar{\alpha} \mathbf{e}_m^H \mathbf{h}_{l,k}^H, \end{aligned}$$as $\nabla_{\mathbf{X}}(\mathbf{X}\mathbf{A}\mathbf{X}^H)= \mathbf{X}\mathbf{A}$[@moon_stirling_math]. Then, we have $$\begin{aligned} \nabla_{[\mathbf{p}_k]_{m}}{\varepsilon}_{c,lk}^{-1} &= -\mathbf{p}_A^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_k \bar{\alpha} \mathbf{e}_m^H \mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k} \mathbf{p}_A + \mathbf{e}_m^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_A, \label{append_invEck}\\ \nabla_{[\mathbf{p}_k]_{m}}R_{c,lk} &= - \mathbf{e}_m^H \mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k} \mathbf{p}_A {\varepsilon}_{c,lk} \mathbf{p}_A^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_k \bar{\alpha} + \mathbf{e}_m^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_A {\varepsilon}_{c,lk}. \label{grad_R_eps_cl_mn_part1} \end{aligned}$$Finally, we obtain $$\begin{aligned} \nabla_{\mathbf{p}_k}R_{c,lk} &= - \mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k} \mathbf{p}_A {\varepsilon}_{c,lk} \mathbf{p}_A^H\mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_k \bar{\alpha} + \mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_A {\varepsilon}_{c,lk}, \label{grad_R_eps_cl_last} \end{aligned}$$and the gradient of B becomes $$\begin{aligned} \nabla_{\mathbf{p}_k} \mathrm{B} &= \left (\sum_{i=1}^{K}\sum_{l=1}^{L} \bar{\alpha}\eta_{l,i} b \mathbf{h}_{l,i}^H {r}_{c,li}^{-1}\mathbf{h}_{l,i} \mathbf{p}_A {\varepsilon}_{c,li} \mathbf{p}_A^H\mathbf{h}_{l,i}^H {r}_{c,li}^{-1}\mathbf{h}_{l,i}\right)\mathbf{p}_k \nonumber \\ &\:\quad - \sum_{i=1}^{K} \sum_{l=1}^{L}\eta_{l,i} b \mathbf{h}_{l,k}^H {r}_{c,lk}^{-1}\mathbf{h}_{l,k}\mathbf{p}_A {\varepsilon}_{c,lk}. 
\label{grad_A_last} \end{aligned}$$ ### Gradient of C Thirdly, we compute the gradient of C defined in (\[append\_lagrangian\_SR\]) as $$\begin{aligned} \nabla_{\mathbf{p}_k} \mathrm{C} &= \lambda\left(\alpha\mathbf{p}_A + \bar{\alpha}\mathbf{p}_k\right), \label{grad_C_last} \end{aligned}$$since $\nabla_{\mathbf{p}_k} \operatorname{Tr}(\mathbf{p}_A \mathbf{p}_A^H) = \mathbf{p}_A$. Finally combining (\[grad\_B\_last\]), (\[grad\_A\_last\]) and (\[grad\_C\_last\]), we have $$\begin{aligned} \nabla_{\mathbf{p}_k} f(\mathbf{P},t_k,z) &= - \sum_{l=1}^{L} \mu_{l,k} a_k \mathbf{h}_{l,k}^H {r}_{u,lk}^{-1} \mathbf{h}_{l,k} \mathbf{p}_k {\varepsilon}_{u,lk} \nonumber \\ &\:\quad + \left(\sum_{i=1,i\neq k}^{K} \sum_{l=1}^{L} \bar{\alpha} \mu_{l,i} a_i \mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i} \mathbf{p}_i {\varepsilon}_{u,li}\mathbf{p}_i^H \mathbf{h}_{l,i}^H {r}_{u,li}^{-1} \mathbf{h}_{l,i}\right) \mathbf{p}_k \nonumber \\ &\:\quad + \left (\sum_{i=1}^{K}\sum_{l=1}^{L} \bar{\alpha} \eta_{l,i} b \mathbf{h}_{l,i}^H {r}_{c,li}^{-1}\mathbf{h}_{l,i} \mathbf{p}_A {\varepsilon}_{c,li} \mathbf{p}_A^H\mathbf{h}_{l,i}^H {r}_{c,li}^{-1}\mathbf{h}_{l,i}\right)\mathbf{p}_k \nonumber \\ &\:\quad - \sum_{i=1}^{K} \sum_{l=1}^{L}\eta_{l,i} b \mathbf{h}_{l,i}^H {r}_{c,li}^{-1}\mathbf{h}_{l,i}\mathbf{p}_A {\varepsilon}_{c,li} + \lambda\left(\alpha\mathbf{p}_A + \bar{\alpha}\mathbf{p}_k\right) \label{grad_f_append}. \end{aligned}$$ {#derive_Pk} In this appendix, we prove Theorem \[theorem\_Pk\]. In subsections A and B, we do the derivations respectively for signal models 1 and 2. For notational convenience, we drop the upper index $(m)$. Derivations for Signal Model $1$ $(m=1)$ ---------------------------------------- Taking the derivative of the objective function $h$ in (\[lagrangian\_h\]) with respect to ${W}_{l,k}$, then equating it to zero, we obtain the following equation $$\begin{aligned} \psi_{l,k} w \alpha \mathbf{p}_A^H \mathbf{h}_{l,k}^H &= \sum_{i=1}^{K}\psi_{l,k} w \bar{\alpha} \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H {W}_{l,k} +\psi_{l,k} w \alpha \mathbf{h}_{l,k} \mathbf{p}_A \mathbf{p}_A^H \mathbf{h}_{l,k}^H {W}_{l,k} + \psi_{l,k} {w} {W}_{l,k}. \label{W_power} \end{aligned}$$This leads to $$\begin{aligned} {W}_{l,k} &= \alpha\mathbf{p}_A^H \mathbf{h}_{l,k}^H \big(\sum_{i=1}^{K} \bar{\alpha} \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H + \alpha \mathbf{h}_{l,k} \mathbf{p}_A \mathbf{p}_A^H \mathbf{h}_{l,k}^H + 1 \big)^{-1}\label{W_rec_append}. 
\end{aligned}$$Similarly, we take the derivative of (\[lagrangian\_h\]) with respect to ${V}_{l,k}$ and equating it to zero, we have $$\begin{aligned} \xi_{l,k} v_k \bar{\alpha} \mathbf{p}_k^H \mathbf{h}_{l,k}^H &= \sum_{i=1}^{K} \xi_{l,k} {v}_k \mathbf{h}_{l,k} \mathbf{p}_i\bar{\alpha}\mathbf{p}_i^H \mathbf{h}_{l,k}^H {V}_{l,k} + \xi_{l,k} {v}_k {V}_{l,k}, \label{V_power_sic} \end{aligned}$$and this leads to $$\begin{aligned} {V}_{l,k} &= \bar{\alpha} \mathbf{p}_k^H \mathbf{h}_{l,k}^H\big(\sum_{i=1}^{K} \bar{\alpha} \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H + 1 \big)^{-1}.\label{V_rec_sic_append} \end{aligned}$$Then, taking the gradient of (\[lagrangian\_h\]) with respect to ${\mathbf{p}_k}$, and equating it to zero, we have the following equation $$\begin{aligned} \lefteqn{\sum_{l=1}^{L}\xi_{l,k} v_k \bar{\alpha} \mathbf{h}_{l,k}^H {V}_{l,k}^{\ast} + \sum_{i=1}^{K}\sum_{l=1}^{L}\psi_{l,i} w \alpha \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} }\nonumber \\ &=& \sum_{i=1}^{K} \sum_{l=1}^{L} \xi_{l,i} v_i \bar{\alpha} \mathbf{h}_{l,i}^H {V}_{l,i}^{\ast} {V}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_k + \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \bar{\alpha} \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} {W}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_k \nonumber \\ &&+ \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \alpha \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} {W}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_A + \beta\big( \alpha\mathbf{p}_A + \bar{\alpha}\mathbf{p}_k\big).\label{Pk_power} \end{aligned}$$Then, $$\begin{aligned} \mathbf{p}_k &= \big(\beta \mathbf{I} + \sum_{i=1}^{K} \sum_{l=1}^{L} \xi_{l,i} v_i \bar{\alpha} \mathbf{h}_{l,i}^H {V}_{l,i}^{\ast} {V}_{l,i} \mathbf{h}_{l,i} + \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} {W}_{l,i} \mathbf{h}_{l,i} \big)^{-1} \nonumber\\ &\quad \times \big[\sum_{l=1}^{L}\xi_{l,k} v_k \bar{\alpha} \mathbf{h}_{l,k}^H {V}_{l,k}^{\ast} + \sum_{i=1}^{K}\sum_{l=1}^{L}\psi_{l,i} w \alpha \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} - \beta\alpha(\mathbf{p}_A-\mathbf{p}_k) \nonumber\\ &\quad - \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \alpha \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} {W}_{l,i} \mathbf{h}_{l,i} (\mathbf{p}_A-\mathbf{p}_k) \big].\label{Pk_append} \end{aligned}$$ To calculate $\beta$, we post-multiply both sides of (\[W\_power\]) by ${W}_{l,k}^{\ast}$ and (\[V\_power\_sic\]) by ${V}_{l,k}^{\ast}$, and perform $\sum_{k=1}^{K} \sum_{l=1}^{L}$ on (\[W\_power\]) and (\[V\_power\_sic\]) on both sides. Then we obtain $$\begin{aligned} \sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} w \alpha \mathbf{p}_A^H \mathbf{h}_{l,k}^H{W}_{l,k}^{\ast} &= \sum_{k=1}^{K} \sum_{l=1}^{L} \sum_{i=1}^{K}\psi_{l,k} w \bar{\alpha} \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H {W}_{l,k}{W}_{l,k}^{\ast} \nonumber \\ &\quad+ \sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} w \alpha \mathbf{h}_{l,k} \mathbf{p}_A \mathbf{p}_A^H \mathbf{h}_{l,k}^H {W}_{l,k} {W}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} {w} {W}_{l,k} {W}_{l,k}^{\ast}\label{W_sum_pow}\\ \sum_{k=1}^{K} \sum_{l=1}^{L} \xi_{l,k} v_k \bar{\alpha} \mathbf{p}_k^H \mathbf{h}_{l,k}^H {V}_{l,k}^{\ast} &= \sum_{k=1}^{K} \sum_{l=1}^{L}\sum_{i=1}^{K} \xi_{l,k} {v}_k \mathbf{h}_{l,k} \mathbf{p}_i\bar{\alpha}\mathbf{p}_i^H \mathbf{h}_{l,k}^H {V}_{l,k} {V}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{l=1}^{L}\xi_{l,k} {v}_k {V}_{l,k} {V}_{l,k}^{\ast}. 
\label{V_sum_pow} \end{aligned}$$The summation of (\[W\_sum\_pow\]) and (\[V\_sum\_pow\]) leads to $$\begin{aligned} \lefteqn{\sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} w \alpha \mathbf{p}_A^H \mathbf{h}_{l,k}^H{W}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{l=1}^{L} \xi_{l,k} v_k \bar{\alpha} \mathbf{p}_k^H \mathbf{h}_{l,k}^H {V}_{l,k}^{\ast}}\nonumber \\ &=&\sum_{k=1}^{K} \sum_{l=1}^{L} \sum_{i=1}^{K}\psi_{l,k} w \bar{\alpha} \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H {W}_{l,k}{W}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} w \alpha \mathbf{h}_{l,k} \mathbf{p}_A \mathbf{p}_A^H \mathbf{h}_{l,k}^H {W}_{l,k} {W}_{l,k}^{\ast} \nonumber \\ && + \sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} {w} {W}_{l,k} {W}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{l=1}^{L}\sum_{i=1}^{K} \xi_{l,k} {v}_k \mathbf{h}_{l,k} \mathbf{p}_i\bar{\alpha}\mathbf{p}_i^H \mathbf{h}_{l,k}^H {V}_{l,k} {V}_{l,k}^{\ast} \nonumber \\ && + \sum_{k=1}^{K} \sum_{l=1}^{L}\xi_{l,k} {v}_k {V}_{l,k} {V}_{l,k}^{\ast}.\label{VW_last_power} \end{aligned}$$On the other hand, pre-multiplying (\[Pk\_power\]) with $\mathbf{p}_k^H$ and summing over $k$ from $1$ to $K$, we have $$\begin{aligned} \lefteqn{\sum_{k=1}^{K}\sum_{l=1}^{L}\xi_{l,k} v_k \bar{\alpha} \mathbf{p}_k^H\mathbf{h}_{l,k}^H {V}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{i=1}^{K}\sum_{l=1}^{L}\psi_{l,i} w \alpha \mathbf{p}_k^H \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} }\nonumber \\ &=& \sum_{k=1}^{K} \sum_{i=1}^{K} \sum_{l=1}^{L} \xi_{l,i} v_i \bar{\alpha} \mathbf{p}_k^H \mathbf{h}_{l,i}^H {V}_{l,i}^{\ast} {V}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_k + \sum_{k=1}^{K} \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \bar{\alpha} \mathbf{p}_k^H \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} {W}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_k \nonumber \\ &&+ \sum_{k=1}^{K} \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \alpha \mathbf{p}_k^H \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} {W}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_A + \sum_{k=1}^{K}\mathbf{p}_k^H\beta\big( \alpha\mathbf{p}_A + \bar{\alpha} \mathbf{p}_k\big).\label{Pk_last_power} \end{aligned}$$Comparing the left sides of (\[VW\_last\_power\]) and (\[Pk\_last\_power\]), we observe that they are equal. Then, the right sides are also equal to each other. As we assume that the power constraint in (\[pow\_const\]) is satisfied with equality, (\[VW\_last\_power\]) = (\[Pk\_last\_power\]) leads to $$\begin{aligned} \beta &= \frac{1}{E_{tx}}\left[\sum_{k=1}^{K} \sum_{l=1}^{L}\xi_{l,k} {v}_k {V}_{l,k} {V}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} {w} {W}_{l,k} {W}_{l,k}^{\ast} \right]. \label{lamda_append} \end{aligned}$$ Derivations for Signal Model $2$ $(m=2)$ ---------------------------------------- First, we would like to remind that in the second signal model, $\mathbf{p}_A=\mathbf{p}_c$. Taking the derivative of the objective function $h$ in (\[lagrangian\_h\]) with respect to ${W}_{l,k}$, then equating it to zero, we obtain the following equation $$\begin{aligned} \psi_{l,k} w \mathbf{p}_c^H \mathbf{h}_{l,k}^H &= \sum_{i=1}^{K}\psi_{l,k} w \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H {W}_{l,k} +\psi_{l,k} w \mathbf{h}_{l,k} \mathbf{p}_c \mathbf{p}_c^H \mathbf{h}_{l,k}^H {W}_{l,k} + \psi_{l,k} {w} {W}_{l,k}. \label{W_power2} \end{aligned}$$This leads to $$\begin{aligned} {W}_{l,k} &= \mathbf{p}_c^H \mathbf{h}_{l,k}^H \big(\sum_{i=1}^{K} \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H +\mathbf{h}_{l,k} \mathbf{p}_c \mathbf{p}_c^H \mathbf{h}_{l,k}^H + 1 \big)^{-1}\label{W_rec_append2}. 
\end{aligned}$$Similarly, we take the derivative of (\[lagrangian\_h\]) with respect to ${V}_{l,k}$ and equating it to zero, we have $$\begin{aligned} \xi_{l,k} v_k \mathbf{p}_k^H \mathbf{h}_{l,k}^H &= \sum_{i=1}^{K} \xi_{l,k} {v}_k \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H {V}_{l,k} + \xi_{l,k} {v}_k {V}_{l,k}, \label{V_power_sic2} \end{aligned}$$and this leads to $$\begin{aligned} {V}_{l,k} &= \mathbf{p}_k^H \mathbf{h}_{l,k}^H\big(\sum_{i=1}^{K} \mathbf{h}_{l,k} \mathbf{p}_i\mathbf{p}_i^H \mathbf{h}_{l,k}^H + 1 \big)^{-1}.\label{V_rec_sic_append2} \end{aligned}$$Taking the gradient of (\[lagrangian\_h\]) with respect to ${\mathbf{p}_k}$ and ${\mathbf{p}_c}$, and equating it to zero, we have the following equation $$\begin{aligned} \sum_{l=1}^{L}\xi_{l,k} v_k \mathbf{h}_{l,k}^H {V}_{l,k}^{\ast} =\sum_{i=1}^{K} \sum_{l=1}^{L} \xi_{l,i} v_i \mathbf{h}_{l,i}^H {V}_{l,i}^{\ast} {V}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_k + \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \mathbf{h}_{l,i}^H {W}_{l,i}^{\ast} {W}_{l,i} \mathbf{h}_{l,i} \mathbf{p}_k + \beta \mathbf{p}_k.\label{Pk_power2} \end{aligned}$$ $$\begin{aligned} \sum_{k=1}^{K} \sum_{l=1}^{L} \psi_{l,k} w \mathbf{h}_{l,k}^H {W}_{l,k}^{\ast} = \sum_{k=1}^{K} \sum_{l=1}^{L} \psi_{l,k} w \mathbf{h}_{l,k}^H {W}_{l,k}^{\ast} {W}_{l,k} \mathbf{h}_{l,k} \mathbf{p}_c + \beta \mathbf{p}_c.\label{Pc_power2} \end{aligned}$$ Then, $$\begin{aligned} \mathbf{p}_k &= \left(\beta \mathbf{I} + \sum_{i=1}^{K} \sum_{l=1}^{L} \xi_{l,i} v_i \mathbf{h}_{l,i}^H {V}_{l,i}^{{\ast}} {V}_{l,i} \mathbf{h}_{l,i} + \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \mathbf{h}_{l,i}^H {W}_{l,i}^{{\ast}} {W}_{l,i} \mathbf{h}_{l,i} \right)^{-1} \left(\sum_{l=1}^{L}\xi_{l,k} v_k \mathbf{h}_{l,k}^H {V}_{l,k}^{{\ast}} \right),\label{Pk2_append}\\ \mathbf{p}_c &= \left(\beta \mathbf{I} + \sum_{i=1}^{K} \sum_{l=1}^{L} \psi_{l,i} w \mathbf{h}_{l,i}^H {W}_{l,i}^{{\ast}} {W}_{l,i} \mathbf{h}_{l,i} \right)^{-1} \left( \sum_{i=1}^{K}\sum_{l=1}^{L}\psi_{l,i} w \mathbf{h}_{l,i}^H {W}_{l,i}^{{\ast}} \right).\label{Pc_append} \end{aligned}$$ To calculate $\beta$ for signal model $2$, we followed the similar steps in the previous subsection and obtained the following expression, $$\begin{aligned} \beta &= \frac{1}{E_{tx}}\left[\sum_{k=1}^{K} \sum_{l=1}^{L}\xi_{l,k} {v}_k {V}_{l,k} {V}_{l,k}^{\ast} + \sum_{k=1}^{K} \sum_{l=1}^{L}\psi_{l,k} {w} {W}_{l,k} {W}_{l,k}^{\ast} \right]. \label{lamda_append2} \end{aligned}$$ Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank the anonymous reviewers for their valuable comments that improved the results. [^1]: The gradient of a function $f(\mathbf{x})$ with respect to its complex variable $\mathbf{x}$ is denoted as $\nabla_{\mathbf{x}} f(\mathbf{x})$ and its $m$th element is defined as $[\nabla_{\mathbf{x}}f(\mathbf{x})]_{m} = \nabla_{[\mathbf{x}]_{m}}f(\mathbf{x}) = \frac{\partial f(\mathbf{x})}{\partial[\mathbf{x}^\ast]_{m}}$. For detailed derivation rules, we refer the reader to [@matrix_cookbook].
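The appendix derivations repeatedly use the rule $\nabla_{\mathbf{X}}(\mathbf{X}^H\mathbf{A}\mathbf{X})= \mathbf{A}\mathbf{X}$ together with the gradient convention of the footnote. The following short Python sketch is a numerical sanity check of that rule via finite differences of the Wirtinger derivative $\partial f/\partial [\mathbf{x}^\ast]_m$; the matrix and vector are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)

f = lambda v: np.vdot(v, A @ v)     # f(x) = x^H A x (np.vdot conjugates its first argument)

def wirtinger_grad(f, x, h=1e-6):
    """[grad f]_m = df/d(x*_m) = (df/dRe(x_m) + 1j * df/dIm(x_m)) / 2, by central differences."""
    g = np.zeros_like(x)
    for m in range(x.size):
        e = np.zeros_like(x)
        e[m] = 1.0
        d_re = (f(x + h * e) - f(x - h * e)) / (2 * h)
        d_im = (f(x + 1j * h * e) - f(x - 1j * h * e)) / (2 * h)
        g[m] = 0.5 * (d_re + 1j * d_im)
    return g

# The numerical gradient agrees with the closed-form rule grad(x^H A x) = A x used above.
print(np.max(np.abs(wirtinger_grad(f, x) - A @ x)))
```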
--- abstract: 'In this work a method for reconstructing velocity and acceleration fields is described which uses scattered particle tracking data from flow experiments as input. The goal is to reconstruct these fields faithfully with a limited amount of compute time and to exploit known flow properties, such as a divergence-free velocity field for incompressible flows and a rotation-free acceleration in case it is known to be dominated by the pressure gradient, in order to improve the spatial resolution of the reconstruction.' author: - | Sebastian Gesemann\ Experimental Methods, DLR, Göttingen, Germany title: 'From Particle Tracks to Velocity and Acceleration Fields Using B-Splines and Penalties' --- Introduction ============ Determining 3D velocity fields from flow experiments is difficult, especially if a high spatial resolution is desired. For optical particle-based measurement techniques, a high spatial resolution implies a high density of tracer particles that need to follow the flow and have to be observed using multiple cameras. Instead of identifying and matching particles separately between different views, which is nontrivial at high particle densities, a tomographic approach has been successfully applied in the past to the particle distribution reconstruction problem. This method is called *TomoPIV* for Tomographic Particle Image Velocimetry. In TomoPIV the measurement volume is discretized and reconstructed as the solution to a large but sparse and constrained linear equation system resulting in a discrete volume of light intensities. One way of deriving flow fields from these volumes is to apply a cross correlation between two subvolumes of reconstructions from neighbouring points in time to detect the average flow velocity within such a subvolume. This is a robust method to compute flow velocities, but it results in a spatially lowpass-filtered representation of the velocity, depending on the size of the subvolumes used for cross-correlation, usually called the window size. With recent advances in particle tracking techniques the density of particles that can be reconstructed directly without discretizing the volume is approaching densities that are typically used in TomoPIV measurements. Reconstructing particle locations directly instead of a discretized volume has several benefits over a tomographic reconstruction: It typically requires only a small fraction of CPU and RAM resources compared to what is needed for TomoPIV to solve the large constrained equation system. Also, such a direct particle reconstruction method avoids an additional layer of spatial discretization, which can be expected to improve the accuracy of particle location measurements. Given a sequence of time-resolved measurement images of a flow with tracer particles, particle tracks can be reconstructed with these new techniques. In this publication, we will describe a method for reconstructing velocity and acceleration fields from scattered and noisy particle tracks which we developed with the goal of preserving much of the information present in the particle tracks and avoiding any unwanted spatial lowpass filtering effect such as the one that is inherent in correlation-based methods. In addition, it is possible to exploit prior knowledge, such as freedom of divergence of the velocity field for incompressible flows, to improve the spatial resolution of the reconstructed field.
Overview ======== Our approach to compute velocity and acceleration fields based on noisy particle tracks can be split into two parts: - *trackfit* takes the noisy particle location data of a particle track and computes a B-spline curve for the track. This step includes noise reduction and allows computing 1st and 2nd order derivatives for velocity and acceleration at any position in time within the time interval of the observed particle track. - *flowfit* takes particle locations and any other physical quantity for each such location such as velocity or acceleration for one particular point in time and computes a 3D B-spline curve that optionally satisfies other constraints such as freedom of divergence or curl. This step involves solving a linear weighted least squares problem. These steps share many similarities. Both make use of B-splines to represent the result as a continuous function and both employ a similar form of noise reduction via penalization like it was introduced in [@eilers96]. The difference is that *trackfit* deals with data that is already equidistantly sampled and one-dimensional while *flowfit* deals with scattered data in three dimensions. *flowfit* is also able to compute vector fields that are free of divergence or curl by extending the equation system that is used for computing the B-spline weighting coefficients with appropriate equations penalizing divergence or curl. This is useful for incompressible flows and improves the spatial resolution to some extent. B-splines ========= Instead of restricting ourselves to time and space discrete signals for a particle track or a velocity field we can try to reconstruct a *continuous* function with a finite number of degrees of freedom. Building a function such as a particle track that maps time to particle location or velocity field that maps location to flow speed as a weighted sum of B-splines is one option to model a continuous function and has several benefits: The function will automatically be as smooth as one desires and function evaluation including temporal or spatial derivatives is fast. For example, a cubic B-spline is twice continuously differentiable and so is every linear combination of cubic B-splines. Also, such a representation allows us to express the function’s value or any derivative at any point exactly as a linear combination of weights without the need to numerically approximate derivatives. Throughout this paper the $k$-th order cardinal B-spline function centered at zero will be referred to as $\beta_k$. 
This family of functions can be defined recursively in the following way: $$\label{eq:bsplinek} \begin{array}{rcl} \beta_k & : & \mathbb{R} \longmapsto \mathbb{R} \\ \beta_0(x) & = & \begin{cases} 1 & \text{for } |x| < \frac{1}{2} \\ \frac{1}{2} & \text{for } |x| = \frac{1}{2} \\ 0 & \text{else} \end{cases} \\ \beta_k(x) & = & \frac{1+k+2x}{2k} \beta_{k-1}(x + \frac{1}{2}) + \\ & & \frac{1+k-2x}{2k} \beta_{k-1}(x - \frac{1}{2}) \end{array}$$ In the special case $k=2$ for a quadratic B-spline, this can be written as $$\label{eq:beta2} \beta_2(x) = \begin{cases} \frac{3}{4} - |x|^2 & \text{for } |x| < \frac{1}{2} \\ \frac{1}{2} \left( \frac{3}{2} - |x| \right)^2 & \text{for } \frac{1}{2} \leq |x| < \frac{3}{2} \\ 0 & \text{for } \frac{3}{2} \leq |x| \end{cases}$$ and for the special case $k=3$, a cubic B-spline, we get $$\label{eq:beta3} \beta_3(x) = \begin{cases} \frac{4}{6} - |x|^2 + \frac{1}{2} |x|^3 & \text{for } |x| < 1 \\ \frac{1}{6} \left( 2 - |x| \right)^3 & \text{for } 1 \leq |x| < 2 \\ 0 & \text{for } 2 \leq |x| \end{cases}$$ Filtering particle tracks ========================= A time-discrete 3D particle track can be viewed as three digital signals that represent how the particle’s x-, y- and z coordinates change over time. A simple but reasonable model of the measured particle locations is that they are the sum of the particle’s real locations and a measurement error signal. Typically, the high frequency portion of these measured location signals are dominated by measurement noise while in the low frequency portion the measurement noise will be negligible compared to the signal. One way of dealing with this is to use a corresponding Wiener filter that has the goal of minimizing the sum of squared errors. ![image](track_spectrum.pdf) Under the assumption that the measurement noise is not correlated with the true particle’s locations the optimal Wiener filter simplifies to a filter with a phase response of constant zero and an amplitude response that is completely determined by the signal-to-noise ratio in the following way where $f$ refers to the frequency and $SNR$ refers to the power spectral density ratio between the noise-free signal and the noise: $$\label{eq:snr2ampresponse} A(f) = \frac{SNR(f)}{SNR(f) + 1}$$ After performing a spectral analysis of particle track data of different flow experiments we observed the following: The square root of the true particle locations’ power spectral density had a shape roughly proportional to $f^{-3}$ near the the frequency where the signal-to-noise ratio equals one. The noise floor was flat which is expected when location measurement errors of different points in time don’t correlate with each other. This leads to our simple model of the signal and noise spectra for which a specific Wiener filter follows using equation \[eq:snr2ampresponse\], see figure \[fig:wienerspectrum\]. After experimenting with different kinds of filter design methods and implementations we noticed that such a Wiener filter can be approximated well using a B-spline fit with the appropriate choice of penalization. The input to *trackfit* are $n$ measured values $y_i$ for $1 \leq i \leq n$, representing for example all the y-components of a particle that was tracked for $n$ equidistantly spaced points in time. Without loss of generality we can assume a time step of one. 
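The cardinal B-splines defined in equation \[eq:bsplinek\] are cheap to evaluate, and the fit below only needs $\beta_3$ at integer offsets. The following Python sketch, meant only as a minimal illustration, implements the recursion and checks it against the closed-form cubic B-spline of equation \[eq:beta3\].

```python
import numpy as np

def beta(k, x):
    """Cardinal B-spline of order k via the recursion of equation (eq:bsplinek)."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        return np.where(np.abs(x) < 0.5, 1.0, np.where(np.abs(x) == 0.5, 0.5, 0.0))
    return ((1 + k + 2 * x) * beta(k - 1, x + 0.5)
            + (1 + k - 2 * x) * beta(k - 1, x - 0.5)) / (2 * k)

def beta3_closed(x):
    """Closed-form cubic B-spline of equation (eq:beta3)."""
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    out[ax < 2] = (2 - ax[ax < 2]) ** 3 / 6
    out[ax < 1] = 4 / 6 - ax[ax < 1] ** 2 + ax[ax < 1] ** 3 / 2
    return out

x = np.linspace(-2.5, 2.5, 1001)
print(np.max(np.abs(beta(3, x) - beta3_closed(x))))   # agrees to machine precision
print(beta(3, np.array([-1.0, 0.0, 1.0])))            # [1/6, 2/3, 1/6]: the nonzero integer samples
```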
With knots at locations $0, 1, \ldots, n+1$, a cubic spline function for the interval $[1, n]$ can be represented as the following weighted sum of $n+2$ B-splines $$\label{eq:splinefunc} \vec{p_c}(t) = \sum_{i=0}^{n+1} \vec{c_i} \beta_3(t - i)$$ where $c_i$ are the unknown weighting coefficients and $\beta_3$ refers to the cubic cardinal B-spline defined in the previous section. We compute these coefficients by minimizing the cost function $$\label{eq:trackfitcost} \begin{array}{rl} F(c) = & \sum_{i=1}^n \left| p_c(i) - y_i \right|^2 + \\ & \sum_{i=1}^{n-1} \left| \lambda \left( c_{i-1} - 3 c_i + 3 c_{i+1} - c_{i+2} \right) \right|^2 \end{array}$$ where the parameter $\lambda$ controls how strongly the third order finite differences are penalized in relation to the error between measurement and the fitted curve. A very large value for $\lambda$ approaching infinity will result in a spline curve that approaches a quadratic polynomial for the whole particle track. Smaller values will lead to a spline curve that follows the measurement data more closely. We can also interpret this approach as a Kalman-like filter: The measured particle locations are combined with a physical model in which the change in acceleration is assumed to be white noise, and the parameter $\lambda$ is chosen as the ratio between the standard deviation of the location measurement noise and the standard deviation of the unpredictable change in acceleration in order to estimate the most likely particle track under this model. The resulting mathematical problem is a weighted linear least squares problem with a sparse matrix. The matrix of the corresponding normal equation system, see equation \[eq:trackfitnormaleqs\], will be a symmetric positive definite 7-band matrix for which efficient factorizations can be computed in-place to solve for the B-spline weights $c_i$ directly. The condition of the normal equation system is acceptable for typical machine accuracies and choices of $\lambda$. $$\label{eq:trackfitnormaleqs} A c = b$$ We refer to the frequency at which the power spectral densities of the signal and noise cross each other as the cutoff frequency. The optimal Wiener filter would have an amplitude response of $\frac{1}{2}$ at this frequency, which follows from equation \[eq:snr2ampresponse\]. For a normalized cutoff frequency $f_{cutoff}$ between $0.1$ and $0.5$, where $1$ represents the Nyquist frequency, choosing $\lambda$ according to equation \[eq:trackfitlambda\] ![Amplitude responses of different example filters with varying cutoff frequencies[]{data-label="fig:trackfit_filters"}](trackfit_filters.pdf){width="\columnwidth"} $$\label{eq:trackfitlambda} \lambda = \left(\frac{1}{\pi \cdot f_{cutoff}}\right)^3$$ will result in a B-spline fit that approximates the optimal Wiener filter quite well, compare figures \[fig:wienerspectrum\] and \[fig:trackfit\_filters\]. The cutoff frequency is in the denominator so that a lower frequency will lead to a stronger penalization of the third order derivative. The third power is due to the fact that we are penalizing the third order derivative which scales with the third power of a frequency. To determine the magnitude response of the filter we compute the Fourier transform of an impulse response. To compute the impulse response we set $n$ to 201, $y_i$ to zero for all $i \neq 101$ and $y_{101}$ to one. Figure \[fig:trackfit\_filters\] shows five examples with different values for $\lambda$ for different cutoff frequencies.
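A minimal Python sketch of this one-dimensional fit is given below. It assembles the data rows and third-difference penalty rows of equation \[eq:trackfitcost\] explicitly and, for clarity only, solves the stacked least squares problem with a dense solver instead of the banded in-place factorization described above; the test signal and noise level are made up for illustration.

```python
import numpy as np

def trackfit_1d(y, f_cutoff):
    """Sketch of the penalized fit of equation (eq:trackfitcost) for one track coordinate.

    y        : measurements y_1..y_n taken at times t = 1..n (unit time step)
    f_cutoff : normalized cutoff frequency (1 = Nyquist); sets lambda via equation (eq:trackfitlambda)
    Returns the n+2 spline coefficients c_0..c_{n+1} and the data matrix.
    """
    n = len(y)
    lam = (1.0 / (np.pi * f_cutoff)) ** 3

    # Data rows: p_c(i) = (1/6) c_{i-1} + (2/3) c_i + (1/6) c_{i+1}, since beta_3(0) = 2/3, beta_3(+-1) = 1/6.
    A = np.zeros((n, n + 2))
    for i in range(n):
        A[i, i:i + 3] = [1 / 6, 2 / 3, 1 / 6]

    # Penalty rows: lambda * (c_{i-1} - 3 c_i + 3 c_{i+1} - c_{i+2}) for i = 1..n-1.
    D = np.zeros((n - 1, n + 2))
    for i in range(n - 1):
        D[i, i:i + 4] = [1.0, -3.0, 3.0, -1.0]

    # The paper solves the equivalent symmetric positive definite 7-band normal equations in place;
    # this sketch simply stacks both blocks and calls a dense least squares solver.
    M = np.vstack([A, lam * D])
    rhs = np.concatenate([y, np.zeros(n - 1)])
    c, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return c, A

# Illustrative test: a smooth track plus white measurement noise (all values made up).
t = np.arange(1, 201)
truth = np.sin(2 * np.pi * t / 50.0)
y = truth + 0.05 * np.random.default_rng(1).standard_normal(t.size)
c, A = trackfit_1d(y, f_cutoff=0.2)
print(np.std(y - truth), np.std(A @ c - truth))   # the fitted positions are less noisy than the raw data
```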
After computing the B-spline weighting coefficients $c$ for a specific particle track, the track curve can be evaluated at any point in time along with its first and second derivatives. This allows temporal super sampling of the curve with consistent particle locations, velocities and accelerations for later processing. It is worth pointing out that under the assumption our particle track model holds and the $\lambda$ parameter was chosen correctly, variances of the B-Spline coefficient errors due to the measurement noise will be larger at the borders of the track than at center of the track. This can be verified by inspecting the diagonal of the inverse of $A$ from equation \[eq:trackfitnormaleqs\] which represents all the variances of $c$ multiplied by $2n-1$ times the variance of the particle location measurement noise. This means that one should not put equal trust in the computed locations, velocities and accelerations over the complete time interval. Reconstructing a 3D vector field ================================ For the spatial reconstruction of a vector field given scattered data for a particular point in time, it is possible to extend the method of penalized B-splines to multiple dimensions on a Cartesian lattice with a point distance $h$. For every lattice point within a certain cube we would have $d$ degrees of freedom, for example, $d=3$ for a velocity or material acceleration field, that are the weighting factors for the corresponding 3D B-spline functions of order $k$. Suppose $l \in \mathbb{R}^3$ is the coordinate of the lower corner lattice point of the volume and $N \in \mathbb{N}^3$ describes the number of grid points in each dimension. Then, we can assign each grid point an index between $1$ and $n = N_1 N_2 N_3$ along with a world coordinate $x_i \in \mathbb{R}^3$ for the $i$-th lattice point derived from $l$ and $h$ and represent the vector field as the function shown in \[eq:flowfit\_curve\_model\] $$\label{eq:flowfit_curve_model} \begin{array}{rcl} \vec{v} &:& \prod_{j=1}^3[l_j + \frac{k-1}{2} h, l_j + \left( N_j - \frac{k+1}{2} \right) h] \longmapsto \mathbb{R}^3 \\ \vec{v}(\vec{x}) &=& \sum_{i=1}^{n} B_k\left( \frac{x - x_i}{h} \right) \vec{c_i} \end{array}$$ Here, $B_k$ is a 3D convolution of separate one-dimensional B-splines $\beta_k$: $$\label{eq:bigbeta} \begin{array}{rcl} B_k & : & \mathbb{R}^3 \longmapsto \mathbb{R} \\ B_k(x) & = & \prod_{i=1}^3 \beta_k(x_i) \end{array}$$ With this model of the vector field each given data point results in $d$ linear equations involving $d \left( k + 1 \right)^3$ unknown B-spline weighting coefficients because for any such point there are $k+1$ lattice points for each spatial dimension that contribute to the value of the resulting function and we have $d$ separate variables for each lattice point. In addition to equations for the data points it is possible to add regularizations for overall smoothness (similar to how a higher order derivative is penalized in the *trackfit* approach) but also to penalize other physical properties such as the divergence of a velocity field or the rotation of a material acceleration field assuming that acceleration is dominated by the pressure gradient. Both the divergence and rotation of the vector field at an arbitrary point in the domain of $\vec{v}$ can be expressed as a linear combination of the unknown variables $\vec{c_i}$ so that resulting optimization problem is still a linear least squares problem. 
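To make the sparsity structure explicit: for a cubic spline, a scattered sample point receives nonzero weights $B_3\left(\frac{x-x_i}{h}\right)$ only from its $(k+1)^3 = 64$ surrounding lattice points, which is what keeps the least squares matrix sparse. The following Python sketch evaluates the separable weight of equation \[eq:bigbeta\] with the closed-form cubic B-spline and counts the contributing lattice points for one sample; the lattice size and sample position are arbitrary.

```python
import numpy as np
from itertools import product

def beta3(x):
    """Closed-form cubic cardinal B-spline, equation (eq:beta3)."""
    ax = abs(x)
    if ax < 1:
        return 4 / 6 - ax ** 2 + ax ** 3 / 2
    if ax < 2:
        return (2 - ax) ** 3 / 6
    return 0.0

def B3(u):
    """Separable 3D weight B_3(u) = beta_3(u_1) * beta_3(u_2) * beta_3(u_3), equation (eq:bigbeta)."""
    return float(np.prod([beta3(ui) for ui in u]))

# Arbitrary lattice (lower corner l, spacing h, N points per axis) and one sample point x.
l, h, N = np.zeros(3), 1.0, (8, 8, 8)
x = np.array([3.3, 4.7, 2.2])

weights = {}
for idx in product(*(range(n) for n in N)):              # loop over all lattice points
    xi = l + h * np.array(idx)
    w = B3((x - xi) / h)
    if w != 0.0:
        weights[idx] = w

print(len(weights))                                      # 64 = (k+1)^3 contributing lattice points
print(np.isclose(sum(weights.values()), 1.0))            # cubic B-spline weights form a partition of unity
```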
Choosing the B-spline order $k$ is a trade-off between how smooth the resulting function should be and how sparse the matrix of the resulting equation system should be. This paper is incomplete. Please wait for an updated version. [9]{} Eilers, P.H.C. and Marx, B.D. (1996). [*Flexible smoothing with B-splines and penalties*]{}. Statistical Science 11(2): 89-121.
--- abstract: 'A few observational methods allow the measurement of the mass and distance of the lens star for a microlensing event. A first estimate can be obtained by measuring the microlensing parallax effect produced by either the motion of the Earth (annual parallax) or the contemporaneous observation of the lensing event from two (or more) observatories (space or terrestrial parallax) sufficiently separated from each other. Further developing ideas originally outlined by @Gould2013b and @Mogavero2016, we review the possibility of systematically measuring the microlensing parallax using a telescope based on the lunar surface and other space-based observing platforms, including the upcoming WFIRST space telescope. We first generalize the Fisher matrix formulation and present results demonstrating the advantages of each observing scenario. We conclude by outlining the limitations of the Fisher matrix analysis when confronted with a practical data modeling process. By considering a lunar-based parallax observation, we find that parameter correlations introduce a significant loss in the detection efficiency of the probed lunar parallax effect.' author: - 'E. Bachelet$^{1}$, T. C. Hinse$^{2}$ and R. Street$^{1}$' bibliography: - 'biblio\_moonpara.bib' title: Measuring the microlensing parallax from various space observatories --- Introduction {#sec:introduction} ============ Measuring the microlensing parallax is of primary importance, since it constrains the mass-distance relation of the microlensing lens and allows the physical properties of the lens to be measured [@Gould2000]: $$M_l = {{\theta_E}\over{\kappa\pi_E}}$$ where $M_l$ is the lens mass in solar units, $\theta_E$ is the angular Einstein ring radius, $\pi_E$ is the microlensing parallax and $\kappa=8.144~ \rm{mas.M_\odot^{-1}}$. The microlensing parallax can be measured in three possible observing scenarios. The non-rectilinear motion of the Earth around the Sun imposes an additional kinematic component on the relative lens-source trajectory $\mu$ and is known as the annual parallax effect [@Alcock1995; @Gould2000; @Smith2003; @Gould2004]. The effect is greater for long event timescales, and typically events with an angular Einstein ring crossing time $t_E=\theta_E/\mu \ge 30$ days present significant variations. This effect is also greater when the observations occur near the equinoxes [@Skowron2011]. The second method requires the microlensing event to be observed from two observatories separated by a significant baseline, see for example @Refsdal1966 [@CalchiNovati2015; @Street2016; @Henderson2016]. This method is called the space parallax, since it generally involves the use of ground- and space-based observatories. The last effect, the terrestrial parallax, is hard to measure, but has been measured in a few cases [@Yee2009; @Gould2009]. The separation between two distinct observatories on Earth, with different locations in longitude and latitude, induces a shift in both the time of maximum magnification $t_0$ and the minimum impact parameter $u_0$. Since the separation is small relative to the projected Einstein radius, this effect is measurable only for extremely high magnification events [@Hardy1995; @Holz1996; @Gould1997; @Gould2013a]. Recently, @Gould2013b and @Mogavero2016 (hereafter G13 and M16) explored the capability of measuring the parallax using space-based observatories only.
They concluded that this is feasible for observing platforms on geosynchronous and Low Earth Orbits (LEO), depending on the signal-to-noise ratio of the specific microlensing event. Their work motivated us to study this aspect of parallax measurements in more detail. The outline of this work is as follows. We extend the approach of G13 and M16 and work out a more general description of the Fisher matrix formulation in Section \[sec:Fisher\]. In Section \[sec:single\] and Section \[sec:fleet\], we then study the potential of a wider range of space-based observatories to measure the microlensing parallax. In Section \[sec:real\], we highlight the difficulty of detecting the parallax in practice. This is a consequence of the parallax being an observable obtained from a best-fit model that suffers from correlations between parameters. We conclude our study in Section \[sec:conclusions\]. Parallax formulation and Fisher matrix analysis {#sec:Fisher} =============================================== Parameterization of the problem ------------------------------- Following the method outlined in G13 and M16, we conduct a Fisher matrix analysis for various space-based observatories. G13 and M16 consider observatories with orbital radii that are small compared with 1 AU, which allows some approximations in the Fisher matrix analysis. These approximations cannot be applied in the present work because we consider observatories with large orbital radii, for example a satellite orbiting the Sun at 1 AU. Therefore, a general Fisher matrix analysis is required. For simplicity, we consider only circular orbits in this work, without loss of generality. As in G13 and M16, we first define $\epsilon_\parallel=\rm{R}/\rm{AU}$ and $\epsilon_\bot=\epsilon_\parallel\sin{\lambda}$, where $R$ is the orbital radius of the observatory platform (associated with a period $P$) and $\lambda$ is the latitude of the microlensing target relative to the observatory orbital plane. If we consider the problem in the reference frame centered on the observatory at the microlensing peak $t_0$, the coordinates of the observatory $\mathbf{O} = (o_1,o_2)$ evolve as: $$\begin{aligned} o_1 &=& \epsilon_\parallel\cos{\Omega}-\epsilon_\parallel\cos{\phi} \\ o_2 &=& \epsilon_\bot\sin{\Omega}-\epsilon_\bot\sin{\phi} \end{aligned} \label{eq:positions}$$ with $\Omega = \omega(t-t_0)+\phi$, $\omega = 2\pi/P$, and $\phi$ the orbital phase relative to the time of maximum magnification of the microlensing event. This approach is similar to that of @Gould2004. We now define $\tau = (t-t_0)/t_E$, $u_0$ and $\theta$ (the lens-source trajectory angle) as the standard microlensing parameters for the static observatory (see for example @Gould2000 for the definition of these parameters, as well as Figure \[fig:geometry\]). If one defines the microlensing parallax vector as $\mathbf{\pi_E} = (\pi_\parallel,\pi_\bot) = \pi_E(\cos{\theta},\sin{\theta})$, the shifts $(\delta\tau,\delta\beta)$ induced by the moving observatory are: $$\begin{aligned} \delta\tau &=& \mathbf{\pi_E} \cdot \mathbf{O}\\ \delta\beta &=& \mathbf{\pi_E} \times \mathbf{O}\\ \end{aligned}$$ Defining $\tau' = \tau+\delta\tau$ and $\beta = u_0+\delta\beta$, the microlensing trajectory vector is then $\mathbf{u} = (u_1 = \tau'\cos{\theta}-\beta\sin{\theta} ~,~ u_2 = \tau'\sin{\theta}+\beta\cos{\theta})$. The observed flux of the lensing event is: $$f = f_s(A+g)$$ with $f_s$ the source flux and $g=f_b/f_s$ the blending ratio ($f_b$ being the blend flux).
The source flux magnification $A(t)$ for a single point lens is a function of time and is given by [@Paczynski1986]: $$A(t) = {{u(t)^2+2}\over{u(t)\sqrt{u(t)^2+4}}}$$ where $u(t) = \sqrt{u_1^2+u_2^2}$. We follow M16 and [@Bachelet2017] and, assuming Gaussian errors, we define the Fisher matrix as: $$F_{i,j} =\sum_{n}{{1}\over{\sigma_n^2}}{{dF_n}\over{dp_i}}{{dF_n}\over{dp_j}}$$ where $n$ runs over the individual measurements. Here, we follow M16’s approach and eliminate the source flux from the flux derivatives $dF_n/dp_i$ of the Fisher matrix and from the weight with $\sigma_n^2 \simeq 0.84 \sigma_m^2 (A+g)/(1+g)$, where $\sigma_m$ is an arbitrary photometric precision (in magnitude units) for the microlensing event baseline magnitude. The individual derivatives can be found in Appendix \[sec:derivatives\]. The covariance matrix is then simply the inverse of the Fisher matrix: $$\rm{cov} =F^{-1}$$ M16 defines the minimum error on the parallax measurement $\sigma_{\pi_E, min}(\phi)$ as: $$\sigma ^2_{\pi_{E}, min}(\phi)= {{\sigma_{\pi_\parallel}^2+\sigma_{\pi_\perp}^2}\over{2}} - {{\sqrt{(\sigma_{\pi_\parallel}^2-\sigma_{\pi_\perp}^2)^2+4~\rm{cov}(\pi_\parallel,\pi_\perp)^2}}\over{2}}$$ As a sanity check, we compare our estimation of $\sigma_{\pi_E,min}(\phi)$ with the one found by M16 for the case of a geosynchronous observatory, assuming P = 23h 56min 4s, $\rm{R} =6.6~\rm{R}_\oplus$, $\lambda=30^{\circ}$, $u_0=0.1$, $t_E=1$ day, $\theta=45^{\circ}$, $\pi_E=4.3$, $g=0$, $\sigma_m=0.01$ mag, $\phi=0$, 180 days of observation around $t_0$ and an observing cadence of 3 min. The M16 Fisher matrix formulation leads to $\sigma_{\pi_E,min}(0)/\pi_E\sim 0.08$, while our estimation gives $\sigma_{\pi_E,min}(0)/\pi_E\sim 0.06$, in good agreement. ![Schematic representation of the problem. As the observatory travels along its orbit, the source trajectory (solid red) is shifted from the inertial trajectory (dashed red). The position of the lens is indicated by a point in the sky plane. $(\delta\tau,\delta\beta)$ are represented at the time $t_0$.[]{data-label="fig:geometry"}](./geometry.pdf){width="8cm"} Hypotheses and assumptions -------------------------- For the remainder of this paper, we will study the microlensing parallax measurement for observatories orbiting the Sun (Section \[sec:fleet\]), the Earth (Section \[sec:moon\]) and the Lagrangian point L2 (Section \[sec:wfirstpotential\]). In principle, the change in the origin of the reference system to each of these locations should be taken into account. However, this introduces considerable additional complexity into the Fisher matrix derivation, so for the time being we neglect the impact of the inertial reference point, which is a valid approximation for event timescales that are short compared with the orbital period of the inertial reference point. Note that both G13 and M16 also neglect this effect. We consider events whose photometry is not blended with the light from neighboring (unrelated) stars (i.e., $g=0$). Note that M16 stressed that blending can have a serious effect on the parallax detection. We also consider continuous observations to reduce complexity. M16 indicates that while the Earth’s umbra effectively decreases the sensitivity of LEO satellites, it does not invalidate the method. We assume Keplerian orbits, so the period of our observatories is obtained from Kepler’s law $P^2 = 4\pi^2 R^3/(GM)$, with the mass $M$ depending on the system considered.
We also assume Gaussian errors due to the nature of space-based observations. Finally, throughout this study, we assume that the source is located in the Galactic Bulge (i.e., $D_s=8$ kpc), that the lens is located at $4$ kpc, and that the relative source-lens speed is $V=200$ km/s, leading to: $$\pi_E = 4.3 ~\bigg({{\rm{1~day}}\over{t_E}}\bigg)$$ Single observatory {#sec:single} ================== The parallax seen from the Moon {#sec:moon} ------------------------------- In the following we consider the case of a single telescope based on the surface of the Moon. For a higher sky visibility, the preferable observatory location would be on the lunar far side, but this could raise practical difficulties, especially for communications. The Earth-facing lunar hemisphere seems to be more practical. This is the choice made by the China National Space Administration to place the first robotic telescope on the Moon [@Wang2015b]. The 15 cm diameter Lunar-based Ultraviolet Telescope currently operates from the *Mare Imbrium* with a photometric precision of $\sigma\sim 0.05$ for a $\sim 17.5$ mag star (in the AB photometric system) [@Wang2015b]. To understand the power of a lunar-based observatory with respect to microlensing parallax measurements, we consider the orbit of the Moon around the Earth to be circular, with an orbital radius $R=381600$ km and a photometric precision of $\sigma_m = 0.01$ mag for the event baseline magnitude. We select $\log_{10}(u_0) \in [-5,0.3]$ and $\log_{10}(t_E) \in [-1,2]$. This parameter range is typical for microlensing events observed in the Galactic Bulge. We also consider $\theta=45^{\circ}$, $\phi=0^{\circ}$ and $\lambda = 35^{\circ}$. The last assumption comes from the fact that the Moon’s orbital inclination to the ecliptic plane is roughly $5^{\circ}$. Finally we construct the observing strategy as follows. We assume the lunar telescope observes a given event during two observing windows separated by a time interval of $P$ (i.e., $\sim$ 28 days). Each window consists of 14 days of continuous observations with 15 min sampling. The first observing window is centered on $t_0$. The aim now is to calculate the minimum parallax error as a function of $u_0$ and $t_E$ from a general Fisher matrix formulation. ![Minimum expected error on the parallax measurement $\sigma_{\pi_E,min}/\pi_E$ (color coded in $\log_{10}$ scale, in the range \[-3,3\]), for a telescope placed on the Moon. The small-dashed, dashed and solid contour curves indicate the 1, 2 and 3 $\sigma$ detection regions. The blank pixels on the top right indicate ill-observed events, leading to $\sigma^2_{\pi_E,min}(0)<0$.[]{data-label="fig:Fisher1"}](./Moonparallax.pdf){width="9cm"} Results of our simulations can be seen in Figure \[fig:Fisher1\]. Similarly to M16, the relative error separates into two regimes, $u_0t_E\ll P$ and $u_0t_E\gg P$. From the figure we find that long timescale events ($t_E>40$ days) are ideal for securely estimating the associated parallax effect, well within the $3 \sigma$ detection limit. In general, lunar-based parallax measurements within the $3 \sigma$ detection limit have $t_E > 10$ days. For $\sigma_{\pi_E,min}/\pi_E>3$, the parallax estimate is less well constrained, corresponding to events with timescales $t_E<5$ days. Such events could be caused by free-floating planets [@Sumi2010; @Mroz2017].
Given that the (Galactic Bulge) microlensing timescale distribution peaks around $t_E\sim20$ days [@Sumi2010; @Mroz2017], we conclude that a dedicated microlensing monitoring telescope placed on the Moon could provide a valuable observing platform for the systematic and accurate measurement of the parallax for most microlensing events. WFIRST {#sec:wfirstpotential} ------ In this section we carry out a similar study considering NASA’s WFIRST space satellite mission, which will survey the Galactic Bulge in the near-infrared, with six observing windows of $\sim$ 70 days [@Spergel2015]. Contrary to the assumptions of G13 and M16, it has recently been decided that WFIRST will be placed in a so-called halo orbit at the Lagrangian point L2. This location offers many operational benefits, see for example @Crowley2016. It is likely that the orbital elements of WFIRST will be similar to the Lissajous orbits of GAIA [@Perryman2001] and Planck [@Tauber2010; @Pilbratt2010]. An L2 halo orbit has a relatively long period $P\sim180$ days and an orbital radius of a few percent of an AU. Following @Henderson2016, we consider WFIRST orbital parameters similar to those of the GAIA space mission: $P=180$ days and $R=300000$ km. For clarity, we assume $P$ to be the orbital period of WFIRST around the unstable Lagrangian point L2 (i.e., we do not include the movement of L2 around the Sun due to the motion of the Earth) at a fixed distance $R$. We choose $\theta=45^{\circ}$, $\phi=0^{\circ}$, $\lambda = 30^{\circ}$ and set the monitoring window to 70 days centered on the peak magnification, with a 15 min observing cadence. We follow M16 and G13 and assume a photometric precision $\sigma_m = 0.01$ mag as well as $\sigma_m = 0.001$ mag. ![Same as Figure \[fig:Fisher1\], for the WFIRST mission with observation parameters detailed in the text. *Left:* Using $0.01$ mag photometric precision. *Right:* Using $0.001$ mag photometric precision.[]{data-label="fig:Fisher2"}](./WFIRSTparallax.pdf){width="9cm"} In Figure \[fig:Fisher2\] we show the minimum parallax error for the two photometric precisions attributed to the WFIRST platform specifications. From the left panel, for a photometric precision of 0.01 mag, we find that WFIRST is not suitable for reliably measuring the parallax, which is explained by the long (L2 halo) orbital period. The minimum parallax error scales as (see M16): $$\sigma_{\pi_E,min}/\pi_E \propto P^{0.5}u_0^{0.5}R^{-1}$$ For brighter lensing events the photometric precision improves, which could decrease the minimum parallax error. From the Fisher matrix formulation, we have therefore calculated the minimum parallax error for a photometric precision of 0.001 mag. The results are shown in the right panel of Figure \[fig:Fisher2\] and demonstrate that WFIRST is capable of measuring the event parallax for event timescales $t_E > 10$ days. It is important to recall that we consider the Lagrangian point L2 stationary during the WFIRST observations. This hypothesis breaks down for longer events, where the combination of the two movements can in fact constrain the parallax well, see G13. Moreover, contemporaneous observations from WFIRST and ground-based observatories will allow the measurement of the so-called space-based parallax [@Refsdal1966; @CalchiNovati2015; @Street2016; @Henderson2016]. However, these follow-up observations from the ground could be challenging, due to potentially high-extinction fields (which require near-infrared observations) and/or a low overlap between the observability windows from the Earth and from the L2 observatory.
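Before turning to constellations, it may help to make explicit how numbers such as those in Figures \[fig:Fisher1\] and \[fig:Fisher2\] can be computed. The sketch below is a minimal numerical illustration rather than the code used for the figures: it implements the magnification model and the Fisher matrix of Section \[sec:Fisher\], but uses central finite differences instead of the analytic derivatives of Appendix \[sec:derivatives\]; the function names, the assumed sign convention for the cross product, and the example parameter values are all choices made for this sketch only.

```python
# Minimal sketch of the Fisher-matrix estimate of sigma_{pi_E,min}.
# Parameters p = (t0, u0, tE, pi_par, pi_perp); orbit = (eps_par, eps_perp, P, phi)
# with P and t in days; theta is the (fixed) lens-source trajectory angle; g = 0.
import numpy as np

def magnification(t, p, orbit, theta):
    t0, u0, tE, pi_par, pi_perp = p
    eps_par, eps_perp, P, phi = orbit
    Omega = 2.0 * np.pi / P * (t - t0) + phi
    o1 = eps_par * (np.cos(Omega) - np.cos(phi))        # eq. (positions)
    o2 = eps_perp * (np.sin(Omega) - np.sin(phi))
    dtau = pi_par * o1 + pi_perp * o2                    # pi_E . O
    dbeta = pi_par * o2 - pi_perp * o1                   # pi_E x O (assumed sign)
    tau = (t - t0) / tE + dtau
    beta = u0 + dbeta
    u1 = tau * np.cos(theta) - beta * np.sin(theta)
    u2 = tau * np.sin(theta) + beta * np.cos(theta)
    u = np.hypot(u1, u2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def sigma_piE_min(p, orbit, theta, t, sigma_m):
    A = magnification(t, p, orbit, theta)
    var = 0.84 * sigma_m**2 * A                          # weight for g = 0
    grads = []
    for i in range(len(p)):                              # finite-difference dA/dp_i
        dp = np.array(p, float)
        step = 1e-6 * max(abs(p[i]), 1.0)
        dp[i] += step
        up = magnification(t, dp, orbit, theta)
        dp[i] -= 2.0 * step
        dn = magnification(t, dp, orbit, theta)
        grads.append((up - dn) / (2.0 * step))
    grads = np.array(grads)
    cov = np.linalg.inv(np.einsum('in,jn->ij', grads / var, grads))
    s2p, s2x, c = cov[3, 3], cov[4, 4], cov[3, 4]        # parallax sub-block
    return np.sqrt(0.5 * (s2p + s2x) - 0.5 * np.sqrt((s2p - s2x)**2 + 4.0 * c**2))

# Illustrative call, loosely following the geosynchronous sanity check above.
theta = np.radians(45.0)
piE = 4.3
eps = 6.6 * 6371.0 / 1.496e8                             # R = 6.6 R_Earth, in AU
orbit = (eps, eps * np.sin(np.radians(30.0)), 23.934 / 24.0, 0.0)
t = np.arange(-90.0, 90.0, 3.0 / (60.0 * 24.0))          # 180 days, 3-min cadence
p = (0.0, 0.1, 1.0, piE * np.cos(theta), piE * np.sin(theta))
print(sigma_piE_min(p, orbit, theta, t, sigma_m=0.01) / piE)
```

Replacing the finite differences by the analytic expressions of Appendix \[sec:derivatives\] avoids the repeated model evaluations and is preferable when scanning a grid of $(u_0,t_E)$ values as in Figure \[fig:Fisher1\].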
The parallax from a telescope constellation {#sec:fleet} ============================================ Telescope constellations of small satellites, such as NASA’s CubeSats, are a relatively new and low-cost technology that could become competitive with fewer, larger satellites in the future. Here we consider a fleet of space telescopes in various orbital configurations. Since we consider several observatories, we need to choose a common origin. We define the origin of the system as the center of the trajectories. Then, the problem definitions are slightly changed and Equation \[eq:positions\] becomes: $$\begin{aligned} o_1 &=& \epsilon_\parallel\cos{\Omega}\\ o_2 &=& \epsilon_\bot\sin{\Omega} \end{aligned}$$ This implies that the microlensing parameters now refer to this origin ($u_0$ and $t_0$ especially), but the Fisher matrix formalism is unchanged since we have only subtracted constants. This is similar to the heliocentric and geocentric approaches for the annual parallax, see @Gould2004. We consider a fleet composed of $N_{sat} \in$ \[1,20\] spacecraft. To study the effect of varying the telescope aperture, we consider different photometric precisions with $\sigma_m \in [0.1,1]$ mag. We assume $u_0=0.1$, $t_E=10$ days, $\theta=45^{\circ}$ and $\lambda = 30^{\circ}$. We distribute the fleet of telescopes equally in mean anomaly within the orbit. For example, in the case of three satellites, the phases are $0$, $2\pi/3$ and $4\pi/3$. We select an observing window of 72 days around $t_0$ with a 1 hour cadence. Since the Fisher information is additive, we simply sum the Fisher matrices of the individual satellites before the inversion to obtain the covariance matrix. Results can be seen in Figure \[fig:fleet\], and in the following we discuss the details of the various observing scenarios. ![image](./Fleet.pdf){width="\textwidth"} Fleet in solar orbit {#sec:SO} --------------------- In this fleet configuration each space telescope orbits the Sun at 1 AU. The distance $d$ between two neighboring satellites is then $d = 2\sin(\pi/N_{\rm{sat}})$ AU, where $N_{\rm{sat}}$ is the total number of satellites (assuming the telescopes are evenly distributed along the orbit). The solar orbit presents some advantages, such as low-cost thermal control. The main drawback is the distance from the Earth, which seriously impacts the required communications. The advantage of such a configuration is the large orbital radius, which produces large shifts $(\delta\tau,\delta\beta)$ in the various lightcurves. In fact, it is well known that the microlensing parallax is highly constrained with two observatories in this situation; it is the space-based parallax [@Refsdal1966; @CalchiNovati2015; @Street2016; @Henderson2016]. It is worth noting, however, that $\geq$ 5 telescopes with low precision (i.e., $\sigma_m\sim 1$ mag) can still strongly constrain the parallax, meaning that relatively small telescopes on inexpensive CubeSats could be a viable option. Fleet in geosynchronous orbit {#sec:GO} ------------------------------ A special orbit for a space telescope is the geosynchronous orbit, in which the telescope stays above the same geographic longitude at a relatively large distance from Earth. This orbit has practical disadvantages, as it is costly to reach and the risk of collision is comparatively high due to the numerous commercial geosynchronous satellites. We choose $R=42048$ km and a daily period for the simulations.
If we assume that each observatory provides the same information $F_i$ to the parallax constraint, we can write: $$F_{tot} \approx N_{sat} F_i$$ where $F_{tot}$ is the total Fisher information. This directly leads to: $$\sigma_{\pi_E}/\pi_E \propto \sigma_m/N_{sat}^{0.5} \label{eq:error}$$ We can rewrite this equation and show that the required photometric precision to obtain a relative error on the parallax estimation $\delta=\sigma_{\pi_E}/\pi_E$ is: $$\sigma_m \propto N_{sat}^{0.5}\delta$$ This trend is seen in the middle and right panels of Figure \[fig:fleet\]. We can see that the parallax is well constrained if $\sigma_m\leq 0.01$ mag. Low Earth Orbit {#sec:LOE} ---------------- Space agencies are increasingly interested in the potential use of LEO satellite constellations. These constellations are extremely useful for simultaneous Earth observations. The benefits of this approach are multiple. One is the relatively low cost of orbital access. For example, the Indian space agency (ISRO) recently released 104 small satellites in a single mission[^1], mostly tasked with Earth observations. It is also simple to use a Target of Opportunity (ToO) rapid-response mode, since communication with the satellites is relatively easy. As shown by M16, a satellite in LEO is able to constrain the microlensing parallax, despite the relatively low amplitude of the microlensing lightcurve distortion due to the small orbital radius. @Schvartzvald2016 obtained ToO observations from *Swift* in order to constrain the parallax of the binary event OGLE-2015-BLG-1319. They showed that *Swift* should have been able to constrain the parallax in principle. However, due to the low sampling and low photometric precision, this was not the case for this event. For this case, we select $R=7000$ km (i.e., $P\sim0.07$ days). From the right panel of Figure \[fig:fleet\], it is clear that the parallax detection requires a high photometric accuracy. The real parallax detection efficiency {#sec:real} ====================================== In Section \[sec:moon\], we have seen that a telescope placed on the Moon should be able to efficiently measure the parallax for the vast majority of microlensing events towards the Galactic Bulge. However, Section \[sec:moon\], as well as G13 and M16, assumes that the model for an ongoing microlensing event is known. In fact, it is important to keep in mind that when an event is in progress, the microlensing model is usually not known. In other words, the Paczynski parameters (i.e., $t_0$, $u_0$ and $t_E$) need to be fitted at the same time as the parallax vector. This obviously adds complexity and one should expect that the theoretical results obtained in the previous section will be degraded. Moreover, the finite sampling and measurement precision directly lead to a fitted model different from the “true” model [@Bachelet2017]. M16 shows that: $$\sigma_{\pi_E}/\pi_E \propto u_0^{0.5}$$ This clearly indicates that the parallax measurement depends on the fitted value of $u_0$. Moreover, it is non-trivial to select between different models based on real data. In practice, a $\Delta\chi^2$ criterion is often used, with various thresholds to ensure a safe detection [@Yee2013]. In the present work, it is possible to use a more robust statistic, since we can simulate pure Gaussian errors. In this case, the Bayesian Information Criterion (BIC) is an efficient tool to distinguish real detections from overfitting, see for example @Bachelet2012a [@Bramich2016].
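The criterion itself is easy to write down before we turn to full simulations. The snippet below is a schematic illustration for two nested point-lens fits with Gaussian errors, where the log-likelihood reduces to $-\chi^2/2$ up to a constant; the sign convention chosen for $\Delta BIC$, the assumption that the parallax vector adds two free parameters, and all names and numbers are choices made for this sketch only.

```python
# Schematic Delta BIC comparison of a static Paczynski fit and a fit including
# the parallax vector, assuming Gaussian errors. Conventions are illustrative.
import numpy as np

def bic(chi2, k, n):
    """Bayesian Information Criterion for a chi^2 fit with k parameters, n points."""
    return chi2 + k * np.log(n)

def delta_bic(chi2_parallax, chi2_static, n, k_parallax=5, k_static=3):
    """BIC(parallax) - BIC(static); small or negative values favour the parallax model."""
    return bic(chi2_parallax, k_parallax, n) - bic(chi2_static, k_static, n)

# If adding the parallax does not improve chi^2 at all, Delta BIC reduces to the
# pure complexity penalty (k_parallax - k_static) * ln(n).
print(delta_bic(chi2_parallax=1500.0, chi2_static=1500.0, n=1454))
```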
To illustrate this, we use the pyLIMA software package [@Bachelet2017] [^2] to simulate and model lightcurves corresponding to Section \[sec:moon\]. We perform one fit with and one fit without the lunar parallax, and compute the $\Delta BIC$ for each event. ![Parallax detection for microlensing events observed from the Moon. The positive detection region is reduced in comparison with Figure \[fig:Fisher1\].[]{data-label="fig:reality"}](./Reality.pdf){width="9cm"} As can be seen in Figure \[fig:reality\], the parallax detection is much harder than expected. All values with $\Delta BIC\sim20$ correspond to $2\log(1454)$ (1454 being the number of data points for each lightcurve), and therefore to $\Delta\chi^2\sim0$. The reason is that the fitting process can slightly adjust the Paczynski parameters in order to mimic the parallax. This problem is well known for the parallax constraint with ground-based data, see Appendix \[sec:annualparallax\]. Conclusions {#sec:conclusions} =========== We have studied the potential of various space observatories to systematically measure the microlensing parallax and hence to characterize the microlensing events. We first derive the exact Fisher matrix and compare our results to previous works. We then simulate various configurations corresponding to plausible future space missions. We show that the Moon is an ideal observatory to measure the parallax, assuming a moderate photometric precision (0.01 mag). However, we moderate this conclusion in Section \[sec:real\], since real observations require modeling and model selection, directly leading to a higher detection threshold. This is already well known for parallax measurements made with Earth-based observations (i.e., the annual parallax), as discussed in Appendix \[sec:annualparallax\]. We also simulate the potential of the WFIRST mission to detect the parallax on its own. We find that it is possible only for bright and long events (i.e., $t_E>10$ days for a baseline photometric precision of $\sigma_m\sim0.001$ mag). Constellations of telescopes are promising. We confirm that telescopes orbiting the Sun at 1 AU have the strongest potential, as demonstrated in practice. However, both geosynchronous and low Earth orbit constellations are able to constrain the parallax vector well, assuming a sufficient number of satellites and/or a good photometric precision, since $\sigma_{\pi_E}/\pi_E \propto \sigma_mN_{sat}^{-0.5}$. Acknowledgements {#acknowledgements .unnumbered} ================ The authors thank the anonymous referee for the constructive comments. This research has made use of NASA’s Astrophysics Data System. Work by EB and RAS is supported by the NASA grant NNX15AC97G. TCH acknowledges financial support from KASI grant 2017-1-830-03. Fisher matrix analysis for the annual parallax {#sec:annualparallax} ============================================== The annual parallax is the standard method used to measure the microlensing parallax. It is well known that such a measurement is in general possible only for long timescale events ($t_E>30$ days is a minimum). This is due to the relatively long period and semi-major axis of the Earth’s orbit around the Sun. Here we show that the Fisher matrix analysis can lead to overconfident conclusions. We conduct a study similar to that in Section \[sec:moon\] for the annual parallax, using the same simulation parameters, with the exception that $P=365.25$ days, $R=1$ AU, an observing window of 90 days around the event maximum and a one-day cadence.
We also simulate two baseline photometric precisions, namely 0.01 mag and 0.05 mag. As can be seen in Figure \[fig:annual\], the Fisher matrix analysis predicts that events with $t_E>15$ days should allow the systematic measurement of the microlensing parallax, at least for the better of the two photometric precisions. However, it has been established from previous surveys that annual parallax measurements are extremely difficult for events with $t_E<30$ days, see for example @Penny2016. ![Similar to Figure \[fig:Fisher1\] for the annual parallax. *Left:* Using 0.05 mag photometric precision. *Right:* Using 0.01 mag photometric precision. Again, the blank pixels correspond to $\sigma^2_{\pi_E,min}(0)<0$, which is a signature of ill-observed events. []{data-label="fig:annual"}](./EarthPara.pdf "fig:"){width="9cm"} Details of derivatives {#sec:derivatives} ====================== Here are the details of the model derivatives required for the Fisher matrix derivation. $$\begin{aligned} \frac{\partial A}{\partial u} &= {{-8}\over{u^2(u^2+4)^{3/2}}} & \\ \frac{\partial u}{\partial u_1} &= u_1/u & \frac{\partial u}{\partial u_2} &= u_2/u\\ \frac{\partial o_1}{\partial t_0} &= \omega\epsilon_\parallel\sin{\Omega} & \frac{\partial o_2}{\partial t_0} &= -\omega\epsilon_\bot\cos{\Omega} \\ \frac{\partial\delta\tau}{\partial t_0} &= \pi_E\cos{\theta}\frac{\partial o_1}{\partial t_0} + \pi_E\sin{\theta}\frac{\partial o_2}{\partial t_0} & \frac{\partial\delta\beta}{\partial t_0} &= -\pi_E\cos{\theta}\frac{\partial o_1}{\partial t_0} + \pi_E\sin{\theta}\frac{\partial o_2}{\partial t_0}\\ \frac{\partial u_1}{\partial t_0} &= (-1/t_E+\frac{\partial\delta\tau}{\partial t_0})\cos{\theta}-\frac{\partial\delta\beta}{\partial t_0}\sin{\theta} & \frac{\partial u_2}{\partial t_0} &= (-1/t_E+\frac{\partial\delta\tau}{\partial t_0})\sin{\theta}+\frac{\partial\delta\beta}{\partial t_0}\cos{\theta}\\ \frac{\partial u_1}{\partial u_0} &= \sin{\theta} & \frac{\partial u_2}{\partial u_0} &= \cos{\theta} \\ \frac{\partial u_1}{\partial t_E} &= -(t-t_0)/t_E^2\cos{\theta} & \frac{\partial u_2}{\partial t_E} &= -(t-t_0)/t_E^2\sin{\theta} \\ \frac{\partial u_1}{\partial \pi_\parallel} &= o_1\cos{\theta}+o_2\sin{\theta} & \frac{\partial u_2}{\partial \pi_\parallel} &= o_1\sin{\theta}+o_2\cos{\theta} \\ \frac{\partial u_1}{\partial \pi_\bot} &= -o_1\sin{\theta}+o_2\cos{\theta} & \frac{\partial u_2}{\partial \pi_\bot}&= -o_1\cos{\theta}+o_2\sin{\theta} \\\end{aligned}$$ [^1]: https://www.isro.gov.in/pslv-c37-successfully-launches-104-satellites-single-flight [^2]: https://github.com/ebachelet/pyLIMA
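Expressions such as these are easy to cross-check numerically before they are used in the Fisher matrix. The snippet below compares the first derivative listed above with a central finite difference; it is only an illustrative check, and the test value of $u$ is arbitrary.

```python
# Numerical cross-check of dA/du = -8 / (u^2 (u^2 + 4)^{3/2}) against a central
# finite difference; the same pattern applies to the other chain-rule terms.
import numpy as np

def A_of_u(u):
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def dA_du(u):
    return -8.0 / (u**2 * (u**2 + 4.0)**1.5)

u, h = 0.3, 1e-6
fd = (A_of_u(u + h) - A_of_u(u - h)) / (2.0 * h)
print(dA_du(u), fd)   # the two values should agree to many significant digits
```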
--- abstract: 'We prove that the strong polarized relation $\binom{\theta}{\omega} \rightarrow \binom{\theta}{\omega}^{1,1}_2$, applied simultaneously for every $\theta\in[\aleph_1,2^{\aleph_0}]$, is consistent with ZFC. Consequently, $\binom{inv}{\omega} \rightarrow \binom{inv}{\omega}^{1,1}_2$ is consistent for every cardinal invariant of the continuum. Some results in this direction are generalized to higher cardinals.' address: - 'Institute of Mathematics The Hebrew University of Jerusalem Jerusalem 91904, Israel' - 'Institute of Mathematics The Hebrew University of Jerusalem Jerusalem 91904, Israel and Department of Mathematics Rutgers University New Brunswick, NJ 08854, USA' author: - Shimon Garti - Saharon Shelah bibliography: - 'arlist.bib' title: Partition calculus and cardinal invariants --- introduction ============ The strong polarized relation $\binom{\lambda}{\kappa} \rightarrow \binom{\lambda}{\kappa}^{1,1}_2$ means that for every function $c : \lambda \times \kappa \rightarrow 2$ there are $A \subseteq \lambda$ and $B \subseteq \kappa$ such that $|A|=\lambda, |B|=\kappa$ and $c \upharpoonright (A \times B)$ is constant. The history of this relation begins with [@MR0081864], and continues with [@MR0202613]. A comprehensive discussion of the basic results for this relation appears in [@williams]. For a modern discussion see [@partitions]. Cardinal invariants of the continuum are discussed in [@MR2768685]. Every cardinal invariant isolates some property of the continuum (i.e., $^\omega 2,^\omega \omega$, or $[\omega]^\omega$ and so forth) and seeks the minimal cardinality of a set with this property. The value of each cardinal invariant belongs to the interval $[\aleph_1,\mathfrak{c}]$, and except for the trivial invariants (which are the first uncountable cardinal and $\mathfrak{c}$), the value of each invariant can fall on a large spectrum of cardinals in this interval. We are interested in the following general problem, from [@GaThesis]: \[iinv\] Cardinal invariants and the polarized relation. Let *inv* be a cardinal invariant of the continuum. Is the relation $\binom{inv}{\omega} \rightarrow \binom{inv}{\omega}^{1,1}_2$ consistent with ZFC? Since the continuum hypothesis implies $\binom{\aleph_1}{\aleph_0} \nrightarrow \binom{\aleph_1}{\aleph_0}^{1,1}_2$ (as proved in [@MR0081864]), and $inv=\aleph_1$ for every cardinal invariant under the continuum hypothesis, we know that the negative relation $\binom{inv}{\omega} \nrightarrow \binom{inv}{\omega}^{1,1}_2$ is always consistent. This is the background behind problem \[iinv\]. In [@MR2927607] it is proved that $\binom{\mathfrak{c}}{\omega} \rightarrow \binom{\mathfrak{c}}{\omega}^{1,1}_2$ is consistent with ZFC, and one can regard $\mathfrak{c}$ as a cardinal invariant, giving a positive answer (in this case) to the above problem. But in the model constructed in [@MR2927607] there exists an uncountable cardinal $\theta<\mathfrak{c}$ so that $\binom{\theta}{\omega} \nrightarrow \binom{\theta}{\omega}^{1,1}_2$. This gives rise to the following: \[aaalll\] Simultaneous positive relations.
Is the relation $\binom{\theta}{\omega} \rightarrow \binom{\theta}{\omega}^{1,1}_2$ consistent with ZFC for every $\theta\in[\aleph_1,2^{\aleph_0}]$ simultaneously? We mention, in passing, that the opposite situation holds in the Cohen model. Namely, adding $\lambda$-many Cohen reals implies $\binom{\theta}{\omega} \nrightarrow \binom{\theta}{\omega}^{1,1}_2$ for every $\theta\in[\aleph_1,2^{\aleph_0}]$. An explicit proof can be found in [@gash962], Remark 1.4. Let us state the known results so far. By [@MR2927607], if $\kappa < \mathfrak{s}$ then $\binom{\kappa}{\omega} \rightarrow \binom{\kappa}{\omega}^{1,1}_2$ iff ${{\rm cf}}(\kappa)>\aleph_0$. Hence forcing $\aleph_0<{{\rm cf}}(inv)\leq inv<\mathfrak{s}$ settles problem \[iinv\] for such an invariant (the cofinality requirement is easy, in general). For instance, it gives the consistency of $\binom{\mathfrak{b}}{\omega} \rightarrow \binom{\mathfrak{b}}{\omega}^{1,1}_2$, as well as $\binom{\mathfrak{a}}{\omega} \rightarrow \binom{\mathfrak{a}}{\omega}^{1,1}_2$, due to [@MR1623206] (Chapter VI, §6). So we focus on invariants above $\mathfrak{s}$. In a sense, $\mathfrak{s}$ is a natural invariant for getting ‘downward positive relations’ like $\binom{\kappa}{\omega} \rightarrow \binom{\kappa}{\omega}^{1,1}_2$, whenever $\kappa<\mathfrak{s}$. Here we shall see that the reaping number $\mathfrak{r}$ is a natural invariant for ‘upward positive relations’, namely $\binom{\kappa}{\omega} \rightarrow \binom{\kappa}{\omega}^{1,1}_2$ for every $\kappa>\mathfrak{r}$ whose cofinality is large enough. Inasmuch as $\mathfrak{r}<\mathfrak{s}$ is consistent with ZFC, we can cover simultaneously every $\theta\in[\aleph_1,2^{\aleph_0}]$. In the model of [@MR879489], $\aleph_1=\mathfrak{r}<\mathfrak{s}=\aleph_2=\mathfrak{c}$. This gives a positive answer to problem \[aaalll\], hence also to problem \[iinv\], since every cardinal invariant falls into $\{\aleph_1,\aleph_2\}$ in this model. Another result is related to $\mathfrak{d}$. It is not yet known whether one can increase the continuum above $\aleph_2$ while keeping $\mathfrak{r}<\mathfrak{s}$. Anyhow, dealing with the dominating number $\mathfrak{d}$, one can force $\mathfrak{r}<\mathfrak{d}$ for every prescribed regular value of $\mathfrak{d}$ above $\aleph_1$, as proved in [@MR1005010]. Consequently, the relation $\binom{\mathfrak{d}}{\omega} \rightarrow \binom{\mathfrak{d}}{\omega}^{1,1}_2$ is consistent with ZFC for arbitrarily large $\mathfrak{d}$. Can we generalize these results to uncountable cardinals? We need some large cardinal assumptions. If $\lambda$ is a supercompact cardinal we can force $\mathfrak{r}_\lambda=\mathfrak{u}_\lambda=\lambda^+$, yielding a positive relation for every regular cardinal above $\lambda^+$. We believe that some sort of large cardinal assumption is needed, yet supercompactness is not vital. We still do not know what happens in the general case of an uncountable $\lambda$. If $\mu$ is a singular cardinal (a limit of strongly inaccessibles, or a parallel assumption) then we can increase $2^\mu$ and prove $\binom{\theta}{\mu} \rightarrow\binom{\theta}{\mu}^{1,1}_2$ for many $\theta$-s in the interval $(\mu,2^\mu]$. We use standard notation. We employ the letters $\theta, \kappa, \lambda, \mu, \chi$ for infinite cardinals, and $\alpha, \beta, \gamma, \delta, \varepsilon, \zeta$ for ordinals. Topological cardinal invariants of the continuum are denoted as in [@MR776622] and [@MR2768685]. We denote the continuum by $\mathfrak{c}$.
For $A,B\subseteq\lambda$ we denote almost inclusion by $\subseteq^*$, so $A\subseteq^*B$ means $|A\setminus B|<\lambda$. For a regular cardinal $\kappa$ we denote the ideal of bounded subsets of $\kappa$ by $J^{\rm bd}_\kappa$. Given a product of regular cardinals, we denote its true cofinality by ${\rm tcf}$. We adopt the Jerusalem notation in forcing notions, namely $p\leq q$ means that the condition $q$ gives more information than the condition $p$. We shall use Mathias forcing, relativized to some ultrafilter, and we assume throughout the paper that every ultrafilter is uniform (hence, in particular, non-principal). We thank the referee for many comments, mathematical corrections and a meaningful improvement of the exposition. Cardinal invariants =================== Let us begin with basic definitions of some cardinal invariants. We introduce the general definition, applied to every infinite cardinal $\lambda$ (but in most cases, the definition makes sense only for regular cardinals). Omitting the subscript means that $\lambda=\aleph_0$. Here is the first definition: \[ssss\] The splitting number $\mathfrak{s}_\lambda$. 1. Suppose $B\in[\lambda]^\lambda$ and $S\subseteq\lambda$. $S$ splits $B$ if $|S\cap B|=|(\lambda\setminus S)\cap B|=\lambda$. 2. $\{S_\alpha:\alpha<\kappa\}$ is a splitting family in $\lambda$ if for every $B\in[\lambda]^\lambda$ there exists an ordinal $\alpha<\kappa$ so that $S_\alpha$ splits $B$. 3. The splitting number $\mathfrak{s}_\lambda$ is the minimal cardinality of a splitting family in $\lambda$. The following claim is explicit in [@MR2927607] only for the case $\lambda=\aleph_0$ (by our convention, the splitting number is denoted by $\mathfrak{s}$ in this case). Claim 1.3 of [@gash962] is also related (but deals with a variant of $\mathfrak{s}$, called the strong splitting number). For completeness, we repeat the proof here, this time in the general context of $\mathfrak{s}_\lambda$. Notice that the assumption $\lambda<\mathfrak{s}_\lambda$ in the following claim implies that $\lambda$ is weakly compact (we consider $\aleph_0$ as a weakly compact cardinal). A proof appears in [@MR1450512] for the case $\lambda$ is regular. We do not know what happens when $\lambda$ is singular (although in some cases a similar result can be proved). \[ddownward\] The downward positive relation. Suppose $\lambda={{\rm cf}}(\lambda)<\mu<\mathfrak{s}_\lambda$.  $\binom{\mu}{\lambda} \rightarrow \binom{\mu}{\lambda}^{1,1}_2$ iff ${{\rm cf}}(\mu)\neq\lambda$. *Proof*. Assume ${{\rm cf}}(\mu)\neq\lambda$. Let $c:\mu\times\lambda\rightarrow 2$ be any coloring. Set $S_\alpha=\{\gamma\in\lambda:c(\alpha,\gamma)=0\}$ for every $\alpha<\mu$. We collect these sets into the family $\mathcal{F}=\{S_\alpha:\alpha<\mu\}$. Since $|\mathcal{F}|\leq\mu<\mathfrak{s}_\lambda$ we infer that $\mathcal{F}$ is not a splitting family. Let $B\in[\lambda]^\lambda$ exemplify this fact. It means that $B\subseteq^* S_\alpha$ or $B\subseteq^* (\lambda\setminus S_\alpha)$ for every $\alpha<\mu$. At least one of these options occurs $\mu$-many times, so without loss of generality $B\subseteq^* S_\alpha$ for every $\alpha<\mu$. By the very definition of almost inclusion, for every $\alpha<\mu$ there exists $\beta_\alpha<\lambda$ such that $B\setminus\beta_\alpha\subseteq S_\alpha$ (here we use the regularity of $\lambda$). Since ${{\rm cf}}(\mu)\neq\lambda$ there exists $\beta<\lambda$, and $H_0\in[\mu]^\mu$ so that $\beta_\alpha\leq\beta$ for every $\alpha\in H_0$. 
Let $H_1$ be $B\setminus\beta$, so $H_1\in[\lambda]^\lambda$. Suppose $\alpha\in H_0, \gamma\in H_1$. By the definition of $H_1$, $\gamma\in B\setminus\beta= B\setminus\beta_\alpha$, and since $\alpha\in H_0$ we conclude that $c(\alpha,\gamma)=0$, completing this direction. Now assume that ${{\rm cf}}(\mu)=\lambda$. Choose a disjoint decomposition $\{A_\gamma:\gamma<\lambda\}$ of $\mu$, such that $|A_\gamma|<\mu$ for every $\gamma<\lambda$. Without loss of generality, the union of every subcollection of less than $\lambda$-many $A_\gamma$-s has size less than $\lambda$. Here we use the assumption ${{\rm cf}}(\mu)=\lambda$. For every $\alpha<\mu$ let $\xi(\alpha)$ be the unique ordinal so that $\alpha\in A_{\xi(\alpha)}$. Define $c:\mu\times\lambda\rightarrow 2$ as follows. For $\alpha\in\mu\wedge\beta\in\lambda$ let: $$c(\alpha,\beta)=0 \Leftrightarrow \beta\leq\xi(\alpha)$$ We claim that $c$ exemplifies our claim. Indeed, assume that $|H_0|=\mu$ and $|H_1|=\lambda$. Choose $(\alpha,\beta)\in H_0\times H_1$, and suppose $c(\alpha,\beta)=0$. It means that $\beta\leq\xi(\alpha)$. But $\xi(\alpha)$ is an ordinal below $\lambda$, and $H_1$ is unbounded in $\lambda$, hence one can pick an ordinal $\beta'\in H_1$ so that $\beta'>\xi(\alpha)$. It follows that $c(\alpha,\beta')=1$, so the product $H_0\times H_1$ is not monochromatic in this case. Now suppose $c(\alpha,\beta)=1$. It means that $\xi(\alpha)<\beta$. Clearly, there is some $\alpha'\in H_0$ so that $\xi(\alpha')\geq\beta$. Consequently, $c(\alpha',\beta)=0$, so again $H_0\times H_1$ is not monochromatic, and the proof is completed. For the next claim we need the following definition: \[rrrr\] The reaping number. Let $\lambda$ be an infinite cardinal. 1. $\{T_\alpha:\alpha<\kappa\}$ is an unreaped family if there is no $S\in[\lambda]^\lambda$ so that $S$ splits $T_\alpha$ for every $\alpha<\kappa$. 2. the reaping number $\mathfrak{r}_\lambda$ is the minimal cardinality of an unreaped family. Our second claim works in the opposite direction to the first claim: \[uupward\] The upward positive relation. Suppose $\mathfrak{r}_\lambda<\mu\leq 2^\lambda$, $\lambda$ is a regular cardinal.  $\binom{\mu}{\lambda} \rightarrow \binom{\mu}{\lambda}^{1,1}_2$ whenever ${{\rm cf}}(\mu)>\mathfrak{r}_\lambda$. *Proof*. Let $\mathcal{A}\subseteq[\lambda]^\lambda$ exemplify $\mathfrak{r}_\lambda$. It means that $|\mathcal{A}|=\mathfrak{r}_\lambda$, and there is no single $B\in[\lambda]^\lambda$ which splits all the members of $\mathcal{A}$. Assume $c:\mu\times\lambda\rightarrow 2$ is any coloring. For every $\alpha<\mu$ let $B_\alpha=\{\beta<\lambda:c(\alpha,\beta)=0\}$. Choose $A_\alpha\in\mathcal{A}$ such that $A_\alpha\subseteq^*B_\alpha$ or $A_\alpha\subseteq^*\lambda\setminus B_\alpha$. Without loss of generality, $A_\alpha\subseteq^*B_\alpha$ for every $\alpha<\mu$, so one can choose an ordinal $\beta_\alpha<\lambda$ so that $A_\alpha\setminus\beta_\alpha\subseteq B_\alpha$. As ${{\rm cf}}(\mu)>\mathfrak{r}_\lambda$, there are $H\in[\mu]^\mu,\beta<\lambda$ and $A\in\mathcal{A}$ such that $\alpha\in H\Rightarrow\beta_\alpha=\beta$ and $A_\alpha= A$. It follows that $c\upharpoonright(H\times A\setminus\beta)=0$, so the proof is completed. Combining the above claims, we can prove the main theorem of this section: \[mt\] The main theorem. It is consistent that $\binom{\theta}{\omega}\rightarrow\binom{\theta}{\omega}^{1,1}_2$ for every $\aleph_1\leq\theta\leq\ 2^{\aleph_0}$. *Proof*. 
In the model of [@MR879489] we have $\mathfrak{r}=\mathfrak{u}=\aleph_1$, while $\mathfrak{s}=\mathfrak{c}=\aleph_2$. By claim \[ddownward\] we conclude that $\binom{\aleph_1}{\aleph_0}\rightarrow\binom{\aleph_1}{\aleph_0}^{1,1}_2$, and by virtue of claim \[uupward\] we have $\binom{\aleph_2}{\aleph_0}\rightarrow\binom{\aleph_2}{\aleph_0}^{1,1}_2$, so we are done. \[everyinv\] Polarized relations and cardinal invariants. Let $inv$ be any cardinal invariant of the continuum.  $\binom{inv}{\omega}\rightarrow\binom{inv}{\omega}^{1,1}_2$ is consistent with ZFC. Notice that $\mathfrak{c}=\aleph_2$ in the model of [@MR879489]. Dealing with the dominating number $\mathfrak{d}$, the model in [@MR1005010] supplies $\mathfrak{u}=\mu_0<\mu_1=\mathfrak{d}$ for every pair of regular cardinals $(\mu_0,\mu_1)$. It follows that $\binom{\mathfrak{d}}{\omega} \rightarrow\binom{\mathfrak{d}}{\omega}^{1,1}_2$ is consistent for arbitrarily large value of $\mathfrak{d}$, as $\mathfrak{r}\leq\mathfrak{u}$. We conclude with another open problem from [@GaThesis]: \[pppp\] The splitting number and the pseudointersection number. 1. Is it consistent that $\mathfrak{p}=\mathfrak{s}$ and $\binom{\mathfrak{p}}{\omega}\rightarrow\binom{\mathfrak{p}}{\omega}^{1,1}_2$? 2. Is it consistent that $\mathfrak{c}=\mathfrak{s}>\aleph_2$ and $\binom{\mathfrak{s}}{\omega}\rightarrow\binom{\mathfrak{s}}{\omega}^{1,1}_2$ (hence $\binom{\theta}{\omega}\rightarrow\binom{\theta}{\omega}^{1,1}_2$ whenever ${{\rm cf}}(\theta)>\aleph_0$)? Notice that in the above models we have $\mathfrak{p}<\mathfrak{s}$, so a different method is required for this problem. Nevertheless, we believe that a positive answer is consistent for both parts of the question. Large cardinals =============== In this section we deal with uncountable cardinals, with respect to the problems in the previous section. As can be seen, we need some large cardinals assumption. We distinguish two cases. In the first one, $\lambda$ is a regular cardinal. In this case we shall assume that $\lambda$ is a supercompact cardinal, aiming to show that many polarized relations are consistent, above $\lambda$. Secondly, we deal with a singular cardinal. Let us begin with the regular case. We shall make use of the Mathias forcing, generalized for uncountable cardinals. Notice that for the combinatorial theorems we need a specific version of the Mathias forcing, relativized to some ultrafilter. We begin with the definition of this forcing notion: \[ggeneralmat\] The generalized Mathias forcing. Let $\lambda$ be a supercompact (or even just measurable) cardinal, and $D$ a nonprincipal $\lambda$-complete ultrafilter on $\lambda$. The forcing notion $\mathbb{M}_D^\lambda$ consists of pairs $(a, A)$ such that $a \in [\lambda]^{< \lambda}, A \in D$. For the order, $(a_1, A_1) \leq (a_2, A_2)$ iff $a_1 \subseteq a_2, A_1 \supseteq A_2$ and $a_2 \setminus a_1 \subseteq A_1$. Notice that $\mathbb{M}_D^\lambda$ is $\lambda^+$-centered as always $(a,A_1) \parallel (a,A_2)$ and there are only $\lambda=\lambda^{<\lambda}$ many $a$-s. It follows that $\mathbb{M}_D^\lambda$ is $\lambda^+$-cc. Also, $\mathbb{M}_D^\lambda$ is $<\lambda$-directed closed (here we employ the $\lambda$-completeness of the ultrafilter $D$). We emphasize that these properties are preserved by $<\lambda$-support iterations, hence such an iteration collapses no cardinals. 
If $\mathbb{M}_D^\lambda$ is a $\lambda$-Mathias forcing, then for defining the Mathias $\lambda$-real we take a generic $G \subseteq \mathbb{M}_D^\lambda$, and define $x_G = \bigcup \{ a : (\exists A \in D)((a, A) \in G) \}$. As in the original Mathias forcing, $x_G$ is endowed with the property $x_G \subseteq^* A \vee x_G \subseteq^* \lambda \setminus A$ for every $A \in [\lambda]^\lambda$ of the ground model. Let us mention another cardinal invariant: \[uuuu\] The ultrafilter number $\mathfrak{u}_\lambda$. Let $\lambda$ be a regular cardinal, and $\mathcal{F}$ a filter on $\lambda$. 1. A base $\mathcal{A}$ for $\mathcal{F}$ is a subfamily of $\mathcal{F}$ such that for every $X\in\mathcal{F}$ there is some $Y\in\mathcal{A}$ with the property $Y\subseteq^*X$. 2. $\mathfrak{u}_\lambda$ is the minimal cardinality of a filter base, for some uniform ultrafilter on $\lambda$. One can show that $\mathfrak{u}_\lambda>\lambda$ for every $\lambda$. The following claim employs known facts, so we give just an outline of the proof: \[uuuandsss\] Polarized relations above a supercompact cardinal. Suppose $\lambda$ is a supercompact cardinal. 1. For every $\mu={{\rm cf}}(\mu)\in[\lambda^+,2^\lambda]$, one can force $\mathfrak{s}_\lambda=\mu$ without changing the value of $2^\lambda$. 2. One can force $\mathfrak{u}_\lambda=\lambda^+$ while $2^\lambda$ is arbitrarily large. *Outline of proof*. For $(a)$ we iterate $\mathbb{M}_D^\lambda$, the length of the iteration being $\mu$. We assume without loss of generality that $\lambda$ is Laver-indestructible, so in particular it remains supercompact (hence measurable) along the iteration. It enables us to choose a $\lambda$-complete ultrafilter at any stage, hence the forcing does not collapse cardinals. We use $<\lambda$-support. It follows that $\mathfrak{s}_\lambda$ equals $\mu$ in the forcing extension. For a detailed proof see also [@MR2927607]. For $(b)$ we use an iteration of length $\lambda^+$. But we choose the $\lambda$-complete ultrafilter (at every stage) more carefully. Along the iteration we create a $\subseteq^*$-decreasing sequence of subsets of $\lambda$. This is done by choosing an ultrafilter which contains the sequence from the previous stages. For the limit stages of the iteration, one has to employ the arguments in [@MR1976583]. The main point there is using some prediction principle on $\lambda^+$ in order to make sure that an appropriate ultrafilter is chosen enough times. At the end, we can show that the sequence (of length $\lambda^+$) generates an ultrafilter, hence $\mathfrak{u}_\lambda=\lambda^+$. We also refer the reader to [@adbt] for a detailed proof of this assertion. We indicate that the consistency of $\mathfrak{u}_\lambda=\lambda^+$ while $2^\lambda$ is arbitrarily large is proved for some singular cardinal $\lambda$ in [@MR2992547] (but here we deal with a supercompact cardinal). Let us phrase the following conclusion from the above claim: \[cconc\] Many positive relations above a supercompact. Suppose $\lambda$ is a supercompact cardinal. 1. the positive relation $\binom{\mu}{\lambda}\rightarrow\binom{\mu}{\lambda}^{1,1}_2$ is consistent simultaneously for every regular $\mu$ above $\lambda$ but $2^\lambda$. 2. the positive relation $\binom{\mu}{\lambda}\rightarrow\binom{\mu}{\lambda}^{1,1}_2$ is consistent simultaneously for every regular $\mu$ in the interval $(\lambda^+,2^\lambda]$. *Proof*. 
$(a)$ is valid when $\mathfrak{s}_\lambda=2^\lambda$ and $(b)$ holds in a model of $\mathfrak{r}_\lambda=\lambda^+$ (notice that for getting merely $\mathfrak{r}_\lambda=\lambda^+$ we do not need the arguments of [@MR1976583]). \[uulessthanss\] Is it consistent that $\mathfrak{u}_\lambda<\mathfrak{s}_\lambda$ (or at least $\mathfrak{r}_\lambda<\mathfrak{s}_\lambda$) for some uncountable cardinal $\lambda$? We turn now to the main theorem of this section. We show that getting a positive polarized relation for many cardinals in the interval $(\mu,2^\mu]$ is consistent for some singular cardinal $\mu$ (under some pcf assumptions). In particular, it holds for $\mu^+$. We shall prove the following: \[mmtt\] Polarized relations above a singular cardinal. Assume $\kappa={{\rm cf}}(\mu)<\mu<\lambda$, and $\theta<\kappa$. If $\circledast$ holds  $\binom{\lambda}{\mu}\rightarrow \binom{\lambda}{\mu}^{1,1}_\theta$ holds, when $\circledast$ means: 1. $2^\kappa<{{\rm cf}}(\lambda)$, 2. $J^{\rm bd}_\kappa\subseteq J$ is an ideal on $\kappa$, 3. $\langle\lambda_\varepsilon:\varepsilon<\kappa\rangle$ is an increasing sequence of cardinals which tends to $\mu$, 4. $2^{\lambda_\varepsilon}=\lambda_\varepsilon^+$ for every $\varepsilon<\kappa$, 5. $\lambda_\varepsilon$ is strongly inaccessible for every $\varepsilon<\kappa$, 6. $\Upsilon_\ell= {\rm tcf}(\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon^{+\ell},<_J)$ is well defined for $\ell\in\{0,1\}$, 7. ${{\rm cf}}(\lambda)\notin\{\Upsilon_0,\Upsilon_1\}$. *Proof*. Suppose a coloring $c:\lambda\times\mu\rightarrow\theta$ is given. For every $\alpha<\lambda,\varepsilon<\kappa,\iota<\theta$ we let $A_{\alpha,\varepsilon,\iota}$ be $\{\gamma<\lambda_\varepsilon:c(\alpha,\gamma)=\iota\}$. Fixing $\alpha$ and $\varepsilon$, we have produced a partition $\{A_{\alpha,\varepsilon,\iota}: \iota<\theta\}$ of $\lambda_\varepsilon$ into a small (i.e., just $\theta$-many) number of sets. Enumerate $\mathcal{P}(\lambda_\varepsilon)$ as $\langle B_{\varepsilon,i}:i<\lambda^+_\varepsilon\rangle$. For every $\alpha<\lambda$ and $\iota<\theta$ we define a function $g_{\alpha,\iota}\in\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon^+$ as follows: $$g_{\alpha,\iota}(\varepsilon)={\rm min} \{i<\lambda^+_\varepsilon:A_{\alpha,\varepsilon,\iota}=B_{\varepsilon,i}\}$$ Here we have used the assumption that $2^{\lambda_\varepsilon}=\lambda^+_\varepsilon$. For every $\alpha<\lambda$ let $g_\alpha\in\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon^+$ be defined by $g_\alpha(\varepsilon)= {\rm sup}\{g_{\alpha,\iota}(\varepsilon): \iota<\theta\}$. Note that $g_\alpha(\varepsilon)$ is well defined since each $\lambda_\varepsilon^+$ is regular (but all we need is $\theta<{{\rm cf}}(\lambda_\varepsilon^+)$, to be used in the sequel). Recall that $\Upsilon_1={\rm tcf}(\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon^+,<_J)$ and ${{\rm cf}}(\lambda)\neq\Upsilon_1$, hence there exists a function $g\in\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon^+$ and a set $S_1$ of size $\lambda$ so that $\alpha\in S_1\Rightarrow g_\alpha<_J g$. We may assume, without loss of generality, that $g(\varepsilon)>\lambda_\varepsilon$ for every $\varepsilon<\kappa$. Denote the set $\{\varepsilon<\kappa: g_\alpha(\varepsilon)<g(\varepsilon)\}$ by $u_\alpha$, for every $\alpha<\lambda$. Since $2^\kappa<{{\rm cf}}(\lambda)$, there are $u\subseteq\kappa$ and $S_2\in[S_1]^\lambda$ such that $u=\kappa\ {\rm mod\ }J$ and $\alpha\in S_2\Rightarrow u_\alpha=u$. 
Without loss of generality, $u=\kappa$. Take a closer look at the collection $\{B_{\varepsilon,i}:i<g(\varepsilon)\}$ (for every $\varepsilon<\kappa$). By the nature of the function $g$, this is a family of $\lambda_\varepsilon$-many sets, hence we can enumerate its members as $\{B_{\varepsilon,i}^1:i<\lambda_\varepsilon\}$. Notice that for every $\alpha\in S_2,\varepsilon<\kappa$ and $\iota<\theta$ we know that $A_{\alpha,\varepsilon,\iota}\in \{B_{\varepsilon,i}^1:i<\lambda_\varepsilon\}$. We need another round of unifying. By the same token as above, we define for every $\alpha\in S_2$ and $\iota<\theta$ the function $h_{\alpha,\iota}\in \prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon$ as follows: $$h_{\alpha,\iota}(\varepsilon)={\rm min}\{i<\lambda_\varepsilon: B_{\varepsilon,i}^1=A_{\alpha,\varepsilon,\iota}\}$$ Now for $\alpha\in S_2$ set $h_\alpha(\varepsilon)={\rm sup}\{h_{\alpha,\iota}(\varepsilon)+1:\iota<\theta\}$ (for every $\varepsilon<\kappa$). Again, by our assumptions, $h_\alpha$ belongs to the product $\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon$ for every $\alpha\in S_2$. Since ${{\rm cf}}(\lambda)\neq\Upsilon_0$ (recall that $\Upsilon_0= {\rm tcf}(\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon,<_J)$) we can choose a function $h$ which bounds many $h_\alpha$-s. In other words, there are $h$ and $S_3\in[S_2]^\lambda$ so that $\alpha\in S_3\Rightarrow h_\alpha<_Jh$. Let $v_\alpha$ be the set $\{\varepsilon<\kappa: h_\alpha(\varepsilon)<h(\varepsilon)\}$, for every $\alpha\in S_3$. As before, since $2^\kappa<{{\rm cf}}(\lambda)$ one can find $v\subseteq\kappa$ and $S_4\in[S_3]^\lambda$ so that $\alpha\in S_4\Rightarrow v_\alpha=v$. Without loss of generality we assume, as usual, that $v=\kappa$. For every $\varepsilon<\kappa$ we define an equivalence relation $E_\varepsilon$ on $\lambda_\varepsilon$ as follows: $$\forall\gamma_1,\gamma_2\in\lambda_\varepsilon, \gamma_1E_\varepsilon\gamma_2 \Leftrightarrow (\gamma_1\in B_{\varepsilon,j}^1\equiv\gamma_2\in B_{\varepsilon,j}^1,\forall j<h(\varepsilon))$$ Observe that $E_\varepsilon$ has less than $\lambda_\varepsilon$ equivalence classes for every $\varepsilon<\kappa$, since each $\lambda_\varepsilon$ is an inaccessible cardinal. Consequently, we can choose an equivalence class $X_\varepsilon$ of size $\lambda_\varepsilon$ in each $E_\varepsilon$. For every $\alpha\in S_4$ let $\iota_{\alpha,\varepsilon}<\theta$ be the color associated with $X_\varepsilon$ (i.e., $c(\alpha,\gamma)=\iota_{\alpha,\varepsilon}$ for every $\gamma\in X_\varepsilon$). We arrived at the last stage of unifying $\varepsilon$-s. For every $\alpha\in S_4$ there is a color $\iota_\alpha$ so that the set $w_\alpha=\{\varepsilon<\kappa: \iota_{\alpha,\varepsilon}=\iota_\alpha\}$ is of size $\kappa$. Hence there are a color $\iota<\theta,w\in[\kappa]^\kappa$ and $S_5\in[S_4]^\lambda$ such that $\alpha\in S_5\Rightarrow \iota_\alpha= \iota,w_\alpha=w$. Set $A=S_5$ and $B=\bigcup\{X_\varepsilon:\varepsilon\in w\}$. Clearly, $A\in [\lambda]^\lambda,B\in[\mu]^\mu$. We claim that the product $A\times B$ exemplifies the positive relation $\binom{\lambda}{\mu}\rightarrow \binom{\lambda}{\mu}^{1,1}_2$. Indeed, if $\alpha\in A$ and $\beta\in B$ then $\alpha\in S_5$ and $\beta\in X_\varepsilon$ for some $\varepsilon\in w$. Consequently, $\iota_\alpha=\iota$ (for this specific $\alpha$) and $c(\alpha,\beta)=\iota_{\alpha,\varepsilon}=\iota_\alpha=\iota$ so we are done. \[muplus\] Positive relation for successor of singular. 
\[muplus\] Positive relation for successor of singular. Suppose $(\kappa,\mu,\mu^+)$ satisfy $\circledast$ of Theorem \[mmtt\] (stipulating $\mu^+=\lambda$). Then $\binom{\mu^+}{\mu}\rightarrow \binom{\mu^+}{\mu}^{1,1}_2$ holds. In particular, this positive relation is consistent with ZFC.

*Proof*. We refer to [@GaSh949], where the assumptions of the theorem are forced (and, in fact, much more), but see also the discussion following the next remark below.

\[949\] A similar result is forced in [@GaSh949], under the assumption that $\mu$ is a singular cardinal which is a limit of measurables. For the forcing extension of [@GaSh949], one has to assume the existence of a supercompact cardinal in the ground model. Nevertheless, the polarized relation there is slightly stronger: being a limit of measurables entails $\binom{\lambda}{\mu}\rightarrow \binom{\lambda}{\mu}^{1,<\omega}_2$ there, which means that for every $c:\lambda\times[\mu]^{<\omega}\rightarrow 2$ there are $H_0\in[\lambda]^\lambda$ and $H_1\in[\mu]^\mu$ such that for every $n\in\omega$, $c\upharpoonright(H_0\times[H_1]^n)$ is constant.

We also point out that the assumption $2^{\lambda_\varepsilon}=\lambda^+_\varepsilon$ is stronger than needed here. The value of $2^{\lambda_\varepsilon}$ can be replaced by a larger cardinal, provided that all the relevant products have true cofinality. Some restriction should be imposed, however: if $2^{\lambda_\varepsilon}= \lambda_\varepsilon^{+\zeta(\varepsilon)}$ for every $\varepsilon<\kappa$, and the sequence $\langle\zeta(\varepsilon):\varepsilon<\kappa\rangle$ tends to $\mu$, then the argument breaks down.

We can modify the proof above to include another case. The consistency proof of the assumptions below is similar to that of Theorem \[mmtt\], yet compared with [@GaSh949] we need less than supercompactness (in both theorems). A sufficient assumption for forcing the assumptions of these theorems is the existence of a strong cardinal in the ground model (and even slightly less, namely a $\tau$-strong cardinal for some suitable $\tau$). We hope to shed light on this subject in [@GaMgShF1211].

\[sssinggg\] Positive relation for limit of strong limit cardinals. Assume $\kappa={{\rm cf}}(\mu)<\mu<\lambda$ and $\theta<\kappa$. If $\circledcirc$ holds, then $\binom{\lambda}{\mu}\rightarrow \binom{\lambda}{\mu}^{1,1}_\theta$ holds, where $\circledcirc$ means:

(a) $2^\kappa<{{\rm cf}}(\lambda)$,
(b) $J^{\rm bd}_\kappa\subseteq J$ is an ideal on $\kappa$,
(c) $\langle\lambda_\varepsilon:\varepsilon<\kappa\rangle$ is an increasing sequence of cardinals which tends to $\mu$,
(d) $2^{\lambda_\varepsilon}=\lambda_\varepsilon^+$ for every $\varepsilon<\kappa$,
(e) $\lambda_\varepsilon$ is strong limit and ${{\rm cf}}(\lambda_\varepsilon)>\kappa$ for every $\varepsilon<\kappa$,
(f) $\prod\limits_{\varepsilon<\kappa}{{\rm cf}}(\lambda_\varepsilon)< {{\rm cf}}(\lambda)$,
(g) $\Upsilon_\ell= {\rm tcf}(\prod\limits_{\varepsilon<\kappa}\lambda_\varepsilon^{+\ell},<_J)$ is well defined for $\ell\in\{0,1\}$,
(h) ${{\rm cf}}(\lambda)\notin\{\Upsilon_0,\Upsilon_1\}$.

*Proof*. Proceed as in the proof of Theorem \[mmtt\], up to the stage of defining the equivalence relations $E_\varepsilon$ on each $\lambda_\varepsilon$. At that stage we isolated a large equivalence class (for every $\varepsilon<\kappa$) using the regularity of $\lambda_\varepsilon$. But here $\lambda_\varepsilon$ may be a singular cardinal, so we have to be more careful.
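Only the extraction of a single large class is affected; the counting bound on the number of classes survives, as the following one-line check (under the assumptions listed above) shows: $$\text{the number of } E_\varepsilon\text{-classes is at most } 2^{|h(\varepsilon)|}<\lambda_\varepsilon \qquad(\lambda_\varepsilon \text{ strong limit},\ |h(\varepsilon)|<\lambda_\varepsilon).$$ So the classes still cover $\lambda_\varepsilon$, but when $\lambda_\varepsilon$ is singular no single class need have size $\lambda_\varepsilon$; instead we collect ${{\rm cf}}(\lambda_\varepsilon)$-many of them, as follows.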
For every $\varepsilon<\kappa$ we choose a sequence $\langle X_{\varepsilon,j} :j<{{\rm cf}}(\lambda_\varepsilon)\rangle$ so that each $X_{\varepsilon,j}$ is an equivalence class of $E_\varepsilon$, and $\sum\{|X_{\varepsilon,j}|: j<{{\rm cf}}(\lambda_\varepsilon)\}=\lambda_\varepsilon$. For every $\alpha\in S_4$, $\varepsilon<\kappa$ and $j<{{\rm cf}}(\lambda_\varepsilon)$ we choose a color $\iota_{\alpha,\varepsilon,j}<\theta$ so that: $$\gamma\in X_{\varepsilon,j}\Rightarrow c(\alpha,\gamma)=\iota_{\alpha,\varepsilon,j}$$ We claim that there are $S_5\in[S_4]^\lambda$ and a sequence of colors $\langle\iota_{\varepsilon,j}:\varepsilon<\kappa,j< {{\rm cf}}(\lambda_\varepsilon)\rangle$ such that $\alpha\in S_5\Rightarrow \iota_{\alpha,\varepsilon,j}=\iota_{\varepsilon,j}$ (here we use assumption $(f)$ of the present theorem). Moreover, there is a single color $\iota<\theta$ so that $\sum\{|X_{\varepsilon,j}|: \iota_{\varepsilon,j}=\iota, \varepsilon<\kappa,j<{{\rm cf}}(\lambda_\varepsilon)\}=\mu$. For this, notice that $\mu=\sum_{\varepsilon<\kappa} \lambda_\varepsilon = \sum_{\varepsilon<\kappa}\sum \{|X_{\varepsilon,j}|: j<{{\rm cf}}(\lambda_\varepsilon)\}$, and since $\theta<\kappa={{\rm cf}}(\mu)$, the $\theta$-many partial sums (one for each color) cannot all be smaller than $\mu$. Now we can define $A=S_5$ and $B=\bigcup \{X_{\varepsilon,j}: \iota_{\varepsilon,j}=\iota, \varepsilon<\kappa, j<{{\rm cf}}(\lambda_\varepsilon)\}$. It follows that $A\in[\lambda]^\lambda$ and $B\in[\mu]^\mu$. Since $c$ is constant (with value $\iota$) on the product $A\times B$, we are done.

\[ffff\] The assumption $\prod\limits_{\varepsilon<\kappa}{{\rm cf}}(\lambda_\varepsilon)< {{\rm cf}}(\lambda)$ ((f) in the last theorem) can be omitted. Instead, we have to choose an equivalence class $X_\varepsilon$ of $E_\varepsilon$ of size at least $(\sum\limits_{\zeta<\varepsilon}\lambda_\zeta)^+$ so that ${\rm min}(X_\varepsilon)>\sum\{\lambda_\zeta:\zeta<\varepsilon\}$. But in some sense we get less.

We conclude this section with the following:

\[alll\] It is consistent that there is a singular cardinal $\mu$ with $\kappa={{\rm cf}}(\mu)$ such that $\binom{\lambda}{\mu}\rightarrow\binom{\lambda}{\mu}^{1,1}_2$ holds for all $\lambda\in(\mu,2^\mu]$.

*Proof*. By enlarging $2^\mu$ to a large enough value below $\mu^{+\omega}$, one can choose two sequences, $\langle\lambda_\varepsilon :\varepsilon<\kappa\rangle$ and $\langle\kappa_\varepsilon :\varepsilon<\kappa\rangle$, of inaccessibles (for simplicity) whose limit is $\mu$, such that $\{\Upsilon_0^{\bar{\lambda}},\Upsilon_1^{\bar{\lambda}}\} \cap \{\Upsilon_0^{\bar{\kappa}},\Upsilon_1^{\bar{\kappa}}\}=\emptyset$ (the ideal we use is $J_\kappa^{\rm bd}$; see, for instance, the models constructed in [@MR1900900]). Notice that all the cardinals in the interval $[\mu^+,2^\mu]$ are regular, being finite successors of $\mu$. Now use Theorem \[mmtt\].
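To spell out how the two sequences are meant to be used (this is our reading of the proof, with $\Upsilon_\ell^{\bar{\lambda}}$ and $\Upsilon_\ell^{\bar{\kappa}}$ denoting the true cofinalities computed from the respective sequences): given $\lambda\in[\mu^+,2^\mu]$ we have ${{\rm cf}}(\lambda)=\lambda$ and, by the disjointness above, $$\lambda\notin\{\Upsilon_0^{\bar{\lambda}},\Upsilon_1^{\bar{\lambda}}\} \quad\text{or}\quad \lambda\notin\{\Upsilon_0^{\bar{\kappa}},\Upsilon_1^{\bar{\kappa}}\},$$ so the requirement ${{\rm cf}}(\lambda)\notin\{\Upsilon_0,\Upsilon_1\}$ of $\circledast$ can be met with at least one of the two sequences, and Theorem \[mmtt\] then gives $\binom{\lambda}{\mu}\rightarrow\binom{\lambda}{\mu}^{1,1}_2$.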